Feeds
Golems GABB: Twig & PHP Templating in Drupal 11
Welcome! This is your complete guide to Twig and PHP templating in Drupal 11. Twig and PHP are central to frontend and backend development in Drupal, and once you understand how they work, you can create attractive, effective, and easy-to-maintain websites.
Today, the Golems Drupal company explores Twig's streamlined template engine and PHP's robust backend logic. Our blog will be helpful whether you are a skilled Drupal developer, a website or business owner, or simply starting your path in this field. We will dive into the details of Twig and PHP in Drupal to help you better understand how they work together, so that you can craft your digital experiences more effectively.
Python Bytes: #411 TLS Client: Hello <<guitar solo>>
Zato Blog: SSH commands as API microservices
This is a quick guide on how to turn SSH commands into a REST API service. The use case may be remote administration of devices or equipment that does not offer a REST interface, or making sure that access to SSH commands is restricted to selected external REST-based API clients only.
Python
The first thing needed is the code of the service that will connect to SSH servers. Below is a service doing just that - it receives the name of the command to execute and the host to run it on, translating stdout and stderr of the SSH commands into response documents which Zato in turn serializes to JSON.
# -*- coding: utf-8 -*-

# stdlib
from traceback import format_exc

# Zato
from zato.server.service import Service

class SSHInvoker(Service):
    """ Accepts an SSH command to run on a remote host and returns its output to caller.
    """

    # A list of elements that we expect on input
    input = 'host', 'command'

    # A list of elements that our responses will contain
    output = 'is_ok', 'cid', '-stdout', '-stderr'

    def handle(self):

        # Local aliases
        host = self.request.input.host
        command = self.request.input.command

        # Correlation ID is always returned
        self.response.payload.cid = self.cid

        try:
            # Build the full command
            full_command = f'ssh {host} {command}'

            # Run the command and collect output
            output = self.commands.invoke(full_command)

            # Assign both stdout and stderr to response
            self.response.payload.stdout = output.stdout
            self.response.payload.stderr = output.stderr

        except Exception:

            # Catch any exception and log it
            self.logger.warn('Exception caught (%s), e:`%s', self.cid, format_exc())

            # Indicate an error
            self.response.payload.is_ok = False

        else:

            # Everything went fine
            self.response.payload.is_ok = True

Dashboard
In the Zato Dashboard, let's go ahead and create an HTTP Basic Auth definition that a remote API client will authenticate against:
Now, the SSH service can be mounted on a newly created REST channel - note the security definition used and that the data format is set to JSON. We can skip all the other details such as caching or rate limiting; for illustration purposes, they are not needed.
Usage
At this point, everything is ready to use. We could make it accessible to external API clients but, for testing purposes, let's simply invoke our SSH API gateway service from the command line:
$ curl "api:password@localhost:11223/api/ssh" -d \ '{"host":"localhost", "command":"uptime"}' { "is_ok": true, "cid": "27406f29c66c2ab6296bc0c0", "stdout": " 09:45:42 up 37 min, 1 user, load average: 0.14, 0.27, 0.18\n"} $ Note that, at this stage, the service should be used in trusted environments only, e.g. it will run any command that it is given on input which means that in the next iteration it could be changed to only allow commands from an allow-list, rejecting anything that is not recognized.
And this completes it - the service is deployed and made accessible via a REST channel that can be invoked using JSON. Any command can be sent to any host and their output will be returned to API callers in JSON responses.
More resources➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
Wingware: Wing Python IDE Version 10.0.7 - November 25, 2024
This minor release reduces Python 3.12+ debugger overhead and improves Python code analysis.
See the change log for details.
Download Wing 10 Now: Wing Pro | Wing Personal | Wing 101 | Compare Products
What's New in Wing 10
AI Assisted Development
Wing Pro 10 takes advantage of recent advances in the capabilities of generative AI to provide powerful AI assisted development, including AI code suggestion, AI driven code refactoring, description-driven development, and AI chat. You can ask Wing to use AI to (1) implement missing code at the current input position, (2) refactor, enhance, or extend existing code by describing the changes that you want to make, (3) write new code from a description of its functionality and design, or (4) chat in order to work through understanding and making changes to code.
Examples of requests you can make include:
"Add a docstring to this method" "Create unit tests for class SearchEngine" "Add a phone number field to the Person class" "Clean up this code" "Convert this into a Python generator" "Create an RPC server that exposes all the public methods in class BuildingManager" "Change this method to wait asynchronously for data and return the result with a callback" "Rewrite this threaded code to instead run asynchronously"Yes, really!
Your role changes to one of directing an intelligent assistant capable of completing a wide range of programming tasks in relatively short periods of time. Instead of typing out code by hand every step of the way, you are essentially directing someone else to work through the details of manageable steps in the software development process.
Support for Python 3.12, 3.13, and ARM64 Linux
Wing 10 adds support for Python 3.12 and 3.13, including (1) faster debugging with PEP 669 low impact monitoring API, (2) PEP 695 parameterized classes, functions and methods, (3) PEP 695 type statements, and (4) PEP 701 style f-strings.
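For readers who haven't followed these PEPs, here is a minimal, Wing-independent sketch of the syntax being referred to (plain Python 3.12+; nothing here is specific to Wing itself):

# PEP 695: a type statement and a parameterized (generic) function
type Point = tuple[float, float]

def first[T](items: list[T]) -> T:
    return items[0]

# PEP 701: f-strings may now reuse the same quote character inside
# the replacement field
names = ["spam", "eggs"]
print(f"first: {first(names)}, joined: {", ".join(names)}")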
Wing 10 also adds support for running Wing on ARM64 Linux systems.
Poetry Package Management
Wing Pro 10 adds support for Poetry package management in the New Project dialog and the Packages tool in the Tools menu. Poetry is an easy-to-use cross-platform dependency and package manager for Python, similar to pipenv.
Ruff Code Warnings & Reformatting
Wing Pro 10 adds support for Ruff as an external code checker in the Code Warnings tool, accessed from the Tools menu. Ruff can also be used as a code reformatter in the Source > Reformatting menu group. Ruff is an incredibly fast Python code checker that can replace or supplement flake8, pylint, pep8, and mypy.
Try Wing 10 Now!
Wing 10 is a ground-breaking new release in Wingware's Python IDE product line. Find out how Wing 10 can turbocharge your Python development by trying it today.
Downloads: Wing Pro | Wing Personal | Wing 101 | Compare Products
See Upgrading for details on upgrading from Wing 9 and earlier, and Migrating from Older Versions for a list of compatibility notes.
Steinar H. Gunderson: plocate 1.1.23 released
I've just released version 1.1.23 of plocate, almost a year after 1.1.22. The changes are mostly around the systemd unit this time, but perhaps more interestingly, this is the first release where I don't have the majority of patches; in fact, I don't have any patches at all. All of them came from contributors, many of them through the “just do git push to send me a patch email” interface.
I guess this means that I'll need to actually start streamlining my “git am” workflow… it gets me every time. :-)
Hugo van Kemenade: A surprising thing about PyPI's BigQuery data
You can get download numbers for PyPI packages (or projects) from a Google BigQuery dataset. You need a Google account and credentials, and Google gives 1 TiB of free quota per month.
Each month, I have automation to fetch the download numbers for the 8,000 most popular packages over the past 30 days, and make it available as more accessible JSON and CSV files at Top PyPI Packages. This data is widely used for research in academia and industry.
However, as more packages and releases are uploaded to PyPI, and there are more and more downloads logged, the amount of billed data increases too.
This chart shows the amount of data billed per month.
At first, I was only collecting downloads data for 4,000 packages, fetched with two queries: downloads over 365 days and over 30 days. But as time passed, fetching 365 days of data started using up too much quota.
So I ditched the 365-day data, and increased the 30-day data from 4,000 to 5,000 packages. Later, I checked how much quota was being used and increased from 5,000 packages to 8,000 packages.
But then I exceeded the BigQuery monthly quota of 1 TiB fetching data for July 2024.
To fetch the missing data and investigate what's going on, I started Google Cloud's 90-day, $300 (€277.46) free trial 💸
Here's what I found!
Finding: it costs more to get data for downloads from only pip than from all installers
I use the pypinfo client to help query BigQuery. By default, it only fetches downloads for pip.
Only pip
This command gets one day's download data for the top 10 packages, for pip only:
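The command itself isn't reproduced in this excerpt; following the pypinfo pattern quoted near the end of the post, it would look something like:

pypinfo --json --indent 0 --days 1 --limit 10 '' project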
Results:
project              download count
boto3                37,251,744
aiobotocore          16,252,824
urllib3              16,243,278
botocore             15,687,125
requests             13,271,314
s3fs                 12,865,055
s3transfer           12,014,278
fsspec               11,982,305
charset-normalizer   11,684,740
certifi              11,639,584
Total                158,892,247

All installers
Adding the --all flag gets one day's download data for the top 10 packages, for all installers:
So we can see that the default pip-only query processes and bills about 25% more data, and costs about 25% more in dollars.
Unsurprisingly, the actual download counts are higher for all installers. The ranking has changed a bit, but I expect we're still getting more-or-less the same packages in the top thousands of results.
Queries
It sends a query like this to BigQuery for only pip:
And for all installers:
These queries are the same, except the default has an extra AND details.installer.name = "pip" condition. It seems reasonable it would cost more to do extra filtering work.
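Neither query is reproduced in this excerpt, but the rough shape is easy to sketch. Here is a hedged approximation in Python, assuming the public bigquery-public-data.pypi.file_downloads table and the google-cloud-bigquery client; the exact SQL that pypinfo generates may differ:

from google.cloud import bigquery

# Approximate shape of the query; table and column names are assumptions
# based on the public PyPI downloads dataset.
QUERY = """
SELECT file.project AS project, COUNT(*) AS download_count
FROM `bigquery-public-data.pypi.file_downloads`
WHERE timestamp BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
                    AND CURRENT_TIMESTAMP()
  -- The default, pip-only variant adds one extra filter:
  -- AND details.installer.name = "pip"
GROUP BY project
ORDER BY download_count DESC
LIMIT 10
"""

client = bigquery.Client()           # requires Google Cloud credentials
for row in client.query(QUERY).result():
    print(row.project, row.download_count)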
Installers
Let's look at the installers:
pip is still by far the most popular, and unsurprisingly uv is up there too, with about 10% of pip's downloads.
The others are about 25% or less of uv. A lot of them are mirroring services that we wanted to exclude before.
Given uv's importance, my expectation that it will continue to take a bigger share of the pie, and especially the extra cost of filtering for just pip, I think we should switch to fetching data for all installers. Besides, the others don't account for that much of the pie.
Finding: the number of packages doesn't affect the cost
This was the biggest surprise. Earlier I'd been increasing or decreasing the number to try and remain under quota. But it turns out it makes no difference how many packages you query!
I fetched data for just one day and all installers for different package limits: 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000. Sample query:
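The sample query isn't reproduced here; based on the pypinfo command pattern quoted later in the post, it would be along the lines of:

pypinfo --all --json --indent 0 --days 1 --limit 1000 '' project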
Result: Interestingly, the cost is the same for all limits (1000-8000): $0.31.
Repeating with one day but filtering for pip only:
Result: Cost increased to $0.39 but again the same for all limits.
Let's repeat with all installers, but for 30 days, and this time query in decreasing limits, in case we were only paying for incremental changes: 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000:
Result: Again, the cost is the same regardless of package limit: $4.89 per query.
Well then, let's repeat with the limit increasing by powers of ten, up to 1,000,000! This last one fetches data for all 531,022 packages on PyPI:
limit      projects count   estimated cost   bytes billed     bytes processed
1          1                0.20             43,447,746,560   43,447,720,943
10         10               0.20             43,447,746,560   43,447,720,943
100        100              0.20             43,447,746,560   43,447,720,943
1000       1,000            0.20             43,447,746,560   43,447,720,943
8000       8,000            0.20             43,447,746,560   43,447,720,943
10000      10,000           0.20             43,447,746,560   43,447,720,943
100000     100,000          0.20             43,447,746,560   43,447,720,943
1000000    531,022          0.20             43,447,746,560   43,447,720,943

Result: Again, same cost, whether for 1 package or 531,022 packages!
Finding: the number of days affects the cost
No surprise. I'd earlier noticed that 365 days took too much quota, and I could continue with 30 days.
Here's the estimated cost and bytes billed (for one package, all installers) between one and 30 days (f"pypinfo --all --json --indent 0 --days {days} --limit 1 '' project"), showing a roughly linear increase:
Conclusion
It doesn't matter how many packages I fetch data for, so I might as well fetch all of them and make the data available to everyone, depending on the size of the data file. It will still make sense to offer a smaller file with 8,000 or so packages: often you just need a large-ish yet manageable number.
It costs more to filter for only downloads from pip, so I've switched to fetching data for all installers.
The number of days affects the cost, so I will need to decrease this in the future to stay within quota. For example, at some point I may need to switch from 30 to 25 days, and later from 25 to 20 days.
More details from the investigation, plus the scripts and data files, can be found at hugovk/top-pypi-packages#36.
And let me know if you know any tricks to reduce costs!
Header photo: "The Balancing Rock, Stonehenge, Near Glen Innes, NSW" by the Royal Australian Historical Society, with no known copyright restrictions.
Django Weblog: 2024 Malcolm Tredinnick Memorial Prize awarded to Rachell Calhoun
This year it was hard to decide, and we wanted to also show who else got nominated, because they also deserve recognition, so it took a bit longer than we expected.
The Django Software Foundation Board is pleased to announce that the 2024 Malcolm Tredinnick Memorial Prize has been awarded to Rachell Calhoun.
Rachell Calhoun is an influential figure within the Django community, well known for being cheerful and always willing to help others. She consistently empowers folks behind the scenes.
Rachell got her start in the Django community through a Django Girls Seoul event. Being an educator, she started organizing Django Girls Seoul events. Her contributions to Django Girls Seoul and Django Girls Grand Rapids exemplify her commitment to sharing knowledge, spreading Django and lifting others up. Rachell is now a trustee for Django Girls +, contributing to its mission of helping women and other underrepresented groups around the world learn programming with Django.
In 2022, Rachell co-founded Djangonaut Space, an initiative designed to support new contributors to the Django ecosystem, encouraging leadership and growth. Rachell’s willingness to help people achieve their goals and celebrate their achievements has been imprinted in Djangonaut Space’s culture. Rachell and Djangonaut Space have done a stellar job on helping people become contributors and Django community members.
Her commitment to fostering diversity and inclusion extends beyond her organizational work; she has volunteered at multiple DjangoCon US events, bringing her welcoming and inclusive spirit to the community. A long-time volunteer and speaker at DjangoCon US and DjangoCon Europe from 2016 to 2024, she has shared her expertise and insights on various topics related to Django and web development.
Rachell has contributed to Django for many years and has been instrumental in creating spaces where people of all backgrounds can thrive, making her a beloved and respected member of the global Django ecosystem.
Here are some quotes from the thirteen people who nominated Rachell:
Rachell advocates for others constantly through sponsorship, inclusivity, and connection. She is extremely empathic and seeks to not only welcome others in, but to actively bring them into the group.
She has been one of the core members of Djangonaut Space which has done a lot for bringing new contributors into the Django community. This program has done a lot to excite and energize the Django community and Rachell is one of the major reasons why. --
Throughout her career she's been involved in Django Girls starting about a decade ago in South Korea. She was a major organizer of the Grand Rapids, MI branch, before moving into the trustee role she occupies now.
Rachell is one of my favorite people and she's been doing an excellent job at growing Django and helping others feel more welcome here. Rachell is an excellent choice for the Malcolm Tredinnick 2024 award!
— Tim Schilling
Rachell is an extremely skillful leader who is always nurturing newcomers into leaders. She has been pivotal to my experience with the Djangonaut Space Program.
I started out as a nervous Djangonaut who didn’t schedule my 1:1s until Rachell checked in with me and made sure I knew the program was a safe space to discuss anything.
When I joined the program organizers as a Navigator Coordinator, I was initially much more of a follower. Rachell knew to step back while continuing to provide her support, so I could step into the leadership role and get my job done.
Rachell shows people that she believes in them. She does this in a friendly, gentle, and encouraging manner. She never forces anyone to make decisions that they don’t feel comfortable with. The community is really lucky to have Rachell.
— Lilian
Rachell Calhoun, one of the organizers and founders of Djangonaut Space, has been an open, supportive, and educational help on my Django journey. Her contributions to the Djangonaut Space program are invaluable—a program I hold quite dearly as a cornerstone of my technical interactions and growth.
Rachell's ideals of nurturing and guiding have shone through the program, for which I am grateful. Encouraging wonderful conversations, organizing and fostering mentorship, and being a great person!
I believe Rachell is an embodiment of the Malcolm Tredinnick spirit and am confident that should she win the prize, she would go on to create more impact for the Django community and the world at large.
— Emmanuel Katchy
Other nominations for this year included:
Anna Makarudze, Fundraising Coordinator at the Django Girls+ Foundation, chair of the first DjangoCon Africa, previously served on the DSF board as President.
Benjamin Balder Bach, chair of the DSF social media working group, organizer of Django Day Copenhagen for many years.
Black Python Devs, a community founded by Jay Miller to increase diversity and inclusion of typically underrepresented people.
Bhuvnesh Sharma, co-chair of the DSF social media working group, co-founder and organizer of Django India.
Carlton Gibson, previously a Django Fellow, co-host of Django Chat, volunteers at DjangoCon Europe and provides useful advice on the forum and Discord.
Christoph Bulter, active helper on the official and unofficial Django Discord.
Django Girls+, a non-profit organization and a community that empowers and helps women organize free, one-day programming workshops by providing tools, resources, and support.
Django Discord moderators and helpers, who moderate the Discord and help keep the place welcoming and inclusive for everyone.
Daniel Moran, active contributor to various open-source projects, including django-tasks-scheduler, and an administrator of the Django Commons organization.
Ester Beltrami, PyCon Italia and Django London organizer, also a volunteer and speaker at events such as EuroPython and DjangoCon Europe.
Felipe de Morais, co-founder of AfroPython, participant in the Djangonaut Space program, has organized and advised multiple Django Girls workshops across Brazil and Chile.
Jake Howard, speaker and contributor to Django, known for his work around background tasks.
Matt Westcott, frequent speaker and lead developer of Wagtail.
Russell Keith-Magee, Python core contributor, previously a Django core contributor, and former DSF board President.
Ryan Cheley, Django contributor and mentor (Navigator) in the Djangonaut Space program.
Simon Charette, long-time Django contributor, previously a member of the Django 5.x Steering Council.
Sheena O’Connell, frequent speaker and DjangoCon Africa organizer.
Tom Carrick, creator and member of the Django Accessibility team, Django contributor for many years and mentor (Navigator) in the Djangonaut Space program.
Tim Schilling, DEFNA secretary, DjangoCon US organizer and co-founder of Djangonaut Space.
Will Vincent, former board member of the DSF, co-host of Django Chat and co-writer of Django News.
Each year we receive many nominations, and it is always hard to pick the winner. This year, as always, we received many nominations for the Malcolm Tredinnick Memorial Prize, with some people nominated multiple times. Some have been nominated in multiple years. If your nominee didn’t make it this year, you can always nominate them again next year.
Malcolm would be very proud of the legacy he has fostered in our community!
Congratulations Rachell on the well-deserved honor!
GNU Guix: Guix/Hurd on a Thinkpad X60
A lot has happened with respect to the Hurd since our Childhurds and GNU/Hurd Substitutes post. As long as two years ago, some of you were already asking for a progress update, and although there have been rumours of a new blog post for over a year, we were kind of waiting for the rumoured x86_64 support.
With all the exciting progress on the Hurd coming available after the recent (last?) merger of core-updates we thought it better not to wait any longer. So here is a short overview of our Hurd work over the past years:
Update Hurd to 3ff7053, gnumach 1.8+git20220827, and fix build failures,
Initial rumpdisk support, more on this below, which needed to wait for:
A libc specific to Hurd, updating gnumach to 1.8+git20221224 and hurd to 0.9.git20230216,
Some 40 native package build fixes for the Hurd so that all development dependencies of the guix package are now available,
A hack to use Git source in commencement to update and fix cross build and native build for the Hurd,
Support for building guix natively on the Hurd by splitting the build into more steps for 32-bit hosts,
Even nicer offloading support for Childhurds by introducing Smart Hurdloading so that now both the Bordeaux and Berlin build farms build packages for i586-gnu,
Locale fixes for wrong glibc-utf8-locales package used on GNU/Hurd,
More locale fixes to use glibc-utf8-locales/hurd in %standard-patch-inputs,
And even more locale fixes for using the right locales on GNU/Hurd,
A new glibc 2.38 allowing us to do (define-public glibc/hurd glibc)—i.e., once again use the same glibc for Linux and Hurd alike, and: Better Hurd support!,
Creation of hurd-team branch with build fixes, updating gnumach to 1.8+git20230410 and hurd to 0.9.git20231217,
A constructive meeting with sixteen people during the Guix Days just before FOSDEM '24 with notes that contain some nice ideas,
Another new glibc 2.39; even better Hurd support, opening the door to x86_64 support,
Yet another restoring of i586-gnu (32-bit GNU/Hurd) support,
The installer just learnt about the Hurd! More on this below, and finally,
Another set of updates: gnumach (1.8+git20240714), mig (1.8+git20231217), hurd (0.9.git20240714), netdde (c0ef248d), rumpkernel (f1ffd640), and initial support for x86_64-gnu, aka the 64bit Hurd.
Back in 2020, Ricardo Wurmus added the NetDDE package that provides Linux 2.6 network drivers. At the time we didn't get to integrate and use it though and meanwhile it bitrotted.
After we resurrected the NetDDE build, and with the kind help of the Hurd developers, we finally managed to support NetDDE for the Hurd. This allows the usage of the Intel 82573L Gigabit Ethernet Controller of the Thinkpad X60 (and many other network cards, possibly even WIFI). Instead of using the builtin kernel driver in GNU Mach, it runs as a userland driver.
What sparked this development was upstream's NetBSD rumpdisk support that would allow using modern hard disks such as SSDs, again running as a userland driver. Hard disk support built into GNU Mach was once considered to be a nice hack, but it only supported disks up to 128 GiB…
First, we needed to fix the cross build on Guix.
After the initial attempt at rumpdisk support for the Hurd it took (v2) some (v3) work (v4) to finally arrive at rumpdisk support for the Hurd, really, *really* (v5)
Sadly when actually using them, booting hangs:
start: pci.arbiter:

What did not really help is that upstream's rumpkernel archive was ridiculously large. We managed to work with upstream to remove unused bits from the archive. Upstream created a new archive that instead of 1.8 GiB (!) now “only” weighs 670 MiB.
Anyway, after a lot of building, rebuilding, and debugging and some more with kind help from upstream we finally got Rumpdisk and NetDDE to run in a Childhurd.
Initial Guix/Hurd on the Thinkpad X60
Now that the last (!) core-updates merge has finally happened (thanks everyone!), the recipe for installing Guix/Hurd has been much simplified. It goes something along these lines.
Install Guix/Linux on your X60,
Reserve a partition and format it for the Hurd:
mke2fs -o hurd -L hurd /dev/sdaX

In your config.scm, add some code to add GRUB menuentries for booting the Hurd, and mount the Hurd partition under /hurd:
(use-modules (srfi srfi-26)
             (ice-9 match)
             (ice-9 rdelim)
             (ice-9 regex)
             (gnu build file-systems))

(define %hurd-menuentry-regex
  "menuentry \"(GNU with the Hurd[^{\"]*)\".*multiboot ([^ \n]*) +([^\n]*)")

(define (text->hurd-menuentry text)
  (let* ((m (string-match %hurd-menuentry-regex text))
         (label (match:substring m 1))
         (kernel (match:substring m 2))
         (arguments (match:substring m 3))
         (arguments (string-split arguments #\space))
         (root (find (cute string-prefix? "root=" <>) arguments))
         (device-spec (match (string-split root #\=)
                        (("root" device) device)))
         (device (hurd-device-name->device-name device-spec))
         (modules (list-matches "module ([^\n]*)" text))
         (modules (map (cute match:substring <> 1) modules))
         (modules (map (cute string-split <> #\space) modules)))
    (menu-entry
     (label label)
     (device device)
     (multiboot-kernel kernel)
     (multiboot-arguments arguments)
     (multiboot-modules modules))))

(define %hurd-menuentries-regex
  "menuentry \"(GNU with the Hurd[^{\"]*)\" \\{([^}]|[^\n]\\})*\n\\}")

(define (grub.cfg->hurd-menuentries grub.cfg)
  (let* ((entries (list-matches %hurd-menuentries-regex grub.cfg))
         (entries (map (cute match:substring <> 0) entries)))
    (map text->hurd-menuentry entries)))

(define (hurd-menuentries)
  (let ((grub.cfg (with-input-from-file "/hurd/boot/grub/grub.cfg"
                    read-string)))
    (grub.cfg->hurd-menuentries grub.cfg)))

...

(operating-system
  ...
  (bootloader (bootloader-configuration
               (bootloader grub-bootloader)
               (targets '("/dev/sda"))
               (menu-entries (hurd-menuentries))))
  (file-systems (cons* (file-system
                         (device (file-system-label "guix"))
                         (mount-point "/")
                         (type "ext4"))
                       (file-system
                         (device (file-system-label "hurd"))
                         (mount-point "/hurd")
                         (type "ext2"))
                       %base-file-systems))
  ...)

Create a config.scm for your Hurd system. You can get inspiration from bare-hurd.tmpl and inherit from %hurd-default-operating-system. Use grub-minimal-bootloader and add a static-networking-service-type. Something like:
(use-modules (srfi srfi-1) (ice-9 match))
(use-modules (gnu) (gnu system hurd))

(operating-system
  (inherit %hurd-default-operating-system)
  (bootloader (bootloader-configuration
               (bootloader grub-minimal-bootloader)
               (targets '("/dev/sda"))))
  (kernel-arguments '("noide"))
  ...
  (services (cons*
             (service static-networking-service-type
                      (list %loopback-static-networking
                            (static-networking
                             (addresses (list (network-address
                                               (device "eth0")
                                               (value "192.168.178.37/24"))))
                             (routes (list (network-route
                                            (destination "default")
                                            (gateway "192.168.178.1"))))
                             (requirement '())
                             (provision '(networking))
                             (name-servers '("192.168.178.1")))))
             ...)))

Install the Hurd. Assuming you have an ext2 filesystem mounted on /hurd, do something like:
guix system build --target=i586-pc-gnu vuurvlieg.hurd --verbosity=1
sudo -E guix system init --target=i586-pc-gnu --skip-checks \
    vuurvlieg.hurd /hurd
sudo -E guix system reconfigure vuurvlieg.scm

Reboot and...
Hurray!
We now have Guix/Hurd running on a Thinkpad.
Guix/Hurd on Real Iron
While the initial manual install on the X60 was an inspiring milestone, we can do better. As mentioned above, just recently the installer learnt about the Hurd, right after some smaller problems were addressed, like guix system init creating essential devices for the Hurd, not attempting to run a cross-built grub-install to install Grub, soft-coding the hard-coded part:1:device:wd0 root file-system, and adding support for booting Guix/Hurd more than once.
To install Guix/Hurd, first, build a 32bit installation image and copy it to a USB stick:
guix system image --image-type=iso9660 --system=i686-linux gnu/system/install.scm
dd if=/gnu/store/cabba9e-image.iso of=/dev/sdX status=progress
sync

then boot it on a not-too-new machine that has wired internet (although installation over WIFI is possible, there is currently no WIFI support for the installed Hurd to use it). On the new Kernel page:
choose Hurd. Do not choose a desktop environment, that's not available yet. On the Network management page:
choose the new Static networking service. In the final Configuration file step, don't forget to edit:
and fill in your IP and GATEWAY:
You may want to add some additional packages such as git-minimal from (gnu packages version-control) and sqlite from (gnu packages sqlite).
If you also intend to do Guix development on the Hurd—e.g., debugging and fixing native package builds—then you might want to include all dependencies to build the guix package, see the devel-hurd.tmpl for inspiration on how to do that. Note that any package you add must already have support for cross-building.
Good luck, and let us know if it works for you and on what kind of machine, or why it didn't!
What's next?
In an earlier post we tried to answer the question “Why bother with the Hurd anyway?” An obvious question because it is all too easy to get discouraged, to downplay or underestimate the potential social impact of GNU and the Hurd.
The most pressing problem currently is that the guix-daemon fails when invoking guix authenticate on the Hurd, which means that we cannot easily keep our substitutes up to date. guix pull is known to work but only by pulling from a local branch doing something like:
mkdir -p ~/src/guix
cd src/guix
git clone https://git.savannah.gnu.org/git/guix.git master
guix pull --url=~/src/guix/master

kinda like we did it in the old days. Sometimes it seems that guix does not grok the sqlite store database. This needs to be resolved, but as sqlite does read it, this can be easily worked around by doing something like:
mv /var/guix/db/db.sqlite /var/guix/db/db.sqlite.orig
sqlite3 /var/guix/db/db.sqlite.orig .dump > /var/guix/db/db.sqlite.dump
sqlite3 -init /var/guix/db/db.sqlite.dump /var/guix/db/db.sqlite .quit

Also, the boot process still fails to handle an unclean root file system. Whenever the Hurd has suffered an unclean shutdown, cleaning it must currently be done manually, e.g., by booting from the installer USB stick.
Upstream now has decent support for 64bit (x86_64) albeit with more bugs and fewer packages. As mentioned in the overview we are now working to have that in Guix and have made some progress:
more on that in another post. Other interesting tasks for Guix include:
- Have guix system reconfigure work on the Hurd,
- Figure out WiFi support with NetDDE (and add it to installer!),
- An isolated build environment (or better wait for, err, contribute to the Guile guix-daemon?),
- An installer running the Hurd, and,
- Packages, packages, packages!
We tried to make Hurd development as easy and as pleasant as we could. As you have seen, things start to work pretty nicely and there is still plenty of work to do in Guix. In a way this is “merely packaging” the amazing work of others. Some of the real work that needs to be done and which is being discussed and is in progress right now includes:
- Audio support (this is sponsored by NLnet, thanks!),
- Rumpnet,
- SMP,
- AArch64.
All these tasks look daunting, and indeed that’s a lot of work ahead. But the development environment is certainly an advantage. Take an example: surely anyone who’s hacked on device drivers or file systems before would have loved to be able to GDB into the code, restart it, add breakpoints and so on—that’s exactly the experience that the Hurd offers. As for Guix, it will make it easy to test changes to the micro-kernel and to the Hurd servers, and that too has the potential to speed up development and make it a very nice experience.
Join #guix and #hurd on libera.chat or the mailing lists and get involved!
Django Weblog: DjangoCon Europe 2026 call for organizers completed
The DjangoCon Europe 2026 call for organizers is now over. We’re elated to report we received three viable proposals, a clear improvement over recent years.
We’ll let the successful team decide when and how to make their announcement, but in the meantime – thank you to everyone who took part in this process ❤️ We’re elated to have such a strong community in Europe. And for now, look forward to DjangoCon Europe 2025 in Dublin, Ireland! 🍀
What about 2027?We’re not ready to plan that yet, but if you’re interested in organizing – take a moment to add your name and email to our DjangoCon Europe 2027 expression of interest form. We’ll make sure to reach out once the time is right.
Real Python: Python range(): Represent Numerical Ranges
In Python, the range() function generates a sequence of numbers, often used in loops for iteration. By default, it creates numbers starting from 0 up to but not including a specified stop value. You can also reverse the sequence with reversed(). If you need to count backwards, then you can use a negative step, like range(start, stop, -1), which counts down from start to stop.
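As a quick, standalone illustration of the backward variants mentioned above (standard Python, shown here for reference):

>>> list(range(5, 0, -1))
[5, 4, 3, 2, 1]
>>> list(reversed(range(5)))
[4, 3, 2, 1, 0]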
The range() function is not just about iterating over numbers. It can also be used in various programming scenarios beyond simple loops. By mastering range(), you can write more efficient and readable Python code. Explore how range() can simplify your code and when alternatives might be more appropriate.
By the end of this tutorial, you’ll understand that:
- A range in Python is an object representing an interval of integers, often used for looping.
- The range() function can be used to generate sequences of numbers that can be converted to lists.
- for i in range(5) is a loop that iterates over the numbers from 0 to 4, inclusive.
- The range parameters start, stop, and step define where the sequence begins, ends, and the interval between numbers.
- Ranges can go backward in Python by using a negative step value and reversed by using reversed().
A range is a Python object that represents an interval of integers. Usually, the numbers are consecutive, but you can also specify that you want to space them out. You can create ranges by calling range() with one, two, or three arguments, as the following examples show:
Python

>>> list(range(5))
[0, 1, 2, 3, 4]
>>> list(range(1, 7))
[1, 2, 3, 4, 5, 6]
>>> list(range(1, 20, 2))
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

In each example, you use list() to explicitly list the individual elements of each range. You’ll study these examples in more detail later on.
A range can be an effective tool. However, throughout this tutorial, you’ll also explore alternatives that may work better in some situations. You can click the link below to download the code that you’ll see in this tutorial:
Get Your Code: Click here to download the free sample code that shows you how to represent numerical ranges in Python.
Construct Numerical Ranges
In Python, range() is built in. This means that you can always call range() without doing any preparations first. Calling range() constructs a range object that you can put to use. Later, you’ll see practical examples of how to use range objects.
You can provide range() with one, two, or three integer arguments. This corresponds to three different use cases:
- Ranges counting from zero
- Ranges of consecutive numbers
- Ranges stepping over numbers
You’ll learn how to use each of these next.
Count From Zero
When you call range() with one argument, you create a range that counts from zero and up to, but not including, the number you provided:
Python

>>> range(5)
range(0, 5)

Here, you’ve created a range from zero to five. To see the individual elements in the range, you can use list() to convert the range to a list:
Python

>>> list(range(5))
[0, 1, 2, 3, 4]

Inspecting range(5) shows that it contains the numbers zero, one, two, three, and four. Five itself is not a part of the range. One nice property of these ranges is that the argument, 5 in this case, is the same as the number of elements in the range.
Count From Start to Stop
You can call range() with two arguments. The first value will be the start of the range. As before, the range will count up to, but not include, the second value:
Python

>>> range(1, 7)
range(1, 7)

The representation of a range object just shows you the arguments that you provided, so it’s not super helpful in this case. You can use list() to inspect the individual elements:
Python

>>> list(range(1, 7))
[1, 2, 3, 4, 5, 6]

Read the full article at https://realpython.com/python-range/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Real Python: Efficient String Concatenation in Python
Python string concatenation is a fundamental operation that combines multiple strings into a single string. In Python, you can concatenate strings using the + operator or the += operator for appending. For more efficient concatenation of multiple strings, the .join() method is recommended, especially when working with strings in a list. Other techniques include using StringIO for large datasets or the print() function for quick screen outputs.
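The StringIO technique isn't covered further in this excerpt, but here's a minimal sketch of the idea, using only the standard library:

>>> from io import StringIO
>>> buffer = StringIO()
>>> buffer.write("Hello, ")
7
>>> buffer.write("World!")
6
>>> buffer.getvalue()
'Hello, World!'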
By the end of this tutorial, you’ll understand that:
- You can concatenate strings in Python using the + operator and the += operator.
- You can use += to append a string to an existing string.
- The .join() method is used to combine strings in a list in Python.
- You can handle a stream of strings efficiently by using StringIO as a container with a file-like interface.
To get the most out of this tutorial, you should have a basic understanding of Python, especially its built-in string data type.
Get Your Code: Click here to download the free sample code that shows you how to efficiently concatenate strings in Python.
Doing String Concatenation With Python’s Plus Operator (+)
String concatenation is a pretty common operation consisting of joining two or more strings together end to end to build a final string. Perhaps the quickest way to achieve concatenation is to take two separate strings and combine them with the plus operator (+), which is known as the concatenation operator in this context:
Python >>> "Hello, " + "Pythonista!" 'Hello, Pythonista!' >>> head = "String Concatenation " >>> tail = "is Fun in Python!" >>> head + tail 'String Concatenation is Fun in Python!' Copied!Using the concatenation operator to join two strings provides a quick solution for concatenating only a few strings.
For a more realistic example, say you have an output line that will print an informative message based on specific criteria. The beginning of the message might always be the same. However, the end of the message will vary depending on different criteria. In this situation, you can take advantage of the concatenation operator:
Python

>>> def age_group(age):
...     if 0 <= age <= 9:
...         result = "a Child!"
...     elif 9 < age <= 18:
...         result = "an Adolescent!"
...     elif 18 < age <= 65:
...         result = "an Adult!"
...     else:
...         result = "in your Golden Years!"
...     print("You are " + result)
...
>>> age_group(29)
You are an Adult!
>>> age_group(14)
You are an Adolescent!
>>> age_group(68)
You are in your Golden Years!

In the above example, age_group() prints a final message constructed with a common prefix and the string resulting from the conditional statement. In this type of use case, the plus operator is your best option for quick string concatenation in Python.
The concatenation operator has an augmented version that provides a shortcut for concatenating two strings together. The augmented concatenation operator (+=) has the following syntax:
Python

string += other_string

This expression will concatenate the content of string with the content of other_string. It’s equivalent to saying string = string + other_string.
Here’s a short example of how the augmented concatenation operator works in practice:
Python >>> word = "Py" >>> word += "tho" >>> word += "nis" >>> word += "ta" >>> word 'Pythonista' Copied!In this example, every augmented assignment adds a new syllable to the final word using the += operator. This concatenation technique can be useful when you have several strings in a list or any other iterable and want to concatenate them in a for loop:
Python

>>> def concatenate(iterable, sep=" "):
...     sentence = iterable[0]
...     for word in iterable[1:]:
...         sentence += (sep + word)
...     return sentence
...
>>> concatenate(["Hello,", "World!", "I", "am", "a", "Pythonista!"])
'Hello, World! I am a Pythonista!'

Inside the loop, you use the augmented concatenation operator to quickly concatenate several strings in a loop. Later you’ll learn about .join(), which is an even better way to concatenate a list of strings.
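That section isn't included in this excerpt, but the basic .join() pattern is short enough to preview here:

>>> " ".join(["Hello,", "World!", "I", "am", "a", "Pythonista!"])
'Hello, World! I am a Pythonista!'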
Python’s concatenation operators can only concatenate string objects. If you use them with a different data type, then you get a TypeError:
Python >>> "The result is: " + 42 Traceback (most recent call last): ... TypeError: can only concatenate str (not "int") to str >>> "Your favorite fruits are: " + ["apple", "grape"] Traceback (most recent call last): ... TypeError: can only concatenate str (not "list") to str Copied!The concatenation operators don’t accept operands of different types. They only concatenate strings. A work-around to this issue is to explicitly use the built-in str() function to convert the target object into its string representation before running the actual concatenation:
Python >>> "The result is: " + str(42) 'The result is: 42' Copied!By calling str() with your integer number as an argument, you’re retrieving the string representation of 42, which you can then concatenate to the initial string because both are now string objects.
Read the full article at https://realpython.com/python-string-concatenation/ »[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
UX Insights (that we cannot get right now)
After the criticism in the last post about the limitations of KUserFeedback (KUF) for doing data-driven UX work — let’s get more detailed and constructive:
What insights do we as KDE UX people need to do even better than we are currently doing?
Let us start with what we already get from KUF. We get usage data, like how many people are using Wayland vs. X11. But we only get usage data according to our telemetry policy. So we do not get any deeper insight into how users configure their sessions when using Wayland compared to X11. But this is the kind of information we would need to do proper data-driven UX. What settings are users changing? How many users have icons on their desktop, and which ones? Are people manually mounting network drives? Which System Tray icons are interacted with the most? And so on.
But while this information is already impossible to gather with our current approach, we’re only scratching the surface. We need even deeper UX insights, like understanding where people click. And where they click next (in terms of Markov chains). That way we can understand if people are using Plasma the way we intended when we designed it. Or, how long does it take them to get from point A to point B? Are they taking detours because we’ve laid out paths that users don’t understand in the way we intended?
None of these questions can be answered with our current approach to telemetry.
The basic problem is that we currently send all the raw data to the KDE servers to get the answers we need. And the data we need to collect in order to get the above described desired user insights could of course be used to “identify a specific user” – which is not allowed by our telemetry policy for good reason.
And yet we need even more data. We want to target all users, or only users who exhibit certain behaviors. We want them to fill out questionnaires to better understand why they behave the way they do, to understand their goals and intentions. This would be extremely helpful in understanding bug reports. Or to support our design discussions with relevant data from real users.
All of this can only be achieved with a fundamental change in the way we do telemetry.
Existing alternatives, such as the opt-out Endless OS metrics system, also do not allow enough user insights and share the problem that the data leaves the property of the data owners, the users. That is why we have been working on the privact ecosystem, which allows all the insights described above, while fully preserving users’ privacy. And because of that, we can not only ask for more intimate data, but we can also make participation opt-out and so get data from substantially more people. And why is that? Because with the privact ecosystem, there is no technical possibility that any individual’s personal data can ever be shared remotely. Never. But it would finally enable good user-data-driven UX work. For the sake of KDE and our users.
Please also join the discussion about this issue on invent.kde.org.
Edward Betts: A mini adventure at MiniDebConf Toulouse
Last week, I ventured to Toulouse, for a delightful mix of coding, conversation, and crepes at MiniDebConf Toulouse, part of the broader Capitole du Libre conference, akin to the more well-known FOSDEM but with a distinctly French flair.
This was my fourth and final MiniDebConf of the year.
My trek to Toulouse was seamless. I hopped on a bus from my home in Bristol to the airport, then took a short flight. I luxuriated in seat 1A, making me the first to disembark—a mere ten minutes later, I was already on the bus heading to my hotel.
Exploring the Pink CityOnce settled, I wasted no time exploring the charms of Toulouse. Just a short stroll from my hotel, I found myself beside a tranquil canal, its waters mirroring the golden hues of the trees lining its banks. Autumn in Toulouse painted the city in warm oranges and reds, creating a picturesque backdrop that was a joy to wander through. Every corner of the street revealed more of the city's rich cultural tapestry and striking architecture. Known affectionately as 'La Ville Rose' (The Pink City) for its unique terracotta brickwork, Toulouse captivated me with its blend of historical allure and vibrant modern life.
MiniDebCampPrior to the main event, the MiniDebCamp provided two days of hacking at Artilect FabLab—a space as creative as it was welcoming. It was a pleasure to reconnect with familiar faces and forge new friendships.
Culinary delightsThe hospitality was exceptional. Our lunches boasted a delicious array of quiches, an enticing charcuterie board, and a superb selection of cheeses, all perfectly complemented by exquisite petite fours. Each item was not only a feast for the eyes but also a delight for the palate.
Wine and cheeseLeftovers from these gourmet feasts fuelled our impromptu cheese and wine party on Thursday evening—a highlight where informal chats blended seamlessly with serious software discussions.
The river at nightThe enchantment of Toulouse doesn't dim with the setting sun; instead, it transforms. My evening strolls took me along the banks of the Garonne, under a sky just turning from twilight to velvet blue. The river, a dark mirror, perfectly reflected the illuminated grandeur of the city's architecture. Notably, the dome of the Hôpital de La Grave stood out, bathed in a warm glow against the night sky. This architectural gem, coupled with the soft lights of the bridge and the serene river, created a breathtaking scene that was both tranquil and awe-inspiring.
Capitole du LibreThe MiniDebConf itself, part of the larger Capitole du Libre event, was a fantastic immersion into the world of free software. Unlike the ticket-free FOSDEM, this conference required QR codes for entry and even had bag searches, adding an unusual layer of security for a software conference.
Highlights included the crepe-making by the organisers, reminiscent of street food scenes from larger festivals. The availability of crepes for MiniDebConf attendees and the presence of food trucks added a festive air, albeit with the inevitable long queues familiar to any festival-goer.
vélôToulouseThe city's bike rental system was a boon—easy to use with handy bike baskets perfect for casual city touring. I chose pedal power over electric, finding it a pleasant way to navigate the streets and absorb the city's vibrant atmosphere.
MarketsToulouse's markets were a delightful discovery. From a spontaneous visit to a market near my hotel upon arrival, to cycling past bustling marketplaces, each day presented new local flavours and crafts to explore.
The Za'atar flatbread from a Syrian stall was a particularly memorable lunch pick.
La brasserie Les ArcadesOur conference wrapped up with a spontaneous gathering at La Brasserie Les Arcades in Place du Capitole. Finding a café that could accommodate 30 of us on a Sunday evening without a booking felt like striking gold. What began with coffee and ice cream smoothly transitioned into dinner, where I enjoyed a delicious braised duck leg with green peppercorn sauce. This meal rounded off the trip with lively conversations and shared experiences.
The journey back homeReturning from Toulouse, I found myself once again in seat 1A, offering the advantage of being the first off the plane, both on departure and arrival. My flight touched down in Bristol ahead of schedule, and within ten minutes, I was on the A1 bus, making my way back into the heart of Bristol.
Anticipating DebConf 25 in BrittanyMy trip to Toulouse for MiniDebConf was yet another fulfilling experience; the city was delightful, and the talks were insightful. While I frequently travel, these journeys are more about continuous learning and networking than escape. The food in Toulouse was particularly impressive, a highlight I've come to expect and relish on my trips to France. Looking ahead, I'm eagerly anticipating DebConf in Brest next year, especially the opportunity to indulge once more in the excellent French cuisine and beverages.
GNU Artanis: new title
Zero to Mastery: Python Monthly Newsletter 💻🐍
Seth Michael Larson: How do I pay the publisher of a web page?
Published 2024-11-24 by Seth Larson
Here's an unanswered question:
I have money and I have a URL, how do I send money to the publisher of that URL?
URLs tell you where to get content on the web, but they don't tell you anything about how to support the person who created the content. This story might sound similar to paying open source maintainers, where a user can almost abstract an entire project to a single download URL.
There are tons of people creating content for the web and plenty of ways to get paid (Patreon, Kofi, GitHub Sponsors, YouTube Paid Membership), but there's no standardized way to direct someone interested in paying for the content of a page in the right direction.
We have HTML meta headers for many things, including where to find an RSS feed or what my Fediverse handle is, but none for enumerating options to pay the creator of the content. I wish I could click a button to easily send a "tip" to someone who created something I enjoy or to browse other options for supporting them.
Existing technology
Payment Request API
There are things like the web "Payment Request API" which gives you a JavaScript API for generating a payment, but this doesn't fit my criteria.
For one: this means that every person creating content for the web needs to add JavaScript to their page. This is a much higher bar than simply linking to existing payment methods that a creator already likely uses to get paid. Being difficult means it's unlikely for large numbers of people to do the work.
I also don't see being able to automate this because of the JavaScript. Web creators likely have existing payment pages that they'd much rather link out to instead of trying to handle payments themselves individually.
Lastly, this API exists and I don't see it being used by creators today. That should say something about either its ease-of-use or return on investment from potential supporters.
Linking to payment methods in the page
Yeah, we could scrape the payment URLs we know about embedded in the page. But there's a difference between potential URLs in the page due to non-creator generated content (links in comments, etc) and whatever the "authoritative" URLs are for paying the creator of the page. Being able to set <meta> tags in <head> is typically a higher bar than setting arbitrary URLs in the <body>.
Podcasting 2.0 RSS <podcast:funding> tag
Podcasting 2.0 supports basically the exact tag that I want to use which encodes a URL and a human-readable name for that URL into the metadata description of a podcast publication. Really great to see some prior art here.
Thanks to DamonHD for sending me this reference.
Flattr
Flattr is a service that tried to turn a "subscription" from users into micro-payouts based on a user's browsing history. Flattr shut down in 2023. This approach isn't one I'm interested in replicating for a few reasons:
- Access to the entire browsing history feels like a privacy nightmare. Yes you receive a "complete" sample of which web pages a user has visited but, yeah this doesn't seem great?
- Flattr tried to create its own payment platform for creators, rather than pushing users to send monetary support through existing stable payment methods like Patreon. They had to do this because of the micro-payments thing.
In general this is making me think micro-payments is extremely hard to do. I think having a handful of dedicated fans for small creators might be enough to "offset" the "loss" of micro-payments? Perhaps there can be a recommendation to note to users when certain creators are "niche" and therefore are receiving fewer payments relative to other creators and thus would benefit more from a contribution / boost?
Thanks to Quentin for sending me this reference.
Brave
I know about Brave, and I would like to avoid crypto in my solution. Also many of the creators I pay for don't use crypto but do have multiple payment methods. I don't think the solution should require creators AND users adopt new technology to work.
What happens now?
I'm no stranger to standards, so maybe I do some research and write a web standard proposal? Seems like fun! I'm imagining something like:
<head>
  <!-- ... -->
  <meta property="financial-support" content="https://patreon.com/c/MatthewCarlson">
</head>

Because this is primarily for money, no doubt it will be abused to hell. First-party browsers probably wouldn't do anything with this information for the fear of legitimizing scammers' fake profiles.
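For completeness, here's a toy sketch of how a client or crawler might read such a tag with Python's standard library; the "financial-support" property name is only the hypothetical one proposed above:

from html.parser import HTMLParser

class FundingLinkParser(HTMLParser):
    # Collect URLs from the proposed "financial-support" <meta> tags.
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property") == "financial-support":
            self.urls.append(attrs.get("content"))

parser = FundingLinkParser()
parser.feed('<head><meta property="financial-support" '
            'content="https://patreon.com/c/MatthewCarlson"></head>')
print(parser.urls)  # ['https://patreon.com/c/MatthewCarlson']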
The existence of the "Web Payments API" makes me think maybe it's not a huge deal and that whenever money gets involved peoples' spidey-senses start going off about whether a page is legitimate? Not sure.
Let me know what you think!
Have thoughts or questions? Let's chat over email or social:
sethmichaellarson@gmail.com
@sethmlarson@fosstodon.org
Want more articles like this one? Get notified of new posts by subscribing to the RSS feed or the email newsletter. I won't share your email or send spam, only whatever this is!
Want more content now? This blog's archive has ready-to-read articles. I also curate a list of cool URLs I find on the internet.
Find a typo? This blog is open source, pull requests are appreciated.
Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0
Welcome to My Blog
Heyho together!
I am from now on writing my posts on GitHub pages. Apart from it being useful to keep my posts versioned using git, I had some issues with my previous blog. The idea was to simply use write.as and publish a post from time to time. This worked well except for more than a month ago me wanting to do a post about my KRunner plugins. It naturally contained a lot of links and thus the publishing was prevented and even the account blocked due to apparent spam. There was no response via mail for over a month.
So here we are, now on another blog, where I hopefully write more often and am also able to spend more time on KDE!
gnuboot @ Savannah: GNU Boot November 2024 News
A lot has changed since the last two news posts from the GNU Boot project.
People involved in the GNU Boot project will be organizing a 100% free
software install party within a bigger event that also has a regular
install party. There will also be a presentation about 100% free
software there. The event will be mainly in French.
More details are available in French and in English at the following
link:
https://lists.gnu.org/archive/html/help-guix/2024-11/msg00112.html
Many changes have been made since RC3, and since then we fixed an
important bug that prevented Trisquel from booting (if during the
Trisquel installation you chose "LVM2" and didn't encrypt the
storage, GNU Boot images with GRUB would not find the Trisquel
installation).
Because of that we decided to do a new RC4 (release candidate 4)
and to publish new GNU Boot images.
There is still some work needed before doing a 0.1 release, as we want
to make it easier for less technical users to install and use GNU
Boot, but more and more of the project structure is getting in place
(website, manual, automatic tests, Guix, good development procedures,
enabling builds on all distributions, etc.), which makes it easier
to contribute.
We also decided to use Guix for more of the software components
we build, and since this is a big change, we will need people to
help more with testing.
The last announcement we made was "Nonfree software found in GNU Boot
releases again, many distros affected"[1].
Some people misunderstood it (maybe we could have been more clear):
the nonfree software that we found was code that GNU Boot didn't use,
so it was easy to remove and it didn't affect the supported devices in
any way.
Finding nonfree software in a 100% free distribution is also common:
this is part of the work to ensure these distributions remain 100%
free.
The first time it happened in GNU Boot we publicized it to explain why
we were re-releasing some of the GNU Boot files, as it could be very
scary if this happened without any public communication.
The second time, we published a news post about it mainly to help
propagate the information to the affected distributions, and this is
probably why it was misunderstood: it was mainly targeted at GNU Boot
users and maintainers of the affected packages. We also contacted
upstream and some affected distributions directly, but contacting
everybody takes a lot of time, so having a news post about it helps.
At least Debian and Trisquel fixed the issue, but we still need to
contact some distributions.
After that, and probably thanks to the previous news, Leah Rowe
contacted us on one of the GNU Boot mailing lists[2] to notify us that
she had found additional, similar nonfree software in GNU Boot.
We confirmed that, promptly removed it, and re-made the source
release again. And here again, even if the work was delayed a bit,
this was fast to do and it doesn't affect the supported devices in any
way.
But we also need help contacting distributions again, because one of
the issues she found is very serious: it affects many distributions
and also important devices that GNU Boot doesn't support.
The ARM trusted firmware ships a nonfree hdcp.bin binary in its source
code. ARM trusted firmware is a dependency of u-boot that is used to
support many ARM computers in other distributions (like Guix, Debian,
etc).
As contacting affected distributions is a tedious task, we also need
help to propagate the information and contact them, especially because
we don't know whether Leah intends to do that work (so far she hasn't
replied when asked twice about it), so it's probably up to the GNU Boot
community as a whole (which also includes its maintainers and the
readers of this news) to help here.
The details are in the commit 343515aee7ef34695ac45830fad419d9562f9c15
("coreboot: blobs.list: arm-trusted-firmware: Remove RK3399 hdcp.bin
firmware.") in the GNU Boot source code[3].
[1]https://savannah.gnu.org/news/?id=10684
[2]https://lists.gnu.org/archive/html/gnuboot-patches/2024-10/msg00028.html
[3]https://git.savannah.gnu.org/cgit/gnuboot.git/commit/?id=343515aee7ef34695ac45830fad419d9562f9c15
Jordán (isf) has been contributing some Spanish translations of the
most important website pages (the landing, status and how to
contribute pages). This is important as it could help get more
contributors. These contributions also helped us improve the process
for accepting pseudonymous contributions and enabled us to fix issues.
The work on improving the website in general also continued. Many of
the website pages were reviewed and improved (there is a lot of work
there and mentioning it all would make the news way too long).
The website also now shows the git revision from which it is built,
and we helped the FSF fix some server configuration that created
issues with the deployment of the GNU Boot website (more details are in
the commit message[1]) by reporting the issue to them and testing the
fix.
Patches for making a manual are also being reviewed. While there isn't
much in the manual yet, it also enables us to better organize the
documentation and it has the potential to make GNU Boot more
accessible to less technical people.
The next goals are to look at how to merge part of the website into the
manual and to continue improving both the website and the manual.
[1]https://git.savannah.gnu.org/cgit/gnuboot.git/commit/?id=d1df672383f6eb8d4218fdef7fbe9ec5e41803e4
We now have the ability to verify the source code when downloading it
from git. This is important to avoid certain types of attacks, and it
also enables us to write code that automatically downloads, verifies
and builds the GNU Boot source code.
The source can be verified with the following command (it requires
having Guix installed):
$ guix git authenticate $(git rev-parse HEAD) \
"E23C 26A5 DEEE C5FA 9CDD D57A 57BC 26A3 6871 16F6" \
-k origin/keyring
If the authentication works it will print a message like this:
guix git: successfully
authenticated commit 05c09293d9660ea1f26b5b705a089b466a0aa718
The 05c09293d9660ea1f26b5b705a089b466a0aa718 might be different in
your case.
The "E23C 26A5 DEEE C5FA 9CDD D57A 57BC 26A3 6871 16F6" part in the
command above is Adrien Bourmault (neox)'s GPG key.
How to use that will be documented more in depth in the upcoming GNU
Boot manual that is currently being reviewed. Its importance will also
be explained in more detail for people not familiar with the security
issues it's meant to solve. Also note that we welcome help for
reviewing patches.
The GNU Boot source code has a complex history. It is based on the
last fully free software releases of Libreboot. And the Libreboot
source code history is very complex.
We found some missing authorship information in some of the files that
come from Libreboot, and so we started reconstructing such information
from the various git repositories that were used at some point by
Libreboot or some of the projects it was based on.
To help with this task we also added a page on the GNU Boot website
(https://www.gnu.org/software/gnuboot/web/docs/history/) to track the
status of the reconstruction of the missing authorship and to document
the GNU Boot source code history.
GNU Boot is just a distribution and like most distributions, it tries
to collaborate with various upstream projects whenever possible.
Since GNU Boot relies on Guix, we improved the Guix documentation
directly to help people install Guix on Trisquel and Parabola. We also
helped Trisquel fix security issues in the Guix package by bug
reporting and testing fixes (some bugs still need to be fixed in
Parabola and Debian, and reporting issues upstream takes time).
Since we also advise using PureOS or Trisquel to build GNU Boot, we
also enabled people with Guix to produce PureOS or Trisquel chroots
with Debootstrap. This was done through contributions to Debootstrap
and to the Guix Debootstrap package. We could then mention that in the
GNU Boot build documentation
(https://www.gnu.org/software/gnuboot/web/docs/build/) and added a
script (in contrib/start-guix-daemon.py) to support building GNU Boot
in chroots. However, there are still issues with the build in chroots
that need to be fixed to produce all released files. Instructions on
how to build in chroots are also lacking.
In addition we also added the ability to build GNU Boot with Trisquel
11 (aramo).
An apt-cacher-ng package was also contributed to Guix upstream, as it
can then be used to speed up one of the automatic tests used in GNU
Boot, but support for apt-cacher-ng has not been integrated in GNU
Boot yet. Last year we also contributed a GRUB package to Guix, but we
haven't had the occasion to use it yet. It will probably happen soon
though.
How to build GNU Boot has changed a lot since GNU Boot 0.1 RC3.
Before, Guix could only optionally be used to build the website.
In addition to that, Guix is now integrated into the build system, so
we can now rely on Guix packages to build GNU Boot images. This also
means that you need to install Guix to build GNU Boot images.
We currently use Guix packages to build some tests. We also build some
installation utilities for the I945 ThinkPads (ThinkPad X60, X60s,
X60T and T60), but we don't yet have documentation for less technical
people on how to use them. We would also need help testing these
computers, as we have no idea whether they still work fine or which
fully free distributions still work on them in practice.
We now also support the './configure' and 'make' commands to build GNU
Boot, but not yet the 'make install' command, as making that work would
require adapting many of the scripts that are used during the build.
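Concretely, building from the source tree now follows the familiar
pattern (note that 'make install' is not supported yet):

$ ./configure
$ make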
There is also less visible work that was done, like cleaning up a lot
of code, adding tests for code quality, documenting the GNU Boot
source code structure a bit, and so on.
Work on making GNU Boot reproducible also started. See
https://reproducible-builds.org/ or
https://en.wikipedia.org/wiki/Reproducible_builds for more detail on
the issue.
We took an extremely strict approach: we put the checksums of some of
the things we build directly into GNU Boot and verify those checksums
during compilation. This enables us to automatically detect
reproducibility issues without having to do anything.
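As a minimal illustration of the idea (this is not the actual GNU Boot
build code; the file path and checksum below are placeholders), a build
step can hash an artifact and abort if it does not match a checksum
pinned in the source tree:

import hashlib
import sys

def sha256_of(path):
    # Hash the file in chunks so large images do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_sha256):
    # Abort the build if the built artifact does not match the pinned checksum.
    actual = sha256_of(path)
    if actual != expected_sha256:
        sys.exit(f"{path}: checksum mismatch\n  expected {expected_sha256}\n  got      {actual}")

# Placeholder path and checksum, purely for illustration.
verify("build/artifact.img",
       "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")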
We started enabling that for easy things, and we also added the
infrastructure to use it in Guix packages, by validating one of the
packages we use during automatic testing.
However, at one point this Guix package stopped being
reproducible. Since we wanted to keep that code (especially as it was
a good example of how to do it), we fixed the bug instead of
removing the test.
This then helped us detect a very subtle and interesting bug in one of
the components we use for automatic tests.
The bug could not be caught during testing because some time
information stored inside the FAT32 file system has a granularity of a
day, and since all the testing happened the same day, it was caught
only later on.
This bug was then fixed and the details are in the fix[1]. A bug
report was also opened upstream because bugs were found in diffoscope
along the way[2]. We still need to do some testing though to
understand if the bug is in diffoscope or one of the underlying
libraries (libguestfs) and then to report the remaining bugs to the
distributions we used during this work.
We also made it easier to update the checksum in the Guix package. If
you package software with Guix, this change is also a good example of
how not to break the '--without-tests' option when you override the
tests in the package you contribute. The commit message[3] and the
change have more details and references on all that.
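For context, '--without-tests' is one of Guix's package transformation
options; it asks Guix to build a package with its test suite skipped.
A minimal illustrative invocation (the package name is a placeholder):

$ guix build some-package --without-tests=some-package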
[1]https://git.savannah.gnu.org/cgit/gnuboot.git/commit/?id=4c3de49fbb3b43940b43f8fdccc8e51ee7df8f46
[2]https://salsa.debian.org/reproducible-builds/diffoscope/-/issues/390
[3]https://git.savannah.gnu.org/cgit/gnuboot.git/commit/?id=40fcb94e2f7ab1df8d320f78311e623f801d8602
WodeShengli reported a very important bug[1]: GNU Boot images with
GRUB can't find LVM2 partitions if the partition itself is not
encrypted. For instance, if you have LVM2 and no encryption at all, or
if the disk is encrypted and on top of that you have LVM2, GNU Boot
will not find the partition.
Since this is an extremely serious usability issue (because images
with GRUB are supposed to work out of the box), we spent time fixing
it.
The issue was that the GRUB configuration we ship hardcoded the names
of the LVM volumes to try to boot from. Fixing it required being able
to loop over all the partitions that are found, but we found no command
to do that in GRUB (which is probably why the LVM partition names were
hardcoded in the first place).
So we started adding GRUB command options to do that, but while the
code worked fine, it didn't integrate well in GRUB. So we contacted the
GRUB project looking for help, as we would have needed to upstream our
command option in GRUB anyway.
And we were told that GRUB already had a way to do what we were
looking for, so we used that to fix the issue.
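As a purely illustrative sketch (not necessarily the mechanism the GNU
Boot grub.cfg ended up using), GRUB's built-in 'search' command can
locate whichever device contains a given file instead of relying on a
hardcoded LVM volume name:

insmod lvm
# Illustration only: find the device holding the installed distribution's
# GRUB configuration rather than assuming a specific LVM volume name.
search --no-floppy --set=root --file /boot/grub/grub.cfg
configfile /boot/grub/grub.cfg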
We also added tests that automatically download the Trisquel installer,
install Trisquel with LVM2, and test whether GNU Boot can boot the new
Trisquel installation[2].
While this test is skipped for 32-bit computers, it is still good to
have, as some people will run it. The test also paves the way to add
more tests that would enable us to improve the GRUB configuration
further without breaking the boot.
[1]https://savannah.gnu.org/bugs/index.php?65663
[2]https://git.savannah.gnu.org/cgit/gnuboot.git/commit/?id=860b00bf1e798d86c8bb2a70d77633599dfa1da2
[3]https://git.savannah.gnu.org/cgit/gnuboot.git/commit/?id=9cc02ddde1e164fabfbddc8bbd3832ef9468d92d