Feeds
Specbee: A practical guide to Personalization with user personas (sample campaigns included!)
Vasudev Kamath: Note to Self: Enabling Secure Boot with UKI on Debian
Note
This post is a continuation of my previous article on enabling the Unified Kernel Image (UKI) on Debian.
In this guide, we'll implement Secure Boot by taking full control of the device, removing preinstalled keys, and installing our own. For a comprehensive overview of the benefits and process, refer to this excellent post from rodsbooks.
Key Components
To implement Secure Boot, we need three essential keys:
- Platform Key (PK): The top-level key in Secure Boot, typically provided by the motherboard manufacturer. We'll replace the vendor-supplied PK with our own for complete control.
- Key Exchange Key (KEK): Used to sign updates for the Signatures Database and Forbidden Signatures Database.
- Database Key (DB): Used to sign or verify binaries (bootloaders, boot managers, shells, drivers, etc.).
There's also a Forbidden Signatures Database (dbx), the counterpart of the DB: it lists signatures and hashes of binaries the firmware must refuse to load. We won't be generating a dbx in this guide.
Preparing for Key Enrollment
Before enrolling our keys, we need to put the device in Secure Boot Setup Mode. Verify the status using the bootctl status command. You should see output similar to the following image:
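If you prefer a quick text check over comparing against a screenshot, the relevant line can be grepped directly (the exact wording varies between systemd versions):

bootctl status | grep -i "secure boot"
# While in Setup Mode this typically reads something like: Secure Boot: disabled (setup)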
Generating Keys
Follow these instructions from the Arch Wiki to generate the keys manually. You'll need the efitools and openssl packages. I recommend using rsa:2048 as the key size for better compatibility with older firmware.
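As a rough sketch of that procedure (subject names are placeholders; per the Arch Wiki, KEK.auth is signed with the PK and db.auth with the KEK):

# Owner GUID used to tag the keys
uuidgen --random > GUID.txt
# Platform Key (PK); repeat analogously for KEK and db
openssl req -newkey rsa:2048 -nodes -keyout PK.key -new -x509 -sha256 \
    -days 3650 -subj "/CN=My Platform Key/" -out PK.crt
cert-to-efi-sig-list -g "$(< GUID.txt)" PK.crt PK.esl
sign-efi-sig-list -g "$(< GUID.txt)" -k PK.key -c PK.crt PK PK.esl PK.auth
# KEK.auth is then signed with PK.key/PK.crt, and db.auth with KEK.key/KEK.crt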
After generating the keys, copy all .auth files to the /efi/loader/keys/<hostname>/ folder. For example:
❯ sudo ls /efi/loader/keys/chamunda
db.auth  KEK.auth  PK.auth

Signing the Bootloader
Sign the systemd-boot bootloader with your new keys:
sbsign --key <path-to db.key> --cert <path-to db.crt> \
    /usr/lib/systemd/boot/efi/systemd-bootx64.efi

Install the signed bootloader using bootctl install. The output should resemble this:
Note
If you encounter warnings about mount options, update your fstab with the `umask=0077` option for the EFI partition.
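For illustration, the corresponding /etc/fstab entry for the ESP might look like this (the UUID is a placeholder for your own partition):

# ESP mounted at /efi with restrictive permissions
UUID=XXXX-XXXX  /efi  vfat  umask=0077  0  1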
Verify the signature using sbverify:
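For example, assuming the signed binary was installed to the ESP's default systemd-boot location:

sbverify --cert /path/to/db.crt /efi/EFI/systemd/systemd-bootx64.efi
# Prints "Signature verification OK" on success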
Configuring UKI for Secure Boot
Update the /etc/kernel/uki.conf file with your key paths:
SecureBootPrivateKey=/path/to/db.key
SecureBootCertificate=/path/to/db.crt

Signing the UKI Image
On Debian, use dpkg-reconfigure to sign the UKI image for each kernel:
sudo dpkg-reconfigure linux-image-$(uname -r)
# Repeat for other kernel versions if necessary

You should see output similar to this:
sudo dpkg-reconfigure linux-image-$(uname -r)
/etc/kernel/postinst.d/dracut:
dracut: Generating /boot/initrd.img-6.10.9-amd64
Updating kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukicc7vcxhy --output /tmp/kernel-install.staging.QLeGLn/uki.efi
Wrote signed /tmp/kernel-install.staging.QLeGLn/uki.efi
/etc/kernel/postinst.d/zz-systemd-boot:
Installing kernel version 6.10.9-amd64 in systemd-boot...
Signing unsigned original image
Using config file: /etc/kernel/uki.conf
+ sbverify --list /boot/vmlinuz-6.10.9-amd64
+ sbsign --key /home/vasudeva.sk/Documents/personal/secureboot/db.key --cert /home/vasudeva.sk/Documents/personal/secureboot/db.crt /tmp/ukit7r1hzep --output /tmp/kernel-install.staging.dWVt5s/uki.efi
Wrote signed /tmp/kernel-install.staging.dWVt5s/uki.efi

Enrolling Keys in Firmware
Use systemd-boot to enroll your keys:
systemctl reboot --boot-loader-menu=0

Select the enroll option with your hostname in the systemd-boot menu.
After key enrollment, the system will reboot into the newly signed kernel. Verify with bootctl:
Dealing with Lockdown Mode
Secure Boot enables lockdown mode on distro-shipped kernels, which restricts certain features like kprobes/BPF and DKMS drivers. To avoid this, consider compiling the upstream kernel directly, which doesn't enable lockdown mode by default.
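To see which lockdown mode the running kernel enforces, you can query the securityfs interface (the bracketed entry is the active mode):

cat /sys/kernel/security/lockdown
# e.g. "none [integrity] confidentiality" while integrity lockdown is enforced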
As Linus Torvalds has stated, "there is no reason to tie Secure Boot to lockdown LSM." You can read more about Torvalds' opinion on UEFI tied with lockdown.
Next Steps
One thing that remains is automating the signing of systemd-boot on upgrade, which is currently a manual process. I'm exploring dpkg triggers for achieving this, and if I succeed, I will write a new post with details.
Acknowledgments
Special thanks to my anonymous colleague who provided invaluable assistance throughout this process.
parallel @ Savannah: GNU Parallel 20240922 ('Gold Apollo AR924') released
GNU Parallel 20240922 ('Gold Apollo AR924') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
Recently executed a flawless live data migration of ~2.4pb using GNU parallel for scale and bash scripts.
-- @mechanicker@twitter Dhruva
New in this release:
- --fast disables a lot of functionality to speed up running jobs.
- Bug fixes and man page updates.
News about GNU Parallel:
- Job requiring GNU Parallel knowledge https://www.capgemini.com/ca-en/jobs/Id6D4pEBZ6aB2WPS2aAJ/systems-engineer/
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
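For instance, input arriving on a pipe can be chopped into blocks and fed to parallel workers (the file name is just an example):

# Count lines in ~10 MB blocks of a large log file, in parallel
cat access.log | parallel --pipe --block 10M wc -l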
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
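A minimal sketch of replacing a shell loop:

# Sequential loop
for f in *.log; do gzip "$f"; done
# Same work run in parallel, one job per CPU core by default
parallel gzip ::: *.log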
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
- Give a demo at your local user group/team/colleagues
- Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
- Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
- Request or write a review for your favourite blog or magazine
- Request or build a package for your favourite distribution (if it is not already there)
- Invite me for your next conference
If you use programs that use GNU Parallel for research:
- Please cite GNU Parallel in your publications (use --citation)
If GNU Parallel saves you money:
- (Have your company) donate to FSF https://my.fsf.org/donate/
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
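For example (a sketch with placeholder credentials and database names):

# Run a single query against a MySQL database addressed by a DBURL
sql mysql://user:password@dbhost/mydb "SELECT COUNT(*) FROM users;"
# With no command given, you get the database's interactive shell
sql mysql://user:password@dbhost/mydb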
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
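For example, assuming niceload's --load option to set the load limit (the wrapped command is just a placeholder):

# Only let the job run while the load average stays below 4
niceload --load 4 rsync -a /home/ /backup/home/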
MidCamp - Midwest Drupal Camp: Join us to help plan MidCamp 2025
Please join us for our first MidCamp 2025 planning meeting!
Why come?
Because we value giving back to the Drupal community and this is one way you can do that.
What should I expect?
That's mostly up to you -- there are a lot of roles and skillsets needed to put on a conference like MidCamp. Regardless of what you do day-to-day, you can find a fit.
What if I don't live in Chicago?
That's OK! Planning is done remotely. A good portion of the planning team doesn't live in or near Chicago. People join because they care about Drupal and want to help make MidCamp happen.
How to join?
We'll share the Zoom link via Meetup. We also welcome you to join the #midcamp-organizers channel on our Slack team: https://mid.camp/slack
mark.ie: Need to hire Drupal developers? I can help you
Today I launched a new service, matching available Drupal developers with recruiters and agencies that are hiring.
Jonathan McDowell: The (lack of a) return-to-office conspiracy
During COVID companies suddenly found themselves able to offer remote working where it hadn’t previously been on offer. That’s changed over the past 2 or so years, with most places I’m aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week.
I’ve seen a lot of folk stating they’ll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I’d been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let’s talk about some of the reasons why companies might want to enforce some sort of RTO.
Real estate value
Let’s clear this one up first. It’s not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they’d close it in an instant all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent and would save money even if there’s a substantial exit fee.
Occupancy levels
That said, once you have anyone in the building the equation changes. If you’re having to provide power, heating, internet, security/front desk staff etc, you want to make sure you’re getting your money’s worth. There’s no point heating a building that can seat 100 for only 10 people present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems about ensuring there are enough desks for everyone who might turn up on a certain day, and you’ve ruled out the option of company/office wide events.
Coexistence builds relationships
As a remote worker I wish it wasn’t true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific “teambuilding” style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it’s much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I’m just not in the office regularly enough. You might not care about this (“I just need to put my head down and code, not talk to people”), but don’t discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn’t ready for it yet).
Coexistence allows for unexpected interactions
People hate the phrase “water cooler chat”, and I get that, but it covers the idea of casual conversations that just won’t happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I’ve found many places end up with folk taking conversations to DMs, or they happen in “private” channels. It happens more naturally in an office environment.
It’s easier for bad managers to manage bad performers
Again, this falls into the category of things that shouldn’t be true, but are. Remote working has increased the ability for people who want to slack off to do so without being easily detected. Ideally what you want is that these folk, if they fail to perform, are then performance managed out of the organisation. That’s hard though, there are (rightly) a bunch of rights workers have (I’m writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.
Summary
Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I’ve tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn’t exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.
The Drop Times: Government Website Usability: Insights from DrupalCon Portland
Event Organizers: Connect with Event Organizers at DrupalCon Barcelona '24
There are many opportunities to connect with fellow event organizers throughout the week at DrupalCon Barcelona 2024. The Event Organizer Working Group also has an open call for board nominations until October 15. Join us and help shape the future of Drupal Community Events.
All Week
Local Associations Booth
Expo Hall
Visit with the Network of European Drupal Associations (NEDA) and other event organizers in the Expo Hall. Be sure to bring some of your stickers and swag to share with the community!
- Tuesday, September 24, 2024 - 17:30 to 18:15
- Wednesday, September 25, 2024 - 08:45 to 09:30
- Wednesday, September 25, 2024 - 12:30 to 13:15
- Thursday, September 26, 2024 - 15:15 to 16:00
Open Meeting via Slack, second Tuesday of each month!
Tuesday, October 8 starting at 16:00 UTC / 12:00 pm ET.
The meeting will stay open for 24 hours to allow participation across all time zones.
- Initiative Updates
- Camp Reports
- DrupalCon Report
Join us to discuss these and other topics in the #event-organizers channel.
If there is something you want to share or discuss related to your camp, meetup, or other event organizer topics, either leave a message in the Slack channel or comment on the Event Organizer issue queue.
Horizontal Digital Blog: Drupal as a prototyping tool to rapidly build a proof of concept
Real Python: Python Virtual Environments: A Primer
In this tutorial, you’ll learn how to work with Python’s venv module to create and manage separate virtual environments for your Python projects. Each environment can use different versions of package dependencies and different versions of Python.
Once you’ve learned to work with virtual environments, you’ll be able to help other programmers reproduce your development setup and make sure that your projects never create dependency conflicts.
By the end of this tutorial, you’ll know how to:
- Create and activate a Python virtual environment
- Explain why you want to isolate external dependencies
- Visualize what Python does when you create a virtual environment
- Customize your virtual environments using optional arguments to venv
- Deactivate and remove virtual environments
- Choose additional tools for managing your Python versions and virtual environments
Working with virtual environments is a common and effective technique used in Python development. Gaining a better understanding of how they work, why you need them, and what you can do with them will help you master your Python programming workflow.
Throughout the tutorial, you can select code examples for either Windows, Ubuntu Linux, or macOS. Pick your platform at the top right of the relevant code blocks to get the commands that you need, and feel free to switch between them if you want to learn how to work with virtual environments on other operating systems.
Take the Quiz: Test your knowledge with our interactive “Python Virtual Environments: A Primer” quiz. You’ll receive a score upon completion to help you track your learning progress.
How Can You Work With a Python Virtual Environment?
If you just need to get a virtual environment up and running to continue working on your favorite project, then this section is for you.
This tutorial uses Python’s venv module to create virtual environments. This module is part of Python’s standard library, and it’s been the officially recommended way to create virtual environments since Python 3.5.
Note: There are other great third-party tools for creating virtual environments, such as conda and virtualenv, that you’ll learn more about later in this tutorial. Either of these tools can help you set up a virtual environment and also go beyond just that.
For basic usage, venv is an excellent choice because it already comes packaged with your Python installation. With that in mind, you’re ready to create your first virtual environment.
Create It
Any time you're working on a Python project that uses external dependencies you're installing with pip, it's best to first create a virtual environment:
Windows PowerShell
PS> py -m venv venv\
This command allows the Python launcher for Windows to select an appropriate version of Python to execute. It comes bundled with the official installation and is the most convenient way to execute Python on Windows.
You can bypass the launcher and run the Python executable directly using the python command, but if you haven’t configured the PATH and PATHEXT variables, then you might need to provide the full path:
Windows PowerShell
PS> C:\Users\Name\AppData\Local\Programs\Python\Python312\python -m venv venv\
The system path shown above assumes that you installed Python 3.12 using the Windows installer provided by the Python downloads page. The path to the Python executable on your system might be different. Working with PowerShell, you can find the path using the where.exe python command.
Note: You don’t need to include the backslash (\) at the end of the name of your virtual environment, but it’s a helpful reminder that you’re creating a folder.
Shell
$ python3 -m venv venv/
Many Linux operating systems ship with a version of Python 3. If python3 doesn't work, then you'll have to first install Python and you may need to use the specific name of the executable version that you installed, for example, python3.12 for Python 3.12.x. If that's the case for you, remember to replace mentions of python3 in the code blocks with your specific version number.
Note: You don’t need to include the slash (/) at the end of the name of your virtual environment, but it’s a helpful reminder that you’re creating a folder.
Shell
$ python3 -m venv venv/
Older versions of macOS come with a system installation of Python 2.7.x that you should never use to run your scripts. If you're working on macOS < 12.3 and invoke the Python interpreter with python instead of python3, then you might accidentally start up the outdated system Python interpreter.
If running python3 doesn’t work, then you’ll have to first install a modern version of Python.
Note: You don’t need to include the slash (/) at the end of the name of your virtual environment, but it’s a helpful reminder that you’re creating a folder.
This command creates a new virtual environment named venv using Python’s built-in venv module. The first venv that you use in the command specifies the module, and the second venv/ sets the name for your virtual environment. You could name it differently, but calling it venv is a good practice for consistency.
Activate It
Great! Your project now has its own virtual environment. Generally, before you start to use it, you'll activate the environment by executing a script that comes with the installation:
Windows PowerShell
PS> venv\Scripts\activate
(venv) PS>
If your attempt to run this command produces an error, then you'll first have to loosen the execution policy.
Shell
$ source venv/bin/activate
(venv) $
Before you run this command, make sure that you're in the folder containing the virtual environment you just created. If you've named your virtual environment something other than venv, then you'll have to use that name in the path instead of venv when you source the activation script.
Note: You can also work with your virtual environment without activating it. To do this, you provide the full path to its Python interpreter when executing a command. However, you’ll likely want to activate the virtual environment after you create it to save yourself the effort of having to repeatedly type long pathnames.
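As a small illustration on Linux or macOS (the package and script names are placeholders):

# Use the environment's interpreter directly, without activating it
venv/bin/python -m pip install requests
venv/bin/python my_script.py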
Once you can see the name of your virtual environment in your command prompt—in this case (venv)—then you’ll know that your virtual environment is active. Now you’re all set and ready to install your external packages!
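For example, with the environment active (requests is just a sample package):

(venv) $ python -m pip install requests
(venv) $ python -m pip list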
Read the full article at https://realpython.com/python-virtual-environments-a-primer/ »
Open Source AI Definition – Weekly update September 23
- @nemobis points out that the term “skilled person” in the Open Source AI Definition needs clarification, especially when considering different legal systems. The term could lead to misinterpretations, so he suggests adjusting the wording to focus on access to data. Additionally, the term “substantially equivalent system” also requires a more precise definition.
- @shujisado adds that in Japan, the term “skilled person” is linked to patent law, which could complicate its interpretation. He proposes using a simpler term, like “person skilled in technology,” to avoid unnecessary debate.
- @stefano asks for suggestions for a better alternative to “skilled person,” such as “practitioner” or “AI practitioner.”
- @kjetilk jokingly suggests lowering the bar to “any random person with a computer,” emphasizing the importance of accessibility in open source, allowing anyone to engage regardless of formal training.
- @samj highlights that byte-for-byte reproducibility is unrealistic, as randomness and hardware variability make exact replication unachievable, similar to how different binaries perform equivalently despite differing checksums.
- @samj notes the existence of models like StarCoder2 and OLMo as examples of Open Source AI, refuting the claim that no models meet the standard. He stresses the need for the definition to encourage the development of new models rather than settling for an inadequate status quo.
- @kjetilk reflects on Mark Zuckerberg’s blog post about Llama 3.1, where Zuckerberg claims that “Open Source AI Is the Path Forward.” He points out that while it’s easy to agree with Zuckerberg’s sentiment, Llama 3.1 isn’t truly open source and wouldn’t meet the criteria for compliance under the OSAID. This raises important questions about how to engage with Meta: should the open-source community push them away, or guide them toward creating OSAID-compliant models? Furthermore, @kjetilk wonders how this affects perceptions of open source, especially in light of EU legislation and the broader governance issues around open source.
- @shujisado responds by noting that the Open Source Initiative (OSI) has already made it clear that Llama 2 (and by extension Llama 3.1) does not meet the Open Source definition, despite Zuckerberg’s claims. He suggests that Zuckerberg might be using a different definition of “open source,” particularly given the unclear legal landscape around AI training data and copyright. In his view, the creation of the Open Source AI Definition (OSAID) is the community’s formal response to Meta’s claims.
- The seventeenth edition of our town hall meetings was held on the 20th of September. If you missed it, the recording and slides can be found here.
Django Weblog: PyCharm & Django Campaign 2024 - encore
The Django Software Foundation's biggest fundraising event of the year is here!
Get 30% off PyCharm, Support Django
Each year, our friends at JetBrains, the creators of PyCharm, run an incredible deal. You get a 30% discounted year of PyCharm, AND the DSF gets 100% of the money. Yes, 100%! It's making a donation and directly getting a great product in return! This is available for new users, and those who had used PyCharm in the past, stopped, and want to try again.
The fundraiser
The fundraiser started during DjangoCon Europe in June, and is now back on from September 22nd to October 6th. Buy PyCharm and support Django!
In the past, JetBrains through the PyCharm fundraiser has provided approximately one quarter of the Django Software Foundation's budget!
Donations like this fundraiser allow the DSF to function. Our two wonderful Fellows, Natalia Bidart and Sarah Boyce keep Django running smoothly, picking up pieces that would otherwise not happen.
The other side of the DSF is our support for Django groups across the globe. We supported every DjangoCon, particularly with donating funding towards opportunity grants for more people to be able to attend these conferences. The DSF also supports smaller events around the world, including DjangoGirls events.
PyCharm
Finally, I want to tell you about PyCharm itself.
PyCharm is an integrated development environment (IDE) that helps professional Python web developers be more productive, be more confident, and write better code. It supports the full Python web workflow out of the box, including popular Python web frameworks, such as Django, frontend technologies, and databases.
Here are the main benefits of using PyCharm in your Django development:
- Django (including templates), Flask, FastAPI
- Database management (Postgres, Redis)
- JS, React, Node.js, TailwindCSS
- Built-in HTTP Client and endpoint tools
Get Django work done with PyCharm, a powerful IDE tailored for Django web development!
Consider this the easiest charitable donation you will ever make, when you get such a great product in return!
Get 30% off PyCharm, Support Django
Other ways to donate
If you would like to donate in another way, especially if you are already a PyCharm customer, here are other ways to donate to the DSF:
- On our website via credit card
- Via GitHub Sponsors
- For those able to make a larger donation, particularly corporate sponsors ($2000+), more information is here: Corporate membership
Golems GABB: Drupal Automation with CI/CD Pipelines
Welcome to the magical world of Drupal development! It can be not only innovative but also efficient by employing continuous integration and continuous delivery (CI/CD) pipelines.
CI/CD Pipelines are like magical tools for automating the integration, testing, and delivery of Drupal projects, thus making it easier for developers to concentrate on creating flawless digital experiences.
Let's take a look at how CI/CD Pipelines work with Drupal, and learn how they maintain consistency and reduce risks during development. This guide will give you everything you need to know about Drupal Automation with CI/CD Pipelines, whether you are a seasoned Drupal developer or a marketer who wishes to improve digital projects.
Wim Leers: XB week 19: flickering cliffhanger
Last week ended with 12 remaining issues. Did we make it? :D
Major loose ends
Like last week, I’m starting with the major loose ends.
Thanks to the impressive work by Dang “sea2709” Tran and the reviews and guidance from Jesse “jessebaker” Baker as well as many others, Experience Builder (XB) now has a robust solution for previewing components when hovering them in the “insert” menu. It required both server-side changes (global theme asset libraries were missing previously) and client-side changes (shadow DOM didn’t offer sufficient isolation; we needed <iframe>).
The result is so nice that I almost spat out my coffee because of a deep, unavoidable “OOOOOHHHHHHHHHHHHHHHHHHHHHHHH!!!!!” when reviewing it! :D
Component previews prior to placing them on the canvas now provide accurate previews. (You can tell that I could not resist the temptation of hovering over Shoe badge multiple times :D)
Issue #3469856, image by me.
Once a component is placed, the preview canvas’ <iframe> must be updated: an updated HTML response is fetched and rendered. But every update to the component tree must result in an update to the preview. That means any typing the Content Creator does in the component props form results in the entire preview 1 getting re-rendered, which easily results in flickering. Jesse devised a very clever solution, inspired by … computer games!
He introduced an IframeSwapper that keeps two <iframe>s active, but with only one visible. Once the preview has updated (i.e. the invisible <iframe> has finished loading), he swaps it with the visible <iframe> 2 — eliminating all flicker:
Zero flickering when updating previews thanks to double buffering/<iframe> swapping.
Issue #3469677, image by Jesse.
Updating the props of a Single-Directory Component (SDC) can be done by clicking the placed component in the preview, and the “component props form” will appear on the right side. This generally works well, but there are still lots of rough edges. The roughest of edges has now been fixed by Atul, Dave “longwave” Long, Travis “traviscarden” Carden and Bálint “balintbrews” Kléri (with Ben “bnjmnm” Mullins shepherding that issue after its many twists and turns to clarity): the server side now correctly handles SDC props that are required, the client side now uses browsers’ native reportValidity functionality. The result is that premature preview updates no longer occur. 3
While placing components and inspecting the component tree you’re creating, it can be handy to quickly get an overview. Browsers have ⌘+/⌘- (Ctrl+/Ctrl-) keyboard shortcuts to zoom in/out. But for XB, you typically want to zoom in/out only the preview, not the entire UI. So thanks to Jesse and Atul “soaratul” Dubey, XB now allows zooming in/out just the preview by pressing + or -. 4
Another rough edge in that component props form was fixed: some field widgets are highly complex, and need to load CSS/JS to work correctly. An example is the most complex widget in Drupal core: the media library widget, which we recently added support for. Our naïve initial approach failed whenever switching between different components that each used the media library widget: the same JS was loaded again, resulting in JS errors! Fortunately, Drupal already solved this problem: Ben added ajaxPageState support — solved!
With all of those UI improvements in, parts of XB are starting to feel solid!
Better defaults
To make it easier for future (and existing) contributors to start contributing to/playing with XB, we changed two important defaults:
- Ben made XB depend on the Media Library module, because it offers a superior UX for (re)using images
- Deepak “deepakkm” Mishra, Ted “tedbow” Bowman and I updated the default XB config to start with an empty XB canvas
With only one 0.1 priority left (#3469672: The XB annotations and labels should not change size when zooming), it became possible to help land non-priority fixes, such as:
- Thanks to fazilitehreem and Utkarsh “utkarsh_33”, the “duplicate” action on component instances now works as expected, rather than resulting in an error
- Jesse and Gaurav “gauravvvv” made the canvas size dynamic based on browser viewport size, improving what we landed 4 weeks earlier
- Abhisek “abhisekmazumdar” Mazumdar, Dave and I updated the XB field type to no longer store SDC plugin IDs, but Component config entity IDs (a small but necessary first step towards supporting multiple component types — starting with Blocks)
With 0.1 essentially done, it’s important to prepare for what’s next, and set us up for success and facilitate wider contribution:
- James “q0rban” Sansbury, abhisekmazumdar and Alex “effulgentsia” Bronstein made MR reviews much easier and faster by adding Tugboat integration!
- Dave ensured that OpenAPI validation errors now result in a JSON response, which unblocks #3470321: Surface API error response in the UI — for better bug reports and faster DX — the issue title says it all!
- Together with Feliksas “f.mazeikis” Mazeikis and Dave, I documented the current component discovery + SDC criteria + Component config entity, and described the status quo in an ADR. Because the status quo is now documented in depth, we’ll be able to make it crystal clear in #3454519: [META] Support component types other than SDC and child issues how we aim to evolve XB to supporting multiple component types, reducing the time to get to consensus on how to achieve that.
Can’t wait to see what product lead Lauri “lauriii” Timmanee prioritizes for milestone 0.2! :D (Spoiler: supporting blocks and actually saving what you see in the XB UI will definitely be in there!)
Week 19 was September 16–22, 2024.
1. This will improve later, once we do #3462360: Partial preview updates: update preview of modified component only, not entire component tree, although later the previously mentioned abstract syntax tree (AST) would make that unnecessary (in most cases). ↩︎
2. In lower-level contexts this is called double buffering — and for example Microsoft .NET forms documentation has a great explanation. ↩︎
3. This is not yet completely solved — next in line is #3474732: Premature prop validation can break the UI. The value entered by the user must first meet the required shape that the SDC’s metadata conveys it needs (using JSON schema in its *.component.yml file). ↩︎
4. Interesting follow-up issues for this: #3475838: Consider a11y impact and/or competitor analysis for preventing browser zoom and #3475749: Pinch gesture zooming sometimes invokes OS zoom behavior. ↩︎
Python Bytes: #402 How to monetize your blog
The Drop Times: DrupalCon Europe Beckons You to Barcelona
Dear Readers,
The much anticipated DrupalCon Europe for 2024 is all set to begin in Barcelona tomorrow. Hosted at the stunning CCIB (Barcelona International Convention Center), this year’s event, running until 27 September, promises to be one of the most memorable gatherings yet. DrupalCon is not just about code but about building connections, exchanging ideas, and forging the future of open-source technology. With four days packed full of sessions, workshops, and networking opportunities, here are some highlights you simply can't afford to miss.
1. The Driesnote
Dries Buytaert, the founder of Drupal, will deliver his landmark 40th Driesnote, where he will provide the much-anticipated "State of Drupal" update. Most importantly, he will dive into the progress of the Drupal CMS (aka Starshot), which first launched at DrupalCon Portland in 2024. Attendees will get a sneak peek into Drupal CMS' upcoming product launch and a demo of what's to come. If you're curious about how to get involved in shaping the future of Drupal, this keynote is a must-attend.
2. Women in Drupal Award
Celebrating outstanding contributions to the Drupal community, the Women in Drupal Award, in partnership with JAKALA, will recognize individuals who have made significant impacts through their projects, businesses, or community engagement. Whether they are stellar developers or successful entrepreneurs, this award shines a light on those advancing the Drupal ecosystem.
3. Local Association Stand
For the first time, fifteen European local Drupal associations will come together to host a Local Association Stand. This stand will be a gathering point where regional leaders, event organizers, and the community can exchange ideas and discuss challenges and opportunities specific to their areas. It’s the perfect spot to foster new connections and strengthen collaborations within the broader European Drupal community.
The event will also feature the Drupal Association Board Election Results, providing insight into the leadership shaping the future of the project. And for those looking to network with Drupal business leaders, the Drupal Business Dinner 2024 promises an elegant evening at the 1881 per SAGARDI, offering Mediterranean cuisine with a view of Barcelona's iconic port.
With that let's move on to important stories from last week.
On September 23, 2024, LagoonCon Barcelona will host developers, product managers, and technology leaders for a free event focused on improving the management of Drupal websites through the open-source platform, Lagoon. The event at the Hotel Barcelona Princess offers a deep dive into Lagoon's capabilities and its role in simplifying application delivery for Drupal and other open-source frameworks.
Local Associations, Camps, and initiatives come together during DrupalCon Barcelona 2024 for a joint Round table. This year's Round Table for Local Drupal Associations is a collaboration between the Network of European Drupal Associations (NEDA), the Local Associations Initiatives Project, and the Drupal Association. Written by Esmeralda Braad-Tijhoff.
Lenny Moskalyk has published the Drupal CMS report as of mid-September 2024, providing an insightful overview of the latest developments within the Drupal community. This month, several initiatives highlight the community's collective efforts to enhance the platform.
Volunteers can sign up for various roles, including marketing, registration, and session monitoring, to ensure the success of DrupalCamp Pune 2024. Selected volunteers will be rewarded with a 30% discount on their event tickets and a certificate recognizing their contributions.
The DropTimes has curated a list of key Drupal events happening this week, from September 23 to 29, 2024. Read more here.
The Drupalisms Working Group has launched a quiz to improve and open up Drupal's terminology, inviting the community to participate before the October 31st, 2024 deadline. With 20 questions focusing on various aspects of the Drupal platform, including user interface and functionality, the quiz allows developers and users alike to contribute to the evolution of Drupal's language.
The FOSSEPS and OSOR projects are conducting a survey to assess interest in forming a European Open Source User Group for public administrations. The initiative seeks input from IT professionals within EU public bodies on the current use and challenges of Free and Open Source Software (FOSS).
Mark Conroy has expanded a proof-of-concept module into a fully functional live preview module for LocalGov Drupal Microsites. This development aims to enhance the user experience by allowing real-time previews of design changes within the platform, making it easier for councils to manage and update their microsites.
Jay Callicott has announced a significant update to DrupalX, transforming it into an AI-powered platform. By integrating AI functionalities, DrupalX aims to become the most advanced Drupal starter on the market.
The Drupal Decoupled Project is set to receive several major updates, as announced by Jesus Manuel Olivas, Co-Founder and CEO of Octahedroid and Composabase. As a final update, Backdrop CMS has announced the release of version 1.29.0, bringing a host of new features, UX improvements, and essential bug fixes.
We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.
To get timely updates, follow us on LinkedIn, Twitter, and Facebook. You can also join us on Drupal Slack at #thedroptimes.
Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.
Quansight Labs Blog: Multi-dimensional Sparse Arrays in SciPy
Hynek Schlawack: Python Project-Local Virtualenv Management Redux
One of my first TIL entries was about how you can imitate Node’s node_modules semantics in Python on UNIX-like operating systems. A lot has happened since then (to the better!) and it’s time for an update. direnv still rocks, though.
Armin Ronacher: FSL: A Better Business/Open Source Balance Than AGPL
subtext: in my opinion, and for companies (and their users) that want a good balance between protecting their core business with Open Source ideals.
Following up on my thoughts on the case for funding Open Source, there is a second topic I want to discuss in more detail: Open Source and commercialization. As our founder likes to say: Open Source is not a business model. And indeed it really isn't. However, this does not mean that Open Source and Open Source licenses aren't a critical consideration for a technology company, or that there isn't a fascinating interconnection between the business model and license texts.
As some of you might know I'm a strong proponent of the concept now branded as “Fair Source” which we support at Sentry. Fair Source is defined by a family of springing licenses that give you the right to read and modify code, while also providing an exclusivity period for the original creator to protect their core business. After a designated time frame, the code transitions into Open Source via a process called DOSP: Delayed Open Source Publication. This is not an entirely new idea, and I have been writing about it a few times before [1] [2].
A recurring conversation I have in this context is the AGPL (Affero General Public License) as an alternative vehicle for balancing business goals and Open Source ideals. This topic has also resurfaced recently because of Elasticsearch's Open Source, Again post where they announced that they will license Elasticsearch under the AGPL.
In my view, while AGPL is a true Open Source license, it is an inferior choice compared to the FSL (the Functional Source License, a Fair Source license) for many projects. Let me explain my reasoning.
The Single Vendor Model
When you take a project like Sentry, which started as an Open Source project and later turned into a VC funded company, its model revolves around a commercial entity being in charge. That model is often referred to as “single vendor.” This is also the case with companies like Clickhouse Inc. or Elastic and their respective projects.
Sentry today is no longer Open Source, it's Fair Source (FSL licensed). Elastic on the other hand is indeed unquestionably Open Source (AGPL among others). What both projects have in common is that they value brand (including trademarks), that they have strong opinions on how that project should be run, and that they use a CLA to give themselves the right to re-license it under other terms.
In a "single vendor" setup, the company behind the project holds significant power (for ~150 years give or take).
The Illusion of Equality
When you look at the AGPL as a license it's easy to imagine that everybody is equal. Every contributor to a project agrees with the underlying license assumptions of the AGPL and acts accordingly. However, in practice, things are more complicated — especially when it comes to commercial usage. Many legal departments are wary of the AGPL and the broader GPL family of licenses. Some challenges are also inherent to the licenses such as not being able to publish *GPL code to the app store.
You can see this also with Elasticsearch. The code is not just AGPL licensed, you can also retrieve it under the ELv2 and SSPL licensing terms. Something that Elastic can do due to the CLAs in place.
Compare this to Linux, which is licensed under GPLv2 with a syscall exception. This very specific license was chosen by Linus Torvalds to ensure the project's continued success while keeping it truly open. In Linux' case, no single entity has more rights than anyone else. There is not even a realistic option to relicense to a newer version of the GPL.
The FSL explicitly recognizes the reality that the single vendor holds significant power but balances it by ensuring that this power diminishes over time. This idea can also be found in copyright law, where a creator's work eventually enters the public domain. A key difference with software though is that it continuously evolves, making it hard to pinpoint when it might eventually become public domain as thousands of people contribute to it.
The FSL is much more aggressive in that aspect. If we run Sentry into the ground and the business fails, within two years, anyone can pick up the pieces and revive it like a Phoenix from the ashes. This isn't just hypothetical. Bryan Cantrill recently mentioned the desire of Oxide forking CockroachDB once its BUSL change date kicks in. While that day hasn't come yet, it's a real possibility.
Dying Companies
Let's face it: companies fail. I have no intentions for Sentry to be one of them, but you never know. Companies also don't die just once; they can do so repeatedly. Xapian is an example I like to quote here. It started out as a GPL v2+ licensed search project called Muscat which was built at Cambridge. After several commercial acquisitions and transitions, the project eventually became closed source (which was possible because the creators held the copyright). Some of the original creators, together with the community, forked the last GPLv2 version into a project that eventually became known as Xapian.
What's the catch? The catch is that the only people who could license it more liberally than GPLv2 are long gone from the project. Xapian refers to its current license as “a historical accident”. The license choice causes some challenges, specifically around how Xapian is embedded. There are three remaining entities that would need to agree to the relicensing. From my understanding, none of those entities commercially use Xapian's original code today, but they also have no interest in actually supporting a potential relicensing.
Unlike trademark law, which has a concept of abandonment, the copyright situation is stricter. It would take two lifetimes for Xapian to enter the public domain, and at that point it will probably be mostly for archival purposes.
Equal Grounds Now or Later
If Xapian's original code had been FSL licensed, it would have been Apache 2.0 (or MIT with the alternative model) many times over. You don't need to hope that the original license holder still cares: by the time you get hold of the source code, you already have an irrevocable promise that it will eventually turn into Apache 2.0 (or MIT with the alternative license choice), which is about as no-strings-attached as it can get.
So in some ways a comparison is “AGPL now and forever” vs “FSL now, Apache 2.0/MIT in two years”.
That's not to say that the AGPL (or SSPL) doesn't have its merits. Xapian, as much as it might suffer from its accidental license choice, is also a successful Open Source project that has helped a lot of developers out there. Maybe the license did in fact work out well for them, and because everybody is in the same boat it has also created a community of equals.
I do believe however it's important to recognize that “single-vendor AGPL with a CLA” is absolutely not the same as “community driven AGPL project without the CLA”.
The title claims that FSL balances Open Source better than AGPL, and it's fair to question how a license that isn't Open Source can achieve that. The key lies in understanding that Fair Source is built on the concept of delayed Open Source. Yes, there's a waiting period, but it’s a relatively short one: just two years. Count to two and the code transitions to full, unshackled openness. And that transition to Open Source is a promise that can't be taken from you.
[1] Originally about the BUSL license which introduced the idea (Open Source, SaaS and Monetization)
[2] Later about our own DOSP based license, the FSL (FSL: A License For the Bazaar, Not the Cathedral).