Feeds
C.J. Collier: Managing HPE SAS Controllers
Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.
hpacucli is the wrong command. Use ssacli instead.
$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
    882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
    57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
    476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
    curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}"
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)
pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1
pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
    | sudo dd of=/etc/apt/sources.list.d/hpe.list status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60 min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3 (sn: PFJHD0ARCCR1QM)

   Internal Drive Cage at Port 1I, Box 2, OK
   Internal Drive Cage at Port 2I, Box 2, OK
   Port Name: 1I (Mixed)
   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0 MB)
      logicaldrive 1 (1.64 TB, RAID 6, OK)
      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379 (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3

   Array A

      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238

Python Morsels: Python's pathlib module
Python's pathlib module is the tool to use for working with file paths. See pathlib quick reference tables and examples.
Table of contents
- A pathlib cheat sheet
- The open function accepts Path objects
- Why use a pathlib.Path instead of a string?
- The basics: constructing paths with pathlib
- Joining paths
- Current working directory
- Absolute paths
- Splitting up paths with pathlib
- Listing files in a directory
- Reading and writing a whole file
- Many common operations are even easier
- No need to worry about normalizing paths
- Built-in cross-platform compatibility
- A pathlib conversion cheat sheet
- What about things pathlib can't do?
- Should strings ever represent file paths?
- Use pathlib for readable cross-platform code
Below is a cheat sheet table of common pathlib.Path operations.
The variables used in the table are defined here:
>>> from pathlib import Path
>>> path = Path("/home/trey/proj/readme.md")
>>> relative = Path("readme.md")
>>> base = Path("/home/trey/proj")
>>> new = Path("/home/trey/proj/sub")
>>> target = path.with_suffix(".txt")  # .md -> .txt
>>> pattern = "*.md"
>>> name = "sub/f.txt"

| Path-related task | pathlib approach | Example |
| --- | --- | --- |
| Read all file contents | path.read_text() | 'Line 1\nLine 2\n' |
| Write file contents | path.write_text('new') | Writes new to file |
| Get absolute file path | relative.resolve() | Path('/home/trey/proj/readme.md') |
| Get the filename | path.name | 'readme.md' |
| Get parent directory | path.parent | Path('/home/trey/proj') |
| Get file extension | path.suffix | '.md' |
| Get suffix-free name | path.stem | 'readme' |
| Ancestor-relative path | path.relative_to(base) | Path('readme.md') |
| Verify path is a file | path.is_file() | True |
| Make new directory | new.mkdir() | Makes new directory |
| Get current directory | Path.cwd() | Path('/home/trey/proj') |
| Get home directory | Path.home() | Path('/home/trey') |
| Get all ancestor paths | path.parents | [Path('/home/trey/proj'), ...] |
| List files/directories | base.iterdir() | [Path('/home/trey/proj/readme.md'), ...] |
| Find files by pattern | base.glob(pattern) | [Path('/home/trey/proj/readme.md')] |
| Find files recursively | base.rglob(pattern) | [Path('/home/trey/proj/readme.md')] |
| Join path parts | base / name | Path('/home/trey/proj/sub/f.txt') |
| Get file size (bytes) | path.stat().st_size | 14 |
| Walk the file tree | base.walk() | Iterable of (path, subdirs, files) |
| Rename file to new path | path.rename(target) | Path object for new path |
| Remove file | path.unlink() | Removes the file |

Note that iterdir, glob, rglob, and walk all return iterators. The examples above show lists for convenience.
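To see a few of these operations in action, here's a small self-contained sketch; it works inside a temporary directory, so it's safe to run anywhere:

import tempfile
from pathlib import Path

# Work inside a throwaway directory so the demo cleans up after itself.
with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp)
    path = base / "readme.md"                 # join path parts with /
    path.write_text("Line 1\nLine 2\n")       # write file contents
    print(path.read_text())                   # read all file contents
    print(path.name, path.suffix, path.stem)  # readme.md .md readme
    print(path.relative_to(base))             # readme.md
    print(sorted(base.glob("*.md")))          # find files by pattern
    print(path.stat().st_size)                # file size in bytes: 14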
The open function accepts Path objects

What does Python's open function …
Read the full article: https://www.pythonmorsels.com/pathlib-module/

Philipp Kern: debian.org now supports Security Key-backed SSH keys
debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts.
This was done in support of hardening our infrastructure: Hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.
As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to log in via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to shell frequently to one machine, or to a lot of machines at once, that should be a nice compromise between usability and security.
The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA.
Someone™ should write up a matrix of what is supported where and how. In the meantime, it is probably easiest to generate an ed25519 key - or, if that does not work, an ecdsa key - and make a backup copy of the resulting on-disk key file. Then copy that around to other devices (or OSes) that require access to the key.
Real Python: Interacting With Python
There are multiple ways of interacting with Python, and each can be useful for different scenarios. You can quickly explore functionality in Python’s interactive mode using the built-in Read-Eval-Print Loop (REPL), or you can write larger applications to a script file using an editor or Integrated Development Environment (IDE).
In this tutorial, you’ll learn how to:
- Use Python interactively by typing code directly into the interpreter
- Execute code contained in a script file from the command line
- Work within a Python Integrated Development Environment (IDE)
- Assess additional options, such as the Jupyter Notebook and online interpreters
Before working through this tutorial, make sure that you have a functioning Python installation at hand. Once you’re set up with that, it’s time to write some Python code!
Get Your Code: Click here to get the free sample code that you’ll use to learn about interacting with Python.
Take the Quiz: Test your knowledge with our interactive “Interacting With Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interacting With Python

In this quiz, you'll test your understanding of the different ways of interacting with Python. By working through this quiz, you'll revisit key concepts related to Python interaction in interactive mode using the REPL, through Python script files, and within IDEs and code editors.
Hello, World!

There's a long-standing custom in computer programming that the first code written in a newly installed language is a short program that displays the text Hello, World! to the console.
In Python, running a “Hello, World!” program only takes a single line of code:
Python print("Hello, World!") Copied!Here, print() will display the text Hello, World! in quotes to your screen. In this tutorial, you’ll explore several ways to execute this code.
Running Python in Interactive Mode

The quickest way to start interacting with Python is in a Read-Eval-Print Loop (REPL) environment. This means starting up the interpreter and typing commands to it directly.
When you interact with Python in this way, the interpreter will:
- Read the command you enter
- Evaluate and execute the command
- Print the output (if any) to the console
- Loop back and repeat the process
The interactive session continues like this until you instruct the interpreter to stop. Using Python in this interactive mode is a great way to test short snippets of Python code and get more familiar with the language.
On Windows, when you install Python using an installer, the Start menu shows a program group labeled Python 3.x. The label may vary depending on the particular installation you chose. Click on that item to start the Python interpreter.
Alternatively, you can open your Command Prompt or PowerShell application and type the py command to launch it:
Windows PowerShell

PS> py

On macOS or Linux, to start the Python interpreter, open your Terminal application and type python3 to launch it from the command line:
Shell

$ python3

If you're unfamiliar with this application, then you can use your operating system's search function to find it.
After pressing Enter, you should see a response from the Python interpreter similar to the one below:
Python

Python 3.13.0 (main, Oct 14 2024, 10:34:31) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

If you're not seeing the >>> prompt, then you're not talking to the Python interpreter. This could be because Python is either not installed or not in the path of your terminal window session.
Note: If you need additional help to get to this point, then you can check out the How to Install Python on Your System: A Guide tutorial.
If you’re seeing the prompt, then you’re off and running! With these next steps, you’ll execute the statement that displays "Hello, World!" to the console:
- Ensure that Python displays the >>> prompt, and that you position your cursor after it.
- Type the command print("Hello, World!") exactly as shown.
- Press the Enter key.
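If everything worked, the interpreter prints the text (without the quotes) and then shows a fresh prompt, ready for your next command:

>>> print("Hello, World!")
Hello, World!
>>>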
1xINTERNET blog: Open-source innovation: Drupal Recipes and the upcoming Drupal CMS
Explore how the integration of Drupal Recipes is transforming Drupal development by simplifying configurations and enhancing reusability. Learn how these innovations are setting the stage for the highly anticipated release of the official Drupal CMS in January 2025.
Python Software Foundation: Help power Python and join in the PSF year-end fundraiser & membership drive!
The Python Software Foundation (PSF) is the charitable organization behind Python, dedicated to advancing, supporting, and protecting the Python programming language and the community that sustains it. That mission and cause are more than just words we believe in. Our tiny but mighty team works hard to deliver the projects and services that allow Python to be the thriving, independent, community-driven language it is today. Some of what the PSF does includes producing PyCon US, hosting the Python Packaging Index (PyPI), awarding grants to Python initiatives worldwide, maintaining critical community infrastructure, and more.
To build the future of Python and sustain the thriving community that its users deserve, we need your help. By backing the PSF, you’re investing in Python’s growth and health, and your contributions directly impact the language's future. Is your community, work, or hobby powered by Python? Join this year’s drive and power Python’s future with us by donating or becoming a Supporting Member today.
There are three ways to join in:
Your donations:
- Keep Python thriving
- Support CPython and PyPI progress
- Increase security across the Python ecosystem
- Bring the global Python community together
- Make our community more diverse and robust every year
Highlights from 2024:
- A record-making PyCon US - We produced the 21st PyCon US, in Pittsburgh, US, and online, and it was a huge success! For the first time post-2020, PyCon US 2024 sold out with over 2,500 in-person attendees.
- Advances in our Grants Program - 2024 has been a year of change and reflection for the Grants Program, starting with the addition of Marie Nordin to the grants administration team who has supported the PSF in launching several new grants initiatives. We set up Grants Program Office Hours, published a Grants Program Transparency Report for 2022 and 2024, invested in a third-party retrospective, launched a major refresh of all areas of our Grants program and updated our Grants Workgroup Charter. With more changes to come, we are thrilled to share that we awarded a record-breaking amount of grant funds in 2024!
- Empowering the Python community through Fiscal Sponsorship - We are proud to continue supporting our 20 fiscal sponsoree organizations with their initiatives and events all year round. The PSF provides 501(c)(3) tax-exempt status to fiscal sponsorees such as PyLadies and Pallets, and provides back office support so they can focus on their missions. Consider donating to your favorite PSF Fiscal Sponsoree and check out our Fiscal Sponsorees page to learn more about what each of these awesome organizations is all about!
- Connecting directly through Office Hours - The current PSF Board has decided to invest more in connecting with and serving the global Python community by establishing a forum for regular conversations. The board members of the PSF, with the support of PSF staff, are now holding monthly PSF Board Office Hours on the PSF Discord. The Office Hours are sessions where folks from the community can tell us how we can help their regional communities, express perspectives, and provide feedback for the PSF.
- Paying more engineers to work directly on Python, PyPI, and security - We welcomed Petr Viktorin, Deputy Developer in Residence (DiR), and Serhiy Storchaka, Supporting DiR. It’s been exciting to begin to realize the full vision of the DiR program, with special thanks to Bloomberg for making it possible for us to bring Petr on board. The DiR team is taking an active role in shaping the development of the language, and with three people on the team each DiR can now also spend a percentage of their time on feature work aligned with their interests.
- Continuing to enhance Python’s security through Developers-in-Residence - Seth Larson, PSF Security Developer in Residence (DiR), had a busy year thanks to continued support from Alpha-Omega. Seth worked on a variety of projects, including the creation of SBOMs for source and Windows CPython artifacts, implementing build reproducibility for CPython source artifacts, and auditing and migrating Sigstore, to name just a few. Check out Seth's blog to keep up to date with his work. Mike Fiedler, PyPI Safety & Security Engineer, also worked on a variety of projects: he worked on two-factor authentication for all users on PyPI and an audit of PyPI, made significant progress on malware response and reporting, collaborated on the PSF’s submission for the Cybersecurity and Infrastructure Security Agency (CISA)’s Request for Information (RFI), and more! Thanks to AWS and Georgetown for making Mike’s PyPI security accomplishments possible. Stay up to date with Mike's work on the PyPI blog.
- New PSF Staff dedicated to critical infrastructure - We established the PyPI Support Specialist role, filled by Maria Ashna. Over the past 23 years, PyPI has seen essentially exponential growth in traffic and users, relying for the most part on volunteers to support it. The load far outstripped volunteer and prior staff capacity, so we are very excited to have Maria on board. We also filled our Infrastructure Engineer role, welcoming Jacob Coffee to the team, to ensure PSF-maintained systems and services are running smoothly.
We appreciate you and we’re so excited to see where we can go together in the year to come!
Python Bytes: #410 Entering the Django core
Open Data and Open Source AI: Charting a course to get more of both
While working to define Open Source AI, we realized that data governance is an unresolved issue. The Open Source Initiative organized a workshop to discuss data sharing and governance for AI training. The critical question posed to attendees was “How can we best govern and share data to power Open Source AI?” The main objective of this workshop was to establish specific approaches and strategies for both Open Source AI developers and other stakeholders.
The Workshop: Building bridges across “Open” streams

Held on October 10-11, 2024, and hosted at Linagora’s Villa Good Tech, the OSI workshop brought together 20 experts from diverse fields and regions. Funded by the Alfred P. Sloan Foundation, the event focused on actionable steps to align open data practices with the goals of Open Source AI.
Participants, listed below, comprised academics, civil society leaders, technologists, and representatives from organizations like Mozilla Foundation, Creative Commons, EleutherAI Institute and others.
- Ignatius Ezeani University of Lancaster / Nigeria
- Masayuki Hatta Debian, Open Source Group Japan / Japan
- Aviya Skowron EleutherAI Institute / Poland
- Stefano Zacchiroli Software Heritage / Italy
- Ricardo Torres Digital Public Goods Alliance / Mexico
- Kristina Podnar Data and Trust Alliance / Croatia + USA
- Joana Varon Coding Rights / Brazil
- Renata Avila Open Knowledge Foundation / Guatemala
- Alek Tarkowski Open Future / Poland
- Maximilian Gantz Mozilla Foundation / Germany
- Stefaan Verhulst GovLab / USA/Belgium
- Paul Keller Open Future / Germany
- Thom Vaughan Common Crawl / UK
- Julie Hunter Linagora / USA
- Deshni Govender GIZ FAIR Forward – AI for All / South Africa
- Ramya Chandrasekhar CNRS / India
- Anna Tumadóttir Creative Commons / Iceland
- Stefano Maffulli Open Source Initiative / Italy
Over two days, the group worked to frame a cohesive approach to data governance. Alek Tarkowski and Paul Keller of the Open Future Foundation are working with OSI to complete the white paper summarizing the group’s work. In the meantime, here is a quick “tease”—just a few of the many topics that the group discussed:
The streams of “open” merge, creating waves

AI is where Open Source software, open data, open knowledge, and open science meet in a new way. Since OpenAI released ChatGPT, what once were largely parallel tracks with occasional junctures are now a turbulent merger of streams, creating ripples in all of these disciplines and forcing us to reassess our principles: How do we merge these streams without eroding the principles of transparency and access that define openness?
We discovered in the process of defining Open Source AI that the basic freedoms we’ve put in the Open Source Definition and its foundation, the Free Software Definition, are still good and relevant. Open Source software has had decades to mature into a structured ecosystem with clear rules, tools, and legal frameworks. The same goes for Open Knowledge and Open Science: while rooted in age-old traditions, open knowledge and science have seen modern rejuvenation through platforms like Wikipedia and the Open Knowledge Foundation. Open data, however, feels less solid: often serving as a one-way pipeline from public institutions to private profiteers, it is now being dragged into wholly new territory.
How are these principles of “open” interacting with each other, and how are we going to merge Open Data with Open Source, Open Science, and Open Knowledge in Open Source AI?
The broken social contract of data

Data fuels AI. The sheer scale of data required to train models like ChatGPT reveals not just a technological challenge but also a societal dilemma. Much of this data comes from us—the blogs we write, the code we share, the information we give freely to platforms.
OpenAI, for example, “slurps” all the data it can find, and much of it is what we willingly give: the blogs we write; the code we share; the pictures, emails and address books we keep in “the cloud”; and all the other information we give freely to platforms.
We, the people, make the “data,” but what are we getting in exchange? OpenAI owns and controls the machine built with our data, and it grants us access via API, until it changes its mind. We are essentially being strip-mined for a proprietary system that grants access at a price—until the owner decides otherwise.
We need a different future, one where data empowers communities, not just corporations. That starts with revisiting the principles of openness that underpin the open source, open science, and open knowledge movements. The question is: How do we take back control?
Charting a path forward

We want the machine for ourselves. We want machines that the people can own and control. We need to find a way to swing the pendulum back to our meaning of Open. And it’s all about the “data.”
The OSI’s work on the Open Source AI Definition provides a starting point. An Open Source AI machine is one that the people can meaningfully fork without having to ask for permission. For AI to truly be open, developers need access to the same tools and data as the original creators. That means transparent training processes, open filtering code, and, critically, open datasets.
Group photo of the participants at the workshop on data governance, Paris, Oct 2024.

Next steps

The white paper, expected in December, will synthesize the workshop’s discussions and propose concrete strategies for data governance in Open Source AI. Its goal is to lay the groundwork for an ecosystem where innovation thrives without sacrificing openness or equity.
As the lines between “open” streams continue to blur, the choices we make now will define the future of AI. Will it be a tool controlled by a few, or a shared resource for all?
The answer lies in how we navigate the waves of data and openness. Let’s get it right.
This Week in KDE Apps: Python bindings
Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps.
This week, we released the first beta of what will become KDE Gear 24.12.0. If your distro provides testing packages, please help with testing. Meanwhile, as part of the 2024 end-of-year fundraiser, you can "Adopt an App" in a symbolic effort to support your favorite KDE app.
This week, we are particularly grateful to George Fakidis, tmpod, and Paxriel for showing their support for Okular; Ian Lohmann, Anthony Perrett, Linus Seelinger, and Nils Martens for Dolphin; Erik Bernoth for Arianna; and Daniel Lloyd-Miller and mdPlusPlus for KDE Connect.
Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world. So consider donating today!
Getting back to all that's new in the KDE App scene, let's dig in!
Global Changes

KWidgetsAddons, a collection of add-on widgets for QtWidgets, and KUnitConversion now have Python bindings. (Manuel Alcaraz Zambrano, KDE Frameworks 6.9.0. Link 1 and link 2)
Call to Action
KDE has over 70 frameworks, and we need your help to add Python bindings to the relevant remaining frameworks. See the metatask.

The "About" page of Kirigami applications now provides a helpful "Copy" button that lets you copy system information, which can be useful when filing a bug report. The same feature was also implemented for QtWidgets-based applications. (Carl Schwan, Kirigami Addons 1.6.0 and KDE Frameworks 6.9. Link for Kirigami apps and link for QtWidget apps)
Additionally Joshua added icons to the "Getting involved", "Donate", and other actions for the Kirigami version. (Joshua Goins, Kirigami Addons 1.6.0. Link)
The "share" context menu of many applications can now copy the data to clipboard. (Aleix Pol Gonzalez, Frameworks 6.9.0. Link)
Alligator (RSS feed reader)

Alligator now lets you bookmark your favorite posts. (Soumyadeep Ghosh, 25.04.0. Link)
The selected feed will also now be highlighted correctly and the text of an article can now be selected and copied. (Soumyadeep Ghosh, 24.12.0. Link and link 2)
AudioTube (YouTube Music app)

Fix parsing certain playlists. (Eren Karakas, 24.12.0. Link)
Clock (Keep time and set alarms)

Fix a crash of the Clock Daemon when waking up. (Devin Lin, 24.12.0. Link)
digiKam (Photo Management Program)

digiKam 8.5.0 is out! This release improves the Face Management system, adds colored labels to identify important items, increases its list of supported languages to 61, and fixes over 160 bugs.
Dolphin (Manage your files)

When Dolphin is started on a location which does not report a storage size (for example "remote" or "bluetooth"), the status bar will no longer pointlessly show an empty storage size indicator for a split second before hiding it again. (Felix Ernst, 25.04.0. Link)
Gwenview (Image Viewer)

We fixed a bug where indexed-color or monochrome-palette images (e.g. from pngquant) would render with garbled colors or black and white noise when zoomed. (Tabby Kitten, 24.12.0. Link)
KDE Itinerary (Digital travel assistant)

Itinerary's Matrix integration now uses encrypted Matrix rooms by default, and Itinerary can now do session verification, which is going to be mandatory in the future. (Volker Krause, 25.04.0. Link 1 and link 2). Volker also fixed various small issues with the Matrix integration (too many to list them all) and backported these fixes for the 24.12.0 release.
Kate (Advanced Text Editor)

It is now possible to disable the autocompletion popup that appears as you type. (Waqar Ahmed, 25.04.0. BUG 476620)
KCalc (Scientific calculator)

We redesigned the bit edit feature of KCalc. (Tomasz Bojczuk, 25.04.0. Link)
Kdenlive (Video editor)

Several Kdenlive effects can now animate their parameters with keyframes. (Bernd Jordan, Julius Künzel, and Massimo Stella, 25.04.0. Link 1, link 2, link 3 and link 4)
Keysmith (Two-factor code generator for Plasma Mobile and Desktop)

Keysmith can now import OTPs from andOTP's backup files. (Martin Reboredo, 25.04.0. Link)
Akonadi (Background service for KDE PIM apps)

Fixed a crash when migrating old iCal entries in Akonadi to be properly tagged. (Daniel Vrátil, 24.12.0. Link)
Fix style of configuration dialogs for Akonadi agents on platforms other than Plasma. (Laurent Montel, 24.12.0. Link)
Ported the IMAP resource away from KWallet to QtKeychain. This ensures your email credentials are correctly stored and retrieved on other platforms like Windows. (Carl Schwan, 24.12.0. Link)
KMail (A feature-rich email application)

Reduce temporary memory allocation by 25% when starting KMail. If you are curious how, the merge requests are super interesting. (Volker Krause, 24.12.0. Link 1, link 2, and link 3)
Kodaskanna (A multi-format 1D/2D code scanner)

Kodaskanna was ported to Qt6/KF6. (Friedrich W. H. Kossebau. Link)
KRDC (Connect with RDP or VNC to another computer)

We added various options related to security of the RDP connection and the redirection of smartcards to the remote host. (Roman Katichev, 25.04.0. Link)
Kup (Backup scheduler for KDE's Plasma desktop)

We rephrased all yes/no buttons in Kup's notifications to use more descriptive names. (Nate Graham, 25.04.0. Link)
NeoChat (Chat on Matrix)

When receiving stickers in NeoChat, they will be displayed at a more appropriate size (256x256px). The same goes for custom emoticons, which are now displayed with the same height as the rest of the message. (Joshua Goins, 24.12.0. Link)
We don't show the filename underneath images anymore, and also make the download file dialog fill out the filename by default. (Joshua Goins, 24.12.0. Link 1 and link 2)
We redesigned the list of accounts on the welcome page. Now we show the display name and avatar of your accounts there, which makes them easier to recognize. (Joshua Goins, 24.12.0. Link)
We rearranged the room, file and message context menus. (Joshua Goins, 24.12.0. Link 1 and link 2)
Tokodon (Browse the Fediverse)

Add "Open in Browser" action to profile pages. (Sean Baggaley, 24.12.0. Link)
Fix various issues on Android, most prominently ensuring that all icons required by Tokodon are packaged correctly. (Joshua Goins, 24.12.0)
... And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out Nate's blog about Plasma and be sure not to miss his This Week in Plasma series, where every Saturday he covers all the work being put into KDE's Plasma desktop environment.
For a complete overview of what's going on, visit KDE's Planet, where you can find all KDE news unfiltered directly from our contributors.
Get Involved

The KDE organization has become important in the world, and your time and contributions have helped us get there. As we grow, we're going to need your support for KDE to become sustainable.
You can help KDE by becoming an active community member and getting involved. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer either. There are many things you can do: you can help hunt and confirm bugs, even maybe solve them; contribute designs for wallpapers, web pages, icons and app interfaces; translate messages and menu items into your own language; promote KDE in your local community; and a ton more things.
You can also help us by donating. Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world.
To get your application mentioned here, please ping us in invent or in Matrix.
Thanks to Tobias Fella and Michael Mischurow for the proofreading.
Droptica: Headless CMS. How to Expose Data Using REST API and JSON API Modules?
Headless CMS allows flexible data exposure for different applications. In systems like Drupal, this concept becomes especially important for teams that want to build modern applications with a separate frontend layer. In this blog post, you'll see the process of exposing Drupal data using the REST API and JSON API. You'll also discover how to customize views, generate content, and manage settings to ensure seamless collaboration with frontend frameworks.
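As a rough sketch of what consuming such exposed data looks like from a script or decoupled frontend (assuming a Drupal site at https://example.com with the JSON API module enabled and an "article" content type; the host is a placeholder):

import json
from urllib.request import urlopen

# /jsonapi/node/article is the collection route the JSON API module
# exposes for the "article" content type on a stock Drupal site.
with urlopen("https://example.com/jsonapi/node/article") as response:
    payload = json.load(response)

# JSON:API wraps resources in a "data" list; each resource carries its
# UUID plus an "attributes" object holding the field values.
for item in payload["data"]:
    print(item["id"], item["attributes"]["title"])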
Russ Allbery: Review: Delilah Green Doesn't Care
Review: Delilah Green Doesn't Care, by Ashley Herring Blake
Series: Bright Falls #1
Publisher: Jove
Copyright: February 2022
ISBN: 0-593-33641-0
Format: Kindle
Pages: 374

Delilah Green Doesn't Care is a sapphic romance novel. It's the first of a trilogy, although in the normal romance series fashion each book follows a different protagonist and has its own happy ending. It is apparently classified as romantic comedy, which did not occur to me while reading but which I suppose I can see in retrospect.
Delilah Green got the hell out of Bright Falls as soon as she could and tried not to look back. After her father died, her step-mother lavished all of her perfectionist attention on her overachiever step-sister, leaving Delilah feeling like an unwanted ghost. She escaped to New York where there was space for a queer woman with an acerbic personality and a burgeoning career in photography. Her estranged step-sister's upcoming wedding was not a good enough reason to return to the stifling small town of her childhood. The pay for photographing the wedding was, since it amounted to three months of rent and trying to sell photographs in galleries was not exactly a steady living. So back to Bright Falls Delilah goes.
Claire never left Bright Falls. She got pregnant young and ended up with a different life than she expected, although not a bad one. Now she's raising her daughter as a single mom, running the town bookstore, and dealing with her unreliable ex. She and Iris are Astrid Parker's best friends and have been since fifth grade, which means she wants to be happy for Astrid's upcoming wedding. There's only one problem: the groom. He's a controlling, boorish ass, but worse, Astrid seems to turn into a different person around him. Someone Claire doesn't like.
Then, to make life even more complicated, Claire tries to pick up Astrid's estranged step-sister in Bright Falls's bar without recognizing her.
I have a lot of things to say about this novel, but here's the core of my review: I started this book at 4pm on a Saturday because I hadn't read anything so far that day and wanted to at least start a book. I finished it at 11pm, having blown off everything else I had intended to do that evening, completely unable to put it down.
It turns out there is a specific type of romance novel protagonist that I absolutely adore: the sarcastic, confident, no-bullshit character who is willing to pick the fights and say the things that the other overly polite and anxious characters aren't able to get out. Astrid does not react well to criticism, for reasons that are far more complicated than it may first appear, and Claire and Iris have been dancing around the obvious problems with her surprise engagement. As the title says, Delilah thinks she doesn't care: she's here to do a job and get out, and maybe she'll get to tweak her annoying step-sister a bit in the process. But that also means that she is unwilling to play along with Astrid's obsessively controlling mother or her obnoxious fiance, and thus, to the barely disguised glee of Claire and Iris, is a direct threat to the tidy life that Astrid's mother is trying to shoehorn her daughter into.
This book is a great example of why I prefer sapphic romances: I think this character setup would not work, at least for me, in a heterosexual romance. Delilah's role only works if she's a woman; if a male character were the sarcastic conversational bulldozer, it would be almost impossible to avoid falling into the gender stereotype of a male rescuer. If this were a heterosexual romance trying to avoid that trap, the long-time friend who doesn't know how to directly confront Astrid would have to be the male protagonist. That could work, but it would be a tricky book to write without turning it into a story focused primarily on the subversion of gender roles. Making both protagonists women dodges the problem entirely and gives them so much narrative and conceptual space to simply be themselves, rather than characters obscured by the shadows of societal gender rules.
This is also, at its core, a book about friendship. Claire, Astrid, and Iris have the sort of close-knit friend group that looks exclusive and unapproachable from the outside. Delilah was the stereotypical outsider, mocked and excluded when they thought of her at all. This, at least, is how the dynamics look at the start of the book, but Blake did an impressive job of shifting my understanding of those relationships without changing their essential nature. She fleshes out all of the characters, not just the romantic leads, and adds complexity, nuance, and perspective. And, yes, past misunderstanding, but it's mostly not the cheap sort that sometimes drives romance plots. It's the misunderstanding rooted in remembered teenage social dynamics, the sort of misunderstanding that happens because communication is incredibly difficult, even more difficult when one has no practice or life experience, and requires knowing oneself well enough to even know what to communicate.
The encounter between Delilah and Claire in the bar near the start of the book is the cornerstone of the plot, but the moment that grabbed me and pulled me in was Delilah's first interaction with Claire's daughter Ruby. That was the point when I knew these were characters I could trust, and Blake never let me down. I love how Ruby is handled throughout this book, with all of the messy complexity of a kid of divorced parents with her own life and her own personality and complicated relationships with both parents that are independent of the relationship their parents have with each other.
This is not a perfect book. There's one prank scene that I thought was excessively juvenile and should have been counter-productive, and there's one tricky question of (nonsexual) consent that the book raises and then later seems to ignore in a way that bugged me after I finished it. There is a third-act breakup, which is not my favorite plot structure, but I think Blake handles it reasonably well. I would probably find more niggles and nitpicks if I re-read it more slowly. But it was utterly engrossing reading that exactly matched my mood the day that I picked it up, and that was a fantastic reading experience.
I'm not much of a romance reader and am not the traditional audience for sapphic romance, so I'm probably not the person you should be looking to for recommendations, but this is the sort of book that got me to immediately buy all of the sequels and start thinking about a re-read. It's also the sort of book that dragged me back in for several chapters when I was fact-checking bits of my review. Take that recommendation for whatever it's worth.
Content note: Reviews of Delilah Green Doesn't Care tend to call it steamy or spicy. I have no calibration for this for romance novels. I did not find it very sex-focused (I have read genre fantasy novels with more sex), but there are several on-page sex scenes if that's something you care about one way or the other.
Followed by Astrid Parker Doesn't Fail.
Rating: 9 out of 10
James Bennett: Introducing DjangoVer
Version numbering is hard, and there are lots of popular schemes out there for how to do it. Today I want to talk about a system I’ve settled on for my own Django-related packages, and which I’m calling “DjangoVer”, because it ties the version number of a Django-related package to the latest Django version that package supports.
But one quick note to start with: this is not really “introducing” the idea of DjangoVer, because I know I’ve used the name a few times already in other places. I’m also not the person who invented this, and I don’t know for certain who did — I’ve seen several packages which appear to follow some form of DjangoVer and took inspiration from them in defining my own take on it.
Django’s version scheme: an overview

The basic idea of DjangoVer is that the version number of a Django-related package should tell you which version of Django you can use it with. Which probably doesn’t help much if you don’t know how Django releases are numbered, so let’s start there. In brief:
- Django issues a “feature release” — one which introduces new features — roughly once every eight months. The current feature release series of Django is 5.1.
- Django issues “bugfix releases” — which fix bugs in one or more feature releases — roughly once each month. As I write this, the latest bugfix release for the 5.1 feature release series is 5.1.3 (along with Django 5.0.9 for the 5.0 feature release series, and Django 4.2.16 for the 4.2 feature release series).
- The version number scheme is MAJOR.FEATURE.BUGFIX, where MAJOR, FEATURE, and BUGFIX are integers.
- The FEATURE component starts at 0, then increments to 1, then to 2, then MAJOR is incremented and FEATURE goes back to 0. BUGFIX starts at 0 with each new feature release, and increments for the bugfix releases for that feature release.
- Every feature release whose FEATURE component is 2 is a long-term support (“LTS”) release.
This has been in effect since Django 2.0 was released, and the feature releases have been: 2.0, 2.1, 2.2 (LTS); 3.0, 3.1, 3.2 (LTS); 4.0, 4.1, 4.2 (LTS); 5.0, 5.1. Django 5.2 (LTS) is expected in April 2025, and then eight months later (if nothing is changed) will come Django 6.0.
I’ll talk more about SemVer in a bit, but it’s worth being crystal clear that Django does not follow Semantic Versioning, and the MAJOR number is not a signal about API compatibility. Instead, API compatibility runs LTS-to-LTS, with a simple principle: if your code runs on a Django LTS release and raises no deprecation warnings, it will run unmodified on the next LTS release. So, for example, if you have an application that runs without deprecation warnings on Django 4.2 LTS, it will run unmodified on Django 5.2 LTS (though at that point it might begin raising new deprecation warnings, and you’d need to clear them before it would be safe to upgrade any further).
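You can read these components off a running install: django.VERSION is a tuple whose first three elements are exactly the MAJOR, FEATURE, and BUGFIX numbers described above. A small sketch (the LTS check simply encodes the "FEATURE component is 2" rule from the list above):

import django

# django.VERSION looks like (5, 1, 3, "final", 0); the first three
# elements are the MAJOR, FEATURE, and BUGFIX components.
major, feature, bugfix = django.VERSION[:3]
is_lts = feature == 2  # feature releases numbered x.2 are the LTS releases
print(f"Django {major}.{feature}.{bugfix} (LTS series: {is_lts})")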
DjangoVer, defined

In DjangoVer, a Django-related package has a version number of the form DJANGO_MAJOR.DJANGO_FEATURE.PACKAGE_VERSION, where DJANGO_MAJOR and DJANGO_FEATURE indicate the most recent feature release series of Django supported by the package, and PACKAGE_VERSION begins at zero and increments by one with each release of the package supporting that feature release of Django.
Since the version number only indicates the newest Django feature release supported, a package using DjangoVer should also use Python package classifiers to indicate the full range of its Django support (such as Framework :: Django :: 5.1 to indicate support for Django 5.1 — see examples on PyPI).
But while Django takes care to maintain compatibility from one LTS to the next, I do not think DjangoVer packages need to do that; they can use the simpler approach of issuing deprecation warnings for two releases, and then making the breaking change. One of the stated reasons for Django’s LTS-to-LTS compatibility policy is to help third-party packages have an easier time supporting Django releases that people are actually likely to use; otherwise, Django itself generally just follows the “deprecate for two releases, then remove it” pattern. No matter what compatibility policy is chosen, however, it should be documented clearly, since DjangoVer explicitly does not attempt to provide any information about API stability/compatibility in the version number.
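For what the "deprecate for two releases, then remove" pattern can look like in code, here's a generic sketch (the function names are hypothetical, not from any particular package):

import warnings

def new_helper():
    return "result"

def old_helper():
    """Deprecated: scheduled for removal two releases from now."""
    warnings.warn(
        "old_helper() is deprecated; use new_helper() instead.",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller's line
    )
    return new_helper()

old_helper()  # emits a DeprecationWarning, then still returns the result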
That definition is a bit wordy, so let’s try an example:
- If you started a new Django-related package today, you’d (hopefully) support the most recent Django feature release, which is 5.1. So the DjangoVer version of your package should be 5.1.0.
- As long as Django 5.1 is the newest Django feature release you support, you’d increment the third digit of the version number. As you add features or fix bugs you’d release 5.1.1, 5.1.2, etc.
- When Django 5.2 comes out next year, you’d (hopefully) add support for it. When you do, you’d set your package’s version number to 5.2.0. This would be followed by 5.2.1, 5.2.2, etc., and then eight months later by 6.0.0 to support Django 6.0.
- If version 5.1.0 of your package supports Django 5.1, 5.0, and 4.2 (the feature releases receiving upstream support from Django at the time of the 5.1 release), it should indicate that by including the Framework :: Django, Framework :: Django :: 4.2, Framework :: Django :: 5.0, and Framework :: Django :: 5.1 classifiers in its package metadata.
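Restating the scheme in code form, here's a minimal sketch of a hypothetical helper (not part of any published package) that splits a DjangoVer version string into the three components defined above:

from typing import NamedTuple

class DjangoVer(NamedTuple):
    django_major: int     # MAJOR of the newest supported Django feature release
    django_feature: int   # FEATURE of the newest supported Django feature release
    package_version: int  # the package's own release counter, starting at zero

def parse_djangover(version: str) -> DjangoVer:
    major, feature, package = (int(part) for part in version.split("."))
    return DjangoVer(major, feature, package)

# "5.2.1" reads as: second release of the package line supporting Django 5.2.
print(parse_djangover("5.2.1"))
# DjangoVer(django_major=5, django_feature=2, package_version=1)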
Some of you probably didn’t even read this far before rushing to instantly post the XKCD “Standards” comic as a reply. Thank you in advance for letting the rest of us know we don’t need to bother listening to or engaging with you. For everyone else: here’s why I think in this case adding yet another “standard” is actually a good idea.
The elephant in the room here is Semantic Versioning (“SemVer”). Others have written about some of the problems with SemVer, but I’ll add my own two cents here: “compatibility” is far too complex and nebulous a concept to be usefully encoded in a simple value like a version number. And if you want my really cynical take, the actual point of SemVer in practice is to protect developers of software from users, by providing endless loopholes and ways to say “sure, this change broke your code, but that doesn’t count as a breaking change”. It’ll turn out that the developer had a different interpretation of the documentation than you did, or that the API contract was “underspecified” and now has been “clarified”, or they’ll just throw their hands up, yell “Hyrum’s Law” and say they can’t possibly be expected to preserve that behavior.
A lot of this is rooted in the belief that changes, and especially breaking changes, are inherently bad and shameful, and that if you introduce them you’re a bad developer who should be ashamed. Which is, frankly, bullshit. Useful software almost always evolves and changes over time, and it’s unrealistic to expect it not to. I wrote about this a few years back in the context of the Python 2/3 transition:
Though there is one thing I think gets overlooked a lot: usually, the anti-Python-3 argument is presented as the desire of a particular company, or project, or person, to stand still and buck the trend of the world to be ever-changing.
But really they’re asking for the inverse of that. Rather than being a fixed point in a constantly-changing world, what they really seem to want is to be the only ones still moving in a world that has become static around them. If only the Python team would stop fiddling with the language! If only the maintainers of popular frameworks would stop evolving their APIs! Then we could finally stop worrying about our dependencies and get on with our real work! Of course, it’s logically impossible for each one of those entities to be the sole mover in a static world, but pointing that out doesn’t always go well.
But that’s a rant for another day and another full post all its own. For now it’s enough to just say I don’t believe SemVer can ever deliver on what it promises. So where does that leave us?
Well, if the version number can’t tell you whether it’s safe to upgrade from one version to another, perhaps it can still tell you something useful. And for me, when I’m evaluating a piece of third-party software for possible use, one of the most important things I want to know is: is someone actually maintaining this? There are lots of potential signals to look for, but some version schemes — like CalVer — can encode this into the version number. Want to know if the software’s maintained? With CalVer you can guess a package’s maintenance status, with pretty good accuracy, from a glance at the version number.
Over the course of this year I’ve been transitioning all my personal non-Django packages to CalVer for precisely this reason. Compatibility, again, is something I think can’t possibly be encoded into a version number, but “someone’s keeping an eye on this” can be. Even if I’m not adding features to something, Python itself does a new version every year and I’ll push a new release to explicitly mark compatibility (as I did recently for the release of Python 3.13). That’ll bump the version number and let anyone who takes a quick glance at it know I’m still there and paying attention to the package.
For packages meant to be used with Django, though, the version number can usefully encode another piece of information: not just “is someone maintaining this”, but “can I use this with my Django installation”. And that is what DjangoVer is about: telling you at a glance the maintenance and Django compatibility status of a package.
DjangoVer in practice

All of my own personal Django-related packages are now using DjangoVer, and say so in their documentation. If I start any new Django-related projects they’ll do the same thing.
A quick scroll through PyPI turns up other packages doing something that looks similar; django-cockroachdb and django-snowflake, for example, versioned their Django 5.1 packages as “5.1”, and explicitly say in their READMEs to install a package version corresponding to the Django version you use (they also have a maintainer in common, whom I suspect of having been an early inventor of what I’m now calling “DjangoVer”).
If you maintain a Django-related package, I’d encourage you to at least think about adopting some form of DjangoVer, too. I won’t say it’s the best, period, because something better could always come along, but in terms of information that can be usefully encoded into the version number, I think DjangoVer is the best option I’ve seen for Django-related packages.
Dries Buytaert: Installing Drupal CMS (or Drupal Starshot) using DDEV
DDEV is an Open Source development environment that makes it easy to set up Drupal on your computer. It handles all the complex configuration by providing pre-configured Docker containers for your web server, database, and other services.
On macOS, you can install DDEV using Homebrew:
$ brew install ddev

Next, clone the Drupal CMS Git repository:
$ git clone https://git.drupalcode.org/project/drupal_cms.git

This command fetches the latest version of Drupal CMS from the official repository and saves it in the drupal_cms directory.
Next, configure DDEV for your Drupal project:
$ cd drupal_cms
$ ddev config --docroot=web --project-type=drupal

The --docroot=web parameter tells DDEV where your Drupal files will live, while --project-type=drupal ensures DDEV understands the project type.
Next, let's start our engines:
$ ddev start

The first time you start DDEV, it will set up Docker containers for the web server and database. It will also use Composer to download the necessary Drupal files and dependencies.
The final step is configuring Drupal itself. This includes things like setting your site name, database credentials, etc. You can do this in one of two ways:
Option 1: Configure Drupal via the command line

$ ddev drush site:install

This method is the easiest and the fastest, as things like the database credentials are automatically set up. The downside is that, at the time of this writing, you can't choose which Recipes to enable during installation.
Option 2: Configure Drupal via the web installer

You can also use the web-based installer to configure Drupal, which allows you to enable individual Recipes. You'll need your site's URL and database credentials. Run this command to get both:

$ ddev describe

Navigate to your site and step through the installer.
Once everything is installed and configured, you can access your new Drupal CMS site. You can simply use:
$ ddev launch

This command opens your site's homepage in your default browser — no need to remember the specific URL that DDEV created for your local development site.
To build or manage a Drupal site, you'll need to log in. By default, Drupal creates a main administrator account. It's a good idea to update the username and password for this account. To do so, run the following command:
$ ddev drush uli

This command generates a one-time login link that takes you directly to the Drupal page where you can update your Drupal account's username and password.
That's it! Happy Drupal-ing!
Dries Buytaert: Join the Drupal Starshot team as a track lead
The Drupal Starshot initiative has been making significant progress behind the scenes, and I'm excited to share some updates with the community.
Leadership team formation and product definition

Over the past few months, we've been working diligently on Drupal Starshot. One of our first steps was to appoint a leadership team to guide the project. With the leadership team in place as well as the new Starshot Advisory Council, we shifted our focus to defining the product. We've made substantial progress on this front and will be sharing more details about the product strategy in the coming weeks.
Introducing Drupal Starshot tracks

We have started to break the initiative down into manageable components, and are introducing the concept of "tracks". Tracks are smaller, focused parts of the Drupal Starshot project that allow for targeted development and contributions. We've already published the first set of tracks in the Drupal Starshot issue queue on Drupal.org.
Example tracks include:
- Creating Drupal Recipes for features like contact forms, advanced search, events, SEO and more.
- Enhancing the Drupal installer to enable Recipes during installation.
- Updating Drupal.org for Starshot, including product marketing and a trial experience.
While many tracks are technical and need help from developers, most of the tracks need contribution from designers, UX experts, marketers, testers and site builders.
Recruiting more track leadsSeveral tracks already have track leads and have made significant progress:
- Matt Glaman is spearheading the development of a trial experience.
- The marketing team, led by Suzanne Dergacheva, is crafting product marketing documentation.
- Martin Anderson-Clutz has been appointed as the track lead for event management.
However, we need many additional track leads to drive our remaining tracks to completion.
We're now accepting applications for track lead positions. Interested individuals and organizations can apply by completing our application form. The application window closes on July 31st, two weeks from today.
Key responsibilities of a track lead
Track leads can be individuals, teams, or organizations, including Drupal Certified Partners. While technical expertise is beneficial, the role primarily focuses on strategic coordination and project management. Key responsibilities include:
- Defining and validating requirements to ensure the track meets the expectations of our target audience.
- Developing and maintaining a prioritized task list, including creating milestones and timelines.
- Overseeing and driving the track's implementation.
- Collaborating with key stakeholders, including the Drupal Starshot leadership team, module maintainers, the marketing team, etc.
- Communicating progress to the community (e.g. blogging).
After the application deadline, the Drupal Starshot Leadership Team will review the applications and appoint track leads. We expect to announce the selected track leads in the first week of August.
While the application period is open, we will be available to answer any questions you may have. Feel free to reach out to us through the Drupal.org issue queue, or join us in an upcoming Zoom meeting (details to be announced).
Looking ahead to DrupalCon Barcelona
Our goal is to make significant progress on these tracks by DrupalCon Barcelona, where we plan to showcase the advancements we've made. We're excited about the momentum building around Drupal Starshot and can't wait to see the contributions from the community.
If you're passionate about Drupal and want to play a key role in shaping its future, consider applying for a track lead position.
Stay tuned for more updates on Drupal Starshot, and thank you for your continued support of the Drupal community.
Dries Buytaert: Announcing the Drupal Starshot Advisory Council
I'm excited to announce the formation of the Drupal Starshot Advisory Council. When I announced Starshot's Leadership Team, I explained that we are innovating on the leadership model by adding a team of advisors. This council will provide strategic input and feedback to help ensure Drupal Starshot meets the needs of key stakeholders and end-users.
The Drupal Starshot initiative represents an ambitious effort to expand Drupal's reach and impact. To guide this effort, we've established a diverse Advisory Council that includes members of the Drupal Starshot project team, Drupal Association staff and Board of Directors, representatives from Drupal Certified Partners, Drupal Core Committers, and last but not least, individuals representing the target end-users for Drupal Starshot. This ensures a wide range of perspectives and expertise to inform the project's direction and decision-making.
The initial members include:
- Imre Gmelig Meijling, React Online / Drupal Association Board - Fundraising Committee
- Suzanne Dergacheva, Evolving Web / Drupal Association Board - Marketing Committee
- Mike Herchel, Agileana / Drupal Association Board - Innovation Committee
- Tim Doyle, CEO at the Drupal Association
- Tim Lehnen, CTO at the Drupal Association
- Kristen Pol, Salsa Digital - Drupal Certified Partner representative
- Chris Yates, Pantheon - Drupal Certified Partner representative
- Luis Ribeiro, CI&T - Drupal Certified Partner representative
- Kathryn Carruthers, alakasam - End User representative
- Emma Horrell, University of Edinburgh - End User representative
- Nathaniel Catchpole (catch), Third and Grove / Tag1 Consulting - Drupal Core Committer
- Théodore Biadala (nod_), Très Bien Tech - Drupal Core Committer
The council has been meeting monthly to receive updates from me and the Drupal Starshot Leadership Team. Members will provide feedback on project initiatives, offer recommendations, and share insights based on their diverse experiences and areas of expertise.
In addition to guiding the strategic direction of Drupal Starshot, the Advisory Council will play a vital role in communication and alignment between the Drupal Starshot team, the Drupal Association, Drupal Core, and the broader Drupal community.
I'm excited to be working with this accomplished group to make the Drupal Starshot vision a reality. Together we can expand the reach and impact of Drupal, and continue advancing our mission to make the web a better place.
Armin Ronacher: Playground Wisdom: Threads Beat Async/Await
It's been a few years since I wrote about my challenges with async/await-based systems and how they just seem not to support back pressure well. A few years later, I do not think that this problem has subsided much, but my thinking and understanding have perhaps evolved a bit. I'm now convinced that async/await is, in fact, a bad abstraction for most languages, and we should be aiming for something better instead, and that something I believe to be threads.
In this post, I'm also going to rehash many arguments from very clever people that came before me. Nothing here is new, I just hope to bring it to a new group of readers. In particular, you should really consider these three highly influential pieces:
- Bob Nystrom's What Color is Your Function post, which makes a very strong case that having two types of functions, which are only compatible in one direction, causes problems.
- Ron Pressler's Please stop polluting our imperative languages with pure concepts which I think is probably the single most important talk on that topic.
- Nathaniel J. Smith's Notes on structured concurrency, or: Go statement considered harmful which does a really good job laying out the motivation for structured concurrency.
As programmers, we are so used to how things work that we make some implicit assumptions that really cloud our ability to think freely. Let me present you with a piece of code that demonstrates this:
def move_mouse():
    while mouse.x < 200:
        mouse.x += 5
        sleep(10)

def move_cat():
    while cat.x < 200:
        cat.x += 10
        sleep(10)

move_mouse()
move_cat()

Read that code and then answer this question: do the mouse and cat move at the same time, or one after another? I guarantee you that 10 out of 10 programmers will correctly state that they move one after another. It makes sense because we know Python and the concept of threads, scheduling and whatnot. But if you speak to a group of children familiar with Scratch, they are likely to conclude that mouse and cat move simultaneously.
The reason is that if you are exposed to programming via Scratch you are exposed to a primitive form of actor programming. The cat and the mouse are both actors. In fact, the UI makes this pretty damn clear, just that the actors are called “sprites”. You attach logic to a sprite on the screen and all these pieces of logic run at the same time. Mind-blowing. You can even send messages from sprite to sprite.
The reason I want you to think about this for a moment is that I think this is rather profound. Scratch is a very, very simple system and it's intended to teach programming to young kids. Yet the model it promotes is an actor system! If you were to foray into programming via a traditional book on Python, C# or some other language, it's quite likely that you will only learn about threads at the very end. Not just that, it will likely make it sound really complex and scary. Worse, you will probably only learn about actor patterns in some advanced book that will bombard you with all the complexities of large scale applications.
There is something else though you should keep in mind: Scratch will not talk about threads, it will not talk about monads, it will not talk about async/await, it will not talk about schedulers. As far as you are concerned as a programmer, it's an imperative (though colorful and visual) language with some basic “syntax” support for message passing. Concurrency comes natural. A child can program it. It's not something to be afraid of.
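To make the contrast concrete, here is a minimal Python sketch of the Scratch intuition expressed with threads; the Sprite class and the coordinates are made up for illustration, and both sprites move at the same time:

import threading
import time

class Sprite:
    def __init__(self):
        self.x = 0

mouse, cat = Sprite(), Sprite()

def move_mouse():
    while mouse.x < 200:
        mouse.x += 5
        time.sleep(0.01)

def move_cat():
    while cat.x < 200:
        cat.x += 10
        time.sleep(0.01)

# Spawn both, Scratch-style: the cat and the mouse now move concurrently.
threads = [threading.Thread(target=move_mouse), threading.Thread(target=move_cat)]
for t in threads:
    t.start()
for t in threads:
    t.join()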
Imperative Programming Is Not Inferior
The second thing I want you to take away is that imperative languages are not inferior to functional ones.
While probably most of us are using imperative programming languages to solve problems, I think we all have been exposed to the notion that it's inferior and not particularly pure. There is this world of functional programming, with monads and other things. This world has these nice things involving composition, logic, maths and fancy-looking theorems. If you program in that, you're almost transcending to a higher plane, looking down on the folks who are stitching together if statements and for loops, making side effects everywhere, and doing highly inappropriate things with IO.
Okay, maybe it's not quite as bad, but I don't think I'm completely wrong with those vibes. And look, I get it. I feel happy chaining together lambdas in Rust and JavaScript. But we should also be aware that these constructs are, in many languages, bolted on. Go, for instance, gets away without most of this, and that does not make it an inferior language!
So what you should keep in mind here is that there are different paradigms, and mentally you should try to stop thinking for a moment that functional programming has all its stuff figured out, and imperative programming does not.
Instead, I want to talk about how functional languages and imperative languages are dealing with “waiting”.
The first thing I want to come back to is the example from above. Both of the functions (for the cat and the mouse) can be seen as separate threads of execution. When the code calls sleep(10) there's clearly an expectation by the programmer that the computer will temporarily pause the execution and continue later. I don't want to bore you with monads, so as my "functional" programming language, I will use JavaScript and promises. I think that's an abstraction that most readers will be sufficiently familiar with:
function moveMouseBlocking() {
  while (mouse.x < 200) {
    mouse.x += 5;
    sleep(10);  // a blocking sleep
  }
}

function moveMouseAsync() {
  return new Promise((resolve) => {
    function iterate() {
      if (mouse.x < 200) {
        mouse.x += 5;
        sleep(10).then(iterate);  // non blocking sleep
      } else {
        resolve();
      }
    }
    iterate();
  });
}

You can immediately see a challenge here: it's very hard to translate the blocking example into a non-blocking one, because all of a sudden we need to find a way to express our loop (or really any control flow). We need to manually decompose it into a form of recursive function calling, and we need the help of a scheduler and executor here to do the waiting.
This style obviously eventually became annoying enough to deal with that async/await was introduced to mostly restore the sanity of the old code. So it now can look more like this:
async function moveMouseAsync() {
  while (mouse.x < 200) {
    mouse.x += 5;
    await sleep(10);
  }
}

Behind the scenes though, nothing has really changed, and in particular, when you call that function, you just get an object that encompasses the "composition of the computation". That object is a promise which will eventually hold the resulting value. In fact, in some languages like C#, the compiler will really just transpile this into chained function calls. With the promise in hand, you can await the result, or register a callback with then which gets invoked if this thing ever runs to completion.
For a programmer, I think async/await is clearly understood as some sort of neat abstraction — an abstraction over promises and callbacks. However, strictly speaking, it's just worse than where we started out, because in terms of expressiveness we have lost an important affordance: we cannot freely suspend.
In the original blocking code, when we invoked sleep we suspended for 10 milliseconds implicitly; we cannot do the same with the async call. Here we have to "await" the sleep operation. This is the crucial aspect of why we end up with these "colored functions". Only an async function can call another async function, as you cannot await in a sync function.
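A minimal Python sketch of this coloring problem (the function names are mine): the sync-colored function cannot await, so its only way into async code is to stand up an entire event loop.

import asyncio

async def fetch_value():
    # "async-colored": may await other async functions
    await asyncio.sleep(0.1)
    return 42

def sync_caller():
    # "sync-colored": writing `await fetch_value()` here is a SyntaxError,
    # so the only escape hatch is to run a whole event loop.
    return asyncio.run(fetch_value())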
Halting Problems
The above example shows another problem that async/await causes: what if we never resolve? A normal function call eventually returns, the stack unwinds, and we're ready to receive the result. In an async world, someone has to call resolve at the very end. What if that is never called? Now in theory, that does not seem all that different from someone calling sleep() with a large number to suspend for a very long time, or waiting on a pipe that never gets data sent into it. But it is different! In one case, we keep the call stack and everything that relates to it alive; in another case, we just have a promise and are waiting for independent garbage collection with everything already unwound.
Contract-wise, there is absolutely nothing that says one has to call resolve. As we know from theory, the halting problem is undecidable, so it is actually impossible to know whether someone will call resolve or not.
That sounds pedantic, but it's very important, because promises/futures and async/await are making something strictly worse than not having them. Let's consider a JavaScript promise to be the most canonical example of what this looks like. A promise is created by an anonymous function that is invoked and expected to eventually call resolve. Take this example:
let neverSettle = new Promise((resolve) => {
  // this function ends, but we never called resolve
});

Let me clarify first that this is not a JavaScript-specific problem, but it's nice to show it this way. This is a completely legal thing! It's a promise that never resolves. That is not a bug! The anonymous function in the promise itself will return, the stack will unwind, and we are left with a "pending" promise that will eventually get garbage collected. That is a bit of a problem because, since it will never resolve, you can also never await it.
Think of the following example, which demonstrates this problem a bit. In practice you might want to limit how many things can run at once, so let's imagine a system that can handle up to 10 things running concurrently. We might use a semaphore that gives out 10 tokens so that up to 10 things can run at once; beyond that, it applies back pressure. So the code looks like this:
const semaphore = new Semaphore(10);

async function execute(f) {
  let token = await semaphore.acquire();
  try {
    await f();
  } finally {
    await semaphore.release(token);
  }
}

But now we have a problem. What if the function passed to execute returns neverSettle? Well, clearly we will never release the semaphore token. This is strictly worse compared to blocking functions! The closest equivalent would be a stupid function that calls a very long-running sleep. But it's different! In one case, we keep the call stack and everything that relates to it alive; in the other case, we just have a promise that will eventually get garbage collected, and we will never see it again. In the promise case, we have effectively decided that the stack is not useful.
There are ways to fix this, like making promise finalization available so we can get informed if a promise gets garbage collected etc. However I want to point out that as per contract, what this promise is doing is completely acceptable and we have just caused a new problem, one that we did not have before.
And if you think Python does not have that problem, it does too. Just await Future() and you will be waiting until the heat death of the universe (or really when you shut down your interpreter).
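The same trap is easy to reproduce with asyncio directly; here is a minimal sketch mirroring the execute example above (the names are mine):

import asyncio

semaphore = asyncio.Semaphore(10)

async def execute(f):
    async with semaphore:   # token acquired here...
        await f()           # ...and held forever if f never settles

async def never_settle():
    await asyncio.Future()  # a future that nobody will ever resolve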
The promise that sits there unresolved has no call stack. But that problem also comes back in other ways, even if you use async/await correctly. The flow of decomposed functions calling functions via the scheduler means that you now need extra affordances to stitch these async calls together into full call stacks. This all creates extra problems that did not exist before. Call stacks are really, really important. They help with debugging and are also crucial for profiling.
Blocking is an Abstraction
Okay, so we know there is at least some challenge with the promise model. What other abstractions are there? I will make the argument that a function being able to "suspend" a thread of execution is a bloody great capability and abstraction. Think of it for a moment: no matter where I am, I can say I need to wait for something and continue later where I left off. This is particularly crucial for applying back pressure if you decide you need it later. The biggest footgun in Python asyncio remains that write is non-blocking. That function will stay problematic forever, and you need to follow up with await s.drain() to avoid buffer bloat.
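For illustration, here is roughly what that looks like with asyncio streams: the write itself only buffers, and only the drain call can suspend and exert back pressure.

import asyncio

async def send_payload(host: str, payload: bytes):
    reader, writer = await asyncio.open_connection(host, 80)
    writer.write(payload)   # never blocks: the bytes are merely buffered
    await writer.drain()    # the actual back-pressure point
    writer.close()
    await writer.wait_closed()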
In particular it's an important abstraction because, in the real world, we are constantly faced with things in fact not being async all the time, and some of the things we think might not block will in fact block. Just like Python did not think that write should be able to block when it was designed. I want to give you a colorful example of this. Why is the following code blocking, and what part of it blocks?
def decode_object(idx):
    header = indexes[idx]
    object_buf = buffer[header.start:header.start + header.size]
    return brotli.decompress(object_buf)

It's a bit of a trick question, but not really. The reason it's blocking is that memory access can be blocking! You might not think of it this way, but there are many reasons why just touching a memory region can take time. The most obvious one is memory-mapped files. If you're touching a page that hasn't been loaded yet, the operating system will have to shovel it into memory before returning back to you. There is no "await touching this memory" expression, because if there were, we would have to await everywhere. That might sound petty but blocking memory reads were at the source of a series of incidents at Sentry [1].
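A minimal sketch of that incident class, assuming a hypothetical multi-gigabyte file: a plain subscript on a memory-mapped buffer can page-fault and block the whole OS thread, with no await anywhere in sight.

import mmap

with open("debug-info.dwarf", "rb") as f:  # hypothetical large file
    buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # An innocent-looking memory read: if the page is not resident,
    # the kernel blocks this thread until the page has been loaded.
    first_byte = buf[0]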
The trade-off that async/await makes today is the idea that not everything needs to block or suspend. The reality, however, has shown me that many more things really want to suspend, and if a random memory access is a case for suspending, then is the abstraction worth anything?
So maybe allowing any function call to block and suspend really was the right abstraction to begin with.
But then we need to talk about spawning threads next, because a single thread is not worth much. The one affordance that an async/await system gives you that you don't have otherwise is actually telling two things to run concurrently. You get that by starting the async operation and deferring the awaiting to later. This is where I will have to concede that async/await has something going for it. It moves the reality of concurrent execution right into the language. The reason concurrency comes so naturally to a Scratch programmer is that it's right there, and async/await serves a very similar purpose here.
In a traditional imperative language based on threads, the act of spawning a thread is usually hidden behind an (often convoluted) standard library function. More annoyingly, threads very much feel bolted on and completely inadequate for even the most basic of operations. Because not only do we want to spawn threads, we want to join on them, we want to send values across thread boundaries (including errors!). We want to wait for either a task to be done, or a keyboard input, messages being passed, etc.
Classic Threading
So let's focus on threads for a second. As said before, what we are looking for is the ability for any function to yield / suspend. That's what threads allow us to do!
When I am talking about "threads" here, I'm not necessarily referring to a specific kind of implementation of threads. Think of the example of promises from above for a moment: we had the concept of "sleeping", but we did not really say how that is implemented. There is clearly some underlying scheduler that can enable that, but how that takes place is outside the scope of the language. Threads can be like that. They could be real OS threads, or they could be virtual and implemented with fibers or coroutines. At the end of the day, we don't necessarily have to care about it as developers if the language gets it right.
The reason this matters is that when I talk about "suspending" or "continuing somewhere else", immediately the thought of coroutines and fibers comes to mind. That's because many languages that support them give you those capabilities. But it's good to step back for a second and just think about the general affordances that we want, and not how they are implemented.
We need a way to say: run this concurrently, but don't wait for it to return; we want to wait later (or never!). Basically, the equivalent in some languages of calling an async function but not awaiting it. In other words: to schedule a function call. And that is, in essence, just what spawning a thread is. If we think about Scratch: one of the reasons concurrency comes naturally there is that it's really well integrated and a core affordance of the language. There is a real programming language that works very much the same way: Go, with its goroutines. There is syntax for it!
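In Python terms, the affordance is just this (a minimal sketch): spawning is scheduling a function call, and joining is the deferred "await".

import threading

def work(n):
    print(f"working on {n}")

t = threading.Thread(target=work, args=(1,))
t.start()   # run this concurrently; don't wait for it yet
# ... do other things ...
t.join()    # wait later (or never, if you drop the handle)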
So now we can spawn, and that thing runs. But now we have more problems to solve: synchronization, waiting, message passing and all that jazz are not solved. Even Scratch has answers to that! So clearly there is something else missing to make this work. And what even does that spawn call return?
A Detour: What is Async Even
There is an irony in async/await, and that irony is that it exists in multiple languages, looks completely the same on the surface, yet works completely differently under the hood. Not only that, the origin stories of async/await in different languages are not even the same.
I mentioned earlier that code that can arbitrarily block is an abstraction of sorts. For many applications, that abstraction really only makes sense if the CPU time spent while you're blocking can be used in other useful ways. On the one hand because the computer would be pretty bored if it was only doing things in sequence, on the other hand because we might need things to run in parallel. At times, as programmers, we need two things to make progress simultaneously before we can continue. Enter creating more threads. But if threads are so great, why all that talk about coroutines and promises that underpin so much of async/await in different languages?
I think this is the point where the story actually becomes confusing quickly. For instance JavaScript has entirely different challenges than Python, C# or Rust. Yet somehow all those languages ended up with a form of async/await.
Let's start with JavaScript. JavaScript is a single-threaded language where a function scope cannot yield. There is no affordance in the language to do that, and threads do not exist. So before async/await, the best you could do was different forms of callback hell. The first iteration of improving that experience was adding promises; async/await only became sugar for that afterward. The reason JavaScript did not have much choice here is that promises were the only thing that could be accomplished without language changes, and async/await is something that can be implemented as a transpilation step. So really: there are no threads in JavaScript. But here is an interesting thing that happens: JavaScript on the language level has the concept of concurrency. If you call setTimeout, you tell the runtime to schedule a function to be called later. This is crucial! In particular it also means that a promise, once created, will be scheduled automatically. Even if you forget about it, it will run!
Python on the other hand had a completely different origin story. In the days before async/await, Python already had threads — real, operating-system-level threads. What it did not have, however, was the ability for multiple of those threads to run in parallel. The reason for this is obviously the GIL (Global Interpreter Lock). However, that "just" means things do not scale to more than one core, so let's ignore it for a second. Because it had threads, it also rather early on had people experiment with implementing virtual threads in Python. Back in the day (and to some extent today) the cost of an OS-level thread was pretty high, so virtual threads were seen as a fast way to spawn more of these concurrent things. There were two ways in which Python got virtual threads. One was the Stackless Python project, which was an alternative implementation of Python (really a set of patches for CPython) that implemented what's called a "stackless VM" (basically a VM that does not maintain a C stack). In short, what that enabled is implementing something that Stackless called "tasklets", which were functions that could be suspended and resumed. Stackless did not have a bright future because the stackless nature meant that you could not have interleaving Python -> C -> Python calls and suspend with them on the stack.
There was a second attempt in Python called "greenlet". The way greenlet worked was by implementing coroutines in a custom extension module. It is pretty gnarly in its implementation, but it does allow for cooperative multitasking. However, like Stackless, that did not win out. Instead, what actually happened is that the generator system that Python had had for years was gradually upgraded into a coroutine system with syntax support, and the async system was built on top of that.
One of the consequences of this is that it requires syntax support to suspend from a coroutine. This meant that you cannot implement a function like sleep that, when called, yields to a scheduler. You need to await it (or in earlier times you could use yield from). So we ended up with async/await because of how coroutines work in Python under the hood. The motivation for this was that it was seen as a positive thing that you know when something suspends.
One interesting consequence of the Python coroutine model is that, at least at the coroutine level, it can transcend OS-level threads. I could make a coroutine on one thread, ship it off to another, and continue it there. In practice, that does not work because once hooked up with the IO system, it cannot travel to another event loop on another thread any more. But you can already see that fundamentally it does something quite different to JavaScript. It can travel between threads, at least in theory; there are threads; there is syntax to yield. A coroutine in Python will also start out not running, unlike in JavaScript where it's effectively always scheduled. This is also in part because the scheduler in Python can be swapped out, and there are competing and incompatible implementations.
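You can see the "starts out not running" behavior directly; a minimal sketch:

import asyncio

async def work():
    return 42

async def main():
    coro = work()                     # nothing is scheduled yet: just a coroutine object
    task = asyncio.create_task(coro)  # only now does the event loop know about it
    print(await task)

asyncio.run(main())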
Lastly let's talk about C#. Here the origin story is once again entirely different. C# has real threads. Not only does it have real threads, it also has per-object locks and absolutely no problems with dealing with multiple threads running in parallel. But that does not mean that it does not have other issues. The reality is that threads alone are just not enough. You need to synchronize and talk between threads quite often, and sometimes you just need to wait. For instance, you need to wait for user input. You still want to do something while you're stuck there processing that input. So over time .NET introduced "tasks", which are an abstraction over async operations. They are part of the .NET threading system, and the way you interact with them is that you write your code in there and you can suspend from tasks with syntax. .NET will run the task on the current thread, and if you do some blocking, you stay blocked. This is, in that sense, quite different from JavaScript where, while no new "thread" is created, you pend the execution in the scheduler. The reason it works this way in .NET is that some of the motivation of this system was to allow UI-triggered code to access the main UI thread without blocking it. But the consequence again is that if you block for real, you just screwed something up. That is also why, at least at one point, what C# did was just to splice functions into chained closures whenever it hit an await. It just decomposes one logical piece of code into many separate functions.
I really don't want to go into Rust, but Rust's async system is probably the weirdest of them all because it's polling-based. In short: unless you actively "wait" for a task to complete, it will not make progress. So the purpose of a scheduler there is to make sure that a task actually can make progress. Why did Rust end up with async/await? Primarily because they wanted something that works without a runtime and a scheduler, and within the limitations of the borrow checker and memory model.
Of all those languages, I think the argument for async/await is the strongest for Rust and JavaScript. Rust because it's a systems language and they wanted a design that works with a limited runtime. JavaScript to me also makes sense because the language does not have real threads, so the only alternative to async/await is callbacks. But for C# the argument seems much weaker. Even the problem of having to force code to run on the UI thread could be solved by having a scheduling policy for virtual threads. The worst offender here in my mind is Python. async/await has ended up with a really complex system where the language now has coroutines and real threads, different synchronization primitives for each, and async tasks that end up being pinned to one OS thread. The language even has different futures in the standard library for threads and async tasks!
The reason I wanted you to understand all this is that all these different languages share the same syntax, yet what you can do with it is completely different. What they all have in common is that async functions can only be called by async functions (or the scheduler).
What Async Isn't
Over the years I have heard a lot of arguments about why, for instance, Python ended up with async/await, and some of the arguments presented don't hold up to scrutiny from my perspective. One argument that I have heard repeatedly is that if you control when you suspend, you don't need to deal with locking or synchronization. While there is some truth to that (you don't randomly suspend), you still end up having to lock. There is still concurrency, so you still need to protect all your stuff. In Python this is particularly frustrating because not only do you have colored functions, you also have colored locks. There are locks for threads and there are locks for async code, and they are different.
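A minimal sketch of the colored-locks situation: the two lock types exist side by side and are not interchangeable.

import asyncio
import threading

thread_lock = threading.Lock()  # for threads: acquiring blocks the OS thread
async_lock = asyncio.Lock()     # for tasks: acquiring must be awaited

async def task_section():
    async with async_lock:  # fine: the task suspends while waiting
        ...
    # Blocking on thread_lock here instead would stall the whole event loop.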
There is a very good reason why I showed the example above of the semaphore: semaphores are real in async programming. They are very often needed to protect a system from taking on too much work. In fact, one of the core challenges that many async/await-based programs suffer from is bloating buffers because there is an inability to exert back pressure (I once again point you to my post on that). Why can they not? Because unless an API is async, it is forced to buffer or fail. What it cannot do, is block.
Async also does not magically solve the issues with the GIL in Python. It does not magically make real threads appear in JavaScript, and it does not solve issues when random code starts blocking (and remember, even memory access can block) or when you very slowly calculate a large Fibonacci number.
Threads are the Answer, Not Coroutines
I already alluded to this above a few times, but when we think about being able to "suspend" from an arbitrary point in time, as programmers we often immediately think of coroutines. For good reasons: coroutines are amazing, they are fun, and every programming language should have them!
Coroutines are an important building block, and if any future language designer is looking at this post: you should put them in.
But coroutines should be very lightweight, and they can be abused in ways that make it very hard to follow what's going on. Lua, for instance, gives you coroutines, but it does not give you the necessary structure to do something with them easily. You will end up building your own scheduler, your own threading system, etc.
So what we really want is what we started out with: threads! Good old threads!
The irony in all of this is that the language that I think actually got this right is modern Java. Project Loom in Java has coroutines and all the bells and whistles under the hood, but what it exposes to the developer is good old threads. There are virtual threads, which are mounted on carrier OS threads, and these virtual threads can travel from thread to thread. If you end up issuing a blocking call on a virtual thread, it yields to the scheduler.
Now I happen to think that threads alone are not good enough! Threads require synchronization, they require communication primitives etc. Scratch has message passing! So there is more that needs to be built to make them work well.
I want to follow up with another blog post about what is needed to make threads easier to work with. Because what async/await clearly innovated on is bringing some of these core capabilities closer to the user of the language, and modern async/await code often looks easier to read than traditional code using threads.
Structured Concurrency and Channels
Lastly, I do want to say something nice about async/await and celebrate the innovations that it has brought about. I believe that this language feature singlehandedly drove some crucial innovation in concurrent programming by making it widely accessible. In particular, it moved many developers from a basic "single thread per request" model to breaking down tasks into smaller chunks, even in languages like Python. For me, the biggest innovation here goes to Trio, which introduced the concept of structured concurrency via its nursery. That concept has eventually found a home even in asyncio with the TaskGroup API, and is finding its way into Java.
I recommend reading Nathaniel J. Smith's Notes on structured concurrency, or: Go statement considered harmful for a much better introduction. However, if you are unfamiliar with it, here is my attempt at explaining it (a minimal code sketch follows the list):
- There is a clear start and end of work: every thread or task has a clear beginning and end, which makes it easier to follow what each thread is doing. All threads spawned in the context of a thread are known to that thread. Think of it like creating a small team to work on a task: they start together, finish together, and then report back.
- Threads don't outlive their parent: if for whatever reason the parent is done before the child threads, it automatically awaits them before returning.
- Errors propagate and cause cancellations: if something goes wrong in one thread, the error is passed back to the parent. But more importantly, it also automatically causes the other child threads to cancel. Cancellation is a core part of the system!
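Here is that sketch, using asyncio's TaskGroup (Python 3.11+): the children cannot outlive the async with block, and an error in one child cancels its siblings and propagates to the parent.

import asyncio

async def child(n):
    await asyncio.sleep(n / 10)
    return n

async def main():
    async with asyncio.TaskGroup() as tg:
        t1 = tg.create_task(child(1))
        t2 = tg.create_task(child(2))
    # Both children are guaranteed finished (or cancelled) here.
    print(t1.result(), t2.result())

asyncio.run(main())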
I believe that structured concurrency needs to become a thing in a threaded world. Threads must know their parents and children. Threads also need to find convenient ways to pass their success values back. Lastly, context should flow from thread to thread implicitly through context locals.
The second part is that async/await made it much more apparent that tasks / threads need to talk with each other. In particular the concept of channels and selecting on channels became more prevalent. This is an essential building block which I think can be further improved upon. As food for thought: if you have structured concurrency, in principle each thread's return value really can be represented as a buffered channel attached to the thread, holding up to a single value (successful return value or error) that you can select on.
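As a sketch of that food for thought (the spawn helper and its names are made up): a thread's outcome becomes a one-slot channel holding either a success value or an error.

import queue
import threading

def spawn(fn, *args):
    # Hypothetical helper: run fn in a thread; its result (or error)
    # lands in a one-slot channel attached to the thread.
    chan = queue.Queue(maxsize=1)
    def runner():
        try:
            chan.put(("ok", fn(*args)))
        except Exception as exc:
            chan.put(("err", exc))
    threading.Thread(target=runner).start()
    return chan

chan = spawn(lambda: 21 * 2)
status, value = chan.get()  # "select" on the thread's outcome
print(status, value)        # ok 42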
Today, although no language has perfected this model, thanks to many years of experimentation, the solution seems clearer than ever, with structured concurrency at its core.
Conclusion
I hope I was able to demonstrate to you that async/await has been a mixed bag. It brought some relief from callback hell, but it also saddled us with new issues like colored functions and new back-pressure challenges, and it introduced entirely new problems, such as promises that can just sit around forever without resolving. It has also taken away a lot of the utility that call stacks bring, in particular for debugging and profiling. These aren't minor hiccups; they're real obstacles that get in the way of the straightforward, intuitive concurrency we should be aiming for.
If we take a step back, it seems pretty clear to me that we have veered off course by adopting async/await in languages that have real threads. Innovations like Java's Project Loom feel like the right fit here. Virtual threads can yield when they need to, switch contexts when blocked, and even work with message-passing systems that make concurrency feel natural. If we free ourselves from the idea that the functional, promise-based system has figured out all the problems, we can look at threads properly again.
However, at the same time, async/await has moved concurrent programming to the forefront and has resulted in real innovation. Making concurrency a core feature of the language (via syntax even!) is a good thing. Maybe the increased adoption, and people struggling with it, was what made structured concurrency a real thing in the Python async/await world.
Future language design should rethink concurrency once more: instead of adopting async/await, new languages should model themselves more on Java's Project Loom, but with more user-friendly primitives. And like Scratch, they should give programmers really good APIs that make concurrency natural. I don't think actor frameworks are the right fit, but a combination of structured concurrency, channels, and syntax support for spawning/joining/selecting will go a long way. Watch this space for a future blog post about some things I found to work better than others.
[1] Sentry works with large debug information files such as PDB or DWARF. These files can be gigabytes in size, and we memory map terabytes of preprocessed files into memory during processing. That memory-mapped files can block is hardly a surprise, but what we learned in the process is that, thanks to containerization and memory limits, you can easily navigate yourself into a situation where you spend much more time on page faults than you expected and the system crawls to a halt.

Django Weblog: 2025 DSF Board Election Results
The 2025 DSF Board Election has closed, and the following candidates have been elected:
- Abigail Gbadago
- Jeff Triplett
- Paolo Melchiorre
- Tom Carrick
They will each serve a two-year term.
Directors elected for the 2024 DSF Board, Jacob, Sarah, and Thibaud, are continuing with one year left to serve on the board.
Therefore, the combined 2025 DSF Board of Directors are:
- Jacob Kaplan-Moss
- Sarah Abderemane
- Thibaud Colas
- Abigail Gbadago*
- Jeff Triplett*
- Paolo Melchiorre*
- Tom Carrick*

* Elected to a two (2) year term
Congratulations to our winners, and a huge thank you to our departing board members Çağıl Uluşahin Sonmez, Chaim Kirby, Kátia Yoshime Nakamura, and Katie McLaughlin.
Thank you again to everyone who nominated themselves. Even if you were not successful, you gave our community the chance to make their voices heard in who they wanted to represent them.
Go Deh: There's the easy way...
Best seen on a larger than landscape phone
Someone blogged about a particular problem:
From: https://theweeklychallenge.org/blog/perl-weekly-challenge-294/#TASK1
Given an unsorted array of integers, `ints`.
Write a script to return the length of the longest consecutive elements sequence. Return -1 if none found. *The algorithm must run in O(n) time.*
The solution they blogged used a sort which meant it could not be O(n) in time, but the problem looked good so I gave it some thought.
Sets! Set lookups are O(1) in Python, and sets are good for looking things up.
What if, when looking at the inputted numbers one at a time, you also looked for other ints in the input that would extend the int you have to form a longer range? Keep tabs on the longest range so far, and if you remove ints from the pool as they form ranges, when the pool is empty you should know the longest range.
I added the printout of the longest range too.
My code

def consec_seq(ints) -> tuple[int, int, int]:
    "Extract longest_seq_length, its_min, its_max"
    pool = set(ints)
    longest, longest_mn, longest_mx = 0, 1, 0
    while pool:
        this = start = pool.pop()
        ln = 1
        # check down
        while (this := (this - 1)) in pool:
            ln += 1
            pool.remove(this)
        mn = this + 1
        # check up
        this = start
        while (this := (this + 1)) in pool:
            ln += 1
            pool.remove(this)
        mx = this - 1
        # check longest
        if ln > longest:
            longest, longest_mn, longest_mx = ln, mn, mx

    return longest, longest_mn, longest_mx
def _test():
    for ints in [(),
                 (69,),
                 (-20, 78, 79, 1, 100),
                 (10, 4, 20, 1, 3, 2),
                 (0, 6, 1, 8, 5, 2, 4, 3, 0, 7),
                 (10, 30, 20),
                 (2, 4, 3, 1, 0, 10, 12, 11, 8, 9),      # two runs of five
                 (10, 12, 11, 8, 9, 2, 4, 3, 1, 0),      # two runs of five - reversed
                 (2, 4, 3, 1, 0, -1, 10, 12, 11, 8, 9),  # runs of 6 and 5
                 (2, 4, 3, 1, 0, 10, 12, 11, 8, 9, 7),   # runs of 5 and 6
                 ]:
        print(f"Input {ints = }")
        longest, longest_mn, longest_mx = consec_seq(ints)

        if longest < 2:
            print("  -1")
        else:
            print(f"  The/A longest sequence has {longest} elements {longest_mn}..{longest_mx}")

# %%
if __name__ == '__main__':
    _test()
Sample output

Input ints = ()
  -1
Input ints = (69,)
  -1
Input ints = (-20, 78, 79, 1, 100)
  The/A longest sequence has 2 elements 78..79
Input ints = (10, 4, 20, 1, 3, 2)
  The/A longest sequence has 4 elements 1..4
Input ints = (0, 6, 1, 8, 5, 2, 4, 3, 0, 7)
  The/A longest sequence has 9 elements 0..8
Input ints = (10, 30, 20)
  -1
Input ints = (2, 4, 3, 1, 0, 10, 12, 11, 8, 9)
  The/A longest sequence has 5 elements 0..4
Input ints = (10, 12, 11, 8, 9, 2, 4, 3, 1, 0)
  The/A longest sequence has 5 elements 0..4
Input ints = (2, 4, 3, 1, 0, -1, 10, 12, 11, 8, 9)
  The/A longest sequence has 6 elements -1..4
Input ints = (2, 4, 3, 1, 0, 10, 12, 11, 8, 9, 7)
  The/A longest sequence has 6 elements 7..12

Another Algorithm
What if you kept and extended ranges until you had amassed all ranges, then chose the longest? I need to keep the hash lookup; dict key lookup should also be O(1). What to look up? Look up ints that would extend a range!
If you have an existing (integer) range, say 1..3 inclusive of end points, then finding 0 would extend the range to 0..3, and finding one more than the range maximum, 4, would extend the original range to 1..4.
So if you have ranges, then they could be extended by finding rangemin - 1 or rangemax + 1. I call them extends.
If you do find that the next int from the input ints is also an extends value, then you need to find the range that it extends (by lookup) so you can modify that range. Use a dict to map extends to their range; checking if an int is in the extends dict keys should also take O(1) time.
I took that sketch of an algorithm and started to code. It took two evenings to finally get something that worked, and I had to work out several details that were trying. The main problem was: what about coalescing ranges? If you have ranges 1..2 and 4..5, what happens when you see a 3? The result is the single range 1..5. It took particular test cases and extensive debugging to work out that the extends2range mapping should map to potentially more than one range, and that you need to combine ranges if two of them are present for any extend value being hit.
So for 1..2 the extends being looked for are 0 and 3. For 4..5 the extends being looked for are 3, again, and 6. The extends2ranges data structure for just this should look like:
{0: [[1, 2]], 3: [[1, 2], [4, 5]], 6: [[4, 5]]}

The Code #2

from collections import defaultdict

def combine_ranges(min1, max1, min2, max2):
    "Combine two overlapping ranges; return the new range as [min, max] and a set of limits unused in the result"
    assert (min1 <= max1 and min2 <= max2            # Well formed
            and (min1 <= max2 and min2 <= max1))     # and ranges touch or overlap
    range_limits = set([min1, max1, min2, max2])
    new_mnmx = [min(range_limits), max(range_limits)]
    unused_limits = range_limits - set(new_mnmx)

    return new_mnmx, unused_limits
def consec_seq2(ints) -> tuple[int, int, int]:
    "Extract longest_seq_length, its_min, its_max"
    if not ints:
        return -1, 1, -1
    seen = set()                        # numbers seen so far
    extends2ranges = defaultdict(list)  # map extends to its ranges
    for this in ints:
        if this in seen:
            continue
        else:
            seen.add(this)

        if this not in extends2ranges:
            # Start new range
            mnmx = [this, this]  # Range of one int
            # add in the extend points
            extends2ranges[this + 1].append(mnmx)
            extends2ranges[this - 1].append(mnmx)
        else:
            # Extend an existing range
            ranges = extends2ranges[this]  # The range(s) that could be extended by this
            if len(ranges) == 2:
                # this joins the two ranges
                extend_and_join_ranges(extends2ranges, this, ranges)
            else:
                # extend one range, copied
                extend_and_join_ranges(extends2ranges, this,
                                       [ranges[0], ranges[0].copy()])

    all_ranges = sum(extends2ranges.values(), start=[])
    longest_mn, longest_mx = max(all_ranges, key=lambda mnmx: mnmx[1] - mnmx[0])

    return (longest_mx - longest_mn + 1), longest_mn, longest_mx
def extend_and_join_ranges(extends2ranges, this, ranges):
    mnmx, mnmx2 = ranges
    mnmx_orig, mnmx2_orig = mnmx.copy(), mnmx2.copy()  # keep copy of originals
    mn, mx = mnmx
    mn2, mx2 = mnmx2
    if this == mn - 1:
        mnmx[0] = mn = this    # Extend lower limit of the first range
    if this == mn2 - 1:
        mnmx2[0] = mn2 = this  # Extend lower limit of the second range
    if this == mx + 1:
        mnmx[1] = mx = this    # Extend upper limit of the first range
    if this == mx2 + 1:
        mnmx2[1] = mx2 = this  # Extend upper limit of the second range
    new_mnmx, _unused_limits = combine_ranges(mn, mx, mn2, mx2)

    remove_merged_from_extends(extends2ranges, this, mnmx, mnmx2)
    add_combined_range_to_extends(extends2ranges, new_mnmx)
def add_combined_range_to_extends(extends2ranges, new_mnmx):
    "Add in the combined range's extends"
    new_mn, new_mx = new_mnmx
    for extend in (new_mn - 1, new_mx + 1):
        r = extends2ranges[extend]  # ranges at new limit extension
        if new_mnmx not in r:
            r.append(new_mnmx)
def remove_merged_from_extends(extends2ranges, this, mnmx, mnmx2):
    "Remove original ranges that were merged from extends"
    for lohi in (mnmx, mnmx2):
        lo, hi = lohi
        for extend in (lo - 1, hi + 1):
            if extend in extends2ranges:
                r = extends2ranges[extend]
                for r_old in (mnmx, mnmx2):
                    if r_old in r:
                        r.remove(r_old)
                if not r:
                    del extends2ranges[extend]
    # remove joining extend, this
    del extends2ranges[this]
def _test():
    for ints in [(),
                 (69,),
                 (-20, 78, 79, 1, 100),
                 (4, 1, 3, 2),
                 (10, 4, 20, 1, 3, 2),
                 (0, 6, 1, 8, 5, 2, 4, 3, 0, 7),
                 (10, 30, 20),
                 (2, 4, 3, 1, 0, 10, 12, 11, 8, 9),      # two runs of five
                 (10, 12, 11, 8, 9, 2, 4, 3, 1, 0),      # two runs of five - reversed
                 (2, 4, 3, 1, 0, -1, 10, 12, 11, 8, 9),  # runs of 6 and 5
                 (2, 4, 3, 1, 0, 10, 12, 11, 8, 9, 7),   # runs of 5 and 6
                 ]:
        print(f"Input {ints = }")
        longest, longest_mn, longest_mx = consec_seq2(ints)

        if longest < 2:
            print("  -1")
        else:
            print(f"  The/A longest sequence has {longest} elements {longest_mn}..{longest_mx}")

# %%
if __name__ == '__main__':
    _test()
Input ints = ()
  -1
Input ints = (69,)
  -1
Input ints = (-20, 78, 79, 1, 100)
  The/A longest sequence has 2 elements 78..79
Input ints = (4, 1, 3, 2)
  The/A longest sequence has 4 elements 1..4
Input ints = (10, 4, 20, 1, 3, 2)
  The/A longest sequence has 4 elements 1..4
Input ints = (0, 6, 1, 8, 5, 2, 4, 3, 0, 7)
  The/A longest sequence has 9 elements 0..8
Input ints = (10, 30, 20)
  -1
Input ints = (2, 4, 3, 1, 0, 10, 12, 11, 8, 9)
  The/A longest sequence has 5 elements 0..4
Input ints = (10, 12, 11, 8, 9, 2, 4, 3, 1, 0)
  The/A longest sequence has 5 elements 8..12
Input ints = (2, 4, 3, 1, 0, -1, 10, 12, 11, 8, 9)
  The/A longest sequence has 6 elements -1..4
Input ints = (2, 4, 3, 1, 0, 10, 12, 11, 8, 9, 7)
  The/A longest sequence has 6 elements 7..12
This second algorithm gives correct results but was harder to develop and is harder to explain. It's a testament to my stubbornness, as I thought there was a solution there, and debugging it was me flexing my skills to keep them honed.
END.
Paolo Melchiorre: Thoughts on my election as a DSF board member
My thoughts on my election as a member of the Django Software Foundation (DSF) board of directors.