FLOSS Project Planets

Alberto García: More ways to install software in SteamOS: Distrobox and Nix

Planet Debian - Wed, 2024-06-05 11:53

In my previous post I talked about how to use systemd-sysext to add software to the Steam Deck without modifying the root filesystem. In this post I will give a brief overview of two additional methods.


distrobox is a tool that uses containers to create a mutable environment on top of your OS.

With distrobox you can open a terminal with your favorite Linux distro inside, with full access to the package manager and the ability to install additional software. Containers created by distrobox are integrated with the system so apps running inside have normal access to the user’s home directory and the Wayland/X11 session.

Since these containers are not stored in the root filesystem they can survive an OS update and continue to work fine. For this reason they are particularly suited to systems with an immutable root filesystem such as Silverblue, Endless OS or SteamOS.

Starting from SteamOS 3.5, the system comes with distrobox (and podman) preinstalled, and it can be used right out of the box without any prior setup.

For example, in order to create a Debian bookworm container simply open a terminal and run this:

$ distrobox create -i debian:bookworm debbox

Here debian:bookworm is the image that this container is created from (debian is the name and bookworm is the tag, see the list of supported tags here) and debbox is the name that is given to this new container.

Once the container is created you can enter it:

$ distrobox enter debbox

Or launch it from the ‘Debian’ entry in the desktop menu, under Lost & Found.

Once inside the container you can run your Debian commands normally:

$ sudo apt update
$ sudo apt install vim-gtk3

Nix

Nix is a package manager for Linux and other Unix-like systems. It has the property that it can be installed alongside the official package manager of any distribution, allowing the user to add software without affecting the rest of the system.

Nix installs everything under the /nix directory, and packages are made available to the user through a new entry in the PATH and a ~/.nix-profile symlink stored in the home directory.

Nix is more than a package manager: it is also the basis of the NixOS operating system, among other things. Explaining Nix in more detail is beyond the scope of this blog post, but for SteamOS users these are perhaps its most interesting properties:

  • Nix is self-contained: all packages and their dependencies are installed under /nix.
  • Unlike software installed with pacman, Nix survives OS updates.
  • Unlike podman / distrobox, Nix does not create any containers. All packages have normal access to the rest of the system, just like native SteamOS packages.
  • Nix has a very large collection of packages, here is a search engine: https://search.nixos.org/packages

The only thing that Nix needs from SteamOS is help to set up the /nix directory so its contents are not stored in the root filesystem. This is already happening starting from SteamOS 3.5 so you can install Nix right away in single-user mode:

$ sudo chown deck:deck /nix
$ wget https://nixos.org/nix/install
$ sh ./install --no-daemon

This installs Nix and adds a line to ~/.bash_profile to set up the necessary environment variables. After that you can log in again and start using it. Here’s a very simple example (refer to the official documentation for more details):

# Install and run Midnight Commander
$ nix-env -iA nixpkgs.mc
$ mc

# List installed packages
$ nix-env -q
mc-4.8.31
nix-2.21.1

# Uninstall Midnight Commander
$ nix-env -e mc-4.8.31

What we have seen so far is how to install Nix in single-user mode, which is the simplest one and probably good enough for a single-user machine like the Steam Deck. The Nix project however recommends a multi-user installation, see here for the reasons.

Unfortunately the official multi-user installer does not work out of the box on the Steam Deck yet, but if you want to go the multi-user way you can use the Determinate Systems installer: https://github.com/DeterminateSystems/nix-installer


Distrobox and Nix are useful tools and they give SteamOS users the ability to add additional software to the system without having to modify the base operating system.

While for graphical applications the recommended way to install third-party software is still Flatpak, Distrobox and Nix give the user additional flexibility and are particularly useful for installing command-line utilities and other system tools.

Categories: FLOSS Project Planets

KDE e.V. is looking for a contractor to coordinate the KDE Goals process

Planet KDE - Wed, 2024-06-05 11:00

KDE e.V., the non-profit organization supporting the KDE community, is looking for a proactive contractor to support and coordinate the KDE Goals process. The role involves providing project management, community engagement, event planning, and other needed services to help drive the success of KDE Goals. Please see the job ad for more details about this contracting opportunity.

We are looking forward to your application.

Categories: FLOSS Project Planets

The Drop Times: Drupal Pune Team Joins PHPCamp 2024 as Volunteers

Planet Drupal - Wed, 2024-06-05 10:50
The Drupal Pune team will volunteer at PHPCamp 2024 in Pune on June 8th. Don't miss the chance to participate in this collaborative event! Secure your tickets now.
Categories: FLOSS Project Planets

Real Python: Python String Formatting: Available Tools and Their Features

Planet Python - Wed, 2024-06-05 10:00

String formatting is the process of applying a proper format to a given value while using this value to create a new string through interpolation. Python has several tools for string interpolation that support many formatting features. In modern Python, you’ll use f-strings or the .format() method most of the time. However, you’ll see the modulo operator (%) being used in legacy code.

In this tutorial, you’ll learn how to:

  • Format strings using f-strings for eager interpolation
  • Use the .format() method to format your strings lazily
  • Work with the modulo operator (%) for string formatting
  • Decide which interpolation and formatting tool to use

To get the most out of this tutorial, you should be familiar with Python’s string data type and the available string interpolation tools. Having a basic knowledge of the string formatting mini-language is also a plus.

Get Your Code: Click here to download the free sample code you’ll use to learn about Python’s string formatting tools.

Take the Quiz: Test your knowledge with our interactive “Python String Formatting: Available Tools and Their Features” quiz. You’ll receive a score upon completion to help you track your learning progress:

Interactive Quiz

Python String Formatting: Available Tools and Their Features

Take this quiz to test your understanding of the available tools for string formatting in Python, as well as their strengths and weaknesses. These tools include f-strings, the .format() method, and the modulo operator.

Interpolating and Formatting Strings in Python

String interpolation involves generating strings by inserting other strings or objects into specific places in a base string or template. For example, here’s how you can do some string interpolation using an f-string:

>>> name = "Bob"
>>> f"Hello, {name}!"
'Hello, Bob!'

In this quick example, you first have a Python variable containing a string object, "Bob". Then, you create a new string using an f-string. In this string, you insert the content of your name variable using a replacement field. When you run this last line of code, Python builds a final string, 'Hello, Bob!'. The insertion of name into the f-string is an interpolation.

Note: To dive deeper into string interpolation, check out the String Interpolation in Python: Exploring Available Tools tutorial.

When you do string interpolation, you may need to format the interpolated values to produce a well-formatted final string. To do this, you can use different string interpolation tools that support string formatting. In Python, you have these three tools:

  1. F-strings
  2. The str.format() method
  3. The modulo operator (%)

The first two tools support the string formatting mini-language, a feature that allows you to fine-tune your strings. The third tool is a bit old and has fewer formatting options. However, you can use it to do some minimal formatting.
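For simple cases, all three tools can produce the same result. Here is a short, self-contained comparison; the value and template text are made up for illustration:

```python
value = 3.14159

# 1. F-string: the value and format spec are interpolated eagerly, inline.
f_result = f"Pi is approximately {value:.2f}"

# 2. str.format(): the template can be defined first and filled in later.
template = "Pi is approximately {:.2f}"
format_result = template.format(value)

# 3. Modulo operator: the older printf-style syntax with fewer options.
modulo_result = "Pi is approximately %.2f" % value

# All three tools yield the same string here.
print(f_result == format_result == modulo_result)  # True
print(f_result)  # Pi is approximately 3.14
```

The difference shows up once you need the richer features of the mini-language, which only the first two tools fully support.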

Note: The built-in format() function is yet another tool that supports the format specification mini-language. This function is typically used for date and number formatting, but you won’t cover it in this tutorial.

In the following sections, you’ll start by learning a bit about the string formatting mini-language. Then, you’ll dive into using this language, f-strings, and the .format() method to format your strings. Finally, you’ll learn about the formatting capabilities of the modulo operator.

Using F-Strings to Format Strings

Python 3.6 added a string interpolation and formatting tool called formatted string literals, or f-strings for short. As you’ve already learned, f-strings let you embed Python objects and expressions inside your strings. To create an f-string, you must prefix the string with an f or F and insert replacement fields in the string literal. Each replacement field must contain a variable, object, or expression:

>>> f"The number is {42}"
'The number is 42'
>>> a = 5
>>> b = 10
>>> f"{a} plus {b} is {a + b}"
'5 plus 10 is 15'

In the first example, you define an f-string that embeds the number 42 directly into the resulting string. In the second example, you insert two variables and an expression into the string.

Formatted string literals are a Python parser feature that converts f-strings into a series of string constants and expressions. These are then joined up to build the final string.
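You can approximate this behavior yourself by joining string constants with the formatted value of each embedded expression. This is a rough illustration, not the exact code the parser generates:

```python
a = 5
b = 10

# What you write:
fstring = f"{a} plus {b} is {a + b}"

# Roughly what the parser turns it into: string constants joined
# with the formatted value of each embedded expression.
manual = "" + format(a) + " plus " + format(b) + " is " + format(a + b)

print(fstring)            # 5 plus 10 is 15
print(fstring == manual)  # True
```

For integers without a format spec, format() and str() produce the same text, which is why the two strings match.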

Using the Formatting Mini-Language With F-Strings

When you use f-strings to create strings through interpolation, you need to use replacement fields. In f-strings, you can define a replacement field using curly brackets ({}) as in the examples below:

>>> debit = 300.00
>>> credit = 450.00
>>> f"Debit: ${debit}, Credit: ${credit}, Balance: ${credit - debit}"
'Debit: $300.0, Credit: $450.0, Balance: $150.0'

Inside the brackets, you can insert Python objects and expressions. In this example, you’d like the resulting string to display the currency values using a proper format. However, you get a string that shows the currency values with only one digit in their decimal part.
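A format specifier fixes this. Adding :.2f to each replacement field forces two decimal places. This is a sketch of the fix; the full article covers the mini-language in detail:

```python
debit = 300.00
credit = 450.00

# ":.2f" means fixed-point notation with exactly two decimal places.
summary = f"Debit: ${debit:.2f}, Credit: ${credit:.2f}, Balance: ${credit - debit:.2f}"

print(summary)  # Debit: $300.00, Credit: $450.00, Balance: $150.00
```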

Read the full article at https://realpython.com/python-string-formatting/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10: Example repository setup and Drupal 7 site audit

Planet Drupal - Wed, 2024-06-05 09:39

Series Overview & ToC | Previous Article | Next Article - coming June 13th

Now that we have covered how to prepare for a migration, let’s put that knowledge into practice. In this article we introduce the example project: a Drupal 7 site that we will be migrating to Drupal 10. After providing an overview of the project setup, we will perform an audit of the Drupal 7 site and draft a migration plan to Drupal 10.

### Example repository

The repository is available on GitHub. We will be using DDEV to set up local development environments for Drupal 7 and 10. Refer to DDEV’s official documentation for installation instructions. If you choose to use a different development environment, adjust the commands accordingly. To get the Drupal 7 site up and running, execute the following commands:

git clone https://github.com/tag1consulting/d7_to_d10_migration.git d7_to_d10_migration
cd d7_to_d10_migration/drupal7
ddev start
ddev import-db -f ../assets/drupal7_db.sql.gz
ddev import-files --source ../assets/drupal7_files.tar.gz
ddev restart
ddev launch
ddev drush uli

This will clone the repository into a folder named d7_to_d10_migration. Inside, you will find a drupal7 folder with the code for a Drupal 7 installation including contrib modules. The commands also import an already populated database and user uploaded...

Read more mauricio Wed, 06/05/2024 - 14:17
Categories: FLOSS Project Planets

enscript @ Savannah: GNU Enscript 1.7rc released

GNU Planet! - Wed, 2024-06-05 08:21

Version 1.7rc is available for download from:

  git clone https://git.savannah.gnu.org/git/enscript.git

We look forward to your feedback.

Categories: FLOSS Project Planets

Real Python: Quiz: Python String Formatting: Available Tools and Their Features

Planet Python - Wed, 2024-06-05 08:00

Test your understanding of Python’s tools for string formatting, including f-strings, the .format() method, and the modulo operator.

Take this quiz after reading our Python String Formatting: Available Tools and Their Features tutorial.

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

What should KDE focus on for the next 2 years? You can propose a goal!

Planet KDE - Wed, 2024-06-05 06:51

Every 2 to 3 years KDE selects 3 goals that the whole community can focus on for the coming years. For the past 2 years we have focused on improving the accessibility of our applications, worked to make our software more sustainable and automated and improved a lot of processes to make developing software in KDE smoother. To learn more about these goals check out the KDE Goals page. We will wrap up these goals at Akademy in Würzburg later this year.

It is now time to figure out what the next goals should be. We are starting this today by opening the floor for proposals. This means you (yes you!) can campaign for something you and others want to work on over the next 2 years and rally the KDE community behind it. To give you some inspiration you can have a look at the complete list of goals we’ve had in previous years.

How does it work?

You (and a small group of others) can submit a proposal by opening a ticket on this Phabricator board. Copy the template into a new ticket and explain your idea. The template gives you a few hints to help you create a meaningful proposal. This process is open until July 5th. (On July 5th we will start the refinement stage, where others can further help you improve the proposal.)

Some things to keep in mind
  • The process is explicitly open to proposals from people who are not yet KDE contributors.
  • The process is explicitly open to everyone, not just developers.
  • We expect the champions to lead the goal, not to be the only ones working on it. At the same time, they will need to put in significant work to rally people around their goal.
What is different compared to previous editions?

We are tweaking the process a bit this time. If you’re familiar with the process from previous years here are the most important changes:

  • We are moving from an individual champion to a small team of champions. Each goal should have someone who can carry the vision of the goal forward, someone who can technically steer it and someone to promote it. Other setups are possible where it makes sense for the particular goal but a goal needs a small team. Don’t have a team yet? That’s ok. Submit a proposal and say what you need. We will try to help find others to join.
  • We are focusing the champion role more on driving the goal forward through others and less by doing all the work themselves.
  • We will work with the goal champions on fundraising for specific projects that support their goal.
What’s the timeline?
  • Starting today until July 5th: Propose goals and find team
  • July 5th to August 15th: Refine proposals together with the community (identify issues, remove blockers, sharpen the proposal, …)
  • August 15th to 31st: Voting on the proposed goals by active KDE contributors
  • September 6/7th: Selected goals are announced at Akademy
Still got questions?

If you still have questions you can ask them in various places:

Categories: FLOSS Project Planets

The Drop Times: Esmeralda Braad-Tijhoff: The Woman Who Revived Her Life with Drupal

Planet Drupal - Wed, 2024-06-05 03:18
Esmeralda Braad-Tijhoff transformed her life through Drupal, rising from poverty to a leading role in the tech industry. As a Drupal Product Owner at DICTU and a Dutch Drupal Association board member, Esmeralda shares her experiences and insights in an interview with Alka Elizabeth of The DropTimes. They discuss Esmeralda's background, transition into IT, and efforts to foster collaboration within the Drupal community. The interview also covers her advocacy for open-source solutions in government projects and her vision for the future of tech inclusivity.
Categories: FLOSS Project Planets

joshics.in blog: Golf EMS

Planet Drupal - Wed, 2024-06-05 03:12
Golf EMS bhavinhjoshi Wed, 06/05/2024 - 12:42 Brochure site Community E-Commerce Entertainment Small business Technology

Golf EMS is an industry-leading online registration platform, designed specifically for the unique needs of golf facilities. The platform streamlines the process of managing golf events, offering a centralised solution for handling online registrations, payments, and event management efficiently.

Before Joshi Consultancy Services came onboard, Golf EMS was encountering several challenges that impeded its growth. Their existing system was insubstantial and underperforming, unable to support the increasing demands of their expanding user base.

Joshi Consultancy Services revamped Golf EMS's online registration platform. They replaced the outdated system with a robust, highly scalable Drupal solution. This new system significantly improved the platform's performance, allowing it to handle higher traffic and provide an enhanced user experience. As a result, Golf EMS gained the capacity to support even the most complex golf event registrations.

Moreover, by leveraging Drupal's flexibility and scalability, Joshi Consultancy Services was able to introduce new features that further streamlined Golf EMS's operations. This included advanced reporting tools, custom event registration forms, and secure payment gateways.

Joshi's mastery in Drupal was crucial in transforming Golf EMS's platform into a powerful, efficient registration tool for golf facilities. By optimising their system, Golf EMS was able to achieve their goals, boosting their market share and solidifying their position as a leader in the golf industry.

Under the guidance of Joshi Consultancy Services, Golf EMS significantly improved their operational efficiency and client satisfaction, achieving their objectives, and paving the way for their continual success in the competitive golf industry.

The Golf EMS project was a meticulous undertaking that called for a significant system overhaul. The goal for the project was to transform their existing platform into an advanced, robust system that could efficiently handle the demands of their user base and support their growth trajectory.

The main objectives for Golf EMS included:

1. A scalable platform to support a growing user base.
2. Enhanced productivity through streamlined operations.
3. A secure platform for online registrations and payments.
4. Improved functionality with features tailored for golf event management.

The key requirements for the project were:

1. Development of a scalable Drupal solution.
2. Integration of APIs for third-party services.
3. Implementation of customizable features for advanced reporting and detailed customer insights.
4. A secure payment gateway.
5. Compliance with industry regulations and standards.

The partnership with Joshi Consultancy Services yielded remarkable results. With a revamped platform, Golf EMS experienced considerable improvements in their operations. The new Drupal solution provided the scalability they needed to handle their growing needs without compromising on performance.

The integration of APIs and the addition of customised features such as advanced reporting tools, custom event registration forms, and secure payment gateways significantly enhanced the platform's functionality.

By successfully meeting the requirements and achieving their goals, Golf EMS has cemented their position as an industry leader, offering a sophisticated online golf event management system that is trusted and relied upon by golf facilities everywhere.

Why Drupal was chosen

When it came to developing the Golf EMS platform, Drupal was the chosen content management system due to several reasons reflecting its robustness, flexibility, and scalability.

Firstly, Drupal is highly customisable, allowing for the creation and management of content types, views, user roles, and more. This was essential for a dynamic platform like Golf EMS which needed numerous custom functionalities tailored to the specific requirements of golf event management.

Secondly, Drupal’s extensive API support was a significant factor. APIs facilitate the smooth integration of third-party systems essential for expanding the platform's capabilities. With it, Joshi Consultancy Services seamlessly integrated payment gateways, reporting tools and other services into Golf EMS's platform.

Furthermore, Drupal's scalability was a crucial consideration. As Golf EMS aimed to cater to a growing user base, they needed a system that would handle increased traffic and data without affecting the website's performance. Drupal's inherent ability to scale made it the perfect fit.

Lastly, the security provided by Drupal is industry-leading. Given the sensitive data involved with online registrations and payments, Drupal's strong focus on security was pivotal to ensure that user data remained secure and well-protected.

In summary, the choice to use Drupal was driven by its customizability, robust API support, scalability, and security. These features, coupled with Bhavin Joshi's deep expertise in Drupal, ensured that Golf EMS was built on a solid foundation that could support its aims and future growth.

Drupal 7.x
Categories: FLOSS Project Planets

PreviousNext: Filtering and sorting search results by date range with OpenSearch

Planet Drupal - Tue, 2024-06-04 20:53

How to make use of OpenSearch (and Elasticsearch) built-in data-types specifically designed for filtering and sorting date ranges.


by lee.rowlands / 5 June 2024

It's fairly common to have content with a date range. E.g., events or content that is only applicable for a given date.

Sorting and filtering results by date can be complicated, especially if you have to support features such as multiple date ranges and optional end dates.

But OpenSearch (and Elasticsearch) have data types specifically designed for this use case.

The key here is to index your data using a Range field type.

Let's consider an example we recently built for NSW.gov.au. The requirements for the field were as follows:

  • Index a grant content-type
  • The grant content-type has a date-range field
  • The field has optional end dates via the Optional End Date module
  • The field is multi value
  • There is also a boolean 'Grant is ongoing' field that flags the grant as 'Evergreen'

The filtering requirements include a 'Status' field, which has options for:

  • Open - the current date must fall inside one of the date-range values
  • Closed - the current date is after date-range values
  • Opening soon - the current date falls inside a future date range

Additionally, the search results should be sorted in the following order when no keywords exist:

  • Open
  • Opening soon
  • Closed
Defining a date range field in Drupal/Search API

The first step is to inform Search API about the date-range data type. How you do this will depend on whether you're using Elasticsearch connector or Search API OpenSearch.

If you're using Search API OpenSearch, the good news is that it is already supported in the 2.x branch.

If you're using Elasticsearch connector, you should implement hook_elasticsearch_connector_supported_data_types_alter and add date range as a data-type, something like this:

/**
 * Implements hook_elasticsearch_connector_supported_data_types_alter().
 */
function YOURMODULE_elasticsearch_connector_supported_data_types_alter(array &$data_types) {
  if (!in_array('date_range', $data_types)) {
    $data_types[] = 'date_range';
  }
}

If you're using Elasticsearch connector, you also need a processor plugin to put the field values into the index. You can take inspiration from how Search API OpenSearch achieves this.

Indexing Drupal data

Once you have those pieces in place, you simply need to configure your date field to use the Date Range data type in the Search API index.

Constructing the queries

With the data in your index, the only thing left is to construct your queries to make use of the date range data-type.

In these examples, we'll assume your date-range field is called 'application_dates', that the field is multi-value and that the end date is optional.

We will use 1710201396000 as the current timestamp (in epoch milliseconds).

Filtering to 'open'

This one is the simplest. You need a new term query that uses the current date:

{
  "from": 0,
  "size": 10,
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "application_dates": 1710201396000
          }
        }
      ]
    }
  }
}

For a date-range field, the use of a term query will match any document where the given value is between the start and end of the range.
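If you are constructing these queries from application code rather than writing raw JSON, the 'open' filter can be built as a plain dictionary. This is a hedged sketch in Python: the function name and parameters are our own, and the application_dates field name comes from the example above:

```python
def build_open_query(now_ms, size=10):
    """Build the 'open' filter: a term query against the date-range field.

    For a date-range field, a term query matches any document whose
    indexed range contains the given value.
    """
    return {
        "from": 0,
        "size": size,
        "query": {
            "bool": {
                "must": [
                    {"term": {"application_dates": now_ms}},
                ]
            }
        },
    }

query = build_open_query(1710201396000)
```

The resulting dictionary can then be serialized and sent as the request body to your search backend.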

Filtering to 'closed'

This one is a bit more involved. We need to combine some conditions:

{
  "from": 0,
  "size": 10,
  "query": {
    "bool": {
      "minimum_should_match": 1,
      "should": {
        "bool": {
          "minimum_should_match": 1,
          "must_not": [
            {
              "range": {
                "application_dates": {
                  "gte": 1710201396000
                }
              }
            },
            {
              "term": {
                "grant_is_ongoing": true
              }
            }
          ],
          "should": [
            {
              "range": {
                "application_dates": {
                  "relation": "WITHIN",
                  "lte": 1710201396000
                }
              }
            }
          ]
        }
      }
    }
  }
}

First, we have a should, which is an OR query in OpenSearch parlance. We're saying the grant must not have any future open dates (gte: 1710201396000) AND must not be flagged as ongoing. Then we're saying all of the application dates should be in the past (lte: 1710201396000).

Filtering to 'opening soon'

Here again, we need multiple conditions.

{
  "from": 0,
  "size": 10,
  "query": {
    "bool": {
      "must": {
        "range": {
          "application_dates": {
            "relation": "WITHIN",
            "gte": 1710201396000
          }
        }
      },
      "must_not": [
        {
          "term": {
            "application_dates": 1710201396000
          }
        },
        {
          "term": {
            "grant_is_ongoing": true
          }
        }
      ]
    }
  }
}

First, we are saying there must be at least one date where the date range is in the future (gte: 1710201396000) and that the grant must not be ongoing OR have any date ranges that match the current date (term: 1710201396000) (i.e. are open).


Sorting is a little more complex. We can make use of filters in our sort expressions.

To do this, we also need to index our date-range as an object so we can use nesting.

We duplicate the application_dates field to an application_dates_sort field and make this use the nested data-type. The mapping will look like this:

{
  "application_dates_sort": {
    "type": "nested",
    "properties": {
      "gte": {
        "type": "long"
      },
      "lte": {
        "type": "long"
      }
    }
  }
}

Then, we can make use of this in our sort expressions. E.g. to have open items appear first, we can use:

{
  "from": 0,
  "sort": [
    {
      "application_dates_sort.lte": {
        "order": "asc",
        "mode": "min",
        "nested": {
          "path": "application_dates_sort",
          "filter": {
            "bool": {
              "must": [
                {
                  "range": {
                    "application_dates_sort.lte": {
                      "gte": 1710201396000
                    }
                  }
                },
                {
                  "range": {
                    "application_dates_sort.gte": {
                      "lte": 1710201396000
                    }
                  }
                }
              ]
            }
          }
        },
        "missing": "_last"
      }
    }
  ],
  "size": 10
}

What we're doing here is sorting by the opening date, ascending, but using a filter to only match on records where the current date falls between the start/end date. We're telling OpenSearch to put any objects that don't match (missing) at the end.

We can stack sort expressions like this — subsequent expressions will apply to those that were 'missing' from the first sort. Combining these lets us achieve the desired sort order of opening -> opening soon -> closed.

Wrapping up

OpenSearch is pretty powerful. But to make the most of its features, you need to ensure your data is mapped into the right data-type. Using date-range fields for storing date-ranges takes a little bit of extra planning, but it leads to much more efficient queries and a better experience for your users.

Tagged OpenSearch
Categories: FLOSS Project Planets

Ruqola 2.2.0

Planet KDE - Tue, 2024-06-04 20:00

Ruqola 2.2.0 is a feature and bugfix release of the Rocket.chat app.


  • Allow to increase/decrease font (CTRL++/CTRL+-)
  • Add channel list style (Condensed/Medium/Extended)
  • Add forward message
  • Improve mentions support.
  • Add support for Deep Linking.
  • Implement block actions.
  • Implement personal token authentication. Bug 481400
  • Add Plasma Activities Support
  • Add Report User Support
  • Implement Channel Sound Notification.
  • Implement New Room Sound Notification.
  • Implement Sorted/Unsorted markdown list.

Some bug fixing:

  • Fix dark mode support.
  • Fix jitsi support.
  • Fix translate message in direct channel.
  • Don't show @here/@all as user.
  • Reduce memory footprint.
  • Use RESTAPI for logging.
  • Allow to send multi files.
  • Fix preview url.

URL: https://download.kde.org/stable/ruqola/
Source: ruqola-2.2.0.tar.xz
SHA256: 4091126316ab0cd2d4a131facd3cd8fc8c659f348103b852db8b6d1fd4f164e2
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell jr@jriddell.org

Categories: FLOSS Project Planets

Armin Ronacher: Your Node is Leaking Memory? setTimeout Could be the Reason

Planet Python - Tue, 2024-06-04 20:00

This is mostly an FYI for node developers. The issue being discussed in this post has caused us quite a bit of pain. It has to do with how node deals with timeouts. In short: you can very easily create memory leaks [1] with the setTimeout API in node. You're probably familiar with that API since it's one that browsers provide for many, many years. The API is pretty straightforward: you schedule a function to be called later, and you get a token back that can be used to clear the timeout later. In short:

const token = setTimeout(() => {}, 100);
clearTimeout(token);

In the browser the token returned is just a number. If you do the same thing in node however, the token ends up being an actual Timeout object:

> setTimeout(() => {})
Timeout {
  _idleTimeout: 1,
  _idlePrev: [TimersList],
  _idleNext: [TimersList],
  _idleStart: 4312,
  _onTimeout: [Function (anonymous)],
  _timerArgs: undefined,
  _repeat: null,
  _destroyed: false,
  [Symbol(refed)]: true,
  [Symbol(kHasPrimitive)]: false,
  [Symbol(asyncId)]: 78,
  [Symbol(triggerId)]: 6
}

This “leaks” some of the internals of how the timeout is implemented. For the last few years I think this has been just fine. Typically you used this object primarily as a token, similar to how you would use a number. It might look something like this:

class MyThing {
  constructor() {
    this.timeout = setTimeout(() => { ... }, INTERVAL);
  }
  clearTimeout() {
    clearTimeout(this.timeout);
  }
}

For the lifetime of MyThing, even after clearTimeout has been called or the timeout runs to completion, the object holds on to this timeout. On completion or cancellation, the timeout is marked as “destroyed” in node terms and removed from node's internal tracking. What happens, however, is that this Timeout object survives until someone overrides or deletes the this.timeout reference. That's because the actual Timeout object is held, not just some token. This further means that the garbage collector won't actually collect this thing at all, nor anything it references. This does not seem too bad on its own, as the Timeout is somewhat hefty, but not too hefty. The most problematic part is most likely the _onTimeout member, which might pull in a closure, but it's probably mostly okay in practice.

However, the timeout object can act as a container for more state, which is not quite as obvious. A new API that has been added over the last couple of years, AsyncLocalStorage, which is getting some traction, attaches additional state onto all timeouts that fire. Async local storage is implemented in a way that timeouts (and promises and similar constructs) carry forward hidden state until they run:

const { AsyncLocalStorage } = require('node:async_hooks');
const als = new AsyncLocalStorage();

let t;
als.run([...Array(10000)], () => {
  t = setTimeout(() => {
    // const theArray = als.getStore();
    // assert(theArray.length === 10000);
  }, 100);
});
console.log(t);

When you run this, you will notice that the Timeout holds a reference to this large array:

Timeout {
  _idleTimeout: 100,
  _idlePrev: [TimersList],
  _idleNext: [TimersList],
  _idleStart: 10,
  _onTimeout: [Function (anonymous)],
  _timerArgs: undefined,
  _repeat: null,
  _destroyed: false,
  [Symbol(refed)]: true,
  [Symbol(kHasPrimitive)]: false,
  [Symbol(asyncId)]: 2,
  [Symbol(triggerId)]: 1,
  [Symbol(kResourceStore)]: [Array]  // reference to that large array is held here
}

That's because every single async local storage that is created registers itself on the timeout with a custom Symbol(kResourceStore), which remains there even after a timeout has been cleared or has run to completion. This means that the more async local storage you use, the more “stuff” you hold on to if you don't clear out the timeouts.

The fix seems obvious: rather than holding on to timeouts, hold on to the underlying ID. That's because you can convert a Timeout into a primitive (with, for instance, the unary + operator). The primitive is just a number, like it would be in the browser, which can then also be used for clearing. Since a number holds no reference, this should resolve the issue:

class MyThing {
  constructor() {
    // the + operator forces the timeout to be converted into a number
    this.timeout = +setTimeout(() => { ... }, INTERVAL);
  }
  clearTimeout() {
    // clearTimeout and other functions can resolve numbers back into
    // the internal timeout object
    clearTimeout(this.timeout);
  }
}

Except it doesn't (today). In fact, doing this today will cause an unrecoverable memory leak because of a bug in node [2]. Once that is resolved, however, this should be a fine way to avoid the problem.

Workaround for the leak with a Monkey-Patch

Since the bug is only triggered when a timer manages to run to completion, you could in theory forcefully clear the timeout or interval on completion if node “allocated” a primitive ID for it like so:

const kHasPrimitive = Reflect
  .ownKeys(setInterval(() => {}))
  .find((x) => x.toString() === 'Symbol(kHasPrimitive)');

function invokeSafe(t, callable) {
  try {
    return callable();
  } finally {
    if (t[kHasPrimitive]) {
      clearTimeout(t);
    }
  }
}

const originalSetTimeout = global.setTimeout;
global.setTimeout = (callable, ...rest) => {
  const t = originalSetTimeout(() => invokeSafe(t, callable), ...rest);
  return t;
};

const originalSetInterval = global.setInterval;
global.setInterval = (callable, ...rest) => {
  const t = originalSetInterval(() => invokeSafe(t, callable), ...rest);
  return t;
};

This obviously makes a lot of assumptions about the internals of node, and it will slightly slow down every timer created via setTimeout and setInterval, but it might help you in the interim if you do run into that bug.

Until then the second best thing you can do for now is to just be very aggressive in deleting these tokens manually the moment you no longer need them:

class MyThing {
  constructor() {
    this.timeout = setTimeout(() => {
      this.timeout = null;
      ...
    }, INTERVAL);
  }
  clearTimeout() {
    if (this.timeout) {
      clearTimeout(this.timeout);
      this.timeout = null;
    }
  }
}

How problematic are timeouts? It's hard for me to say, but there are a lot of places where code holds on to timeouts and intervals in node for longer than is healthy. If you are trying to make things such as hot code reloading work, or you are working with long-lasting or recurring timeouts, it can be very easy to run into this problem. Due to how widespread these timeouts are, and the increased use of async local storage, I can only assume that this will become a more common issue people run into. It's also a bit devious because you might not even know that you use async local storage as a user.

We're not the first to run into issues like this. For instance, Next.js is trying to work around related issues by periodically patching setTimeout and setInterval to forcefully clear out intervals to avoid memory leakage in the dev server. (Which unfortunately sometimes runs into the node bug mentioned above due to its own use of toPrimitive.)

How widespread is async local storage? It depends a bit on what you do. For instance, we (and probably all players in the observability space, including the OpenTelemetry project itself) use it to track tracing information with the local flow of execution. Modern JavaScript frameworks are also sometimes quite active users of async local storage. In the particular case we were debugging earlier today, a total of 7 async local storages were attached to the timeouts we found in the heap dumps, some of which held on to massive React component trees.

Async local storage is great: I'm a huge proponent of it! If you have ever used Flask you will realize that Flask is built on a similar concept (thread locals, nowadays context vars) to give you access to the right request object. What however makes async local storage a bit scary is that it's very easy to hold on to memory accidentally. In node's case particularly easy with timeouts.

At the very least, for timeouts in node there might be a simple improvement: no longer exposing the internal Timeout object. Node could in theory return a lightweight proxy object that breaks the cycle after the timeout has been executed or cleared. How backwards compatible that could be made, I don't know, however.

For improving async local storage longer term I think the ecosystem might have to embrace the idea about shedding contextual state. It's incredibly easy to leak async local storage today if you spawn "background" operations that last. For instance today a console.log will on first use allocate an internal TTY resource which accidentally holds on to the calling async local storage of completely unrelated stuff. Whenever a thing such as console.log wants to create a long lasting resource until the end of the program, helper APIs could be provided that automatically prevent all async local storage from propagating. Today there is only a way to prevent a specific local storage from propagating by disabling it, but that requires knowing which ones exist.

[1] Under normal circumstances these memory leaks would not be permanent leaks. They would resolve themselves when you finally drop a reference to that token. However, due to a node bug it is currently possible for these leaks to be unrecoverable.
[2] How we found that bug might be worth a story for another day.
Categories: FLOSS Project Planets

First Beta for Krita 5.2.3 Released

Planet KDE - Tue, 2024-06-04 20:00

We are releasing the first Beta of Krita 5.2.3. This release primarily brings a complete overhaul of our build system, making it so that our CI system can now build for all 4 platforms (a Continuous Integration system basically builds a program after every change, runs some tests and based on that helps us track down mistakes we made when changing Krita's code).

Beyond the rework of the build system, this release also has numerous fixes, particularly with regards to animated transform masks, jpeg-xl support, shortcut handling on Windows and painting assistants.

In addition to the core team, special thanks goes out to Freya Lupen, Grum 999, Mathias Wein, Nabil Maghfur Usman, Alvin Wong, Deif Lou and Rasyuqa A. H. for various fixes, as well as to the first-time contributors in this release cycle (each mentioned after their contribution).

  • Fix docker ShowOnWelcomePage flag (Bug 476683)
  • windows: Exclude meson in release package
  • Fix integer division rounding when scaling image (Patch by David C. Page, Thanks!)
  • Fix Comic Manager export with cropping TypeError
  • WebP: properly export color profile on RGBA model (Bug 477219)
  • WebP: properly assign profile when importing image with non-RGBA model (Bug 478307)
  • Fix saving animated webp
  • Added missing layer conversion options to layer docker right-click menu (Patch by Ken Lo, thanks! MR 2039)
  • Fix Last Documents Docker TypeError
  • Replace the Backers tab with a Sponsors tab
  • Comic Project Management Tools: Fix crash in comic exporter due to incorrect API usage (Bug 478668). Also turn off metadata retrieval for now.
  • Properly check color patch index for validity (Bug 480188)
  • Fix build with libjxl 0.9.0 (Bug 478987, patch by Timo Gurr, thanks!)
  • Included Krita's MLT plugin in our source tree, as well as several fixes to it.
  • Possibly fix ODR violation in the transform tool strategies (Bug 480520)
  • Fix framerate issue when adding audio to the image (Bug 481388)
  • Fix artifacts when drawing on a paused animation (Bug 481244)
  • Fix issues generating reference image in KisMergeLabeledLayersCommand (Bug 471896, Bug 472700)
  • Simplify similar color selection tool and add spread option
  • Fix artifacts when moving a layer under a blur adjustment layer (Bug 473853)
  • Make sure that tool-related shortcuts have priority over canvas actions (Bug 474416)
  • Refactor KisLodCapableLayerOffset into a reusable structure
  • Fix current time loading from a .kra file with animation
  • Clone scalar keyframes when creating a new one in the GUI
  • Refactor transform masks not to discard keyframe channels on every change (Bug 475385)
  • Fix artifacts when flattening animated transform mask (Bug 475334)
  • Fix extra in-stroke undo step when using Transform and Move tools (Bug 444791)
  • Fix rounding in animated transform masks with translation (Bug 456731)
  • Fix canvas inputs breaking due to inconsistent key states (Bug 451424)
  • Make wave filter allowed in filter masks/layers (Bug 476033)
  • Fix an offset when opening files with animated transform masks (Bug 476317)
  • Fix transform mask offsets after running a transform tool on that (Bug 478966)
  • Fix creation of keyframes before the first frame for scalar channels (Bug 478448)
  • Support xsimd 12 (Bug 478986)
  • Fix new scalar frames to use interpolated values (Bug 479664)
  • Don't skip updates on colorize mask props change (Bug 475927)
  • Fix restoring layer's properties after new-layer-from-visible action (Bug 478225)
  • Fix transform masks being broken after duplication (Bug 475745)
  • Fix a crash when trying to open multiple files with one D&D (Bug 480331)
  • Fix cache being stuck when adding hold frames (Bug 481099)
  • Make PerspectiveAssistant respect snapToAny parameter
  • Fix logic in 2-Point Perspective Assistant
  • Remove unnecessary locking code from several assistants
  • Move a bunch of code repeated in derived classes to KisPaintingAssistant
  • Fix wg selector popup flickering with shortcut
  • Fix updating KoCanvasControllerWidget's canvas Y offset when resizing the canvas
  • Fix esthetics of the Welcome Page, remove banner.
  • Re-enable the workaround for plasma global menu (Bug 483170)
  • Improvements to support for Haiku OS.
  • Fix crash on KisMergeLabeledLayersCommand when using color labeled transform masks (Bug 480601)
  • PNG: prevent multiple color conversions (Bug 475737)
  • Fix crash when pasting font family in TextEditor (Bug 484066)
  • Fix grids glitches when subdivision style is solid (Bug 484889)
  • Add 1px padding to rects in KisPerspectiveTransformWorker to account for bilinear filtering (Bug 484677)
  • Fix activation of the node on opening .kra document (Bug 480718)
  • Various snapcraft fixes by Scarlet Moore
  • Fix sequence of imagepipe brushes when used in the line tool (Bug 436419)
  • Tilt is no longer ignored on devices that support tilt (Bug 485709, patch by George Gianacopoulos, thanks!)
  • Make Ogg render consistently check for Theora
  • Fix issue with appbundle builder using libs.xml from arm64-v8a (Bug 485707)
  • Don't reset redo when doing color picking (Bug 485910)
  • Show an error when trying to import version 3 XCF files (Bug 485420)
  • Remove a bogus warning about Intel's openGL driver
  • Fix incorrect inclusion of "hard mix softer" blend mode in layer styles
  • Fixed issue on S24 Ultra regarding S-Pen settings when HiDPI is turned off (Patch by Spencer Sawyer, thanks!).
  • Revive code path in perspective ellipse and 2pp assistant draw routine
  • Don't try to regenerate frames on an image that has no animation
  • Fix data loss on reordering storyboards.
  • Improving the quality of icons for Android (Patch by Jesse 205, thanks! Bug 463043)
  • Enable SIP type stub file generation (Patch by Kate Corcoran, thanks!)
Download

Windows

If you're using the portable zip files, just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the Krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version of Krita. For reporting crashes, also get the debug symbols folder.

Note that we are not making 32 bits Windows builds anymore.


Linux

The separate gmic-qt AppImage is no longer needed.

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)


macOS

Note: if you use macOS Sierra or High Sierra, please check this video to learn how to enable starting developer-signed binaries, instead of just Apple Store binaries.


Android

We consider Krita on ChromeOS as ready for production. Krita on Android is still in beta. Krita is not available for Android phones, only for tablets, because the user interface requires a large screen.

We are no longer building the 32 bits Intel CPU APK, as we suspect it was only installed by accident and none of our users actually used it. We are hoping for feedback on this matter.

Source code

md5sum

For all downloads, visit https://download.kde.org/unstable/krita/5.2.3-beta1/ and click on Details to get the hashes.


The Linux AppImage and the source .tar.gz and .tar.xz tarballs are signed. You can retrieve the public key here. The signatures are here (filenames ending in .sig).

Categories: FLOSS Project Planets

Dirk Eddelbuettel: ulid 0.4.0 on CRAN: Extended to Milliseconds

Planet Debian - Tue, 2024-06-04 19:43

A new version of the ulid package is now on CRAN. The package provides ‘universally (unique) lexicographically (sortable) identifiers’ – see the spec at GitHub for details on those – which offer sortability that uuids lack. The R package provides access via the standard C++ library; it had been put together by Bob Rudis and is now maintained by me.

Mark Heckmann noticed that a ulid round trip of generating and unmarshalling swallowed subsecond information, and posted about it on a well-known site I no longer go to. Duncan Murdoch was kind enough to open an issue to make me aware, and in it included the nice minimally complete verifiable example by Mark.

It turns out that this issue was known, documented upstream in two issues, and fixed in a fork by the author of those issues, Chris Bove. It replaces time_t as the value of record (constrained to second resolution) with a proper std::chrono object, which offers milliseconds (and much more, yay Modern C++). So I switched the two main files of the library to his, and updated the wrapper code to interface from POSIXct to the std::chrono object. And with that we are in business. The original example creates five ulids 100 milliseconds apart, which are then unmarshalled and printed; here we use data.table, as data.frame by default truncates to seconds:

> library(ulid)
> gen_ulid <- \(sleep) replicate(5, {Sys.sleep(sleep); generate()})
> u <- gen_ulid(.1)
> df <- unmarshal(u)
> data.table::data.table(df)
                        ts              rnd
                    <POSc>           <char>
1: 2024-05-30 16:38:28.588 CSQAJBPNX75R0G5A
2: 2024-05-30 16:38:28.688 XZX0TREDHD6PC1YR
3: 2024-05-30 16:38:28.789 0YK9GKZVTED27QMK
4: 2024-05-30 16:38:28.890 SC3M3G6KGPH7S50S
5: 2024-05-30 16:38:28.990 TSKCBWJ3TEKCPBY0
>

We updated the documentation accordingly, and added some new tests as well. The NEWS entry for this release follows.

Changes in version 0.4.0 (2024-06-03)
  • Switch two functions to fork by Chris Bove using std::chrono instead of time_t for consistent millisecond resolution (#3 fixing #2)

  • Updated documentation showing consistent millisecond functionality

  • Added unit tests for millisecond functionality

Courtesy of my CRANberries, there is also a diffstat report for this release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

TestDriven.io: Storing Django Static and Media Files on DigitalOcean Spaces

Planet Python - Tue, 2024-06-04 18:28
This tutorial shows how to configure Django to load and serve up static and media files, public and private, via DigitalOcean Spaces.
Categories: FLOSS Project Planets

FSF Events: Free Software Directory meeting on IRC: Friday, June 07, starting at 12:00 EDT (16:00 UTC)

GNU Planet! - Tue, 2024-06-04 16:28
Join the FSF and friends on Friday, June 07, from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #632 (June 4, 2024)

Planet Python - Tue, 2024-06-04 15:30

#632 – JUNE 4, 2024
View in Browser »

What’s a Python Hashable Object?

You can ignore reading about hashable objects for quite a bit. But eventually, it's worth having an idea of what they are. This post follows Winston on his first day at work to understand hashable objects.
THEPYTHONCODINGSTACK.COM • Shared by Stephen Gruppetta

PyCon US 2024 Recap

Part travelogue, part conference diary, Katherine writes about her experience at PyCon US 2024 held in Pittsburgh, PA.

[Video] Saga Pattern Simplified: Building Sagas with Temporal

In this on-demand webinar, we give an overview of Sagas, including challenges and benefits. Then, we introduce Temporal and demonstrate how easy it is to build, test, and run Sagas using our platform and coding in your preferred language. No prior knowledge of Temporal is needed to follow along →
TEMPORAL sponsor

One Way to Fix Python Circular Imports

Python circular imports can be confusing. Simply using a different form of import can sometimes fix the problem.

Guido Withdraws Ownership From Core Interpreter

Associated HN Discussion

Django Day Copenhagen 2024 Call for Proposals


Discussions What Would You Spend Time on if You Didn’t Need Money?


Articles & Tutorials Three Laws of Software Complexity

Mahesh states that most software engineers (particularly those working on infrastructural systems) are destined to wallow in unnecessary complexity due to three fundamental laws. Associated HN Discussion.

Efficient Iterations With Python Iterators and Iterables

In this video course, you’ll learn what iterators and iterables are in Python. You’ll learn how they differ and when to use them in your code. You’ll also learn how to create your own iterators and iterables to make data processing more efficient.

Authentication Your Whole Team Will Love

“With PropelAuth, I think I’ve spent about a day – total – on auth over the past year.” PropelAuth is easy to integrate and provides all the tools your team needs to manage your users - dashboards, user insights, impersonation, SSO and more →

My BDFL Guiding Principles

Daniel is the Benevolent Dictator For Life of the curl library, and in this post he considers how the “dictator” part may not be the best way to run an open source project. He covers 10 principles he tries to embody when guiding the curl project.

Writing Fast String ufuncs for NumPy 2.0

NumPy 2.0 is coming shortly and has loads of big changes and new features. This article talks about NumPy’s universal functions which can be applied to the elements of a NumPy array, and how to write good ones when dealing with strings.

How Python Compares Floats and Integers

Floating point isn’t a precise representation, this can mean that when you compare a float with an int, something that looks like it should be the same, might not be. This article deep dives on how that works in Python.

Thinking About Running for the PSF Board of Directors?

The Python Software Foundation runs office hours and this year they’ve set aside two times for asking questions of current board members. If you’re thinking of running, this is a perfect place to get more information.

How EuroPython Proposals Are Selected

This post from the folks at EuroPython walks you through how their conference selection process works. Although EuroPython specific, it contains lots of good general advice for getting accepted at any conference.

Django Enhancement Proposal 14: Background Workers

A DEP is like a PEP, but for Django. There are many libraries out there for dealing with background tasks when writing Django code, DEP 14 proposes an interface for such systems as well as a default backend.

How to Create Pivot Tables With pandas

In this tutorial, you’ll learn how to create pivot tables using pandas. You’ll explore the key features of DataFrame’s pivot_table() method and practice using them to aggregate your data in different ways.

Essays on Programming I Think About a Lot

A collection of essays on software from a variety of sources. Content includes how to choose your tech stack, products, abstractions, and more.

Projects & Code django-auditlog: Log of Changes to an Object


Python Resources for Working With Excel


Pytest Plugin to Disable Tests in One or More Files

GITHUB.COM/POMPONCHIK • Shared by Evgeniy Blinov (pomponchik)

UXsim: Vehicular Traffic Flow Simulator in Road Network


Python Finite State Machines Made Easy

GITHUB.COM/FGMACEDO • Shared by Fernando Macedo

Events Weekly Real Python Office Hours Q&A (Virtual)

June 5, 2024

DjangoCon Europe 2024

June 5 to June 10, 2024

Canberra Python Meetup

June 6, 2024

PyCon Colombia 2024

June 7 to June 10, 2024

IndyPy Hosts “AI Powered by Python Panel”

June 11, 2024
MEETUP.COM • Shared by Laura Stephens

Wagtail Space NL

June 12 to June 15, 2024

Happy Pythoning!
This was PyCoder’s Weekly Issue #632.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Brian Perry: Matching Drupal’s GitLab CI ESLint Configuration in a Contrib Module

Planet Drupal - Tue, 2024-06-04 12:04

The Drupal Association now maintains a GitLab CI Template that can be used for all Drupal contrib projects. It's an excellent way to quickly take advantage of Drupal.org's CI system and ensure your project is following code standards and best practices. And using it has the bonus of giving you a sweet green checkmark on your project page!

We recently added this template to the Same Page Preview module. After doing so, our JavaScript linting was failing. This wasn't surprising since we hadn't yet committed a standard ESLint or Prettier configuration to the codebase. I took a shot at trying to resolve these linting issues, initially turning to the ESLint Drupal Contrib plugin. This allowed me to get ESLint up and running quickly and run linting only within the context of this module. I resolved all of the linting issues, pushed my work up to GitLab, and started thinking about how I'd reward myself for a job well done.

Disaster Strikes

And as you might expect, the CI build still failed. 🤦‍♂️

At this point I took a step back. First off, I needed to determine what differed between my ESLint process and the one that was being executed by the Drupal Gitlab CI Template. Secondly, beyond just getting the CI job to pass, I wanted to define the linting use cases I was trying to solve for. I decided to focus on the following:

  1. Determining how to run the exact same ESLint command that the GitLab CI Template was running, using the same configuration as Drupal Core.
  2. Developing an ESLint configuration that could be run within the standalone module codebase (with or without an existing instance of Drupal) but matching Drupal Core and GitLab CI's configuration as closely as possible.
Using the Drupal Core ESLint Configuration

Here we literally want to use the same ESLint binary and config used by Drupal Core. Since this is what Drupal's GitLab CI Template is doing, this is also an opportunity to match the CI linting configuration as closely as possible.

The CI job is running the following command:

$ $CI_PROJECT_DIR/$_WEB_ROOT/core/node_modules/.bin/eslint \
    --no-error-on-unmatched-pattern --ignore-pattern="*.es6.js" \
    --resolve-plugins-relative-to=$CI_PROJECT_DIR/$_WEB_ROOT/core \
    --ext=.js,.yml \
    --format=junit \
    --output-file=$CI_PROJECT_DIR/junit.xml \
    $_ESLINT_EXTRA . || EXIT_CODE_FILE=$?

And prior to that command, symlinks are also created for some relevant configuration files:

$ cd $CI_PROJECT_DIR/$_WEB_ROOT/modules/custom/$CI_PROJECT_NAME
$ ln -s $CI_PROJECT_DIR/$_WEB_ROOT/core/.eslintrc.passing.json $CI_PROJECT_DIR/$_WEB_ROOT/modules/custom/.eslintrc.json
$ ln -s $CI_PROJECT_DIR/$_WEB_ROOT/core/.eslintrc.jquery.json $CI_PROJECT_DIR/$_WEB_ROOT/modules/custom/.eslintrc.jquery.json
$ test -e .prettierrc.json || ln -s $CI_PROJECT_DIR/$_WEB_ROOT/core/.prettierrc.json .
$ test -e .prettierignore || echo '*.yml' > .prettierignore

This means that we'll need to run eslint using Core's 'passing' configuration (which itself extends the 'jquery' configuration.)

To match that, I created an eslint:core script in the module's package.json:

{
  "scripts": {
    "eslint:core": "../../../core/node_modules/.bin/eslint . --no-error-on-unmatched-pattern --ignore-pattern='*.es6.js' --resolve-plugins-relative-to=../../../core --ext=.js,.yml -c ../../../core/.eslintrc.passing.json"
  }
}

I was surprised to find that even after running this command locally, the CI job was still failing. It turned out that ESLint wasn't using Core's Prettier config in this case, resulting in a different set of formatting rules being applied. Copying core/.prettierrc.json into the module's root directory resolved this issue.

Copying Drupal Core's prettier config wholesale isn't great. The approaches to referencing and extending a prettier config are clunky, but possible. A more ideal solution would be to have Drupal's prettier config as a package that could be referenced by both core and contrib modules.

Using a Standalone ESLint Configuration

Ideally it would also be possible to run this linting outside of the context of a full Drupal instance. This could help speed up things like pre-commit hooks, some CI tasks, and also make quick linting checks easier to run. With the lessons from using Drupal Core's ESLint configuration fresh in mind, I took another shot at using the eslint-plugin-drupal-contrib plugin.

First, I installed it in the module as a dev dependency:

npm i -D eslint-plugin-drupal-contrib

Next, I created a file .eslintrc.contrib.json in the module's root directory:

{
  "extends": ["plugin:drupal-contrib/passing"]
}

This will result in eslint using the same configuration as Drupal Core's 'passing' configuration, but without needing to reference Core's configuration files. Finally, you can run this by adding the following eslint script in the module's package.json:

{
  "scripts": {
    "eslint": "eslint . -c .eslintrc.contrib.json --no-eslintrc"
  }
}

You might be surprised to see the --no-eslintrc flag above. That prevents ESLint from looking for any other configuration files in the directory tree. Without it, ESLint will find the Drupal Core configuration files if this happens to be run from within a Drupal project. This will result in ESLint attempting to resolve plugins using Drupal Core's node_modules directory, which may or may not exist.

Also note that ESLint 8.x uses the --no-eslintrc flag, while the ESLint 9.x equivalent is --no-config-lookup. Drupal core is currently on ESLint 8.x, which is the previous major release.

Happy Linting

I ran into a few more hiccups than I expected along the way, but now feel confident that I can have consistent linting results between my local environment and the Drupal.org CI system in all of the JavaScript code I write for contrib modules. Hopefully this can help you do the same.

Categories: FLOSS Project Planets