Promet Source: No "Small" Drupal Support Contracts

Planet Drupal - Tue, 2022-06-21 12:33
I was recently thinking about Promet’s engagement with British Columbia’s Knowledge Network, and was reminded of a famous maxim from the theater world: “There are no small parts, only small actors.” I’ve always loved that saying because it drives home the point that excellence at every level, and at every point in a process, plants seeds for growth that often exceeds expectations.
Categories: FLOSS Project Planets

Real Python: Python mmap: Doing File I/O With Memory Mapping

Planet Python - Tue, 2022-06-21 10:00

The Zen of Python has a lot of wisdom to offer. One especially useful idea is that “There should be one—and preferably only one—obvious way to do it.” Yet there are multiple ways to do most things in Python, and often for good reason. For example, there are multiple ways to read a file in Python, including the rarely used mmap module.

Python’s mmap provides memory-mapped file input and output (I/O). It allows you to take advantage of lower-level operating system functionality to read files as if they were one large string or array. This can provide significant performance improvements in code that requires a lot of file I/O.

In this video course, you’ll learn:

  • What kinds of computer memory exist
  • What problems you can solve with mmap
  • How to use memory mapping to read large files faster
  • How to change a portion of a file without rewriting the entire file
  • How to use mmap to share information between multiple processes
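As a quick illustration of the read and in-place-modify use cases above, here is a minimal sketch (the file name is made up for the example):

```python
import mmap

# Create a small sample file to map (hypothetical name).
with open("example.txt", "wb") as f:
    f.write(b"Hello from a memory-mapped file!\n" * 3)

with open("example.txt", "r+b") as f:
    # length=0 maps the entire file.
    with mmap.mmap(f.fileno(), 0) as mm:
        print(mm[:5])              # slice it like one large bytes object: b'Hello'
        print(mm.find(b"mapped"))  # search without reading the file into memory: 20
        mm[:5] = b"HELLO"          # change a portion of the file in place
        mm.flush()                 # write the change back to disk
```

Only the first five bytes are rewritten on disk; the rest of the file is untouched, which is the point of memory mapping for partial updates.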


Categories: FLOSS Project Planets

Andy Wingo: an optimistic evacuation of my wordhoard

GNU Planet! - Tue, 2022-06-21 08:21

Good morning, mallocators. Last time we talked about how to split available memory between a block-structured main space and a large object space. Given a fixed heap size, making a new large object allocation will steal available pages from the block-structured space by finding empty blocks and temporarily returning them to the operating system.

Today I'd like to talk more about nothing, or rather, why might you want nothing rather than something. Given an Immix heap, why would you want it organized in such a way that live data is packed into some blocks, leaving other blocks completely free? How bad would it be if instead the live data were spread all over the heap? When might it be a good idea to try to compact the heap? Ideally we'd like to be able to translate the answers to these questions into heuristics that can inform the GC when compaction/evacuation would be a good idea.

lospace and the void

Let's start with one of the more obvious points: large object allocation. With a fixed-size heap, you can't allocate new large objects if you don't have empty blocks in your paged space (the Immix space, for example) that you can return to the OS. To obtain these free blocks, you have four options.

  1. You can continue lazy sweeping of recycled blocks, to see if you find an empty block. This is a bit time-consuming, though.

  2. Otherwise, you can trigger a regular non-moving GC, which might free up blocks in the Immix space but which is also likely to free up large objects, which would result in fresh empty blocks.

  3. You can trigger a compacting or evacuating collection. Immix can't actually compact the heap all in one go, so you would preferentially select evacuation-candidate blocks by choosing the blocks with the least live data (as measured at the last GC), hoping that little data will need to be evacuated.

  4. Finally, for environments in which the heap is growable, you could just grow the heap instead. In this case you would configure the system to target a heap size multiplier rather than a heap size, which would scale the heap to be e.g. twice the size of the live data, as measured at the last collection.

If you have a growable heap, I think you will rarely choose to compact rather than grow the heap: you will either collect or grow. Under constant allocation rate, the rate of empty blocks being reclaimed from freed lospace objects will be equal to the rate at which they are needed, so if collection doesn't produce any, then that means your live data set is increasing and so growing is a good option. Anyway let's put growable heaps aside, as heap-growth heuristics are a separate gnarly problem.

The question becomes, when should large object allocation force a compaction? Absent growable heaps, the answer is clear: when allocating a large object fails because there are no empty pages, but the statistics show that there is actually ample free memory. Good! We have one heuristic, and one with an optimum: you could compact in other situations but from the point of view of lospace, waiting until allocation failure is the most efficient.
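In pseudo-Python, that heuristic might look something like this (a sketch only; the names and the block size are invented for illustration, not taken from any real collector):

```python
BLOCK_SIZE = 64 * 1024  # hypothetical Immix-style block size

def should_compact_for_lospace(request_bytes, empty_blocks, estimated_free_bytes):
    """Compact only when a large-object allocation fails for lack of empty
    blocks even though statistics show ample free (but fragmented) memory."""
    blocks_needed = -(-request_bytes // BLOCK_SIZE)  # ceiling division
    if empty_blocks >= blocks_needed:
        return False  # the allocation succeeds; no compaction needed
    # Allocation failed: worth compacting only if the fragmented free
    # space could actually satisfy the request afterwards.
    return estimated_free_bytes >= request_bytes
```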


Moving on, another use of empty blocks is when shrinking the heap. The collector might decide that it's a good idea to return some memory to the operating system. For example, I enjoyed this recent paper on heuristics for optimum heap size, which advocates sizing the heap in proportion to the square root of the allocation rate, and, as a consequence, promptly returning memory to the OS when/if the application reaches a dormant state.

Here, we have a similar heuristic for when to evacuate: when we would like to release memory to the OS but we have no empty blocks, we should compact. We use the same evacuation candidate selection approach as before, also, aiming for maximum empty block yield.


What if you go to allocate a medium object, say 4kB, but there is no hole that's 4kB or larger? In that case, your heap is fragmented. The smaller your heap size, the more likely this is to happen. We should compact the heap to make the maximum hole size larger.
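As a sketch (names invented for illustration), the fragmentation trigger reduces to comparing the request against the largest available hole:

```python
def should_compact_for_fragmentation(hole_sizes, request_bytes):
    """The heap is fragmented, for our purposes, when no single hole can
    satisfy a medium-object request; compaction raises the maximum hole size."""
    return max(hole_sizes, default=0) < request_bytes

# A heap with plenty of total free space can still fail a 4 kB request:
holes = [1024, 2048, 1024, 3072]  # 7 kB free in total, largest hole 3 kB
print(should_compact_for_fragmentation(holes, 4096))  # True
```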

side note: compaction via partial evacuation

The evacuation strategy of Immix is... optimistic. A mark-compact collector will compact the whole heap, but Immix will only be able to evacuate a fraction of it.

It's worth dwelling on this a bit. As described in the paper, Immix reserves around 2-3% of overall space for evacuation overhead. Let's say you decide to evacuate: you start with 2-3% of blocks being empty (the target blocks), and choose a corresponding set of candidate blocks for evacuation (the source blocks). Since Immix is a one-pass collector, it doesn't know how much data is live when it starts collecting. It may not know that the blocks that it is evacuating will fit into the target space. As specified in the original paper, if the target space fills up, Immix will mark in place instead of evacuating; an evacuation candidate block with marked-in-place objects would then be non-empty at the end of collection.

In fact if you choose a set of evacuation candidates hoping to maximize your empty block yield, based on an estimate of live data instead of limiting to only the number of target blocks, I think it's possible to actually fill the targets before the source blocks empty, leaving you with no empty blocks at the end! (This can happen due to inaccurate live data estimations, or via internal fragmentation with the block size.) The only way to avoid this is to never select more evacuation candidate blocks than you have in target blocks. If you are lucky, you won't have to use all of the target blocks, and so at the end you will end up with more free blocks than not, so a subsequent evacuation will be more effective. The defragmentation result in that case would still be pretty good, but the yield in free blocks is not great.
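The conservative selection strategy just described could be sketched like this (illustrative names, using the per-block live-byte counts measured at the last GC):

```python
def select_evacuation_candidates(blocks, num_target_blocks):
    """Pick the blocks with the least live data as evacuation sources,
    never choosing more sources than there are empty target blocks, so
    (per the reasoning above) the targets cannot fill before the sources
    are emptied."""
    by_liveness = sorted(blocks, key=lambda b: b["live_bytes"])
    return by_liveness[:num_target_blocks]
```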

In a production garbage collector I would still be tempted to be optimistic and select more evacuation candidate blocks than available empty target blocks, because it will require fewer rounds to compact the whole heap, if that's what you wanted to do. It would be a relatively rare occurrence to start an evacuation cycle. If you ran out of space while evacuating, in a production GC I would just temporarily commission some overhead blocks for evacuation and release them promptly after evacuation is complete. If you have a small heap multiplier in your Immix space, occasional partial evacuation in a long-running process would probably reach a steady state with blocks being either full or empty. Fragmented blocks would represent newer objects and evacuation would periodically sediment these into longer-lived dense blocks.

mutator throughput

Finally, the shape of the heap has its inverse in the shape of the holes into which the mutator can allocate. It's most efficient for the mutator if the heap has as few holes as possible: ideally just one large hole per block, which is the limit case of an empty block.

The opposite extreme would be having every other "line" (in Immix terms) be used, so that free space is spread across the heap in a vast spray of one-line holes. Even if fragmentation is not a problem, perhaps because the application only allocates objects that pack neatly into lines, having to stutter all the time to look for holes is overhead for the mutator. Also, the result is that contemporaneous allocations are more likely to be placed farther apart in memory, leading to more cache misses when accessing data. Together, allocator overhead and access overhead lead to lower mutator throughput.

When would this situation get so bad as to trigger compaction? Here I have no idea. There is no clear maximum. If compaction were free, we would compact all the time. But it's not; there's a tradeoff between the cost of compaction and mutator throughput.

I think here I would punt. If the heap is being actively resized based on allocation rate, we'll hit the other heuristics first, and so we won't need to trigger evacuation/compaction based on mutator overhead. You could measure this, though, in terms of average or median hole size, or average or maximum number of holes per block. Since evacuation is partial, all you need to do is to identify some "bad" blocks and then perhaps evacuation becomes attractive.

gc pause

Welp, that's some thoughts on when to trigger evacuation in Immix. Next time, we'll talk about some engineering aspects of evacuation. Until then, happy consing!

Categories: FLOSS Project Planets

Test and Code: 190: Testing PyPy - Carl Friedrich Bolz-Tereick

Planet Python - Tue, 2022-06-21 08:00

PyPy is a fast, compliant alternative implementation of Python.
CPython is implemented in C.

PyPy is implemented in Python.
What does that mean?

And how do you test something as huge as an alternative implementation of Python?

Special Guest: Carl Friedrich Bolz-Tereick.

Sponsored By:

  • Rollbar (http://rollbar.com/testandcode): With Rollbar, developers deploy better software faster.

Links:

  • PyPy: https://www.pypy.org/
  • How is PyPy Tested?: https://www.pypy.org/posts/2022/04/how-is-pypy-tested.html
  • PyPy Speed: https://speed.pypy.org/
  • Python Speed Center: https://speed.python.org/
Categories: FLOSS Project Planets

Steve Kemp: Writing a simple TCL interpreter in golang

Planet Debian - Tue, 2022-06-21 05:45

Recently I was reading Antirez's piece TCL the Misunderstood again, which is a nice defense of the utility and value of the TCL language.

TCL is one of those scripting languages which used to be used a hell of a lot in the past, for scripting routers, creating GUIs, and more. These days it quietly lives on, but doesn't get much love. That said it's a remarkably simple language to learn, and experiment with.

Using TCL always reminds me of FORTH, in the sense that the syntax consists of "words" with "arguments", and everything is a string (well, not really, but almost. Some things are lists too of course).

A simple overview of TCL would probably begin by saying that everything is a command, and that the syntax is very free. There are just a couple of clever rules which are applied consistently to give you a remarkably flexible environment.

To get started we'll set a string value to a variable:

set name "Steve Kemp"
=> "Steve Kemp"

Now you can output that variable:

puts "Hello, my name is $name"
=> "Hello, my name is Steve Kemp"

OK, it looks a little verbose due to the use of set, and puts is less pleasant than print or echo, but it works. It is readable.

Next up? Interpolation. We saw how $name expanded to "Steve Kemp" within the string. That's true more generally, so we can do this:

set print pu
set me ts
$print$me "Hello, World"
=> "Hello, World"

There "$print" and "$me" expanded to "pu" and "ts" respectively. Resulting in:

puts "Hello, World"

That expansion happened before the input was executed, and works as you'd expect. There's another form of expansion too, which involves the [ and ] characters. Anything within the square-brackets is replaced with the contents of evaluating that body. So we can do this:

puts "1 + 1 = [expr 1 + 1]"
=> "1 + 1 = 2"

Perhaps enough detail there, except to say that we can use { and } to enclose things that are NOT expanded, or executed, at parse time. This facility lets us evaluate those blocks later, so you can write a while-loop like so:

set cur 1
set max 10
while { expr $cur <= $max } {
    puts "Loop $cur of $max"
    incr cur
}

Anyway that's enough detail. Much like writing a FORTH interpreter the key to implementing something like this is to provide the bare minimum of primitives, then write the rest of the language in itself.
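As a toy Python sketch (not Steve's Go implementation), the core of such an interpreter is little more than a table of primitive commands plus the interpolation pass described above; this version handles only set and puts with naive quoting:

```python
import shlex

def make_interp():
    variables = {}
    output = []

    def interpolate(text):
        # Expand $name references before the line is executed,
        # just as the article describes.
        for name, value in variables.items():
            text = text.replace(f"${name}", value)
        return text

    # The "bare minimum of primitives": a command table.
    commands = {
        "set": lambda name, value: variables.__setitem__(name, value),
        "puts": lambda text: output.append(text),
    }

    def run(line):
        words = shlex.split(interpolate(line))  # very naive tokenizer
        commands[words[0]](*words[1:])

    return run, output

run, output = make_interp()
run('set print pu')
run('set me ts')
run('$print$me "Hello, World"')  # interpolates to: puts "Hello, World"
print(output[0])  # Hello, World
```

Even this stub reproduces the $print$me trick, because interpolation happens on the raw line before the command word is looked up.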

You can get a usable scripting language with only a small number of the primitives, and then evolve the rest yourself. Antirez also did this; he put together a small TCL interpreter in C named picol:

Other people have done similar things, recently I saw this writeup which follows the same approach:

So of course I had to do the same thing, in golang:

My code runs the original code from Antirez with only minor changes, and was a fair bit of fun to put together.

Because the syntax is so fluid there's no complicated parsing involved, and the core interpreter was written in only a few hours then improved step by step.

Of course to make a language more useful you need I/O, beyond just writing to the console - and being able to run the list-operations would make it much more useful to TCL users, but that said I had fun writing it, it seems to work, and once again I added fuzz-testers to the lexer and parser to satisfy myself it was at least somewhat robust.

Feedback welcome, but even in quiet isolation it's fun to look back at these "legacy" languages and recognize that their simplicity led to a lot of flexibility.

Categories: FLOSS Project Planets

Specbee: How to make a Multilingual Website using Drupal 9

Planet Drupal - Tue, 2022-06-21 05:35
Shefali Shetty | 21 Jun, 2022

It’s a fact. You can’t go global without localized focus. Yeah, that sounds like a paradox, but it makes sense from a user perspective. Many organizations are reaping the benefits of multilingual web experiences to connect with their customers across the world, and it’s almost a requirement these days. Not only do multilingual websites enable you to reach new target audiences more effectively, they also add credibility to your brand, offer familiarity to visitors and make users more likely to turn into customers.

In a recent study conducted on a list of the top 150 global brands across industries, Wikipedia, Google, Nestlé, Airbnb and Adobe emerged as the top 5 brands that scored the best in terms of multilingual support, localization and global user experience. If you’re looking at localizing your brand as you go global, Drupal is a great CMS to opt for because of its fantastic support for multilingual websites. In this article we will describe how Drupal 9's multilingual feature works and how content editors and content teams can use it.

Multilingual Support Modules

As I mentioned previously, Drupal 9 makes it really easy to build multilingual sites. It offers 4 multilingual support modules that are already built in core. All you have to do is enable them. In your administrator view, go to Extend, select the 4 modules under Multilingual and click on Install.

  1. Configuration Translation Module - This one is not visible to end users but is especially useful for site builders. It translates configuration text, such as view names and other configuration strings.
  2. Content Translation Module - Allows you to translate content entities and types like blocks, comments, taxonomy terms, custom menu links, and more.
  3. Interface Translation Module - Helps translate user interface elements such as Home, Forms, Title, Body, Description, etc.
  4. Language Module - The real magic happens here. Here’s where you can choose from a whole range of languages (more than 100) and add them to your configuration.

You can then further configure these modules to have them enabled for all or for only a selected set of content types, entities, configurations or interface elements. 

For more details on each of these modules, make sure you read this article.

Implementing the Multilingual Feature

Once you have enabled these 4 modules, let’s dive right into configuring them.

Step 1: Add a Language (or multiple languages)

In your Drupal 9 admin interface, navigate to Configuration -> Regional and language -> Languages. Once you’re on the Languages page, click on the + Add language button.


I’ve chosen Spanish as my language and added it to the list of languages.

Once added, you can select it as your default language or have English as the default.

Adding a Language

Step 2: Update Translations

Now click on the right part of the Edit button and you will get two options in a dropdown - Delete and Translate. When you select Translate, your Drupal site gets updated with all the interface and configuration translations for that language from localize.drupal.org, where thousands of Drupal contributors help translate interface and configuration strings into regional languages.

Importing translations

Step 3: Language switcher

You can add a language switcher block to any region of your page so the user can switch between their preferred languages.

Adding a Language Switcher block

Step 4: Adding translations to content types and entities

You can have translations for all your content types and entities, or you can select only the ones you need.

For this, on your Admin screen go to Configuration -> Regional and language -> Content language and translation

I have selected custom language settings for Content, Redirect and URL alias. Under Content, I’m going to only have translations for my “Ad Page” content type (as shown below). All my fields under “Ad Page” content type are selected to be translated.


Now we’re all set to add translated content to the required content types.

Translating the Content

Now that you know how to enable and configure the multilingual modules in Drupal 9, let’s move on to learning how to actually translate the content. Let’s look at a super simple four-step process content teams can follow to leverage this functionality and add their translated content. If you want to read about migrating multilingual content from CSV to Drupal, check out this article.

Step 1: Create a new page or Edit an existing one

Since I had opted to have translations for all my Ad page content types, I have created a test page under this content type. Now you will see that along with the usual View, Edit, Delete and Revisions tabs, I also have a new Translate tab.

Step 2: Select the Language

On clicking the Translate tab, you will be able to see all your languages listed (see below). Observe that the Spanish language that we added does not have a translation yet. Now click on Add to create a Spanish translation page.

Step 3: Add translated content to the respective field

Notice how all your fields and elements of your admin interface have translated themselves to Spanish (see below). All you have to do is add your translated content as per requirement!

Step 4: Save and Review!

We’re almost there! After adding in all your translated content, don’t forget to Guardar your translation! :)

And here’s what your Multilingual web page will now look like.

Spanish Version

You will notice the URL generated for the translated version (here Spanish) will contain a language prefix (here: es).

English Version

One of our recent Multilingual projects on Drupal 9 was for SEMI. SEMI is a global industry association that connects more than 1.3 million professionals and 2500 members worldwide through its programs, initiatives, market research and advocacy. SEMI members are responsible for innovation and advancements in electronics manufacturing and design supply chain. With 8 regional offices located around the world, having a multilingual setup was imperative for them to enable focus and customization. Read more about how we helped them build a cohesive multisite, multi language experience with Drupal 9.


Final Thoughts

Did you see how easy it was to build a multilingual website in Drupal 9? If you’re looking for Drupal development assistance in creating your next multi language website so you can reach a bigger target audience, feel free to talk to us.

Author: Shefali Shetty

​​Meet Shefali Shetty, Director of Marketing at Specbee. An enthusiast for Drupal, she enjoys exploring and writing about the powerhouse. While not working or actively contributing back to the Drupal project, you can find her watching YouTube videos trying to learn to play the Ukulele :)

Tags: Drupal 9, Drupal Development, Drupal Planet
Categories: FLOSS Project Planets

Django Weblog: Django 4.1 beta 1 released

Planet Python - Tue, 2022-06-21 05:30

Django 4.1 beta 1 is now available. It represents the second stage in the 4.1 release cycle and is an opportunity for you to try out the changes coming in Django 4.1.

Django 4.1 has a profusion of new features, which you can read about in the in-development 4.1 release notes.

Only bugs in new features and regressions from earlier versions of Django will be fixed between now and 4.1 final (also, translations will be updated following the "string freeze" when the release candidate is issued). The current release schedule calls for a release candidate about a month from now, with the final release to follow about two weeks after that, around August 3. Early and often testing from the community will help minimize the number of bugs in the release. Updates on the release schedule are available on the django-developers mailing list.

As with all alpha and beta packages, this is not for production use. But if you'd like to take some of the new features for a spin, or to help find and fix bugs (which should be reported to the issue tracker), you can grab a copy of the beta package from our downloads page or on PyPI.

The PGP key ID used for this release is Carlton Gibson: E17DF5C82B4F9D00.

Categories: FLOSS Project Planets

Hynek Schlawack: Don’t Mock What You Don’t Own in 5 Minutes

Planet Python - Tue, 2022-06-21 05:00

A common issue programmers have when they try to test real-world software is how to deal with third-party dependencies. Let’s examine an old, but counter-intuitive principle.

Categories: FLOSS Project Planets

Python Bytes: #289 Textinator is coming for your text, wherever it is

Planet Python - Tue, 2022-06-21 04:00
Watch the live stream: https://www.youtube.com/watch?v=UP2JK6ISB9I

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training (https://training.talkpython.fm/)
  • Test & Code Podcast (https://testandcode.com/)
  • Patreon Supporters (https://www.patreon.com/pythonbytes)

Special guest: Gina Häußge, creator & maintainer of OctoPrint (https://octoprint.org)

Michael #1: beanita (https://github.com/roman-right/beanita)

  • Local MongoDB-like database prepared to work with Beanie ODM
  • So, you know Beanie - Pydantic + async + MongoDB
  • And you know Mongita - Mongita is to MongoDB as SQLite is to SQL
  • Beanita lets you use Beanie, but against Mongita rather than a server-based MongoDB server

Brian #2: The Good Research Code Handbook (https://goodresearch.dev/index.html)

  • Patrick J Mineault
  • “for grad students, postdocs and PIs (principal investigators) who do a lot of programming as part of their research.”
  • Lessons cover: setup (git, virtual environments, project layout, packaging, cookiecutter); style (style guides, keeping things clean); coding (separating concerns, separating pure functions from those with side effects, pythonic-ness); testing (unit testing, testing with side effects - an incorrect definition of end-to-end tests, but a good job at covering the other bits); documentation (comments, tests, docstrings, README.md, usage docs, tutorials, websites, documenting pipelines and projects); social aspects (reviews, pairing, open source, community); a sample project; and extras (a testing example, good tools to use)

Gina #3: CadQuery (https://cadquery.readthedocs.io/en/latest/)

  • Python lib to build parametric 3D CAD models
  • Can output STL, STEP, AMF, SVG and some more
  • Uses the same geometry kernel as FreeCAD (OpenCascade)
  • Also available: desktop editor, Jupyter extension, CLI - would recommend the Jupyter extension, the app seems a bit behind latest development
  • The Jupyter extension is easy to set up on Docker and comes with a nice 3D preview pane
  • Was able to create a basic parametric design of an insert for an assortment box easily
  • Python 3.8+, not yet 3.11, OpenCascade related

Michael #4: Textinator

  • Like TextSniper (https://www.textsniper.app), but in Python
  • Simple macOS status bar / menu bar app to automatically detect text in screenshots
  • Built with RUMPS (https://github.com/jaredks/rumps): Ridiculously Uncomplicated macOS Python Statusbar apps
  • Take a screenshot of a region of the screen using Cmd + Shift + 4; the app will automatically detect any text in the screenshot and copy it to your clipboard
  • How Textinator works: at startup it registers a persistent NSMetadataQuery Spotlight query (via the pyobjc Python-to-Objective-C bridge) to detect when a new screenshot is created; when the user takes a screenshot, the query fires and Textinator performs text detection with a Vision VNRecognizeTextRequest call

Brian #5: Handling Concurrency Without Locks (https://hakibenita.com/django-concurrency)

  • “How to not let concurrency cripple your system”
  • Haki Benita
  • “…common concurrency challenges and how to overcome them with minimal locking.”
  • Starts with a Django web app: a URL shortener that generates a unique short URL and stores the result in a database so it doesn’t get re-used
  • Discusses collisions when two users check, then store, keys at the same time; locking problems in general; using the database’s ability to enforce uniqueness (in this case PostgreSQL); and updating your code to take advantage of database constraint support so you can do less locking in your own code

Gina #6: TatSu (https://tatsu.readthedocs.io/en/stable/)

  • Generates parsers from EBNF grammars (or ANTLR)
  • Can compile the model (similar to regex) for quick reuse, or generate Python source
  • Many examples provided
  • Active development, Python 3.10+

Extras

Michael:

  • Back on episode 285 we spoke about PEP 690; now there is a proper blog post about it (https://developers.facebook.com/blog/post/2022/06/15/python-lazy-imports-with-cinder)
  • Expedited release of Python 3.11.0b3 - due to a known incompatibility between pytest and the previous beta release (Python 3.11.0b2), and after some deliberation, the Python release team decided on an expedited release of Python 3.11.0b3 so the community can continue testing their packages with pytest, and therefore testing the betas as expected (via Python Weekly)
  • Kagi search (https://kagi.com), via Daniel Hjertholm: not really Python related, but a completely ad-free and privacy-respecting search engine. “I've used kagi.com since their public beta launched, mainly to search for solutions to Python issues at work. The results are way better than DuckDuckGo's results, and even better than Google's. Love the Programming lens and the ability to up/down prioritize domains in the results. Their FAQ explains everything you need to know: https://kagi.com/faq” Looks great, but not sure about the pricing justification (32 sec of compute = $1) - that’s either 837x more than all of Talk Python + Python Bytes, or more than 6,700x more than just one of our sites/services (we spend about $100/mo on 8 servers). But they may be buying results from Google and Bing, and that could be the cost. There’s also a short interview with the man who started Kagi (https://twitter.com/vladquant/status/1538559700593156102)

Gina:

  • rdserialtool (https://github.com/rfinnie/rdserialtool): reads out low-cost USB power monitors (UM24C, UM25C, UM34C) via BLE/pybluez. Amazing if you need to monitor the power consumption/voltage/current of some embedded electronics on a budget. Helped me solve a very much OctoPrint-development-specific problem. Python 3.4+
  • nodejs-bin (https://pypi.org/project/nodejs-bin/), by Sam Willis: install Node.js via PyPI/as a dependency; still very much an alpha but looks promising. Makes it easier to obtain a full-stack environment. Very interesting for end-to-end testing with JS-based tooling, or packaging a frontend with your Python app. See also nodeenv, which does a similar thing, but with additional steps

Joke: Rejected Github Badges (https://twitter.com/btskinn/status/1535605341446098946)
Categories: FLOSS Project Planets

Zato Blog: How to integrate with Confluence APIs

Planet Python - Tue, 2022-06-21 03:42

In a previous article, I talked about Jira, and if you are a Jira user, chances are that you also use Confluence as they often go hand in hand, Jira as a ticketing application and Confluence as an enterprise knowledge management system.

From the perspective of integrations, connecting to Confluence and invoking its APIs looks and feels practically the same as with Jira:

  • You need an API token
  • You fill out a form in Zato Dashboard
  • You create a Python service that uses a client offering methods such as get_page_by_title, attach_file, update_page_property and similar.

Let’s go through it all step-by-step, starting off with the creation of an API token.

Creating an Atlassian API token

To invoke Confluence, you use an API token that can be shared with other Atlassian products, such as Jira. If you do not have one already, here is how to create it:

  • Log in to Confluence or Jira
  • Visit the address where API tokens can be managed: https://id.atlassian.com/manage-profile/security/api-tokens
  • Click “Create API Token” and provide a name for the token, such as “Zato Integrations”
  • Copy the token somewhere safe, as once it has been created you will not be able to retrieve it later on. The only way to change a token is to revoke it and create a new one.
Creating a Confluence connection

In your Zato Dashboard, go to Cloud -> Atlassian -> Confluence:

Click “Create a new connection” and fill out the form below. The username is the same as the email address that you log in to Confluence or Jira with.

Now, click “Change API Token” and enter the token created in the previous section:

Invoking Confluence

Authoring a Zato service that invokes Confluence follows a pattern that will feel familiar no matter what kind of API you integrate with:

  • Obtain a connection to the remote resource
  • Invoke it
  • Process the response the resource returned

In the case of the code below, we are merely logging the response from Confluence. In a bigger integration, we would process it accordingly, e.g. parts of the output could be synchronized with Jira or another system.

Note the ‘client.get_all_pages_from_space’ method below - the client will offer other methods as well, e.g. get_space, get_page_as_pdf or ways to run CQL (Confluence Query Language) directly. Use auto-completion in your IDE to discover all the methods available.

# -*- coding: utf-8 -*-

# Zato
from zato.common.typing_ import cast_
from zato.server.service import Service

# ###########################################################################

if 0:
    from zato.server.connection.confluence_ import ConfluenceClient

# ###########################################################################

class GetAllPages(Service):
    def handle(self):

        # Name of the Confluence space that our pages are in
        space = 'ABC'

        # Name of the connection definition to use
        conn_name = 'My Confluence Connection'

        # .. create a reference to our connection definition ..
        confluence = self.cloud.confluence[conn_name]

        # .. obtain a client to Confluence ..
        with confluence.conn.client() as client: # type: ConfluenceClient

            # Cast to enable code completion
            client = cast_('ConfluenceClient', client)

            # Get all pages from our space
            pages = client.get_all_pages_from_space(space)
            self.logger.info('Pages received -> %s', pages)

# ###########################################################################

That is all - you have created an Atlassian API token and a Zato Confluence connection, and you have integrated with Confluence in Python!

Next steps
  • Start the tutorial to learn how to integrate APIs and build systems. After completing it, you will have a multi-protocol service representing a sample scenario often seen in banking systems with several applications cooperating to provide a single and consistent API to its callers.

  • Visit the support page if you need assistance.

  • Para aprender más sobre las integraciones de Zato y API en español, haga clic aquí.

  • Pour en savoir plus sur les intégrations API avec Zato en français, cliquez ici.

Categories: FLOSS Project Planets

John Goerzen: Lessons of Social Media from BBSs

Planet Debian - Mon, 2022-06-20 21:52

In the recent article The Internet Origin Story You Know Is Wrong, I was somewhat surprised to see the argument that BBSs are a part of the Internet origin story that is often omitted. Surprised because I was there for BBSs, and even ran one, and didn’t really consider them part of the Internet story myself. I even recently enjoyed a great BBS documentary and still didn’t think of the connection in this way.

But I think the argument is a compelling one.

In truth, the histories of Arpanet and BBS networks were interwoven—socially and materially—as ideas, technologies, and people flowed between them. The history of the internet could be a thrilling tale inclusive of many thousands of networks, big and small, urban and rural, commercial and voluntary. Instead, it is repeatedly reduced to the story of the singular Arpanet.

Kevin Driscoll goes on to highlight the social aspects of the “modem world”, how BBSs and online services like AOL and CompuServe were ways for people to connect. And yet, AOL members couldn’t easily converse with CompuServe members, and vice-versa. Sound familiar?

Today’s social media ecosystem functions more like the modem world of the late 1980s and early 1990s than like the open social web of the early 21st century. It is an archipelago of proprietary platforms, imperfectly connected at their borders. Any gateways that do exist are subject to change at a moment’s notice. Worse, users have little recourse, the platforms shirk accountability, and states are hesitant to intervene.

Yes, it does. As he adds, “People aren’t the problem. The problem is the platforms.”

A thought-provoking article, and I think I’ll need to buy the book it’s excerpted from!

Categories: FLOSS Project Planets

The Drop Times: Top Drupal 9 Books to Read

Planet Drupal - Mon, 2022-06-20 21:11
You can use Drupal books to learn about module development, different types of frameworks and creations, marketing and more.
Categories: FLOSS Project Planets

LAAC Technology: Should You Use AsyncIO for Your Next Python Web Application?

Planet Python - Mon, 2022-06-20 20:00

Python’s AsyncIO web ecosystem continues to mature, but should you build your next production application with one of these shiny new frameworks such as FastAPI, Starlette, or Quart?

Table of Contents

A Brief History of Python Web Server Interfaces

Prior to PEP 333, Python web application frameworks, such as Zope, Quixote, and Twisted Web, would each be written against specific web server APIs such as CGI or mod_python. PEP 333 created the v1.0 specification of WSGI to give Python a standard similar to Java’s servlet API. WSGI created a common interface that promoted interchangeability between web frameworks and web servers. For example, frameworks such as Django, Flask, and Pyramid are compatible with servers such as uWSGI and gunicorn.

Between 2003, when PEP 333 was published, and now, in 2022, WSGI has seen near-universal adoption, with all popular Python web frameworks and servers using WSGI. Since 2003, web application development and network protocols have evolved and changed. The RFC for the WebSocket protocol was finalized in 2011. The release of Python 3.5, in 2015, added the async and await keywords, which enable non-blocking I/O. WSGI accepts a request and returns a response, which doesn’t work for newer protocols such as WebSocket. ASGI continues the legacy of WSGI by allowing the same interchangeability in the new world of asynchronous Python. ASGI is a fundamental redesign of WSGI that enables support for newer protocols such as HTTP/2, HTTP/3, and WebSocket.

AsyncIO Package Ecosystem Overview

Servers

Uvicorn

Uvicorn supports the HTTP and WebSocket protocols. Encode OSS, the company started by Tom Christie, the creator of Django REST Framework, maintains Uvicorn. Uvicorn tends to be close to the top in popular performance benchmarks due to its use of uvloop, an alternative event loop, and httptools, Python bindings for the NodeJS HTTP parser.

Uvicorn implements a gunicorn worker that allows you to turn gunicorn into an ASGI server. Gunicorn is a fantastic, battle-tested process manager and WSGI server for Python, and combining it with uvicorn gives you one of the best ASGI servers. Uvicorn also implements an alternative gunicorn worker with support for PyPy.

Code Example

Taken from the Uvicorn docs

import uvicorn

async def app(scope, receive, send):
    ...

if __name__ == "__main__":
    uvicorn.run("example:app", host="127.0.0.1", port=5000, log_level="info")

Hypercorn

Hypercorn supports the HTTP, HTTP/2, HTTP/3 (QUIC), and WebSocket protocols and utilizes the Python hyper libraries and uvloop. Hypercorn was initially part of the Quart web framework before being split out into a standalone ASGI server. Hypercorn is maintained by Philip Jones, a member of the Pallets Project that maintains Flask.

Hypercorn stands out as a well-maintained ASGI server that supports HTTP/2, HTTP/3, and Trio, an alternative implementation of the Python standard library’s AsyncIO package. As opposed to uvicorn, hypercorn works similarly to gunicorn and operates as a process manager.

Code Example

Taken from the Hypercorn docs

import asyncio

from hypercorn.asyncio import serve
from hypercorn.config import Config

config = Config()
config.bind = ["localhost:8080"]

async def app():
    ...

asyncio.run(serve(app, config))

Frameworks

Starlette

Starlette can be used as a web framework or a toolkit and supports WebSocket, GraphQL, HTTP server push, in-process background tasks, and more. It falls somewhere in the middle between Django’s batteries-included approach and Flask’s minimalism. Starlette is also maintained by Encode OSS. Starlette also tends to be near the top of popular performance benchmarks.

Starlette occupies a unique position in the current ASGI framework ecosystem since other popular projects such as FastAPI build on top of it. Tom Christie has a great track record of open source maintenance and development with Django REST Framework, and Encode has funding.

Code Example

Taken from the Starlette docs

from starlette.applications import Starlette
from starlette.responses import PlainTextResponse
from starlette.routing import (
    Route,
    Mount,
    WebSocketRoute
)
from starlette.staticfiles import StaticFiles


def homepage(request):
    return PlainTextResponse('Hello, world!')

def user_me(request):
    username = "John Doe"
    return PlainTextResponse('Hello, %s!' % username)

def user(request):
    username = request.path_params['username']
    return PlainTextResponse('Hello, %s!' % username)

async def websocket_endpoint(websocket):
    await websocket.accept()
    await websocket.send_text('Hello, websocket!')
    await websocket.close()

def startup():
    print('Ready to go')


routes = [
    Route('/', homepage),
    Route('/user/me', user_me),
    Route('/user/{username}', user),
    WebSocketRoute('/ws', websocket_endpoint),
    Mount('/static', StaticFiles(directory="static")),
]

app = Starlette(
    debug=True,
    routes=routes,
    on_startup=[startup]
)

FastAPI

FastAPI is a framework built on top of Starlette that adds Pydantic, a Python package that provides data validation and settings management using Python’s type annotations. By adding Pydantic, FastAPI endpoints validate input data and auto-generate documentation. FastAPI is maintained by Sebastián Ramírez, who has sponsor funding for FastAPI. By using Starlette under the hood, FastAPI’s performance is near the top of popular performance benchmarks.

As the name implies, FastAPI is intended for API applications, and this is where it excels. In the past, if you were creating a small API application, Flask would be the choice. Now, I would consider using FastAPI over Flask. The AsyncIO ecosystem isn’t as mature as the WSGI/Flask ecosystem, but FastAPI looks like one of the big future frameworks in the Python web ecosystem.

Code Example

Taken from the FastAPI docs

from typing import Union

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Item(BaseModel):
    name: str
    price: float
    is_offer: Union[bool, None] = None


@app.get("/")
def read_root():
    return {"Hello": "World"}


@app.get("/items/{item_id}")
def read_item(item_id: int, q: Union[str, None] = None):
    return {"item_id": item_id, "q": q}


@app.put("/items/{item_id}")
def update_item(item_id: int, item: Item):
    return {"item_name": item.name, "item_id": item_id}

Quart

Quart bills itself as an AsyncIO reimplementation of the Flask microframework API and provides a migration guide. Additionally, some Flask extensions work with Quart. The author of Quart, Philip Jones, also maintains Hypercorn. Quart’s performance ranks lower than some of the previously mentioned frameworks. Quart stands out with its potential access to the large package ecosystem surrounding Flask.

Code Example

Taken from Quart docs

from dataclasses import dataclass
from datetime import datetime

from quart import Quart
from quart_schema import (
    QuartSchema,
    validate_request,
    validate_response
)

app = Quart(__name__)
QuartSchema(app)


@dataclass
class TodoIn:
    task: str
    due: datetime | None


@dataclass
class Todo(TodoIn):
    id: int


@app.post("/todos/")
@validate_request(TodoIn)
@validate_response(Todo)
async def create_todo(data: Todo) -> Todo:
    return Todo(id=1, task=data.task, due=data.due)

Other Frameworks
  • Sanic
    • Supports ASGI, WebSocket, and background tasks
  • BlackSheep
    • Supports ASGI, WebSocket, and background tasks
  • Aiohttp
    • AsyncIO Client and Server
Benefits of Using AsyncIO
  • Improved Throughput
    • AsyncIO does not usually improve latency for I/O requests. In fact, if you look at the “latency” tab of the previously linked performance benchmark, AsyncIO frameworks perform worse in latency than Django and Flask. However, AsyncIO improves the throughput of your application, meaning the same server hardware can handle more requests per second using AsyncIO.
  • Availability of New Protocols
    • Due to the limitations of WSGI, it doesn’t support newer protocols such as HTTP/2, HTTP/3 and WebSocket. With AsyncIO, Python web servers and frameworks can support these newer protocols.
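To make the throughput point concrete, here is a toy sketch (not tied to any framework) in which asyncio.sleep stands in for an I/O wait: five simulated 100 ms requests finish in roughly 100 ms total because their waits overlap on the event loop, even though each individual request is no faster — matching the point above that AsyncIO improves throughput rather than latency.

```python
import asyncio
import time

async def fake_request(i: int) -> int:
    # Stand-in for one I/O-bound request (a database or HTTP call)
    # that spends ~100 ms waiting on the network.
    await asyncio.sleep(0.1)
    return i

async def serve_batch(n: int) -> float:
    # With asyncio.gather, all n waits overlap instead of queueing,
    # so total wall time stays close to the longest single wait.
    start = time.perf_counter()
    await asyncio.gather(*(fake_request(i) for i in range(n)))
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = asyncio.run(serve_batch(5))
    print(f"5 concurrent requests finished in {elapsed:.2f}s")
```

Run sequentially, the same five requests would take about 0.5 s; run concurrently they take a little over 0.1 s, which is the throughput gain in miniature.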
Obstacles to Using AsyncIO
  • Django and Flask’s Package Ecosystems
    • Large third-party package ecosystems developed around both Django and Flask. Tapping into these ecosystems saves you development time. The same ecosystem has not yet developed around AsyncIO frameworks.
  • Synchronous I/O Blocks AsyncIO
    • Let’s say I’m building a SaaS application that needs to accept payments. I choose Stripe, but Stripe’s Python library doesn’t support AsyncIO. If I use Stripe’s library, whenever I make an API request to Stripe, it blocks the event loop, and I lose the benefits of AsyncIO.
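If you are stuck with a synchronous library, one standard mitigation is to hand the blocking call to a worker thread with asyncio.to_thread so the event loop stays free. A minimal sketch follows, with a hypothetical charge_card function standing in for a blocking SDK call (it is not a real Stripe API):

```python
import asyncio
import time

def charge_card(amount_cents: int) -> str:
    # Hypothetical stand-in for a blocking SDK call, e.g. a synchronous
    # HTTP round trip inside a payment provider's client library.
    time.sleep(0.1)
    return f"charged {amount_cents}"

async def handle_payment(amount_cents: int) -> str:
    # asyncio.to_thread (Python 3.9+) runs the blocking call in a worker
    # thread, so the event loop keeps serving other requests meanwhile.
    return await asyncio.to_thread(charge_card, amount_cents)

async def main() -> list:
    # Two payments overlap instead of serializing behind the event loop.
    return await asyncio.gather(handle_payment(500), handle_payment(900))

if __name__ == "__main__":
    print(asyncio.run(main()))
```

This keeps the application responsive, though it trades the event loop's efficiency for thread overhead, so it is a workaround rather than a substitute for a native AsyncIO client.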
My Answer

While Python’s AsyncIO web ecosystem has come a long way, I still choose Flask or Django for production applications that don’t require newer protocols such as HTTP/2, HTTP/3 and WebSocket. Django and Flask each have a robust set of third-party packages to help you quickly build your application. On top of that, Django and Flask are both mature, production-tested frameworks used by a lot of companies. I think the new AsyncIO frameworks will reach that point as well, but they’re still too new. If you’re looking at using the AsyncIO ecosystem, your first question should be “Do I need it?” My guess is most people don’t. WSGI servers and frameworks are usually performant enough.

If supporting HTTP/2 or HTTP/3 is a requirement, you don’t have much choice on the server side. I recommend Hypercorn. If you aren’t required to support HTTP/2 or HTTP/3, then Uvicorn becomes my preferred server option. When it comes to frameworks, I would reach for FastAPI for small-API focused applications. Tom Christie has a great track record of open source maintenance with Django REST Framework, and FastAPI adds nice additions on top of Starlette. If you’re building a larger application with some HTML template rendering, I would choose Starlette over FastAPI for the flexibility. If you need to migrate an existing Flask project to enable newer protocols, Quart is the clear choice.

I want to be clear that I think ASGI and AsyncIO are great developments for Python, and I applaud all the people putting work into the ecosystem. I think the ecosystem will continue to grow and mature, but Django and Flask are my choice right now.

Categories: FLOSS Project Planets

KDE Plasma 5.25.1, Bugfix Release for June

Planet KDE - Mon, 2022-06-20 20:00

Tuesday, 21 June 2022. Today KDE releases a bugfix update to KDE Plasma 5, versioned 5.25.1.

Plasma 5.25 was released in June 2022 with many feature refinements and new modules to complete the desktop experience.

This release adds a week's worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important and include:

  • Kcms/fonts: Fix font hinting preview. Commit. Fixes bug #413673
  • Upower: Prevent integer overflow during new brightness computation. Commit. Fixes bug #454161
  • Fix dragging especially by touch. Commit. Fixes bug #455268
View full changelog
Categories: FLOSS Project Planets


Planet KDE - Mon, 2022-06-20 18:00

The KDE Community has used – and gently encouraged – the Fiduciary License Agreement (FLA) which was created by the Free Software Foundation Europe (FSFE) some 15 years ago. The FLA is a kind of copyright assignment that preserves the Free Software underpinnings of the software, ensures the contributor can (re)use the work and that the fiduciary can handle licensing questions around the contributed code. A CLA without the corporate-style downsides.

Using the FLA has always been an option in the KDE world. Some people choose to sign it to ensure long-term stability. Others don’t, and that’s fine. Here is a 2009-era post from me about the FLA and the licensing situation closer to when we introduced it. The next time I mentioned the FLA was in 2020, so it just kept plugging along all that time.

That’s not to say the document – license, agreement, whathaveyou – was without problems. The language was dated. Some legal precepts have changed. Supporting companies that want to assign their code to KDE e.V. was complicated. This wasn’t unique to the FLA document that KDE e.V. used, so the Contributor Agreements organization was created to steward a next generation of FLA’s.

KDE e.V. has just added the FLA 2.0 as an option for KDE contributors. The FLA 2.0 allows more freedom to KDE e.V. and isn’t tied to the structure of the software as-it-is-right-now. This was an issue with the 1.3 series of agreements: changing our source code repository from SVN to git (a transition that happened in 2011 or thereabouts) meant changing the legal agreement describing which source was covered. The 1.3 series was a pain in the butt that way.

But now we have 2.0 – and have kept 1.3.5 around – which means that each contributor can:

  • not sign anything, and just contribute under the existing Free Software licenses. This is the default, and the least-paperwork way, to contribute to KDE.
  • sign the FLA 2.0, assigning those assignable rights in the work to KDE e.V. If you’re going to do paperwork, this is the recommended way to do it.
  • sign the FLA 1.3.5 and associated FRP. This is the older form and older language, and it is still available as an option.

There will be a signing party – FLA and GPG keys – at Akademy 2022, I’m sure.

Categories: FLOSS Project Planets

GNU Taler news: A digital euro and the future of cash

GNU Planet! - Mon, 2022-06-20 18:00
The Central Bank of Austria has published a report in the context of a workshop celebrating 20 years of Euro-denominated cash. The report discusses the future of cash, including account- and blockchain-based designs, as well as GNU Taler.
Categories: FLOSS Project Planets

Niels Thykier: wrap-and-sort with experimental support for comments in devscripts/2.22.2

Planet Debian - Mon, 2022-06-20 16:00

In the devscripts package currently in Debian testing (2.22.2), wrap-and-sort has opt-in support for preserving comments in deb822 control files such as debian/control and debian/tests/control. Currently, this is an opt-in feature to provide some exposure without breaking anything.

To use the feature, add --experimental-rts-parser to the command line. A concrete example being (adjust to your relevant style):

wrap-and-sort --experimental-rts-parser -tabk

Please provide relevant feedback to #820625 if you have any. If you experience issues, please remember to provide the original control file along with the concrete command line used.

As hinted above, the option is a temporary measure and will be removed again once the testing phase is over, so please do not put it into scripts or packages. For the same reason, wrap-and-sort will emit a slightly annoying warning when using the option.


Categories: FLOSS Project Planets

Security public service announcements: Updated security policy for Drupal core Composer dependencies - PSA-2022-06-20

Planet Drupal - Mon, 2022-06-20 14:18
Date: 2022-June-20
Description: In Drupal 9.4 and higher, drupal/core-recommended allows patch-level vendor updates

The drupal/core-recommended metapackage now allows patch-level updates for Composer dependencies. This means that site owners using drupal/core-recommended can now install most Composer dependency security updates themselves, without needing to wait for an upstream release of Drupal core that updates the affected package.

For example, in the future, a Guzzle vendor update like the recent Guzzle security release can be installed by running:

composer update guzzlehttp/guzzle

The change record on drupal/core-recommended and patch-level updates has more detailed information on how this change affects site dependency management.

Drupal security advisories and same-day releases for vendor updates will only be issued if Drupal core is known to be exploitable

It is the Drupal Security Team's policy to create new core releases and issue security advisories for third-party vendor libraries only if an exploit is possible in Drupal core. However, both the earlier version of the drupal/core-recommended metapackage and Drupal.org file archive downloads restrict sites to the exact Composer dependency versions used in Drupal core. Therefore, in practice, we have issued numerous security advisories (or same-day releases without security advisories) where only contributed or custom code might be vulnerable.

For Drupal 9.4.0 and higher, the Security Team plans to no longer issue these "just-in-case" security advisories for Composer dependency security updates. Instead, the dependency updates will be handled as public security hardenings, and will be included alongside other bugfixes in normal Drupal core patch releases. These security hardenings may be released within a few days as off-schedule bugfix releases if contributed projects are known to be vulnerable, or on the next scheduled monthly bugfix window for uncommon or theoretical vulnerabilities. (Keep in mind that Drupal core often already mitigates vulnerabilities present in its dependencies, so automated security scanners sometimes raise false positives when an upstream CVE is announced.)

Site owners are responsible for monitoring security announcements for third-party dependencies as well as for Drupal projects, and for installing dependency security updates when necessary.

Sites built using .tar.gz or .zip file downloads should convert to drupal/core-recommended for same-day dependency updates

Drupal 9.4 sites built with tarball or zip file archives will no longer receive the same level of security support for core dependencies. Going forward, if core is not known to be exploitable, the core file downloads' dependencies will be updated in normal bugfix releases within a few days (if contributed projects are known to be vulnerable) to a few weeks (if the vulnerability is uncommon or theoretical).

Sites built with tarball or zip files should convert to using drupal/core-recommended to apply security updates more promptly than the above timeframe.

Drupal 9.3 will receive prompt, best-effort updates until its end of life

Drupal 9.3 receives security coverage until the release of Drupal 9.5.0 in December 2022, and will not include the above improvement to drupal/core-recommended. Therefore, we will still try to provide prompt releases of Drupal 9.3 for vendor security updates when it is possible for us to do so.

Since normal bugfixes are no longer backported to Drupal 9.3, there will already be few to no other changes between its future releases, so dependency updates may be released as normal bugfix releases (rather than security-only releases). Security advisories for Drupal 9.3 vendor updates may still be issued depending on the nature of the vulnerability.

Drupal 7 is not affected by this change and Drupal 7 core file downloads remain fully covered by the Drupal Security Team

Drupal 7 core includes only limited use of third-party dependencies (in particular, the jQuery and jQuery UI JavaScript packages). Therefore, Drupal 7 is not affected by this policy change. Note that Drupal 7 sites that use third-party libraries with Drupal 7 contributed modules must still monitor and apply updates for those third-party libraries.

For press contacts, please email security-press@drupal.org.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #352 - D7 to D9 Migration

Planet Drupal - Mon, 2022-06-20 14:00

Today we are talking about D7 to D9 Migration with Mauricio Dinarte.


  • Why are you passionate about migration
  • First thing to think about when migrating
  • Timeline
    • Factors
  • Tips and tricks
  • Helpful tools and migrations
  • Tricky things to migrate
  • Data structure inconsistencies
  • Embedded media
  • Data management
  • Source sets
    • CSV
    • Json
    • DB connection
  • Understanddrupal.com
  • Who is the audience
  • Any new content
Resources

Guests

Mauricio Dinarte - understanddrupal.com - @dinarcon


Nic Laflin - www.nLighteneddevelopment.com @nicxvan
John Picozzi - www.epam.com @johnpicozzi
Donna Bungard - @dbungard


Event Platform

The Event Platform is actually a set of modules, each of which provides functionality designed to satisfy the needs of anyone creating a site for a Drupal Camp or similar event.

Categories: FLOSS Project Planets

Łukasz Langa: Weekly Report, June 13 - 19

Planet Python - Mon, 2022-06-20 13:35

This week was almost entirely focused on iOS build support for CPython. I’m writing a blog post on my adventures with this. I also spent some time with the CLA bot. In terms of pull requests, I barely closed 13.

Categories: FLOSS Project Planets