roose.digital: How to create a calculator tool in Drupal without programming

Planet Drupal - Fri, 2023-09-15 13:34

Ambitious site builders can indulge themselves with Drupal. It is, in fact, a perfect no-code environment when you can't program but still want to build more complex calculators. In this blog, I will use an example to show how I achieved this.

Categories: FLOSS Project Planets

Web Review, Week 2023-37

Planet KDE - Fri, 2023-09-15 11:38

Let’s go for my web review for the week 2023-37.

Willingham Sends Fables Into the Public Domain

Tags: culture, law, public-domain

This is unclear on the technicalities (is it even possible to just claim it like this? Is CC0 required? etc.). Still, this is a bold move; hats off to this renowned author.


Google gets its way, bakes a user-tracking ad platform directly into Chrome | Ars Technica

Tags: tech, google, surveillance, attention-economy

If you’re still using Chrome, maybe you shouldn’t… They’re clearly making it easier to overcome ad blockers, and the tracking won’t be a third-party thing anymore: the browser itself will directly report on your behavior.


Meet the Guy Preserving the New History of PC Games, One Linux Port at a Time

Tags: tech, gaming, maintenance, culture

Interesting conservation work. Video games, especially indie ones, are part of our culture, so it’s important for such work to happen and be funded.


Touch Pianist - Tap in Rhythm and Perform Your Favourite Music

Tags: tech, music, funny

Really cool and fun experiment. Surprisingly relaxing I find.


If a hammer was like AI…

Tags: tech, ai, gpt, ethics, ecology, bias

Lays out the ethical problems with the current trend of AI systems very well. They’re definitely not neutral tools and currently suffer from major issues.


Against LLM maximalism · Explosion

Tags: tech, ai, gpt, language

Now this is a very good article highlighting the pros and cons of large language models for natural language processing tasks. They can help with some things but definitely shouldn’t be relied on for longer-term systems.


Computer Science from the Bottom Up

Tags: tech, system, unix, cpu, memory, filesystem

I didn’t read it all since it’s basically a whole book. Still, from the outline it looks like a very good resource for beginners or for digging deeper into some lower-level topics.


A systematic approach to debugging | nicole@web

Tags: tech, debugging

Good process for fixing bugs. Definitely similar to how I approach it as well.


A user program doing intense IO can manifest as high system CPU time

Tags: tech, io, storage, cpu, kernel, performance

A good reminder that depending on what happens in the kernel, the I/O time you were expecting might turn out to be purely CPU time.


Linear code is more readable

Tags: tech, programming, craftsmanship

A bit rambling, but there’s something interesting in it. Splitting out small functions early will do more harm than good if they’re not reused. Don’t assume they automatically make things easier to read.


Async Rust Is A Bad Language

Tags: tech, rust, asynchronous, criticism

More details are surfacing regarding async and Rust… definitely not a match made in heaven, it seems.


Good performance is not just big O - Julio Merino (jmmv.dev)

Tags: tech, performance, programming

Good list of things to keep in mind when thinking about performance. Of course, always measure with a profiler when you want to be really sure.


Response Time Is the System Talking

Tags: tech, queuing, networking, system, performance

Interesting way to approximate how loaded a system is.


FIFO queues are all you need for cache eviction

Tags: tech, caching

Interesting caching strategy. Looks somewhat simple to put in place as well.


Death by a thousand microservices

Tags: tech, microservices, criticism, complexity

Looks like the morbid fascination for microservices is fading. This is very welcome. This piece is a good criticism of this particular fad and gives an interesting theory of why it came to be.


Making visually stable UIs | Horizon EDA Blog

Tags: tech, gui, fonts

Think of the typography and fonts if you don’t want to have text jumping around in your GUI.


Bricolage | Some notes on Local-First Development

Tags: tech, web, frontend, crdt

Interesting “state of the union” regarding local-first web frontends. Lots of pointers to other resources too.


Multi-page web apps

Tags: tech, web, frontend, complexity

Good reasoning: multi-page applications should be the default choice for web frontend architecture. Single-page applications come at a cost.


The Tyranny of the Marginal User - by Ivan Vendrov

Tags: tech, attention-economy, criticism, design, ux

OK, this is a very bleak view… maybe a bit over the top, I’m unsure. There seems to be some truth to it, though.


On Waiting - Tim Kellogg

Tags: management, organization, patience

This is sound advice. In other words, don’t commit too early, only when you have enough information. Of course, monitor things to make sure you don’t miss said information.


Bye for now!

Categories: FLOSS Project Planets

Stack Abuse: How to Delete a Global Variable in Python

Planet Python - Fri, 2023-09-15 10:51

In Python, variables declared outside of a function, in the global scope, are known as global variables. These variables can be accessed by any function in the program. However, there may be instances where you want to delete or change a global variable within a function. This Byte will guide you through the process of doing just that.

Global Variables in Python

Before we delve into how to delete or change a global variable, let's take a moment to understand what a global variable is. In Python, a variable declared outside of a function is known as a global variable. This means that the variable can be accessed from anywhere in the code - be it within a function or outside.

Here's a simple example of a global variable:

x = 10

def print_global():
    print("The global variable is:", x)

print_global()


The global variable is: 10

Here, x is a global variable because it is defined outside of the print_global function, yet it can still be accessed within the function.

Why Delete a Global Variable?

So you might be wondering, why would we ever want to delete a global variable? Well, in large programs, global variables can consume significant memory resources, especially if they contain large data structures or objects. Or maybe your list of global variables (via globals()) has become far too cluttered.

One advantage of deleting global variables you no longer need is that it can help to free up memory, improving the footprint and efficiency of your code.

Wait! Deleting a global variable should be done with caution, as it can lead to errors if other parts of your program are still trying to access it.

How to Delete a Global Variable

Deleting a global variable in Python is pretty easy to do - we just use the del keyword. However, if you try to delete a global variable directly within a function, you will encounter an error. This is because Python treats variables as local by default within functions.

Here's an example:

x = 10

def delete_global():
    del x
    print(x)

delete_global()

This will output:

UnboundLocalError: local variable 'x' referenced before assignment

To delete the global variable within the function, we need to declare it as global within the function using the global keyword:

x = 10

def delete_global():
    global x
    del x

delete_global()
print(x)

This will output:

NameError: name 'x' is not defined

As you can see, after calling the delete_global function, trying to print x results in a NameError because the global variable x has been deleted.


In this Byte, we've learned about global variables in Python and why you might want to delete them. We've also seen how to delete a global variable within a function using the del and global keywords. Just always make sure that the variable is not needed elsewhere in your code before deleting it.
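If deletion needs to be safe even when the variable may already be gone, one defensive pattern (an addition to the examples above, not from them) is globals().pop(), which removes the name if present and does nothing otherwise:

```python
x = 10

# pop() removes the global if present and returns the default otherwise,
# so repeated calls never raise a NameError
globals().pop('x', None)
globals().pop('x', None)  # already gone: still no error
print('x' in globals())   # False
```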

Categories: FLOSS Project Planets

Countering string bloat (addendum)

Planet KDE - Fri, 2023-09-15 10:00

As last week’s post on countering string bloat triggered some interest (and a few misunderstandings), here are a few more details on that topic. Nevertheless, this isn’t going to be a comprehensive discussion of string handling in Qt (and was never meant to be); there’s plenty of posts and talks on that subject already out there if you want to dig deeper.

When to use QLatin1String

The KWeatherCore example in the last post mainly discussed the use of QLatin1String in combination with Qt’s JSON API, as that has corresponding overloads. And that is a very important point: preferring QLatin1String over QStringLiteral for constant strings is generally only advisable if such overloads exist.

Things will work either way of course (as long as we are only dealing with 7-bit ASCII strings), but at a cost. QLatin1String is implicitly convertible to QString, which involves a runtime memory allocation of twice its size and a text codec conversion to UTF-16. QStringLiteral exists to avoid precisely that, and saving that runtime cost is generally preferable over a few bytes of storage.

So this is always a case-by-case decision. QLatin1String overloads only exist in places where they can actually be implemented more efficiently, and it only makes sense to add them in those cases.

Note that “overload” is to be understood a bit more loosely than in the strict C++ sense here. Examples:

  • String comparison and searching.
  • JSON keys.
  • QLatin1String::arg as an alternative for QString::arg.
  • String concatenations, in particular in combination with QStringBuilder.

Yes, exceptions might exist where the reduced binary size trumps the additional runtime cost, e.g. for sparsely used large data tables. But there might be even better solutions for that, and that’s probably worth a post on its own.


With the right choice being a case-by-case decision, there’s understandably demand for better tooling to support this; search and replace isn’t going to cut it. While I am not aware of a tool that reliably identifies places where QLatin1String overloads should be used, there are tools that can at least support that work.

Clazy has the qstring-allocations static check to identify QString uses that can potentially be optimized to avoid memory allocations. This is actually the reverse of what is discussed here, so it’s a good way to catch overzealous QLatin1String uses. It has the second-to-lowest reliability rating regarding false positives though, so this is also not something to apply without careful review.

Clazy’s qlatin1string-non-ascii check is another useful safety net, finding QLatin1String literals that cannot actually be represented in the Latin-1 encoding.

Enabling QT_NO_CAST_FROM_ASCII also helps a bit as it forces you to think about the right type and encoding when interfacing with QString API.

The other aspect of tooling is looking at binary size impact of code changes. A simple but effective tool is bloaty, which is what produced the size difference table in the previous post. Make sure to strip binaries beforehand, otherwise the debug information will drown everything else.

For a more detailed look, there is also the size tree map in ELF Dissector.

ELF Dissector's size tree map showing KPublicTransport, the big block in the center being QRC data.

Expected savings

How much saving to expect varies greatly depending on a number of circumstances. It’s also worth looking at absolute and relative savings separately. In the previously mentioned KWeatherCore example this was 16kB or 7%, respectively.

This is due to:

  • Few if any QLatin1String overloads were used, so there was a lot of room for optimization.
  • The library is very small and a significant part of it is JSON handling.
  • Other significantly more impactful optimizations to its static data tables had been applied previously (see e.g. this MR).

Let’s look at another example to put this into perspective, this change in KPublicTransport. Just like the KWeatherCore change this also changes QStringLiteral to QLatin1String in places where corresponding overloads exist, primarily in JSON API.

FILE SIZE        VM SIZE
--------------   --------------
  -0.1%     -16    -0.1%     -16    .eh_frame_hdr
  -0.1%    -144    -0.1%    -144    .eh_frame
  -0.1%    -430    -0.1%    -430    .text
  -0.9% -3.97Ki    -0.9% -3.97Ki    .rodata
  -0.3% -4.00Ki    -0.3% -4.54Ki    TOTAL

The savings here are lower though, just 4kB or 0.3%. This is due to:

  • The majority of this code already uses QLatin1String overloads, the change only fixes a few cases that slipped through.
  • Unlike with KWeatherCore the QString overloads remain in use for generic code not using literal keys. We therefore see no reduction due to fewer used external symbols (.plt remains unchanged).
  • The library is much larger in total.
  • The data size is dominated by compiled in resources, primarily polygons in GeoJSON format (the big red box in the center of the above screenshot).

The latter would be the much more relevant optimization target here, as GeoJSON isn’t the most efficient format regarding either space or runtime cost.

In general, the absolute amount of size reduction should be somewhat proportional to the amount of QStringLiteral changed to QLatin1String. If the relative change is surprisingly low, it’s worth checking what else is taking the space.

An even more extreme example that came up in discussions on this is Tokodon, where the relative reduction was just a fraction of a percent. A view in ELF Dissector reveals the reason: its giant compiled-in Emoji tables overshadow everything else.

Size tree map for Tokodon, the large top left and center blocks are all related to static Emoji data.

Besides the data size (which won’t be entirely avoidable here), this also involves a significant amount of static construction code, which is the even more interesting optimization target as it also impacts application startup and runtime memory use.


As always with optimizations there is no silver bullet. Occasionally looking into the output of various profiling and analysis tools for your application or library usually turns up a few unexpected details that are worth improving.

Nevertheless I stand by my recommendation from last time to keep the seemingly minor details like the use of the right string API in mind. It’s an essentially free optimization that adds up given how widely applicable it is.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #172: Measuring Multiple Facets of Python Performance With Scalene

Planet Python - Fri, 2023-09-15 08:00

When choosing a tool for profiling Python code performance, should it focus on the CPU, GPU, memory, or individual lines of code? What if it looked at all those factors and didn't alter code performance while measuring it? This week on the show, we talk about Scalene with Emery Berger, Professor of Computer Science at the University of Massachusetts Amherst.

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Golems GABB: Migrating to Drupal 10: Best Practices and Challenges to Consider

Planet Drupal - Fri, 2023-09-15 07:53

What version of Drupal is your site running on? Usage statistics for Drupal core show that some sites still use Drupal 6 and 7. There are even websites still running on Drupal 5. Using old versions, you deprive yourself of the new features of the admin panel and of community-supported modules, themes, and profiles. Your customers are left without a modern user experience and cannot be sure your website is secure. Support for these versions is over. The only way ahead is migration to Drupal 10, the latest version, released on December 14, 2022.

What has changed in the new version of Drupal?

The updated version of Drupal 10 offers users an extensive list of valuable updates compared to the previous ones:

Categories: FLOSS Project Planets

Krita 5.2 Release Candidate is out!

Planet KDE - Fri, 2023-09-15 07:24

The release candidate for Krita 5.2 is here. This means that we are confident that all the major bugs brought on by the changes in Krita 5.2 have now been fixed, and we would like you to give it another round of testing.

Please pay extra attention to the following features of Krita, since they got updated or reworked since Beta2:

  • assignment of profiles to displays in multi-monitor setup (Krita should use EDID info to map the displays to profiles everywhere, except on macOS)
  • the docker layout should now be properly restored after restarting Krita, even after using canvas-only mode
  • autokeyframing feature of animated layers got a lot of fixes since Beta2

Here is the full list of bugs (and other minor changes) that have been fixed since the second beta:

  • Fix crash when activating Halftone filter in Filter Brush (Bug 473242)
  • Fix a crash when activating Color Index filter in the filter brush (Bug 473242)
  • text: Write xml:space as XML attribute in SVG output
  • text: Normalize linebreaks into LF when loading from SVG
  • Build patched libraqm with Krita instead of in 3rdparty deps
  • [qtbase] Correctly parse non BMP char refs in the sax parser
  • Actually load the fonts in the QML theme (Bug 473478)
  • Fix Channels docker to generate thumbnails asynchronously (Bug 473130)
  • Fix wobbly lines when using line tool (Bug 473459)
  • text: Make sure white-space overrides xml:space
  • text: Reject negative line-height in SVG
  • Simplified fix for tag selector and checkboxes problem (CCBug 473510)
  • Fix creation of a new image from clipboard (Bug 473559)
  • Make sure that the horizontal mode of the preset chooser is properly initialized
  • Hide preset chooser mode button when the chooser is in horizontal mode (Bug 473558)
  • Only repaint KisShapeLayerCanvas on setImage when really needed
  • text: Do not synthesize bold in several cases like fonts that are already bold or variable fonts.
  • AnimAudio: Fixed crash when loading animation file with audio attached, due to incompletely constructed canvas.
  • Fix a model warning in KisTimeBasedItemModel (Bug 473485)
  • Don’t recreate the frames when not necessary (Bug 472414)
  • Fix cross-colorspace bitBlt with channel flags (Bug 473479)
  • text: Also consider HHEA metrics for default line height (Bug 472502)
  • Fix BDF font-size matching (Bug 472791)
  • Make sure that the node emits nodeChanged() signal on opacity change (Bug 473724)
  • Fix ‘enabled’ state of the actions in the Default Tool (Bug 473719)
  • Respect statusbar visibility after Welcome page (Bug 472800)
  • Fix a warning in outline generation code in shape tools (Bug 473715)
  • Possibly fix a crash when switching animated documents (Bug 473760)
  • OpenGL: Request DeprecatedFunctions on Windows to fix Intel driver (Bug 473782)
  • Allow welcome page banner to shrink
  • text: Use line-height when flowing text in shape (Bug 473527)
  • text: Make first word of text-in-shape flush against the shape
  • Fix color values under transparent pixels be lost in Separate Image (Bug 473948)
  • flake: Fix transformation of text path and shape-inside (Bug 472571)
  • Make sure that Krita correctly specifies video codec for libopenh264 (Bug 473207)
  • Don’t allow New/Open File buttons to grow taller (Bug 473509)
  • raqm: Fix Unicode codepoint conversion from UTF-16
  • Android: Bump targetSdkVersion to 33
  • Fix multiple issues with auto-keyframing code
  • Edit Shapes tool: make moving points move points by a delta instead of snapping them to the cursor
  • Initialize tool configGroup before optionWidget (Bug 473515)
  • Fix updates on autokeyframing with onion skins enabled (Bug 474138)
  • JPEG-XL: fix crash on importing XYB grayscale that needs transform
  • JPEG-XL: also apply patches workaround on lossy export
  • Fix artifacts when using assistants in images with high DPI (Bug 436422)
  • Don’t allow closing hidden document views without confirmation (Bug 474396)
  • logdocker: Fix infinite tail recursion with multiple windows (Bug 474431)
Download

Windows

If you’re using the portable zip files, just open the zip file in Explorer and drag the folder somewhere convenient, then double-click on the Krita icon in the folder. This will not impact an installed version of Krita, though it will share your settings and custom resources with your regular installed version of Krita. For reporting crashes, also get the debug symbols folder.

Note that we are not making 32 bits Windows builds anymore.


The separate gmic-qt AppImage is no longer needed.

(If, for some reason, Firefox thinks it needs to load this as text: to download, right-click on the link.)


Note: if you use macOS Sierra or High Sierra, please check this video to learn how to enable starting developer-signed binaries, instead of just Apple Store binaries.


We consider Krita on ChromeOS as ready for production. Krita on Android is still beta. Krita is not available for Android phones, only for tablets, because the user interface requires a large screen. RC1 packages for Android are signed as usual.

Source code md5sum

For all downloads, visit https://download.kde.org/unstable/krita/5.2.0-rc1/ and click on Details to get the hashes.


The Linux AppImage and the source .tar.gz and .tar.xz tarballs are signed. This particular release is signed with a non-standard key; you can retrieve it here or download it from the public server:

gpg --recv-keys E9FB29E74ADEACC5E3035B8AB69EB4CF7468332F

The signatures are here (filenames ending in .sig).



The post Krita 5.2 Release Candidate is out! appeared first on Krita.

Categories: FLOSS Project Planets

Symphony Blog: [Drupal tutorial] - Rating on parent nodes by comments' votes with Fivestar

Planet Drupal - Fri, 2023-09-15 04:32

On our directory theme BizReview, which shows listings and their reviews, we need a rating mechanism so users can leave votes on comments and the parent listings will calculate average ratings.

read more

Categories: FLOSS Project Planets

PyBites: Write more maintainable Python code, avoid these 15 code smells

Planet Python - Fri, 2023-09-15 04:23

This week we talk about code smells.

Listen here:

Also available on our YouTube channel:

While there, like and subscribe to get similar content regularly …

Code smells are characteristics in the code that might indicate deeper issues or potential problems. While they’re not necessarily bugs, they can be a sign of poor code quality or maintainability issues.

We distilled 15 common smells, ranging from generic programming issues to Python-specific ones. We hope it will make you more conscious of your own code as well as code you review.

If you have any feedback, hit us up on:
– LinkedIn
– X
– Email
(Also for any podcast topic requests …)

Mentioned: Dictionary Dispatch Pattern video

And to write cleaner, more maintainable code, in the context of (complex) real world applications, check out our 1:1 coaching options.

00:00 Intro music
00:20 What are code smells?
01:11 1. Long functions or classes
01:46 2. Duplicated code
02:25 3. Data Clumps
03:13 4. Using the global space
03:52 5. Magic numbers
04:38 6. Primitive obsession
05:06 7. Overusing comments
06:23 8. Too deep nesting
07:36 9. Switch statement or long if-elif-elif-else chains
08:41 10. Too deep inheritance
09:45 11. Dead code
10:21 12. Misusing (nested) listcomps
11:03 13. Single letter variable names
12:03 14. Mutable Default Arguments
13:05 15. Error Silencing
14:04 Wrap up
14:56 Outro music
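As a taste of one of the Python-specific smells above (14, mutable default arguments), here's a minimal sketch of the pitfall and its idiomatic fix; the function names are made up for illustration:

```python
def append_bad(item, items=[]):
    # Smell: the default list is created once and shared between calls
    items.append(item)
    return items

def append_good(item, items=None):
    # Fix: use None as a sentinel and create a fresh list per call
    if items is None:
        items = []
    items.append(item)
    return items

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2] -- surprise, state leaked between calls
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```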

Thanks for tuning in as always and next week we’ll be back with a brand new episode …

Categories: FLOSS Project Planets

Too Many Sites

Planet KDE - Fri, 2023-09-15 02:56

I’m trying to come up with a plan to harmonize all my sites. There are way too many installs on too many machines spread out over vendors and sites. A general clean-up is really needed.

Right now I have a couple of sites that are self-hosted (they run on an old machine in my office), a VPS at Linode and a couple of VPSes at DigitalOcean. I also have a couple of sites run via github pages.

Right now, the majority of the sites are based on WordPress, which is great due to its ease of use – but static sites are easier (and cheaper) to host. In addition to this, I also have a couple of Django apps running on VPSes. Nice, but they require quite a bit of RAM in my experience.

So, the general plan is to clean up my domains, and to harmonize the environments. Perhaps a single WordPress machine, that also hosts the static sites, and the dedicated machines for the more complex Django apps. There will also be a bunch of URL rewrites to make the structure better, i.e. this blog will probably live under e8johan.se, but be available via the old URL too (thelins.se/johan/blog).

When reviewing my domains, I found the following ones that I most likely will give up. If there are any takers, please tell me what you want it for, and I’d be happy to hand it over to you (contact me at e8johan-at-gmail):

  • oppenkod.se (open source in Swedish)
  • qt6book.org (I’ve already got qmlbook.org, and I won’t write another book until Qt 8)
  • qt6book.com (see qt6book.org)

Also, I plan to pull all the machines over to DigitalOcean, as they offer a really nice interface (and APIs!) for managing VPSs, DNS and more (and I’ve gotten used to them while building Eperoto’s infrastructure).

Finally, I intend to collect all domain name registrations at a single registrar (most likely Loopia, but let’s see).

Categories: FLOSS Project Planets

Stack Abuse: Check if Elements in List Matches a Regex in Python

Planet Python - Thu, 2023-09-14 16:59

Let's say you have a list of home addresses and want to see which ones reside on a "Street", "Ave", "Lane", etc. Given the variability of physical addresses, you'd probably want to use a regular expression to do the matching. But how do you apply a regex to a list? That's exactly what we'll be looking at in this Byte.

Why Match Lists with Regular Expressions?

Regular expressions are one of the best, if not the best, ways to do pattern matching on strings. In short, they can be used to check if a string contains a specific pattern, replace parts of a string, and even split a string based on a pattern.

Another reason you may want to use a regex on a list of strings: you have a list of email addresses and you want to filter out all the invalid ones. You can use a regular expression to define the pattern of a valid email address and apply it to the entire list in one go. There are endless examples like this of why you'd want to use a regex over a list of strings.
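The email-filtering idea can be sketched like this; note that the pattern below is a deliberately loose illustration, not a robust email validator:

```python
import re

# Loose pattern for illustration only -- real email validation is much harder
pattern = re.compile(r'^[\w.+-]+@[\w-]+\.[A-Za-z]{2,}$')

emails = ['alice@example.com', 'not-an-email', 'bob@test.org']
valid = [e for e in emails if pattern.match(e)]
print(valid)  # ['alice@example.com', 'bob@test.org']
```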

Python's Regex Module

Python's re module provides built-in support for regular expressions. You can import it as follows:

import re

The re module has several functions to work with regular expressions, such as match(), search(), and findall(). We'll be using these functions to check if any element in a list matches a regular expression.

Link: For more information on using regex in Python, check out our article, Introduction to Regular Expressions in Python

Using the match() Function

To check if any element in a list matches a regular expression, you can use a loop to iterate over the list and the re module's match() function to check each element. Here's an example:

import re

# List of strings
list_of_strings = ['apple', 'banana', 'cherry', 'date']

# Regular expression pattern for strings starting with 'a'
pattern = '^a'

for string in list_of_strings:
    if re.match(pattern, string):
        print(string, "matches the pattern")

In this example, the match() function checks if each string in the list starts with the letter 'a'. The output will be:

apple matches the pattern

Note: The ^ character in the regular expression pattern indicates the start of the string. So, ^a matches any string that starts with 'a'.

This is a basic example, but you can use more complex regular expression patterns to match more specific conditions. For example, here is a regex for matching an email address:

([A-Za-z0-9]+[.-_])*[A-Za-z0-9]+@[A-Za-z0-9-]+(\.[A-Za-z]{2,})+

Using the search() Function

While re.match() is great for checking the start of a string, re.search() scans through the string and returns a MatchObject if it finds a match anywhere in the string. Let's tweak our previous example to find any string that contains "Hello".

import re

my_list = ['Hello World', 'Python Hello', 'Goodbye World', 'Say Hello']
pattern = "Hello"

for element in my_list:
    if re.search(pattern, element):
        print(f"'{element}' matches the pattern.")

The output will be:

'Hello World' matches the pattern.
'Python Hello' matches the pattern.
'Say Hello' matches the pattern.

As you can see, re.search() found the strings that contain "Hello" anywhere, not just at the start.

Using the findall() Function

The re.findall() function returns all non-overlapping matches of pattern in string, as a list of strings. This can be useful when you want to extract all occurrences of a pattern from a string. Let's use this function to find all occurrences of "Hello" in our list.

import re

my_list = ['Hello Hello', 'Python Hello', 'Goodbye World', 'Say Hello Hello']
pattern = "Hello"

for element in my_list:
    matches = re.findall(pattern, element)
    if matches:
        print(f"'{element}' contains {len(matches)} occurrence(s) of 'Hello'.")

The output will be:

'Hello Hello' contains 2 occurrence(s) of 'Hello'.
'Python Hello' contains 1 occurrence(s) of 'Hello'.
'Say Hello Hello' contains 2 occurrence(s) of 'Hello'.

Working with Nested Lists

What happens if our list contains other lists? Python's re module functions won't work directly on nested lists, just like it wouldn't work with the root list in the previous examples. We need to flatten the list or iterate through each sub-list.

Let's consider a list of lists, where each sub-list contains strings. We want to find out which strings contain "Hello".

import re

my_list = [['Hello World', 'Python Hello'], ['Goodbye World'], ['Say Hello']]
pattern = "Hello"

for sub_list in my_list:
    for element in sub_list:
        if re.search(pattern, element):
            print(f"'{element}' matches the pattern.")

The output will be:

'Hello World' matches the pattern.
'Python Hello' matches the pattern.
'Say Hello' matches the pattern.

We first loop through each sub-list in the main list. Then for each sub-list, we loop through its elements and apply re.search() to find the matching strings.
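The other option mentioned above, flattening the list first, can be sketched with a nested comprehension (an illustrative variation, not from the original article):

```python
import re

nested = [['Hello World', 'Python Hello'], ['Goodbye World'], ['Say Hello']]

# Flatten one level of nesting, then filter with the regex
flat = [s for sub in nested for s in sub]
matches = [s for s in flat if re.search('Hello', s)]
print(matches)  # ['Hello World', 'Python Hello', 'Say Hello']
```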

Working with Mixed Data Type Lists

Python lists are versatile and can hold a variety of data types. This means you can have a list with integers, strings, and even other lists. This is great for a lot of reasons, but it also means you have to deal with potential issues when the data types matter for your operation. When working with regular expressions, we only deal with strings. So, what happens when we have a list with mixed data types?

import re

mixed_list = [1, 'apple', 3.14, 'banana', '123', 'abc123', '123abc']
regex = r'\d+'  # matches any sequence of digits

for element in mixed_list:
    if isinstance(element, str) and re.match(regex, element):
        print(f"{element} matches the regex")
    else:
        print(f"{element} does not match the regex or is not a string")

In this case, the output will be:

1 does not match the regex or is not a string
apple does not match the regex or is not a string
3.14 does not match the regex or is not a string
banana does not match the regex or is not a string
123 matches the regex
abc123 does not match the regex or is not a string
123abc matches the regex

We first check if the current element is a string. Only then do we check if it matches the regular expression. This is because the re.match() function expects a string as input. If you try to use it on an integer or a float, Python will throw an error.
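Specifically, passing a non-string to re.match() raises a TypeError, which you can see with a small sketch like this:

```python
import re

# re.match() only accepts str (or bytes) -- anything else raises TypeError
try:
    re.match(r'\d+', 42)
except TypeError:
    print("re.match() rejects non-string input")
```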


Python's re module provides several functions to match regex patterns in strings. In this Byte, we learned how to use these functions to check if any element in a list matches a regular expression. We also saw how to handle lists with mixed data types. Regular expressions can be complex, so take your time to understand them. With a bit of practice, you'll find that they can be used to solve many problems when working with strings.

Categories: FLOSS Project Planets

Stack Abuse: How to Disable Warnings in Python

Planet Python - Thu, 2023-09-14 13:25

Working with any language, you've probably come across warnings - and lots of them. In Python, our warnings are the yellow-highlighted messages that appear when code runs. These warnings are Python's way of telling us that, while our code is technically correct and will run, there's something in it that's not quite right or could eventually lead to issues. Sometimes these warnings are helpful, and sometimes they're not. So what if we want to disable them?

In this Byte, we'll show a bit more about what Python warnings are, why you might want to disable them, and how you can do it.

Python Warnings

Python warnings are messages that the Python interpreter throws when it encounters unusual code that may not necessarily result in an error, but is probably not what you intended. These warnings can include things like deprecation warnings, which tell you when a Python feature is being phased out, or syntax warnings, which alert you to weird but syntactically correct code.

Here's an example of a warning you might see:

import warnings

def fxn():
    warnings.warn("fxn() is deprecated", DeprecationWarning, stacklevel=2)

warnings.simplefilter('always', DeprecationWarning)
fxn()

When you run this code, saved here as deprecation_example.py (avoid naming the script warnings.py, since that would shadow the standard-library module), Python will output a DeprecationWarning:

$ python deprecation_example.py
deprecation_example.py:7: DeprecationWarning: fxn() is deprecated
  fxn()

Note: We had to add the warnings.simplefilter('always', DeprecationWarning) line in order to get the warning to show. Otherwise DeprecationWarnings are ignored by default.
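The filters work in the other direction too, which is handy in test suites: simplefilter("error") promotes warnings to exceptions so they can't be missed. A small sketch:

```python
import warnings

# Turn DeprecationWarnings into exceptions instead of printed messages
warnings.simplefilter("error", DeprecationWarning)

try:
    warnings.warn("old api", DeprecationWarning)
except DeprecationWarning as e:
    caught = str(e)
    print(f"caught: {caught}")  # caught: old api
```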

Why disable them?

That's a good question. Warnings are indeed useful, but there are times when you might want to disable them.

For example, if you're working with a large codebase and you're aware of the warnings but have decided to ignore them for now, having them constantly pop up can be not only annoying but also cause you to miss more important output from your code. In the same vein, if you're running a script that's outputting to a log file, you might not want warnings cluttering up your logs.

How to Disable Python Warnings

There are a few ways to disable warnings in Python, and we'll look at three of them: using the warnings module, using command line options, and using environment variables.

Using the warnings Module

Python's warnings module provides a way to control how warnings are displayed. You can use the filterwarnings function to ignore all warnings programmatically:

import warnings

warnings.filterwarnings("ignore")

This will suppress all warnings. If you want to suppress only a specific type of warning, you can do so by specifying the warning class:

warnings.filterwarnings("ignore", category=DeprecationWarning)

In this case, only DeprecationWarnings will be suppressed.
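If you only want warnings silenced for a particular stretch of code, the warnings.catch_warnings() context manager saves the filter state on entry and restores it on exit:

```python
import warnings

def noisy():
    warnings.warn("something is off", UserWarning)

# Filters changed inside the block are undone when it exits
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    noisy()  # suppressed here

# From here on, the previous warning filters apply again
```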

Using Command Line Options

If you're running your Python script from the command line, you can use the -W option followed by ignore to suppress all warnings:

$ python -W ignore your_script.py

Using Environment Variables

You can also use the PYTHONWARNINGS environment variable to control the display of warnings. To ignore all warnings, you can set this variable to ignore:

$ export PYTHONWARNINGS="ignore"
$ python your_script.py

This will suppress all warnings for the entire session. If you want to suppress warnings for all sessions, you can add the export PYTHONWARNINGS="ignore" line to your shell's startup file (like .bashrc or .bash_profile for bash) so that this setting is always set.

Risks of Disabling Warnings

While there are several ways to disable warnings in Python, you should also understand the risks associated with doing this.

For example, a DeprecationWarning alerts you that a function you're using is slated for removal in a future version of Python or a library. If you ignore this warning, your code may suddenly stop working when you upgrade to a new version.

As a general rule, it's best to just fix the issues causing the warnings, instead of simply suppressing the warnings. There are, however, situations where removing warnings is actually the most practical solution, like when you're using a library that generates warnings you can't control and aren't actually helpful. In these cases, it's best to just suppress only the specific warnings you need to, and avoid using a blanket "ignore all" command.
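For instance, filterwarnings accepts a module argument, so you can scope the suppression to the offending library. The module name noisy_lib below is hypothetical:

```python
import warnings

# Ignore DeprecationWarnings only when they originate from modules
# whose name starts with "noisy_lib" (a made-up library name);
# warnings raised from everywhere else still get through.
warnings.filterwarnings("ignore", category=DeprecationWarning, module=r"noisy_lib")
```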


Warnings are there for a reason, like signaling potential issues in the code that might lead to bugs or unexpected behavior, so it's best not to suppress them. However, there are times when you may want to disable these warnings, whether to clean up your console output or because you're aware of the warning and have decided it's not relevant to your particular situation.

In this Byte, we've learned about Python's warnings, how to suppress them, along with the potential risks of doing so.

Categories: FLOSS Project Planets

KDE: KDE neon user edition updates! Debian updates, Snaps on hold.

Planet KDE - Thu, 2023-09-14 13:06

I had to make the hard decision to put snaps on hold. I am working odd jobs to “stay alive” and to pay for my beautiful scenery. My “Project” should move forward, as I have done everything asked of me including finding a super awesome management team to take us all the way through. But until it is signed sealed and delivered, I have to survive. In my free time I am helping out Jonathan and working on KDE Neon, he has done so much for me over the years, it is the least I can do!

So without further ado! Carlos and I have been working diligently on new Frameworks 5.110, Plasma 5.27.8, and Applications 23.08.1! They are complete and ready in /user! With that, a great many fixes to qml dependencies and packaging updates. Current users can update freely and the docker images and ISO are building now. We are working on Unstable… as it is a bit unstable right now, but improving

On the Debian front I am wrapping up packaging of new upstream release of squashfuse.

Thanks for stopping by!

If you can spare some change, consider a donation

Thank you!


Categories: FLOSS Project Planets

Scarlett Gately Moore: KDE: KDE neon user edition updates! Debian updates, Snaps on hold.

Planet Debian - Thu, 2023-09-14 13:06

I had to make the hard decision to put snaps on hold. I am working odd jobs to “stay alive” and to pay for my beautiful scenery. My “Project” should move forward, as I have done everything asked of me including finding a super awesome management team to take us all the way through. But until it is signed sealed and delivered, I have to survive. In my free time I am helping out Jonathan and working on KDE Neon, he has done so much for me over the years, it is the least I can do!

So without further ado! Carlos and I have been working diligently on new Frameworks 5.110, Plasma 5.27.8, and Applications 23.08.1! They are complete and ready in /user! With that, a great many fixes to qml dependencies and packaging updates. Current users can update freely and the docker images and ISO are building now. We are working on Unstable… as it is a bit unstable right now, but improving

On the Debian front I am wrapping up packaging of new upstream release of squashfuse.

Thanks for stopping by!

If you can spare some change, consider a donation

Thank you!


Categories: FLOSS Project Planets

To trust AI, it must be open and transparent. Period.

Open Source Initiative - Thu, 2023-09-14 11:00

By Heather Meeker, OSS Capital

Machine learning has been around for a long time. But in late 2022, recent advancements in deep learning and large language models started to change the game and come into the public eye. And people started thinking, “We love Open Source software, so, let’s have Open Source AI, too.” 

But what is Open Source AI? And the answer is: we don’t know yet. 

Machine learning models are not software. Software is written by humans, like me. Machine learning models are trained; they learn on their own automatically, based on the input data provided by humans. When programmers want to fix a computer program, they know what they need: the source code. But if you want to fix a model, you need a lot more: software to train it, data to train it, a plan for training it, and so forth. It is much more complex. And reproducing it exactly ranges from difficult to nearly impossible.

The Open Source Definition, which was made for software, is now in its third decade, and has been a stunning success. There are standard Open Source licenses that everyone uses. Access to source code is a living, working concept that people use every day. But when we try to apply Open Source concepts to AI, we need to first go back to principles.

For something to be “Open Source” it needs to have one overarching quality:  transparency. What if an AI is screening you for a job, or for a medical treatment, or deciding a prison sentence? You want to know how it works. But deep learning models right now are a black box. If you look at the output of a model, it’s impossible to tell how or why the model came up with that output. All you can do is look at the inputs to see if its training was correct. And that’s not nearly as straightforward as looking at source code. 

AI has the potential to greatly benefit our world. Now is the first time in history we’ve had the information and technology to tackle our biggest problems, like climate change, poverty and war. Some people are saying AI will destroy the world, but I think it contributes to the hope of saving the world. 

But first, we need to trust it. And to trust it, it needs to be open and transparent.

As a consumer you should demand that the AI you use is open. As a developer, you should know what rights you have to study and improve AI. As a voter, you should have the right to demand that AI used by the government is open and transparent. 

Without transparency, AI is doomed. AI is potentially so powerful and capable that people are already frightened of it. Without transparency, AI risks going the way of crypto: a technology with great potential that gets shut down by distrust. I hope that we will figure out how to guarantee transparency before that happens, because the problems AI can help us solve are urgent, and I believe we can solve them if we work together. 


OSI has gathered a group of leaders who will be presenting ideas around the topic of AI and Open Source in our upcoming Deep Dive: Defining Open Source AI Webinar Series. Registration is free and allows you to attend and ask questions at any or all of the sessions taking place between September 26 and October 12, 2023. REGISTER HERE today!

The post To trust AI, it must be open and transparent. Period. appeared first on Voices of Open Source.

Categories: FLOSS Research

Python Software Foundation: Announcing Python Software Foundation Fellow Members for Q2 2023! 🎉

Planet Python - Thu, 2023-09-14 10:52

The PSF is pleased to announce its second batch of PSF Fellows for 2023! Let us welcome the new PSF Fellows for Q2! The following people continue to do amazing things for the Python community:

Esteban Maya Cadavid: Twitter, LinkedIn, GitHub, Instagram
Martijn Pieters: Stack Overflow, GitHub, Website
Philip Jones: Mastodon, GitHub, Website
Yifei Wang: GitHub

Thank you for your continued contributions. We have added you to our Fellow roster online.

The above members help support the Python ecosystem by being phenomenal leaders, sustaining the growth of the Python scientific community, maintaining virtual Python communities, maintaining Python libraries, creating educational material, organizing Python events and conferences, starting Python communities in local regions, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.

Let's continue recognizing Pythonistas all over the world for their impact on our community. The criteria for Fellow members is available online: https://www.python.org/psf/fellows/. If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. Quarter 3 nominations are currently in review. We are accepting nominations for Quarter 4 through November 20, 2023.

Are you a PSF Fellow and want to help the Work Group review nominations? Contact us at psf-fellow at python.org.

Categories: FLOSS Project Planets

Stack Abuse: Creating a Zip Archive of a Directory in Python

Planet Python - Thu, 2023-09-14 10:22

When dealing with large amounts of data or files, you might find yourself needing to compress files into a more manageable format. One of the best ways to do this is by creating a zip archive.

In this article, we'll be exploring how you can create a zip archive of a directory using Python. Whether you're looking to save space, simplify file sharing, or just keep things organized, Python's zipfile module provides a straightforward way to do this.

Creating a Zip Archive with Python

Python's standard library comes with a module named zipfile that provides methods for creating, reading, writing, appending, and listing contents of a ZIP file. This module is useful for creating a zip archive of a directory. We'll start by importing the zipfile and os modules:

import zipfile
import os

Now, let's create a function that will zip a directory:

def zip_directory(directory_path, zip_path):
    with zipfile.ZipFile(zip_path, 'w') as zipf:
        for root, dirs, files in os.walk(directory_path):
            for file in files:
                file_path = os.path.join(root, file)
                zipf.write(file_path,
                           os.path.relpath(file_path, os.path.join(directory_path, '..')))

In this function, we first open a new zip file in write mode. Then, we walk through the directory we want to zip. For each file in the directory, we use the write() method to add it to the zip file. The os.path.relpath() function is used so that we store the relative path of the file in the zip file, instead of the absolute path.

Let's test our function:

zip_directory('test_directory', 'archive.zip')

After running this code, you should see a new file named archive.zip in your current directory. This zip file contains all the files from test_directory.
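You can verify the result without unzipping anything by asking the archive for its contents. This sketch builds a throwaway test_directory first so it runs on its own; the file names are illustrative:

```python
import os
import zipfile

# Build a tiny directory so the example is self-contained
os.makedirs('test_directory', exist_ok=True)
with open(os.path.join('test_directory', 'hello.txt'), 'w') as f:
    f.write('hi')

with zipfile.ZipFile('archive.zip', 'w') as zipf:
    for root, dirs, files in os.walk('test_directory'):
        for file in files:
            path = os.path.join(root, file)
            zipf.write(path, os.path.relpath(path, os.path.join('test_directory', '..')))

# namelist() returns the stored path of every member, so you can
# confirm what went in without extracting anything
with zipfile.ZipFile('archive.zip') as zipf:
    print(zipf.namelist())
```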

Note: Be careful when specifying the paths. If the zip_path file already exists, it will be overwritten.

Python's zipfile module makes it easy to create a zip archive of a directory. With just a few lines of code, you can compress and organize your files.

In the following sections, we'll dive deeper into handling nested directories, large directories, and error handling. This may seem a bit backwards, but the above function is likely what most people came here for, so I wanted to show it first.

Using the zipfile Module

In Python, the zipfile module is the best tool for working with zip archives. It provides functions to read, write, append, and extract data from zip files. The module is part of Python's standard library, so there's no need to install anything extra.

Here's a simple example of how you can create a new zip file and add a file to it:

import zipfile

# Create a new zip file
zip_file = zipfile.ZipFile('example.zip', 'w')

# Add a file to the zip file
zip_file.write('test.txt')

# Close the zip file
zip_file.close()

In this code, we first import the zipfile module. Then, we create a new zip file named 'example.zip' in write mode ('w'). We add a file named 'test.txt' to the zip file using the write() method. Finally, we close the zip file using the close() method.

Creating a Zip Archive of a Directory

Creating a zip archive of a directory involves a bit more work, but it's still fairly easy with the zipfile module. You need to walk through the directory structure, adding each file to the zip archive.

import os
import zipfile

def zip_directory(folder_path, zip_file):
    for folder_name, subfolders, filenames in os.walk(folder_path):
        for filename in filenames:
            # Create complete filepath of file in directory
            file_path = os.path.join(folder_name, filename)
            # Add file to zip
            zip_file.write(file_path)

# Create a new zip file
zip_file = zipfile.ZipFile('example_directory.zip', 'w')

# Zip the directory
zip_directory('/path/to/directory', zip_file)

# Close the zip file
zip_file.close()

We first define a function zip_directory() that takes a folder path and a ZipFile object. It uses the os.walk() function to iterate over all files in the directory and its subdirectories. For each file, it constructs the full file path and adds the file to the zip archive.

The os.walk() function is a convenient way to traverse directories. It generates the file names in a directory tree by walking the tree either top-down or bottom-up.

Note: Be careful with the file paths when adding files to the zip archive. The write() method adds files to the archive with the exact path you provide. If you provide an absolute path, the file will be added with the full absolute path in the zip archive. This is usually not what you want. Instead, you typically want to add files with a relative path to the directory you're zipping.

In the main part of the script, we create a new zip file, call the zip_directory() function to add the directory to the zip file, and finally close the zip file.
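One way to follow that advice is to pass write() a second argument, the arcname, which is the path stored inside the archive. A sketch (the demo directory and file names are made up):

```python
import os
import zipfile

def zip_directory(folder_path, zip_file):
    for folder_name, subfolders, filenames in os.walk(folder_path):
        for filename in filenames:
            file_path = os.path.join(folder_name, filename)
            # Store each file relative to the folder being zipped,
            # so no absolute paths end up in the archive
            arcname = os.path.relpath(file_path, folder_path)
            zip_file.write(file_path, arcname)

# Demo with a throwaway directory
os.makedirs(os.path.join('arc_demo', 'sub'), exist_ok=True)
for name in [os.path.join('arc_demo', 'a.txt'),
             os.path.join('arc_demo', 'sub', 'b.txt')]:
    with open(name, 'w') as f:
        f.write('data')

with zipfile.ZipFile('arc_demo.zip', 'w') as zf:
    zip_directory('arc_demo', zf)

with zipfile.ZipFile('arc_demo.zip') as zf:
    print(sorted(zf.namelist()))  # ['a.txt', 'sub/b.txt']
```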

Working with Nested Directories

When working with nested directories, the process of creating a zip archive is a bit more complicated. The first function we showed in this article actually handles this case as well, which we'll show again here:

import os
import zipfile

def zipdir(path, ziph):
    for root, dirs, files in os.walk(path):
        for file in files:
            ziph.write(os.path.join(root, file),
                       os.path.relpath(os.path.join(root, file),
                                       os.path.join(path, '..')))

zipf = zipfile.ZipFile('Python.zip', 'w', zipfile.ZIP_DEFLATED)
zipdir('/path/to/directory', zipf)
zipf.close()

The main difference is that we create the zip file outside of the function and pass it in as a parameter. Whether you do it within the function itself or not is up to personal preference.

Handling Large Directories

So what if we're dealing with a large directory? Zipping a large directory can consume a lot of memory, and can even crash your program, if you read entire files into memory yourself.

Luckily, ZipFile.write() streams each file from disk in chunks, so the files never have to be loaded into memory all at once. Using the with statement additionally guarantees that the archive is closed and finalized properly, even if an error occurs partway through.

import os
import zipfile

def zipdir(path, ziph):
    for root, dirs, files in os.walk(path):
        for file in files:
            file_path = os.path.join(root, file)
            # write() streams the file contents from disk,
            # so even very large files don't have to fit in memory
            ziph.write(file_path, os.path.relpath(file_path, path))

with zipfile.ZipFile('Python.zip', 'w', zipfile.ZIP_DEFLATED) as zipf:
    zipdir('/path/to/directory', zipf)

In this version, we open the zip archive with a with statement so it's always closed correctly, and we let write() handle the streaming. This way, we can safely zip large directories without running into memory issues.

Error Handling in zipfile

When working with zipfile in Python, we need to remember to handle exceptions so our program doesn't crash unexpectedly. The most common exceptions you might encounter are FileNotFoundError, RuntimeError, ValueError, and zipfile.LargeZipFile.

Let's take a look at how we can handle these exceptions while creating a zip file:

import zipfile

try:
    with zipfile.ZipFile('example.zip', 'w') as myzip:
        myzip.write('non_existent_file.txt')
except FileNotFoundError:
    print('The file you are trying to zip does not exist.')
except RuntimeError as e:
    print('An unexpected error occurred:', str(e))
except zipfile.LargeZipFile:
    print('The file is too large to be compressed.')

FileNotFoundError is raised when the file we're trying to zip doesn't exist. RuntimeError is a general exception that might be raised for a number of reasons, so we print the exception message to understand what went wrong. zipfile.LargeZipFile is raised when the file we're trying to compress is too big.

Note: Python's zipfile module raises a LargeZipFile error when an archive would need the ZIP64 extensions (for example, files over 4 GiB) but they are disabled. In Python 3.4 and later allowZip64 is True by default; if it has been turned off, you can re-enable it by calling ZipFile with the allowZip64=True argument.

Common Errors and Solutions

While working with the zipfile module, you might encounter several common errors. Let's explore some of these errors and their solutions:


FileNotFoundError

This error happens when the file or directory you're trying to zip does not exist, so always check that the file or directory exists before attempting to compress it.


This error is raised when you're trying to write a directory to a zip file using ZipFile.write(). To avoid this, use os.walk() to traverse the directory and write the individual files instead.


As you probably guessed, this error happens when you don't have the necessary permissions to read the file or write to the directory. Make sure you have the correct permissions before trying to manipulate files or directories.


zipfile.LargeZipFile

As mentioned earlier, this error is raised when the archive needs the ZIP64 extensions (files or archives beyond the standard ZIP size limits) but they have been disabled. To prevent this error, call ZipFile with allowZip64=True (the default since Python 3.4).

try:
    with zipfile.ZipFile('large_file.zip', 'w', allowZip64=True) as myzip:
        myzip.write('large_file.txt')
except zipfile.LargeZipFile:
    print('The file is too large to be compressed.')

In this snippet, we're using the allowZip64=True argument to allow zipping files larger than the standard ZIP format permits.

Compressing Individual Files

With zipfile, not only can it compress directories, but it can also compress individual files. Let's say you have a file called document.txt that you want to compress. Here's how you'd do that:

import zipfile

with zipfile.ZipFile('compressed_file.zip', 'w') as myzip:
    myzip.write('document.txt')

In this code, we're creating a new zip archive named compressed_file.zip and just adding document.txt to it. The 'w' parameter means that we're opening the zip file in write mode.

Now, if you check your directory, you should see a new zip file named compressed_file.zip.
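Note that ZipFile defaults to ZIP_STORED, which archives without compressing. To actually shrink the file, pass ZIP_DEFLATED, and optionally (on Python 3.7+) a compresslevel. The sample file below is created just so the snippet runs on its own:

```python
import zipfile

# Create a repetitive sample file so deflate has something to shrink
with open('document.txt', 'w') as f:
    f.write('hello ' * 1000)

with zipfile.ZipFile('compressed_file.zip', 'w',
                     compression=zipfile.ZIP_DEFLATED,
                     compresslevel=9) as myzip:
    myzip.write('document.txt')

with zipfile.ZipFile('compressed_file.zip') as myzip:
    info = myzip.getinfo('document.txt')
    print(info.compress_size < info.file_size)  # True
```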

Extracting Zip Files

And finally, let's see how to reverse this zipping by extracting the files. Let's say we want to extract the document.txt file we just compressed. Here's how to do it:

import zipfile

with zipfile.ZipFile('compressed_file.zip', 'r') as myzip:
    myzip.extractall()

In this code snippet, we're opening the zip file in read mode ('r') and then calling the extractall() method. This method extracts all the files in the zip archive to the current directory.

Note: If you want to extract the files to a specific directory, you can pass the directory path as an argument to the extractall() method like so: myzip.extractall('/path/to/directory/').

Now, if you check your directory, you should see the document.txt file. That's all there is to it!
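If you only need one member rather than the whole archive, extract() takes a member name and returns the path it wrote. The archive here is built inline so the sketch is self-contained:

```python
import zipfile

# Build a small archive to extract from
with zipfile.ZipFile('compressed_file.zip', 'w') as myzip:
    myzip.writestr('document.txt', 'hello')

with zipfile.ZipFile('compressed_file.zip', 'r') as myzip:
    # extract() writes just this member (to the current directory by default)
    extracted_path = myzip.extract('document.txt')

print(extracted_path)
```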


In this guide, we focused on creating and managing zip archives in Python. We explored the zipfile module, learned how to create a zip archive of a directory, and even dove into handling nested directories and large directories. We've also covered error handling within zipfile, common errors and their solutions, compressing individual files, and extracting zip files.

Categories: FLOSS Project Planets

Python People: Mariatta Wijaya

Planet Python - Thu, 2023-09-14 09:00

Mariatta has been a contributor to Python for many years and is a very inspiring public speaker.

Some of what we talk about:

  • Python Documentation Working Group
  • GitHub bots, There's an API for that
  • PyLadies
  • PyLadiesCon
  • Typo of the Day (maintainerd, verbossity, work-lie balance, etc.)
  • A fish aquarium
  • Cooking, Baking
  • History of Tempura
  • Working with APIs with Python
  • Public Speaking / Giving Talks
  • The power of seeing other women give talks

★ Support this podcast while learning ★

★ Support this podcast on Patreon ★
Categories: FLOSS Project Planets

Discover GCompris

Planet KDE - Thu, 2023-09-14 08:09

GCompris includes well over a hundred fun educational activities. In this video you will learn how they are divided into categories and see a number of examples of what you can find in the GCompris treasure trove.

Categories: FLOSS Project Planets

Janusworx: derb; Script to Create podcast RSS feeds

Planet Python - Thu, 2023-09-14 08:08

I wrote a tiny script that creates an RSS feed for all the audio files it finds in a folder.
I call it derb.

My mother gets devotional songs and sermons on CDs, which I rip to MP3 files and then dump on her phone for her.1
She listens to them all the time, and now three of her friends want to do the same too.
I thought of just sticking them in my self-hosted Jellyfin instance,2 but then I realised all of them have erratic, slow internet. So the idea of self-hosting a podcast feed really appealed to me.

So I quickly used Feedgenerator in conjunction with Tinytag, to whip up a script that’d help me do just that. The code’s up on Github, if you want to go install and play with it yourselves.

Here’s a quick walk through derb.py.3

Setting up house

We set up a place to accept a path containing the files. The feed will ultimately be placed as a feed.xml in the same folder as well. We then walk through the folder (after a really basic sanity check) and gather all the files into a list.
The base_url is where the feeds (along with the audio) will be served from.

base_url = "https://ab.janusworx.com"
book_folder = input("Paste path to audiobook folder here: ")
book_out_path = base_url + (book_folder.split("/")[-1])
file_list = os.walk(book_folder)

# Do a basic check on the validity of the path we get,
# before we build the audio file list.
try:
    all_files = (list(file_list)[0][2])
except IndexError as e:
    print(f"\n\n"
          f"---\n"
          f"ERROR!: {e}\n"
          f"Have you typed in the right path?\n"
          f"---\n")
    sys.exit("Quitting script!")

audio_files = []
for each_file in all_files:
    each_file = Path(each_file)
    if each_file.suffix in ['.mp3', '.m4a', '.m4b']:
        audio_files.append(str(each_file))
audio_files = sorted(audio_files)

Creating a feed

We now go about the business of setting up the feed proper.
To begin with, we grab the first file we can get our grubby paws on, and create a TinyTag object that’ll give us a lot of metadata. (If there isn’t any, we quit.)
Oh, and by the way, how do I know what data I’d need to create a feed? I just cribbed everything from the Feedgenerator’s excellent documentation. I also looked at the widely linked to, RSS reference page for clarification if I got confused.

We then create a FeedGenerator object and load the podcast extension.
Following which, we supply the feed a title, an id4, feed author details, a language, podcast category and description.

# Setup a feed

## Grab feed metadata from the first audio file
feed_metadata_file = TinyTag.get(Path(book_folder, audio_files[0]))

## Grab title from metadata file.
## At the same time, break out if there isn't any.
if not feed_metadata_file.album:
    sys.exit("\n---\nStopping feed creation.\n Setup audio file metadata with a tag editor")
feed_title = feed_metadata_file.album

## Creating feed instance
audio_book_feed = FeedGenerator()
audio_book_feed.load_extension("podcast")

## Setting up more stuff on the feed proper
audio_book_feed.id(base_url)
audio_book_feed.title(feed_title)
audio_book_feed.author({"name": "Jason Braganza", "email": "feedback@janusworx.com"})
audio_book_feed.link(href=f'{book_out_path}', rel='self')
audio_book_feed.language('en')
audio_book_feed.podcast.itunes_category('Private')
audio_book_feed.description(feed_title)

Adding episodes and writing out the file

After which it’s then a matter of looping through that audio file list we created and adding them as entries to the feed object we created.
Once again we grab the metadata from each file, using Tinytag, and then set each feed entry’s details (title, id and enclosure).
I've hardcoded the MIME types, since I know I only have two basic types of audio files. If you don't know what kind of audio you might be serving, the mimetypes-magic package should help.
Finally we write it all out to a file.

# Loop the file list and create entries in the feed
for each_file in audio_files:
    each_file_metadata = TinyTag.get(Path(book_folder, each_file))
    episode_file_path = Path(each_file)
    episode_suffix = episode_file_path.suffix
    episode_mime_type = 'audio/mpeg' if episode_suffix == '.mp3' else 'audio/x-m4a'
    episode_title = each_file_metadata.title
    episode_size = str(each_file_metadata.filesize)
    episode_link = f"{book_out_path}/{each_file}"

    audio_episode = audio_book_feed.add_entry()
    audio_episode.title(episode_title)
    audio_episode.id(episode_link)
    audio_episode.enclosure(episode_link, episode_size, episode_mime_type)

# Write the rss feed to the same folder as the source audio files
audio_book_feed.rss_file(f"{book_folder}/feed.xml")

Serving

Once done, I moved it all to my trusty Pi, which runs barebones Nginx with a single bare page, protected by basic auth.
I decided not to publish the feeds publicly.
Rather I’m going to just set it up in their podcast players, when I meet them, or pass it over to their kids over Signal.5
It’s already worked with three of them, so everyone’s happy and here’s me hoping, fingers crossed, it’ll be easy to support in the long run.

Feedback on this post? Mail me at feedback@janusworx.com

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.

  1. Yes, while it’s slowly moving to Youtube, that world still mostly depends on CDs and USB sticks. ↩︎

  2. the audio files, not my mother’s friends. ↩︎

  3. Large parts are elided. Please look at Github for the actual file. ↩︎

  4. normally, the site it’ll be served from ↩︎

  5. Why should I be the only one doing family tech support? ↩︎

Categories: FLOSS Project Planets