FLOSS Project Planets

Freexian Collaborators: Debian Contributions: YubiHSM packaging, unschroot, live-patching, and more! (by Stefano Rivera)

Planet Debian - Tue, 2024-07-09 20:00
Debian Contributions: 2024-06

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

YubiHSM packaging, by Colin Watson

Freexian is starting to use YubiHSM devices (hardware security modules) as part of some projects, and we wanted to have the supporting software directly in Debian rather than needing to use third-party repositories. Since Yubico publish everything we need under free software licences, Colin packaged yubihsm-connector, yubihsm-shell, and python-yubihsm from https://developers.yubico.com/, in some cases based partly on the upstream packaging, and got them all into Debian unstable. Backports to bookworm will be forthcoming once they’ve all reached testing.

unschroot by Helmut Grohne

Following an in-person discussion at MiniDebConf Berlin, Helmut attempted to split the containment functionality of sbuild --chroot-mode=unshare into a dedicated tool that interfaces with sbuild as a variant of --chroot-mode=schroot, providing a sufficiently compatible interface.

While this seemed technically promising initially, a discussion on debian-devel indicated a desire to rely on an existing container runtime such as podman instead of using another Debian-specific tool with unclear long term maintenance. None of the existing container runtimes meet the specific needs of sbuild, so further advancing this matter implies a compromise one way or another.

Linux live-patching, by Santiago Ruano Rincón

In collaboration with Emmanuel Arias, Santiago is working on the development of linux live-patching for Debian. For the moment, this is in an exploratory phase that includes how to handle the different patches that will need to be provided. kpatch could help significantly in this regard. However, kpatch was removed from unstable because some RC bugs affect the version that was present in Debian unstable. Santiago packaged the most recent upstream version (0.9.9) and filed an Intent to Salvage bug. Santiago is waiting for an ACK from the maintainer, and will upload to unstable after July 10th, following the package salvaging rules. While kpatch 0.9.9 fixes the main issues, it still needs some work to properly support Debian and the Linux kernel versions packaged in our distribution. More on this in next month's report.

Salsa CI, by Santiago Ruano Rincón

The work by Santiago on Salsa CI this month includes a merge request to ease testing how the production images are built from the changes introduced by future merge requests. By default, the pipelines triggered by a merge request build only a subset of the images built for production, to reduce the use of resources, and because most of the time the subset of staging images is enough to test the proposed modifications. However, sometimes it is necessary to test how the full set of production images is built, and the above-mentioned MR helps to do that. The changes include documentation, so hopefully this will make it easier to test future contributions.

Also, to be able to include support for RISC-V, Salsa CI needs to replace kaniko as the tool used to build the images. Santiago tested buildah, but there are some issues when pushing built images for non-default platform architectures (i386, armhf, armel) to the container registry. Santiago will continue to work on this to find a solution.

Miscellaneous contributions
  • Stefano Rivera prepared updates for a number of Python modules.
  • Stefano uploaded the latest point release of Python 3.12 and the latest Python 3.13 beta. Both uncovered upstream regressions that had to be addressed.
  • Stefano worked on preparations for DebConf 24.
  • Stefano helped SPI to reconcile their financial records for DebConf 23.
  • Colin did his usual routine work on the Python team, upgrading 36 packages to new upstream versions (including fixes for four CVEs in python-aiohttp), fixing RC bugs in ipykernel, ipywidgets, khard, and python-repoze.sphinx.autointerface, and packaging zope.deferredimport which was needed for a new upstream version of python-persistent.
  • Colin removed the user_readenv option from OpenSSH’s PAM configuration (#1018260), and prepared a release note.
  • Thorsten Alteholz uploaded a new upstream version of cups.
  • Nicholas Skaggs updated xmacro to support reproducible builds (#1014428), DEP-3 and DEP-5 compatibility, along with utilizing hardening build flags. Helmut supported this work and uploaded the package.
  • As a result of login having become non-essential, Helmut uploaded debvm to unstable and stable and fixed a crossqa.debian.net worker.
  • Santiago worked on the Content Team activities for DebConf24. Together with other DebConf25 team members, Santiago wrote a document describing the conference project for the head of the venue.
Categories: FLOSS Project Planets

Simon Josefsson: Towards Idempotent Rebuilds?

GNU Planet! - Tue, 2024-07-09 18:16

After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel come from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-build.org's effort to track reproducibility bugs in Debian (and other systems) is amazing. However, as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-build.org's build and you will see that the SHA256 checksum does not match, but still they declare it a reproducible package. As far as I can tell, the purpose of their rebuilds is not to say anything about the official binary build; instead, the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.

I have felt that something is lacking, and months have passed and I haven't found any project that addresses the problem I am interested in. During my earlier work I created a project called debdistreproduce, which performs rebuilds of the difference between two distributions in a GitLab pipeline and displays diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born, and today I'm happy to bless it as version 1.0 and to announce the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64; see the summary status page for links. This is intended as a proof of concept, to allow people to experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command.

Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn't want to waste resources building the rest of the archive until I understand whether the differences are due to consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container) or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, and 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%.

So what causes my rebuilds to be different from the official builds? Some differences are trivial, like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID causing a mismatch. Some are a bit strange, like a subtle difference in one of perl's header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don't make sense, likely due to bugs in my build scripts, especially for Ubuntu, which appears to strip translations and do other build variations that I don't do. In general, the classes of reproducibility problems are the expected ones. Some are assembler differences for GnuPG's gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same version of build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences.

The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an “idempotent rebuild”. This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions.

Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same versions of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of each build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it raises the question: can we rebuild that earlier version of the build dependency? This eventually circles back to really old versions and bootstrappable builds.
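To make that parsing step concrete, here is a minimal Python sketch (an illustration, not debdistrebuild code; it assumes the Deb822-style layout of .buildinfo files, where the Installed-Build-Depends field lists one "package (= version)" entry per continuation line):

    import re

    def installed_build_depends(path):
        # Collect "package (= version)" pairs from the
        # Installed-Build-Depends field of a .buildinfo file.
        deps = {}
        in_field = False
        for line in open(path):
            if line.startswith("Installed-Build-Depends:"):
                in_field = True
                continue
            if in_field:
                if not line.startswith(" "):
                    break  # continuation lines ended, the field is over
                match = re.match(r"\s*(\S+) \(= (\S+)\)", line)
                if match:
                    deps[match.group(1)] = match.group(2)
        return deps

    # Each pair could then drive an "apt-get install package=version"
    # step before attempting the rebuild.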

While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or that the source code is no longer publicly distributable.

I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status.

One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the latest version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the latest versions. Idempotent rebuilds are different from reproducible builds (where we try to reproduce the build using the same inputs) and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could extend indefinitely. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.

Our notion of rebuildability thus appears to be complementary to reproducible-builds.org's definition and bootstrappable.org's definition. Each to their own devices, and Happy Hacking!

Categories: FLOSS Project Planets


How I manage my KDE email

Planet KDE - Tue, 2024-07-09 17:13

Every once in a while people ask me about my email routine, so I thought I’d write about it here.

Everything I do starts with the philosophy that work and project email is a task queue. Therefore an email is a to-do list item someone else has assigned to me.

Ugh, how horrible! Better get that stuff done or rejected as soon as possible so I can move on to the stuff I want to do.

This means my target is inbox zero; achieving it means I got all my tasks done. Like everyone, I don’t always achieve it, but zero is the goal. How do I work towards it?

#0: Separate KDE and non-KDE emails

When I’m not in KDE mode, I want to be able to turn that stuff off in my own brain. To accomplish this, I have a home email account and a KDE email account. I adjust all my KDE accounts to only send email to my KDE address.

#1: Use an email client app

To manage multiple email accounts without going insane, I avoid webmail. In addition to not supporting multi-account workflows, webmail is usually slow, lacks useful features, and has poor keyboard navigation.

I currently use Thunderbird, but I’m investigating moving to KDE’s KMail. Regardless, it has to be a desktop email client that offers mail rules.

#2: Automatic categorization (0 minutes)

I configure my email client with mail rules to automatically tag emails with colored labels according to what they are, and then mark them as read.

This results in almost all newly-arrived emails becoming colored and marked as read.

When I get a new kind of automated email that didn’t automatically receive a color label, I adjust the rules to match that new email so it gets categorized in the future, too.

#3: Manual categorization (1-3 minutes)

When I first open my email client in the morning, everything will be categorized except 5-15 emails sent by actual people. To see just these, I’ll filter the inbox by unread status, since all the auto-categorized colored emails got automatically marked as read.

Then it’s time to figure out what to do with them. For anything that needs a response or action today, I mark it as urgent by hitting the “1” key. For anything that needs a personal response in the next few days, I hit “9” to tag it as personal and it becomes green. And so on.

Any emails that don’t need a response get immediately deleted. I never miss them. It’s fast and painless. Put those emails out of their misery.

#4: Action all the urgent emails (5-15 minutes)

Urgent means urgent; first I’ll go through these one at a time, and action them somehow. This means one of the following:

  1. If it’s from a person, write a reply and then delete the email.
  2. If it’s from an automated system, open the link to the thing it’s about in a web browser and then delete the email.

The email always ends up deleted! For people like us, emails are not historical records; they're tasks. Do you need to remember what tasks you performed 8 years ago on Tuesday, May 11th? Of course not. Don't be a digital hoarder; delete your emails. You won't miss them.

At this point I may realize that I was overzealous in tagging something as urgent. That’s fine; I just re-tag it as something else, and then I’ll get to it later.

#5: Action all the merge request emails (5-10 minutes)

Since my day job is “quality assurance manager”, these are important. I’ll go through every automated email from invent.kde.org about merge requests for repos I’m subscribed to and action them somehow:

  1. Open the link to the merge request in my web browser, and then delete the email.
  2. Decide I don’t need to review this particular merge request, and just delete the email.

More deletion! I never keep these emails around; they’re temporal notifications of other people’s work. Nothing worth preserving.

#6: Action all the bug report emails (5-15 minutes)

My web browser is now filled up with tabs for merge requests to review. Now it’s time to do that for relevant bug reports. I follow the same process here: open the bug report in my web browser because it needs a comment or other action from me, and then delete the email — or else immediately delete the email because it’s not directly actionable. Delete, delete, delete. It’s the happiest word when it comes to email. Everyone hates emails; delete them! Show them you mean business.

#7: Do actual work

At this point I’ve spent between 15 and 40 minutes just on email, ugh. Time to do some actual work! So now I’ll spend the next several hours going through those tabs in my web browser, from left to right. First reviewing merge requests, then handling the relevant bug reports (closing, re-opening, replying to comments, changing metadata, marking as duplicate, CCing others, etc). During this step, I’ll also triage the day’s new bug reports.

Sometimes I’ll check email again while doing these, since more will be coming in. It’s easy to delete or action them individually.

After all these tabs are closed, hooray! I have some time to be proactive instead of reactive! Usually this amounts to 0-120 minutes a day during working hours. I try to spend this time on fixing small bugs I found throughout the day, opening and participating in discussion topics about important matters, working on the KDE HIG, and sometimes helping people out on http://www.discuss.kde.org or http://www.reddit.com/r/KDE.

#8: Action all the rest of the emails (10-25 minutes)

Towards the end of the day I’ll look at the emails marked as “Personal” and “KDE e.V./Akademy” and try to knock a few out. It’s okay if I’m too tired; these aren’t urgent and can wait until tomorrow. After a few days of sitting there, I’ll mark them as urgent.

And that’s pretty much it! This is just my workflow; it doesn’t need to be yours. But in case you want to try it, here are answers to some anticipated objections:

Ugh, that sounds like it takes forever!

It really doesn’t.

On a Monday maybe it takes more like 35 or 40 minutes since there are emails from the weekend to process. But on Tuesday through Friday, it’s closer to 15-20 minutes. Often 10 on Friday. Thanks to the automatic categorization, all of this is much faster than manually looking at every email one by one, and much more effective than getting depressed by hundreds of unread emails in the morning and ignoring them.

Deleting emails is too scary, what if I need them in the future?

You won’t.

But if that’s too scary or painful, set up your email account or client app to archive “deleted” messages in permanent storage rather than truly deleting them. Just keep in mind that you’ll eventually run out of storage space and have to deal with that problem in the future. Once it happens, consider it an opportunity to reconsider, asking yourself how many emails you actually did need to dig out of cold storage. I’m guessing the number will be very low, maybe even 0.

This might work for your workflow, but I get different types of emails!

Maybe so, but the general principle of automatically tagging (but not moving) emails applies to anyone. I firmly believe that anyone can benefit from this part. Make the software do the grunt work for you!

What do I do about all of those old emails in my inbox? There are too many, I'll never get through them!

If you’re one of those people who has 50,000 emails in your inbox, select all and delete. You won’t miss any of them.

Seriously. All of them. Every single one. Right now. Just do it.

How do I know this is fine?

  • Old notifications about things like bug reports or merge requests are worthless because they already happened. Delete.
  • Old mailing list conversations long since dried up or got actioned without your input. Delete.
  • Old at-the-time urgent emails from important people are no longer relevant, because the people who sent them long ago concluded that you’re unreliable and decided to not contact you again. Because that’s what happens when you let emails pile up: you’re being rude to all the people whose messages you’ve ignored. Feel sad, resolve to do better, then delete.

The good news is that you can get better at this anytime, but it’s almost impossible without making a clean break with a messy past. You’ll be looking at old stuff forever and won’t have time for new stuff.

I just get too much email, it’s impossible to keep up no matter what I do!

You need to unsubscribe from some things. Maybe a lot of things. Longtime contributors to any project will have accumulated years worth of subscriptions to sources of emails that are no longer relevant. Prune them!

This may trigger Fear Of Missing Out. Recognize that and fight against it. You can almost always reduce your email load by unsubscribing from this stuff:

  • Activity in Git repos for projects you no longer contribute to.
  • Bug reports for products you aren’t involved in or responsible for anymore.
  • Medium to high traffic mailing lists that are mostly or entirely irrelevant to your present interests and activities.
  • Almost all the spam from LinkedIn.
  • All the spam from online stores, newspapers, political campaigns. The “unsubscribe” button will work, don’t give up!

Resist the temptation to filter these emails into folders that you tell yourself you’ll remember to look at once in a while. You probably won’t, and by the time you do, everything in them won’t be actionable anymore — if it ever was in the first place. Unsubscribe and delete!

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #637 (July 9, 2024)

Planet Python - Tue, 2024-07-09 15:30

#637 – JULY 9, 2024
View in Browser »

Python Grapples With Apple App Store Rejections

A string that is part of the urllib parser module in Python references a scheme for apps that use the iTunes feature to install other apps, which is disallowed. Apple's automated scanning is rejecting any app that uses Python 3.12 underneath. A solution has been proposed for Python 3.13.
JOE BROCKMEIER
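For the curious, the scheme lists that ship inside urllib.parse are ordinary module-level constants and can be inspected directly; a small illustration (see the article for the specific string Apple objects to):

    import urllib.parse

    # urllib.parse special-cases certain URL schemes via these
    # module-level lists; their contents are part of the module
    # source that Apple's automated scanner examines.
    for name in ("uses_relative", "uses_netloc", "uses_params"):
        print(name, getattr(urllib.parse, name))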

Python’s Built-in Functions: A Complete Exploration

In this tutorial, you’ll learn the basics of working with Python’s numerous built-in functions. You’ll explore how you can use these predefined functions to perform common tasks and operations, such as mathematical calculations, data type conversions, and string manipulations.
REAL PYTHON
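As a taste of what the tutorial covers, a few built-ins handling math, conversions, and strings (a trivial sketch):

    print(abs(-7), round(3.14159, 2))   # mathematical calculations
    print(int("42"), float("2.5"))      # data type conversions
    print(len("hello"), sorted("dcba")) # string-related helpers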

Python Error and Performance Monitoring That Doesn’t Suck

With Sentry, you can trace issues from the frontend to the backend—detecting slow and broken code, to fix what’s broken faster. Installing the Python SDK is super easy and PyCoder’s Weekly subscribers get three full months of the team plan. Just use code “pycoder” on signup →
SENTRY sponsor

Constraint Programming Using CP-SAT and Python

Constraint programming is the process of looking for solutions based on a series of restrictions, like employees over 18 who have worked the cash before. This article introduces the concept and shows you how to use open source libraries to write constraint solving code.
PHILIPPE OLIVIER
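As a flavour of what such code can look like, here is a minimal CP-SAT sketch (assuming the ortools package is installed; this example is ours, not the article's):

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    x = model.NewIntVar(0, 10, "x")
    y = model.NewIntVar(0, 10, "y")
    model.Add(x + y == 10)  # a restriction, in constraint-programming terms
    model.Add(x > y)

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print(solver.Value(x), solver.Value(y))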

Register for PyOhio, July 27-28

PYOHIO.ORG • Shared by Anurag Saxena

Psycopg 3.2 Released

PSYCOPG

Polars 1.0 Released

POLARS

Quiz: Python’s Magic Methods

REAL PYTHON

Discussions

Any Web Devs Successfully Pivoted to AI/ML Development?

HACKER NEWS

Articles & Tutorials

A Search Engine for Python Packages

Finding the right Python package on PyPI can be a bit difficult, since PyPI isn’t really designed for discovering packages easily. PyPiScout.com solves this problem by allowing you to search using descriptions like “A package that makes nice plots and visualizations.”
PYPISCOUT.COM • Shared by Florian Maas

Programming Advice I’d Give Myself 15 Years Ago

Marcus writes in depth about things he has learned in his coding career and wishes he had known earlier in his journey. Thoughts include fixing foot guns, understanding the pace-quality trade-off, sharpening your axe, and more. Associated HN Discussion.
MARCUS BUFFETT

Keeping Things in Sync: Derive vs Test

Don’t Repeat Yourself (DRY) is generally a good coding philosophy, but it shouldn’t be adhered to blindly. There are other alternatives, like using tests to make sure that duplication stays in sync. This article outlines the why and how of just that.
LUKE PLANT

8 Versions of UUID and When to Use Them

RFC 9562 outlines the structure of Universally Unique IDentifiers (UUIDs) and includes eight different versions. In this post, Nicole gives a quick intro to each kind so you don’t have to read the docs, and explains why you might choose each.
NICOLE TIETZ-SOKOLSKAYA
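Several of the versions are already available in Python's standard library; a quick illustration (note that newer versions such as v7 are not in the stdlib uuid module as of this writing):

    import uuid

    print(uuid.uuid1())  # v1: timestamp plus node
    print(uuid.uuid4())  # v4: random
    print(uuid.uuid5(uuid.NAMESPACE_DNS, "example.org"))  # v5: name-based (SHA-1)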

Defining Python Constants for Code Maintainability

In this video course, you'll learn how to properly define constants in Python. By coding a bunch of practical examples, you'll also learn how Python constants can improve your code's readability, reusability, and maintainability.
REAL PYTHON course
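The core convention the course builds on is module-level names in UPPER_CASE, which Python treats as ordinary variables but readers treat as constants; for example (values here are hypothetical):

    # constants.py
    MAX_RETRIES = 3
    TIMEOUT_SECONDS = 30.0
    BASE_URL = "https://api.example.com"  # hypothetical value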

Django: Test for Pending Migrations

The makemigrations --check command tells you if there are missing migrations in your Django project, but you have to remember to run it. Adam suggests calling it from a test so it gets triggered as part of your CI/CD process.
ADAM JOHNSON
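One possible shape for such a test is sketched below (an assumption-laden example, not Adam's exact code; --check makes the command exit non-zero when model changes lack migrations):

    from io import StringIO

    import pytest
    from django.core.management import call_command

    def test_no_pending_migrations():
        out = StringIO()
        try:
            # --dry-run writes nothing; --check signals missing migrations.
            call_command("makemigrations", "--check", "--dry-run", stdout=out)
        except SystemExit:
            pytest.fail("Model changes without migrations:\n" + out.getvalue())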

How to Maximize Your Experience at EuroPython 2024

Conferences can be overwhelming, with lots going on and lots of choices. This post talks about how to get the best experience at EuroPython, or any conference.
SANGARSHANAN

Polars vs. pandas: What’s the Difference?

Explore the key distinctions between Polars and Pandas, two data manipulation tools. Discover which framework suits your data processing needs best.
JODIE BURCHELL

An Overview of the Sparse Array Ecosystem for Python

An overview of the different options available for working with sparse arrays in Python.
HAMEER ABBASI

Projects & Code

Get Space Weather Data
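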

GITHUB.COM/BEN-N93 • Shared by Ben Nour

AI to Drop Hats

DROPOFAHAT.ZONE

amphi-etl: Low-Code ETL for Data

GITHUB.COM/AMPHI-AI

aurora: Static Site Generator Implemented in Python

GITHUB.COM/CAPJAMESG

pytest-edit: pytest --edit to Open Failing Test Code

GITHUB.COM/MRMINO

Events

Weekly Real Python Office Hours Q&A (Virtual)

July 10, 2024
REALPYTHON.COM

PyCon Nigeria 2024

July 10 to July 14, 2024
PYCON.ORG

PyData Eindhoven 2024

July 11 to July 12, 2024
PYDATA.ORG

Python Atlanta

July 11 to July 12, 2024
MEETUP.COM

PyDelhi User Group Meetup

July 13, 2024
MEETUP.COM

DFW Pythoneers 2nd Saturday Teaching Meeting

July 13, 2024
MEETUP.COM

Happy Pythoning!
This was PyCoder’s Weekly Issue #637.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Announces HeroDevs as Inaugural Partner for Drupal 7 Extended Security Support Provider Program

Planet Drupal - Tue, 2024-07-09 14:33

PORTLAND, Ore., 10 July 2024—The Drupal Association is pleased to announce HeroDevs as the inaugural partner for the new Drupal 7 Extended Security Support Provider Program. This initiative aims to support Drupal 7 users by carefully vetting providers to deliver extended security support services beyond the 5 January 2025 end-of-life (EOL) date.

The Drupal 7 Extended Security Support Provider Program allows organizations that cannot migrate from Drupal 7 to newer versions by the EOL date to continue using a version of Drupal 7 that is secure and compliant. This program complements the Association’s D7 Certified Migration Providers Program, which helps organizations find the right partner to transition their sites from Drupal 7 to Drupal 10.

HeroDevs has successfully met the stringent requirements established by the Drupal Association to become a certified provider with its secure, seamless drop-in replacement of Drupal 7 and core modules. 

“HeroDevs has demonstrated strong expertise in finding and fixing security and compatibility issues for major open-source libraries like Drupal,” Tim Doyle, CEO of the Drupal Association, said. “This program underscores the Drupal Association’s dedication to providing qualified options for organizations using Drupal 7 so they can stay secure while they figure out their next steps for upgrading.”

As organizations prepare for the transition from Drupal 7, HeroDevs will provide the necessary support to keep their sites secure and operational.

Joe Eames, VP of Partnership at HeroDevs, added, “We are honored to be recognized as the inaugural partner of this important program. At HeroDevs, we are creating a more sustainable, secure web and Drupal is a major part of that. We aim to help organizations maintain a secure and compliant web presence – all while giving open source creators and maintainers the freedom to innovate.” 

For more information about the HeroDevs Drupal 7 Never-Ending Support (NES), click here.

About the Drupal Association

The Drupal Association is a non-profit organization that fosters and supports the Drupal software project, the community, and its growth. Our mission is to drive innovation and adoption of Drupal as a high-impact digital public good, hand-in-hand with our open source community. Through various initiatives, events, and programs, the Drupal Association helps ensure the ongoing development and success of the Drupal project.

Categories: FLOSS Project Planets

Python Engineering at Microsoft: Python in Visual Studio Code – July 2024 Release

Planet Python - Tue, 2024-07-09 11:45

We’re excited to announce the July 2024 release of the Python and Jupyter extensions for Visual Studio Code!

This release includes the following announcements:

  • Enhanced environment discovery with python-environment-tools
  • Improved support for reStructuredText docstrings with Pylance
  • Community contributed Pixi support

If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.

Enhanced environment discovery with python-environment-tools

We are excited to introduce a new tool, python-environment-tools, designed to significantly enhance the speed of detecting global Python installations and Python virtual environments.

This tool leverages Rust to ensure a rapid and accurate discovery process. It also minimizes the number of Input/Output operations by collecting all necessary environment information at once, significantly enhancing the overall performance.

We are currently testing this new feature in the Python extension, running it in parallel with the existing support, to evaluate the new discovery performance. Consequently, you will see a new logging channel called Python Locator that shows the discovery times with this new tool.

This enhancement is part of our ongoing efforts to optimize the performance and efficiency of Python support in VS Code. Visit the python-environment-tools repo to learn more about this feature, ongoing work, and provide feedback!

Improved support for reStructuredText docstrings with Pylance

Pylance has improved support for rendering reStructuredText documentation strings (docstrings) on hover! RestructuredText (RST) is a popular format for documentation, and its syntax is sometimes used for the docstrings of Python packages.

This feature is in its early stages and is currently behind an experimental flag as we work to ensure it handles various Sphinx, Google Doc, and Epytext scenarios effectively. To try it out, you can enable the experimental setting python.analysis.supportRestructuredText.

Common packages where you might observe this change in their docstrings include pandas and scipy. Try this change out, and report any issues or feedback at the Pylance GitHub repository.

Note: This setting is currently experimental, but will likely be enabled by default in the future as it becomes more stabilized.

Community contributed Pixi support

Thanks to @baszalmstra, there is now support for Pixi environment detection in the Python extension! This work added a locator to detect Pixi environments in your workspace similar to other common environments such as Conda. Furthermore, if a Pixi environment is detected in your workspace, the environment will automatically be selected as your default environment.

We appreciate and look forward to continued collaboration with community members on bug fixes and enhancements to improve the Python experience!

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:

We would also like to extend special thanks to this month’s contributors:

Call for Community Feedback

As we are planning and prioritizing future work, we value your feedback! Below are a few issues we would love feedback on:

Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – July 2024 Release appeared first on Python.

Categories: FLOSS Project Planets

Real Python: Customize VS Code Settings

Planet Python - Tue, 2024-07-09 10:00

Visual Studio Code is an open-source code editor available on all platforms. It's also a great platform for Python development. The default settings in VS Code present a somewhat cluttered environment.

This Code Conversation with instructor Philipp Acsany is about learning how to customize the settings within the interface of VS Code. Having a clean digital workspace is an important part of your work life. Removing distractions and making code more readable can increase productivity and even help you spot bugs.

In this Code Conversation, you’ll learn how to:

  • Work With User Settings
  • Create a VS Code Profile
  • Find and Adjust Specific Settings
  • Clean Up the VS Code User Interface
  • Export Your Profile to Re-use Across Installations

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Django Weblog: Django security releases issued: 5.0.7 and 4.2.14

Planet Python - Tue, 2024-07-09 10:00

In accordance with our security release policy, the Django team is issuing releases for Django 5.0.7 and Django 4.2.14. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2024-38875: Potential denial-of-service in django.utils.html.urlize()

urlize() and urlizetrunc() were subject to a potential denial-of-service attack via certain inputs with a very large number of brackets.
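For context, urlize() turns link-like text into HTML anchors; ordinary use looks like this (an illustration only; the vulnerability concerns pathological bracket-heavy inputs, not normal calls):

    from django.utils.html import urlize

    # Converts bare URLs in plain text into <a href="..."> links.
    print(urlize("See https://www.djangoproject.com for details"))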

Thanks to Elias Myllymäki for the report.

This issue has severity "moderate" according to the Django security policy.

CVE-2024-39329: Username enumeration through timing difference for users with unusable passwords

The django.contrib.auth.backends.ModelBackend.authenticate() method allowed remote attackers to enumerate users via a timing attack involving login requests for users with unusable passwords.

This issue has severity "low" according to the Django security policy.

CVE-2024-39330: Potential directory-traversal in django.core.files.storage.Storage.save()

Derived classes of the django.core.files.storage.Storage base class which override generate_filename() without replicating the file path validations existing in the parent class, allowed for potential directory-traversal via certain inputs when calling save().

Built-in Storage sub-classes were not affected by this vulnerability.
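As an illustration of the safe pattern (a hypothetical subclass, not code from the advisory), delegating to the parent's generate_filename() keeps its path validations before any custom naming is applied:

    import os

    from django.core.files.storage import FileSystemStorage

    class ArchiveStorage(FileSystemStorage):
        # Hypothetical example: prefix uploads with a directory, but
        # let the parent class validate the file path first instead
        # of building a name from raw user input.
        def generate_filename(self, filename):
            filename = super().generate_filename(filename)
            return os.path.join("archive", filename)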

Thanks to Josh Schneier for the report.

This issue has severity "low" according to the Django security policy.

CVE-2024-39614: Potential denial-of-service in django.utils.translation.get_supported_language_variant()

get_supported_language_variant() was subject to a potential denial-of-service attack when used with very long strings containing specific characters.

To mitigate this vulnerability, the language code provided to get_supported_language_variant() is now parsed up to a maximum length of 500 characters.

Thanks to MProgrammer for the report.

This issue has severity "moderate" according to the Django security policy.

Affected supported versions
  • Django main branch
  • Django 5.1 (currently at beta status)
  • Django 5.0
  • Django 4.2
Resolution

Patches to resolve the issue have been applied to Django's main, 5.1, 5.0, and 4.2 branches. The patches may be obtained from the following changesets.

  • CVE-2024-38875: Potential denial-of-service in django.utils.html.urlize()
  • CVE-2024-39329: Username enumeration through timing difference for users with unusable passwords
  • CVE-2024-39330: Potential directory-traversal in django.core.files.storage.Storage.save()
  • CVE-2024-39614: Potential denial-of-service in django.utils.translation.get_supported_language_variant()

The following releases have been issued

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum, nor via the django-developers list. Please see our security policies for further information.

Categories: FLOSS Project Planets

drunomics: Custom Elements UI: quicker changes to your decoupled Drupal site

Planet Drupal - Tue, 2024-07-09 08:01
The latest version of the Custom Elements module empowers developers building headless Drupal solutions. With a user-friendly interface, it's now easier to modify output entities, adjust properties, and change formats. At Drupal Developer Days Burgas, attendees explored the Custom Elements UI and discussed Lupus Decoupled, an efficient stack for decoupled Drupal applications.

New Custom Elements module version

The Custom Elements module is an essential building block in the technology stack that drunomics uses to build headless Drupal solutions, facilitating output of pages in either 'custom elements' or JSON format as the front end requires it.

The newest version of the module features a user interface to modify any entity that is part of the output: any property can be included/excluded, and output format can be changed, without the need to write Drupal/PHP code. This allows a developer to more easily change both the backend API output and the decoupled frontend consuming the output at the same time, making for faster turnaround times in changes to your website.

Our talk at Drupal Developer Days Burgas

Roderik Muit and Alexandru Ieremia chaired an informal (Birds of a Feather) session at Drupal Developer Days in Burgas in June 2024, to present the new changes to any interested parties. They also prepared some information about the larger Lupus Decoupled stack for any interested attendees who were not familiar with it yet.

After the presentation, an animated discussion followed. Some people were curious how the Custom Elements UI worked, what the code behind it looks like, and how to write their own 'formatters'.

Another person said that Lupus Decoupled seems to exactly satisfy their need to address the resource heavy JSON:API queries in his current main website. He was encouraged to try out a demo and ask any questions in our issue queue or on #lupus-decoupled on Drupal Slack. Users were assured that Lupus Decoupled is ready to use (for experienced developers) and completely open source.

The new Custom Elements version, with the UI to alter output, is currently available as a development version; we are working to finalize a beta release as soon as possible.

Categories: FLOSS Project Planets

Real Python: Quiz: Split Your Dataset With scikit-learn's train_test_split()

Planet Python - Tue, 2024-07-09 08:00

In this quiz, you’ll test your understanding of how to use train_test_split() from the sklearn library.

By working through this quiz, you’ll revisit why you need to split your dataset in supervised machine learning, which subsets of the dataset you need for an unbiased evaluation of your model, how to use train_test_split() to split your data, and how to combine train_test_split() with prediction methods.
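As a quick refresher before you take the quiz, the basic call looks like this:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    # Hold out 25% of the samples for an unbiased evaluation;
    # stratify keeps class proportions similar in both subsets.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42, stratify=y
    )
    print(X_train.shape, X_test.shape)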

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Drupal Association blog: Celebrating Success: DrupalCon Portland 2024 Event Impact Recap

Planet Drupal - Tue, 2024-07-09 07:46

Welcome to the Event Impact Recap of DrupalCon Portland 2024, a benchmark event in North America, that not only marked a significant milestone in the Drupal community, but also holds a special place in my journey. Having served as a contractor for DrupalCon Portland and now stepping into the role of the new Community Programs Director with the Drupal Association, I am thrilled to share the highlights and successes of this remarkable gathering. My goal is to have an Impact Report shared with the community after each DrupalCon that depicts the data and feedback on the event. Please view the slides.

Key Highlights from DrupalCon Portland 2024:

  • Attendance and Engagement:
    • With 1,368 registered attendees and an impressive 97.8% check-in rate, DrupalCon Portland 2024 brought together a vibrant community of Drupal enthusiasts and professionals.
    • Of the 1,368 registered attendees, 438 (about one third) received comped registrations for volunteering, speaking, or other roles at the conference.
    • The event saw 3,249 hotel rooms booked in Portland, OR, highlighting its impact on the local economy and hospitality sector. It's worth noting that these were just the rooms booked through our block; many rooms were booked outside the block, making an even bigger impact on the local business community.
    • A post-event survey ranked the overall experience at DrupalCon Portland 2024 at 4.21/5
    • 32% of attendees said this was their 1st DrupalCon 
    • 8 Scholarship grants were given out to the community
    • 95% of attendees said they would attend a future DrupalCon
  • Global Representation:
    • Attendees from 6 continents, 35 countries, and 46 states joined us, demonstrating Drupal's global reach and community diversity.
  • Specialized Summits:
    • Five Summits (Government, Higher Ed, Nonprofit, Healthcare, and Community) attracted 476 attendees, facilitating deep dives into crucial Drupal topics and fostering collaboration.
  • DriesNote and Starshot:
    • A highlight of the event was the DriesNote, attended by 950 people in person, eager to hear about Starshot, an exciting new initiative. (View the recording on the Drupal Association Youtube page). This session not only informed but also inspired attendees about the future of Drupal.
    • Two BOFs were hosted, providing platforms for continued discussions and community engagement beyond the main sessions.
  • Sponsorship:
    • DrupalCon Portland 2024 was made possible thanks to the generous support of our sponsors 
      • Presenting Sponsors: 2
      • Champion Sponsors: 6
      • Advocate Sponsors: 11
      • Exhibitors: 28
      • Total Sponsors: 47
    • The conference wouldn't have been possible without the dedication and partnership of these organizations. Their support underscores their commitment to the Drupal community and its ongoing success.
  • Volunteer Contributions:
    • The success of DrupalCon Portland 2024 was further bolstered by 28 dedicated volunteers, including local ambassadors, logistics contributors, and translation assistants, who collectively contributed 221.5 hours onsite.

I am deeply honored to now step into the role of the new Community Programs Director with the Drupal Association. With over 18 years of experience in event planning, field marketing, nonprofit management, and community engagement, I am excited to leverage my skills to enhance community programs and initiatives within the Drupal ecosystem. My goal is to foster even stronger connections, facilitate meaningful collaborations, and support the growth and inclusivity of the Drupal community.

As we reflect on the achievements and connections fostered at DrupalCon Portland 2024, I am filled with optimism about the future of Drupal and the potential for continued growth and innovation within our community, and am excited to be a part of DrupalCon Barcelona, DrupalCon Singapore, DrupalCon Atlanta and many more for years to come!

DrupalCon Portland 2024 was not just an event but a celebration of collaboration, knowledge sharing, and community spirit. I extend my heartfelt gratitude to everyone who contributed to its success, from attendees and volunteers to sponsors and organizers. Let's carry this momentum forward as we embark on the next chapter of Drupal's journey together.

- Meghan Harrell
Community Programs Director
Drupal Association

Categories: FLOSS Project Planets

Specbee: Simplifying content duplication with Quick Node Clone module in Drupal

Planet Drupal - Tue, 2024-07-09 07:18
If you're a marketer, you know how much content cloning can simplify your life. It lets you duplicate blog posts, landing pages, articles, product listings, and forum posts effortlessly. If you're familiar with Drupal, you should know that nodes are fundamental content entities that represent individual pieces of content on a site. Creating similar content nodes in Drupal can be time-consuming, especially when you have to duplicate them manually. Fortunately, there's a solution: the Quick Node Clone module. In this blog post, we'll explore how this handy module can streamline your content creation process in Drupal.

What is the Quick Node Clone Module

The Quick Node Clone module allows Drupal users to swiftly duplicate existing nodes with just a few clicks. This module can save you time and effort by eliminating the need to recreate content from scratch.

How to Install the module

Getting started with the Quick Node Clone module is straightforward. Simply follow these steps:

  1. Download the module from Drupal.org or use Composer to install it.
  2. Enable the module in the Drupal administration interface.
  3. Clear the cache for the changes to take effect.

Configuring the module

Once the module is installed, you can customize its settings to suit your needs.

Text to prepend to title
The text we enter in this field will be prepended to the title of the cloned node. This will be seen on the node clone page.

Clone publication status of original
If it's checked, the publication status will be cloned from the original node that we clone. If unchecked, the publication status will be taken from the "default publish status of the content type" of that particular node.

Exclusion list
If you don't want some field values to be cloned, you can choose the particular content type and exclude any field. This module also supports paragraphs, allowing us to exclude any paragraph field from being cloned, similar to nodes.

How to Use Quick Node Clone

Using the Quick Node Clone module is simple:

  1. Navigate to the node you want to duplicate.
  2. Click on the "Clone" button (its exact placement depends on your Drupal configuration).
  3. Optionally, make any necessary changes to the cloned node.
  4. Save the cloned node, and you're done!

Permissions

This module provides a set of permissions. For any content type, we can grant permission to clone its nodes. Additionally, there's the "Administer Quick Node Clone Settings" permission, granting access to the module's configuration page at /admin/config/quick-node-clone.

Hooks provided by the module

1. hook_cloned_node_alter()

Example usage: let's consider a practical example where we want to modify certain properties of the cloned node:

    /**
     * Implements hook_cloned_node_alter().
     */
    function mymodule_cloned_node_alter($cloned_node, $original_node) {
      // Change the title of the cloned node.
      $cloned_node->setTitle('Modified Title');
      // Check if the cloned node has a specific field and update its value.
      if ($cloned_node->hasField('field_example')) {
        $cloned_node->set('field_example', 'New Field Value');
      }
    }

  • mymodule should be replaced with the machine name of your custom module.
  • $cloned_node represents the cloned node object that you can modify.
  • $original_node refers to the original node being cloned, providing context for your alterations.

2. hook_cloned_node_paragraph_field_alter()

Example usage: let's consider an example scenario where we want to update the value of a specific paragraph field during the cloning process:

    /**
     * Implements hook_cloned_node_paragraph_field_alter().
     */
    function mymodule_cloned_node_paragraph_field_alter($paragraph, $field_name, $settings) {
      // Check if the paragraph has a field named 'field_place' and update its value.
      if ($paragraph->hasField('field_place')) {
        $paragraph->set('field_place', 'New Changed Place');
      }
    }

  • mymodule should be replaced with the machine name of your custom module.
  • $paragraph represents the cloned paragraph entity that you can modify.
  • $field_name indicates the name of the paragraph field being processed.
  • $settings provides additional information about the field.

Final thoughts

The Quick Node Clone module is a valuable tool for Drupal users looking to streamline their content creation process. This module can save you time and effort by simplifying the duplication of nodes, allowing you to focus on more important tasks. Give it a try on your Drupal site and experience the benefits firsthand!
Categories: FLOSS Project Planets

Python Bytes: #391 A weak episode

Planet Python - Tue, 2024-07-09 04:00
<strong>Topics covered in this episode:</strong><br> <ul> <li><a href="https://github.com/mwilliamson/python-vendorize"><strong>Vendorize packages from PyPI</strong></a></li> <li><a href="https://martinheinz.dev/blog/112"><strong>A Guide to Python's Weak References Using weakref Module</strong></a></li> <li><a href="https://github.com/Proteusiq/saa"><strong>Making Time Speak</strong></a></li> <li><a href="https://towardsdatascience.com/how-should-you-test-your-machine-learning-project-a-beginners-guide-2e22da5a9bfc"><strong>How Should You Test Your Machine Learning Project? A Beginner’s Guide</strong></a></li> <li><strong>Extras</strong></li> <li><strong>Joke</strong></li> </ul><a href='https://www.youtube.com/watch?v=c8RSsydIhhs' style='font-weight: bold;'data-umami-event="Livestream-Past" data-umami-event-episode="391">Watch on YouTube</a><br> <p><strong>About the show</strong></p> <p>Sponsored by <strong>Code Comments</strong>, an original podcast from RedHat: <a href="https://pythonbytes.fm/code-comments">pythonbytes.fm/code-comments</a></p> <p><strong>Connect with the hosts</strong></p> <ul> <li>Michael: <a href="https://fosstodon.org/@mkennedy"><strong>@mkennedy@fosstodon.org</strong></a></li> <li>Brian: <a href="https://fosstodon.org/@brianokken"><strong>@brianokken@fosstodon.org</strong></a></li> <li>Show: <a href="https://fosstodon.org/@pythonbytes"><strong>@pythonbytes@fosstodon.org</strong></a></li> </ul> <p>Join us on YouTube at <a href="https://pythonbytes.fm/stream/live"><strong>pythonbytes.fm/live</strong></a> to be part of the audience. Usually Tuesdays at 10am PT. Older video versions available there too.</p> <p>Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to <a href="https://pythonbytes.fm/friends-of-the-show">our friends of the show list</a>, we'll never share it.</p> <p><strong>Michael #1:</strong> <a href="https://github.com/mwilliamson/python-vendorize"><strong>Vendorize packages from PyPI</strong></a></p> <ul> <li>Allows pure-Python dependencies to be vendorized: that is, the Python source of the dependency is copied into your own package.</li> <li>Best used for small, pure-Python dependencies</li> </ul> <p><strong>Brian #2:</strong> <a href="https://martinheinz.dev/blog/112"><strong>A Guide to Python's Weak References Using weakref Module</strong></a></p> <ul> <li>Martin Heinz</li> <li>Very cool discussion of weakref</li> <li>Quick garbage collection intro, and how references and weak references are used.</li> <li>Using weak references to build data structures. 
<ul> <li>Example of two kinds of trees</li> </ul></li> <li>Implementing the Observer pattern</li> <li>How logging and OrderedDict use weak references</li> </ul> <p><strong>Michael #3:</strong> <a href="https://github.com/Proteusiq/saa"><strong>Making Time Speak</strong></a></p> <ul> <li>by Prayson, a former guest and friend of the show</li> <li>Translating time into human-friendly spoken expressions</li> <li>Example: clock("11:15") # 'quarter past eleven' </li> <li>Features <ul> <li>Convert time into spoken expressions in various languages.</li> <li>Easy-to-use API with a simple and intuitive design.</li> <li>Pure Python implementation with no external dependencies.</li> <li>Extensible architecture for adding support for additional languages using the plugin design pattern.</li> </ul></li> </ul> <p><strong>Brian #4:</strong> <a href="https://towardsdatascience.com/how-should-you-test-your-machine-learning-project-a-beginners-guide-2e22da5a9bfc"><strong>How Should You Test Your Machine Learning Project? A Beginner’s Guide</strong></a></p> <ul> <li>François Porcher</li> <li>Using pytest and pytest-cov for testing machine learning projects</li> <li>Lots of pieces can and should be tested just as normal functions. <ul> <li>Example of testing a clean_text(text: str) -> str function</li> </ul></li> <li>Test larger chunks with canned input and expected output. <ul> <li>Example test_tokenize_text()</li> </ul></li> <li>Using fixtures for larger reusable components in testing <ul> <li>Example fixture: bert_tokenizer() with pretrained data</li> </ul></li> <li>Checking coverage</li> </ul> <p><strong>Extras</strong> </p> <p>Michael:</p> <ul> <li><a href="https://www.macrumors.com/2024/07/05/authy-app-hack-exposes-phone-numbers/">Twilio Authy Hack</a> <ul> <li><a href="https://python-bytes-static.nyc3.digitaloceanspaces.com/google-really.png">Google Authenticator is the only option</a>? Really?</li> <li><a href="https://bitwarden.com">Bitwarden to the rescue</a></li> <li>Requires (?) an <a href="https://apps.apple.com/us/app/twilio-authy/id494168017">update to their app</a>, whose release notes (v26.1.0) only say “Bug fixes”</li> </ul></li> <li><a href="https://9to5mac.com/2024/07/03/proton-drive-gets-collaborative-docs-end-to-end-encryption/">Introducing Docs in Proton Drive</a> <ul> <li>This is what I called on Mozilla to do in “<a href="https://mkennedy.codes/posts/michael-kennedys-unsolicited-advice-for-mozilla-and-firefox/">Unsolicited</a><a href="https://mkennedy.codes/posts/michael-kennedys-unsolicited-advice-for-mozilla-and-firefox/"> Advice for Mozilla and Firefox</a>” But Proton got there first</li> </ul></li> <li>Early bird ending for <a href="https://www.codeinacastle.com/python-zero-to-hero-2024?utm_source=pythonbytes">Code in a Castle course</a></li> </ul> <p><strong>Joke:</strong> <a href="https://devhumor.com/media/in-rust-i-trust">I Lied</a></p>
Categories: FLOSS Project Planets

Russ Allbery: Review: Raising Steam

Planet Debian - Tue, 2024-07-09 00:15

Review: Raising Steam, by Terry Pratchett

Series: Discworld #40 Publisher: Anchor Books Copyright: 2013 Printing: October 2014 ISBN: 0-8041-6920-9 Format: Trade paperback Pages: 365

Raising Steam is the 40th Discworld novel and the third Moist von Lipwig novel, following Making Money. This is not a good place to start reading the series.

Dick Simnel is a tinkerer from a line of tinkerers. He has been obsessed with mastering the power of steam since the age of ten, when his father died in a steam accident. That pursuit took him deeper into mathematics and precision, calculations and experiments, until he built Iron Girder: Discworld's first steam-powered locomotive. His early funding came from some convenient family pirate treasure, but turning his prototype into something more will require significantly more resources. That is how he ends up in the office of Harry King, Ankh-Morpork's sanitation magnate.

Simnel's steam locomotive has the potential to solve some obvious logistical problems, such as getting fish from the docks of Quirm to the streets of Ankh-Morpork before it stops being vaguely edible. That's not what makes railways catch fire, however. As soon as Iron Girder is huffing and puffing its way around King's compound, it becomes the most popular attraction in the city. People stand in line for hours to ride it over and over again for reasons that they cannot entirely explain. There is something wild and uncontrollable going on.

Vetinari is not sure he likes wild and uncontrollable, but he knows the lap into which such problems can be dumped: Moist von Lipwig, who is already getting bored with being a figurehead for the city's banking system.

The setup for Raising Steam reminds me more of Moving Pictures than the other Moist von Lipwig novels. Simnel himself is a relentlessly practical engineer, but the trains themselves have tapped some sort of primal magic. Unlike Moving Pictures, Pratchett doesn't provide an explicit fantasy explanation involving intruding powers from another world. It might have been a more interesting book if he had. Instead, this book expects the reader to believe there is something inherently appealing and fascinating about trains, without providing much logic or underlying justification. I think some readers will be willing to go along with this, and others (myself included) will be left wishing the story had more world-building and fewer exclamation points.

That's not the real problem with this book, though. Sadly, its true downfall is that Pratchett's writing ability had almost completely collapsed by the time he wrote it.

As mentioned in my review of Snuff, we're now well into the period where Pratchett was suffering the effects of early-onset Alzheimer's. In that book, his health issues mostly affected the dialogue near the end of the novel. In this book, published two years later, it's pervasive and much worse. Here's a typical passage from early in the book:

It is said that a soft answer turneth away wrath, but this assertion has a lot to do with hope and was now turning out to be patently inaccurate, since even a well-spoken and thoughtful soft answer could actually drive the wrong kind of person into a state of fury if wrath was what they had in mind, and that was the state the elderly dwarf was now enjoying.

One of the best things about Discworld is Pratchett's ability to drop unexpected bits of wisdom in a sentence or two, or twist a verbal knife in an unexpected and surprising direction. Raising Steam still shows flashes of that ability, but it's buried in run-on sentences, drowned in cliches and repetition, and often left behind as the containing sentence meanders off into the weeds and sputters to a confused halt. The idea is still there; the delivery, sadly, is not.

This is the first Discworld novel that I found mentally taxing to read. Sentences are often so overpacked that they require real effort to untangle, and the untangled meaning rarely feels worth the effort. The individual voice of the characters is almost gone. Vetinari's monologues, rather than being a rare event with dangerous layers, are frequent, rambling, and indecisive, often sounding like an entirely different character than the Vetinari we know. The constant repetition of the name any given character is speaking to was impossible for me to ignore. And the momentum of the story feels wrong; rather than constructing the events of the story in a way that sweeps the reader along, it felt like Pratchett was constantly pushing, trying to convince the reader that trains were the most exciting thing to ever happen to Discworld.

The bones of a good story are here, including further development of dwarf politics from The Fifth Elephant and Thud! and the further fallout of the events of Snuff. There are also glimmers of Pratchett's typically sharp observations and turns of phrase that could have been unearthed and polished. But at the very least this book needed way more editing and a lot of rewriting. I suspect it could have dropped thirty pages just by tightening the dialogue and removing some of the repetition.

I'm afraid I did not enjoy this. I am a bit of a hard sell for the magic fascination of trains — I love trains, but my model railroad days are behind me and I'm now more interested in them as part of urban transportation policy. Previous Discworld books on technology and social systems did more of the work of drawing the reader in, providing character hooks and additional complexity, and building a firmer foundation than "trains are awesome." The main problem, though, was the quality of the writing, particularly when compared to the previous novels with the same characters.

I dragged myself through this book out of a sense of completionism and obligation, and was relieved when I finished it.

This is the first Discworld novel that I don't recommend. I think the only reason to read it is if you want to have read all of Discworld. Otherwise, consider stopping with Snuff and letting it be the send-off for the Ankh-Morpork characters.

Followed by The Shepherd's Crown, a Tiffany Aching story and the last Discworld novel.

Rating: 3 out of 10

Categories: FLOSS Project Planets

Kdenlive 24.05.2 released

Planet KDE - Mon, 2024-07-08 15:41

The second maintenance release of the 24.05 series is out.

Full changelog

  • Fix guides categories not correctly saved in document. Commit. Fixes bug #489079.
  • Fix adding record track adds a normal audio track. Commit. Fixes bug #489080.
  • Fix rendering with aspect ratio change always renders with proxies. Commit.
  • Fix compilation on Windows with KF 6.3.0. Commit.
  • Fix timeline duration not correctly updated, resulting in audio/video freeze in timeline after 5 min. Commit.
  • Fix Windows build without DBUS. Commit.
  • Fix crash on spacer tool with subtitles. Commit.

The post Kdenlive 24.05.2 released appeared first on Kdenlive.

Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in June 2024

Planet Debian - Mon, 2024-07-08 14:27
FTP master

This month I accepted 270 and rejected 23 packages. The overall number of packages that got accepted was 279.

Debian LTS

This was my hundred-twentieth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded or worked on:

  • [DLA 3826-1] cups security update for one CVE to prevent arbitrary chmod of files
  • [#1073519] bullseye-pu: cups/2.3.3op2-3+deb11u7 to fix one CVE
  • [#1073518] bookworm-pu: cups/2.4.2-3+deb12u6 to fix one CVE
  • [#1073519] bullseye-pu: cups/2.3.3op2-3+deb11u7 package upload
  • [#1073518] bookworm-pu: cups/2.4.2-3+deb12u6 package upload
  • [#1074438] bullseye-pu: cups/2.3.3op2-3+deb11u8 to fix an upstream regression of the last upload
  • [#1074439] bookworm-pu: cups/2.4.2-3+deb12u7 to fix an upstream regression of the last upload
  • [#1074438] bullseye-pu: cups/2.3.3op2-3+deb11u8 package upload
  • [#1074439] bookworm-pu: cups/2.4.2-3+deb12u7 package upload
  • [#1055802] bookworm-pu: package qtbase-opensource-src/5.15.8+dfsg-11+deb12u1 package upload

This month, handling of the cups CVE was a bit messy. After the embargo on the CVE was lifted, the published patch did not work with all possible configurations: when only one local domain socket was configured, cupsd did not start and failed with a strange error. Upstream then published a new set of patches, which made cups work again. Unfortunately this happened just before the latest point release for Bullseye and Bookworm, so the new packages did not make it into the release and stopped in the corresponding p-u queues: stable-p-u and old-p-u.

I also continued to work on tiff and last but not least did a week of FD and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-first ELTS month. During my allocated time I tried to upload a new version of cups for Jessie and Stretch. Unfortunately this was stopped by an autopkgtest error that I have not yet been able to reproduce.

I also wanted to finally upload a fixed version of exim4. Unfortunately this was stopped by the large number of CI jobs running for Buster. Updates for Buster are now also available from ELTS, so some things had to be prepared before the actual switch at the end of June. Additionally, everything was delayed by a crash of the CI worker. All in all, this month was rather ill-fated. At least the exim4 upload will happen (or has already happened) in July.

I also continued to work on an update for libvirt, did a week of FD and attended the LTS/ELTS meeting.

Debian Printing

This month I uploaded new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

All of those uploads are somehow related to /usr-move.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

The following packages have been prepared by the GSoC student Nathan:

misc

This month I uploaded new upstream or bugfix versions of:

Here as well, all uploads are somehow related to /usr-move.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #458 - Drupal & Next.js

Planet Drupal - Mon, 2024-07-08 14:00

Today we are talking about Next.js, what it is, and how to integrate it with Drupal with guest John Albin Wilkins. We’ll also cover Next.js Webform as our module of the week.

For show notes visit: www.talkingDrupal.com/458

Topics
  • What is Next.js
  • What kind of server do you need
  • How is it used on the web
  • Does it only work on react based systems
  • Why would someone want to integrate with Drupal
  • When changes are made in the content how do you update the app
  • On the module page there are a lot of references to Preview, is this something Next does well
  • What is server side rendering
  • How does Next work with menus and views
  • Any preference on the api for json api vs graphql
  • Performance
  • Editorial experience
  • Responsive images
  • Will Drupal ever ship with a headless front end
  • Winner of the TPOTM
Resources

Guests

John Albin Wilkins - john.albin.net JohnAlbin

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Baddý Sonja Breidert - 1xINTERNET baddysonja

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to build a webform in Drupal and have the corresponding Next.js template automatically created for you? There’s a Next.js library for that.
  • Module name/project name:
  • Brief history
    • How old: created in Aug 2022 by Lauri Timmanee (lauriii), who listeners may know as the Drupal Core Product Manager, and one of the people leading the Starshot initiative
    • Versions available: 1.1.1
  • Maintainership
    • Test coverage
    • Documentation - Lengthy README and a tutorial on the Acquia Dev Portal
    • Number of open issues: 17 open issues, 3 of which are bugs
  • Usage stats:
    • 2,246 weekly downloads according to npmjs.com
  • Module features and usage
    • Using this library does require some setup on the Drupal side, including installing the Webform and Webform REST modules. There’s also an extra patch to install if you want to use any autocomplete fields, and some configuration needed for both the REST resources that will be used to exchange data, and the permissions for the account that will be used to retrieve and submit data
    • Out of the box, the library supports over 40 webform components, but you can also provide custom elements if you need something additional. The library also supports conditional logic, so fields can show or hide in the Next.js front end based on conditions defined in your Drupal backend
    • The library also provides front-end validation for email confirmation, date list, and datetime fields, but back end validation is also processed for every submission
    • There is a crowded field of headless CMS competitors, but I thought this library is a good example of the extra power and flexibility you get by using a robust, open source CMS like Drupal as the back end in your headless architecture
Categories: FLOSS Project Planets

The Drop Times: Using Drupal Migrations to Modify Content Within a Site

Planet Drupal - Mon, 2024-07-08 11:54
Explore how to leverage Drupal's Migrate API for internal content restructuring within a site. Jaymie Strecker walks you through practical examples and essential steps to efficiently migrate nodes, modify fields, and update content types, demonstrating the API's versatility beyond traditional migration tasks.
Categories: FLOSS Project Planets

The Drop Times: The Power of Community: Innovation, Collaboration, and Events

Planet Drupal - Mon, 2024-07-08 10:44

Let's talk about something truly amazing—the Drupal community. This isn't just a group of tech enthusiasts; it's a global network of developers, designers, content creators, and business professionals all working together to make Drupal better every day.

What's incredible about the Drupal community is how it constantly drives innovation. Members brainstorm and share ideas, create new modules, improve security, and make Drupal more user-friendly. Their contributions ensure that Drupal stays ahead of the curve in web development.

Think about this: there are thousands of active contributors from all over the world. This community isn't limited by geography. Their work impacts millions of websites globally, from coding and developing modules to offering support and writing documentation. It's a testament to how dedicated and skilled these individuals are.

Now, you might wonder what makes the Drupal community stand out from others. It's their culture of collaboration and respect. Everyone's input is valued here, whether you're a newbie or a seasoned expert. This inclusive approach creates a supportive environment where everyone can learn and grow.

And let’s not forget about the events. Drupal meetups, camps, and conferences are where the magic happens. These events are opportunities to learn, share, and connect. They bring the community together, fostering relationships and sparking new ideas.

Let's now take a look at what The Drop Times covered last week:

Kazima Abbas, sub-editor at The Drop Times, interviewed Lauri Timmanee on transforming Drupal site building, Experience Builder, and the Starshot Initiative.

The study published in The Drop Times by Veniz Maja Guzman, SEO Expert & Content Strategist at Promet Source, uncovers the growing trend of Drupal adoption in government websites, correlating with entity size.

Explore how AI is revolutionizing Drupal development in Jay Callicott's latest article, "The AI-Driven Developer: From Assistance to Autonomy in Drupal Development."

A comprehensive analysis by Arjun Biju and Alka Elizabeth, sub-editor at The Drop Times, examines CMS usage across 8,134 non-profit and charity organizations in the United States.

The Drupal Trivandrum Meetup on June 29, 2024, at Cafe Coffee Day, Thiruvananthapuram, featured Drupal enthusiasts discussing Drupal's impact and networking.

Drupal Meetup Haneda will be held online on July 25.

Dipak Yadav's report on the Drupal Pune Meetup held on June 22, 2024, highlights engaging sessions on managing multisite platforms, digital lead acquisition, and socially-driven projects.

Developers and agencies are invited to submit their best Drupal projects launched in 2023 or 2024 for a chance to be featured at DrupalCon Barcelona 2024. The submission deadline is September 8, 2024.

DrupalCamp Colorado will host a keynote by Lynn Winter, a digital strategist with expertise in information architecture, UX, and content strategy.

AmyJune Hineline from The Linux Foundation will lead a session on inclusive image practices at Drupal Camp Asheville 2024.

The Drupal Association introduces Ripple Makers, a revamped Individual Membership program designed to enhance community engagement and communication.

DrupalCamp Pune 2024 seeks talented designers for banners, IDs, standees, and goodie bags.

DrupalGovCon 2024 registration is now open, offering a highly anticipated opportunity for organizers, volunteers, sponsors, and attendees to secure their tickets for the event.

Drupal Camp Asheville 2024 is set for July 12-14, featuring various events, including a Saturday After-Party and a Drupal Coffee Exchange.

DrupalCon Singapore 2024 invites speakers to submit session proposals by July 8, 2024.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely,
Kazima Abbas
Sub-editor, The Drop Times 

Categories: FLOSS Project Planets
