FLOSS Project Planets

Real Python: How Do You Choose Python Function Names?

Planet Python - Wed, 2024-07-10 10:00

One of the hardest decisions in programming is choosing names. Programmers often use this phrase to highlight the challenges of selecting Python function names. It may be an exaggeration, but there’s still a lot of truth in it.

There are some hard rules you can’t break when naming Python functions and other objects. There are also other conventions and best practices that don’t raise errors when you break them, but they’re still important when writing Pythonic code.

Choosing the ideal Python function names makes your code more readable and easier to maintain. Code with well-chosen names can also be less prone to bugs.

In this tutorial, you’ll learn about the rules and conventions for naming Python functions and why they’re important. So, how do you choose Python function names?

Get Your Code: Click here to download the free sample code that you’ll use as you learn how to choose Python function names.

In Short: Use Descriptive Python Function Names Using snake_case

In Python, the labels you use to refer to objects are called identifiers or names. You set a name for a Python function when you use the def keyword.

When creating Python names, you can use uppercase and lowercase letters, the digits 0 to 9, and the underscore (_). However, you can’t use digits as the first character. You can use some other Unicode characters in Python identifiers, but not all Unicode characters are valid. Not even 🐍 is valid!

Still, it’s preferable to use only the Latin characters present in ASCII. These characters are easier to type and available on virtually every keyboard. Using other characters rarely improves readability and can be a source of bugs.

Here are some syntactically valid and invalid names for Python functions and other objects:

Name              | Validity | Notes
number            | Valid    |
first_name        | Valid    |
first name        | Invalid  | No whitespace allowed
first_10_numbers  | Valid    |
10_numbers        | Invalid  | No digits allowed at the start of names
_name             | Valid    |
greeting!         | Invalid  | No ASCII punctuation allowed except for the underscore (_)
café              | Valid    | Not recommended
你好              | Valid    | Not recommended
hello⁀world       | Valid    | Not recommended—connector punctuation characters and other marks are valid characters
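If you ever want to check these rules from code, strings have an .isidentifier() method that reports whether a value is a syntactically valid Python name. It doesn’t flag reserved words, which you can test separately with keyword.iskeyword(). A few checks mirroring the table above:

    import keyword

    print("first_name".isidentifier())   # True
    print("first name".isidentifier())   # False: whitespace isn't allowed
    print("10_numbers".isidentifier())   # False: can't start with a digit
    print("greeting!".isidentifier())    # False: punctuation other than _
    print("café".isidentifier())         # True, though ASCII is preferred
    print("🐍".isidentifier())           # False: the snake emoji isn't a valid character
    print(keyword.iskeyword("import"))   # True: valid syntax, but reserved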

However, Python has conventions about naming functions that go beyond these rules. One of the core Python Enhancement Proposals, PEP 8, defines Python’s style guide, which includes naming conventions.

According to PEP 8 style guidelines, Python functions should be named using lowercase letters and with an underscore separating words. This style is often referred to as snake case. For example, get_text() is a better function name than getText() in Python.

Function names should also describe the actions being performed by the function clearly and concisely whenever possible. For example, for a function that calculates the total value of an online order, calculate_total() is a better name than total().
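For illustration, here is a made-up example (not the tutorial’s sample code) of the same small helper written both ways:

    # Preferred: snake case, with a verb phrase describing the action.
    def calculate_total(prices, tax_rate):
        subtotal = sum(prices)
        return subtotal * (1 + tax_rate)

    # Legal but less Pythonic: camel case and a vaguer name.
    def getTotal(prices, taxRate):
        return sum(prices) * (1 + taxRate)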

You’ll explore these conventions and best practices in more detail in the following sections of this tutorial.

What Case Should You Use for Python Function Names?

Several character cases, such as snake case and camel case, are used for identifiers in programming. Programming languages have their own preferences, so the right style for one language may not be suitable for another.

Python functions are generally written in snake case. When you use this format, all the letters are lowercase, including the first letter, and you use an underscore to separate words. You don’t need to use an underscore if the function name includes only one word. The following function names are examples of snake case:

  • find_winner()
  • save()

Both function names include lowercase letters, and one of them has two English words separated by an underscore. You can also use the underscore at the beginning or end of a function name. However, there are conventions outlining when you should use the underscore in this way.

You can use a single leading underscore, such as with _find_winner(), to indicate that a function is meant only for internal use. An object with a leading single underscore in its name can be used internally within a module or a class. While Python doesn’t enforce private variables or functions, a leading underscore is an accepted convention to show the programmer’s intent.
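Here’s a minimal sketch of that convention in hypothetical module code:

    def _find_winner(scores):
        # Internal helper: the leading underscore signals that code outside
        # this module shouldn't rely on it. scores maps player names to points.
        return max(scores, key=scores.get)

    def announce_winner(scores):
        # Public function that callers are expected to use.
        return f"The winner is {_find_winner(scores)}!"

As a bonus, names with a leading underscore are also skipped by a wildcard import (from module import *) unless the module defines __all__.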

A single trailing underscore is used by convention when you want to avoid a conflict with existing Python names or keywords. For example, you can’t use the name import for a function since import is a keyword. You can’t use keywords as names for functions or other objects. You can choose a different name, but you can also add a trailing underscore to create import_(), which is a valid name.

You can also use a single trailing underscore if you wish to reuse the name of a built-in function or other object. For example, if you want to define a function that you’d like to call max, you can name your function max_() to avoid conflict with the built-in function max().

Unlike the case with the keyword import, max() is not a keyword but a built-in function. Therefore, you could define your function using the same name, max(), but it’s generally preferable to avoid this approach to prevent confusion and ensure you can still use the built-in function.
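For example, a quick illustrative sketch:

    # import is a keyword, so a function literally can't be named import().
    def import_(plugin_name):
        """Hypothetical helper that loads a plugin by name."""
        ...

    # max() is a built-in, not a keyword. Shadowing it would be legal but
    # confusing, so a trailing underscore keeps the built-in available.
    def max_(values):
        return sorted(values)[-1]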

Double leading underscores are also used for attributes in classes. This notation invokes name mangling, which makes the attribute harder to access from outside the class and helps prevent accidental name clashes in subclasses. You’ll read more about name mangling and attributes with double leading underscores later.
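A minimal sketch of what that mangling looks like in practice:

    class Game:
        def __init__(self):
            self.__secret = 42   # Stored as _Game__secret via name mangling.

    game = Game()
    print(game._Game__secret)    # 42 -- reachable, but clearly discouraged.
    # print(game.__secret)       # AttributeError: the unmangled name doesn't exist.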

Read the full article at https://realpython.com/python-function-names/ »


Categories: FLOSS Project Planets

Petter Reinholdtsen: Some notes from the 2024 LinuxCNC Norwegian developer gathering

Planet Debian - Wed, 2024-07-10 08:45

The Norwegian LinuxCNC developer gathering 2024 is over. It was a great and productive weekend, and I am sad that it is over.

Regular readers probably still remember what LinuxCNC is, but here is a quick summary for those who forgot: LinuxCNC is a free software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It eats G-code and produces motor movement and other changes to the physical world, while reading sensor input.

I am not quite sure about the total head count, as not all people were present at the gathering the entire weekend, but I believe it was close to 10 people showing their faces at the gathering. The "hard core" of the group, who stayed the entire weekend, were two from Norway, two from Germany and one from England. I am happy with the outcome from the gathering. We managed to wrap up a new stable LinuxCNC release 2.9.3 and even tested it on real hardware within minutes of the release. The release notes for 2.9.3 are still being written, but should show up on the project site in the next few days. We managed to go through around twenty pull requests and merge them into either the stable release (2.9) or the development branch (master). There are still around thirty pull requests left to process, so we are not out of work yet. We even managed to fix/improve a slightly worn lathe, and experiment with running a mechanical clock using G-code.

The evening barbeque worked well both on Saturday and Sunday. It is quite fun to light up a charcoal grill using compressed air. Sadly the weather was not the best, so we stayed indoors most of the time.

This gathering was made possible partly with sponsoring from both Redpill Linpro, Debian and NUUG Foundation, and we are most grateful for the support. I would also like to thank the local school for lending us some furniture, and of course the rest of the members of the organizers team, Asle and Bosse, for their countless contributions. The gathering was such a success that we want to do it again next year.

We plan to organize the next Norwegian LinuxCNC developer gathering at the end of June next year, the weekend of Friday 27th to Sunday 29th of June 2025. I recommend you reserve the dates on your calendar today. Other related communities are also welcome to join in, for example those working on systems like FreeCAD and opencamlib, as I am sure we have much in common and sharing experiences would be very useful to all involved. We are of course looking for sponsors for this gathering already. The total budget for this gathering was around NOK 25.000 (around EUR 2.300), so our needs are quite modest. Perhaps a machine or tools company would like to help out the free software manufacturing community by sponsoring food, lodging and transport for such a gathering?

Categories: FLOSS Project Planets

Real Python: Quiz: Choosing the Best Font for Programming

Planet Python - Wed, 2024-07-10 08:00

In this quiz, you’ll test your understanding of how to choose the best font for your daily programming. You’ll get questions about the technicalities and features to consider when choosing a programming font and refresh your knowledge about how to spot a high-quality coding font.


Categories: FLOSS Project Planets

Russell Coker: Computer Advances in the Last Decade

Planet Debian - Wed, 2024-07-10 02:55

I wrote a comment on a social media post where someone claimed that there have been no computer advances in the last 12 years. The comment got long, so it’s worth a blog post.

In the last decade or so new laptops have become cheaper than new desktop PCs. USB-C has taken over for phones and for laptop charging so all recent laptops support USB-C docks and monitors with USB-C docks built in have become common. 4K monitors have become cheap and common and higher than 4K is cheap for some use cases such as ultra wide. 4K TVs are cheap and TVs with built-in Android computers for playing internet content are now standard. For most use cases spinning media hard drives are obsolete, SSDs large enough for all the content most people need to store are cheap. We have gone from gigabit Ethernet being expensive to 2.5 gigabit being cheap.

12 years ago smart phones were very limited and every couple of years there would be significant improvements. Since about 2018 phones have been capable of doing most things most people want. 5yo Android phones can run the latest apps and take high quality pics. Any phone that supports VoLTE will be good for another 5+ years if it has security support. Phones without security support still work and are quite usable apart from being insecure. Google and Samsung have significantly increased their minimum security support for their phones and the GKI project from Google makes it easier for smaller vendors to give longer security support. There are a variety of open Android projects like LineageOS which give longer security support on a variety of phones. If you deliberately choose a phone that is likely to be well supported by projects like LineageOS (which pretty much means just Pixel phones) then you can expect to be able to actually use it when it is 10 years old. Compare this to the Samsung Galaxy S3 released in 2012 which was a massive improvement over the original Galaxy S (the S2 felt closer to the S than the S3). The Samsung Galaxy S4 released in 2013 was one of the first phones to have FullHD resolution, which is high enough that most people can’t easily recognise the benefits of higher resolution. It wasn’t until 2015 that phones with 4G of RAM became common, which is adequate for most phone use even today.

Now that 16G of RAM is affordable in laptops, running more secure OSs like Qubes is viable for more people. Even without Qubes, OS security has been improving a lot with better compiler features, new languages like Rust, and changes to software design and testing. Containers are being used more but we still aren’t getting all the benefits of that. TPM has become usable in the last few years and we are only starting to take advantage of what it can offer.

In 2012 BTRFS was still at an early stage of development and not many people wanted to use it in production. I was using it in production then, and while I didn’t lose any data from bugs, I did have some downtime because of BTRFS issues. Now BTRFS is quite solid for server use.

DDR4 was released in 2014 and gave significant improvements over DDR3 for performance and capacity. My home workstation now has 256G of DDR4 which wasn’t particularly expensive while the previous biggest system I owned had 96G of DDR3 RAM. Now DDR5 is available to again increase performance and size while also making DDR4 cheap on the second hand market.

This isn’t a comprehensive list of all advances in the computer industry over the last 12 years or so, it’s just some things that seem particularly noteworthy to me.

Please comment about what you think are the most noteworthy advances I didn’t mention.

Related posts:

  1. Long-term Device Use It seems to me that Android phones have recently passed...
  2. Returning the Aldi Tablet I have decided to return the 7″ Android tablet I...
  3. Standardising Android Don Marti wrote an amusing post about the lack of...
Categories: FLOSS Project Planets

amazee.io: amazee.io Launches New Tokyo Cloud Region on AWS

Planet Drupal - Tue, 2024-07-09 20:00
Discover our new Tokyo Cloud Region on AWS, offering flexible, scalable, and secure PaaS solutions for optimized application delivery and hosting in Japan.
Categories: FLOSS Project Planets

Freexian Collaborators: Debian Contributions: YubiHSM packaging, unschroot, live-patching, and more! (by Stefano Rivera)

Planet Debian - Tue, 2024-07-09 20:00
Debian Contributions: 2024-06

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

YubiHSM packaging, by Colin Watson

Freexian is starting to use YubiHSM devices (hardware security modules) as part of some projects, and we wanted to have the supporting software directly in Debian rather than needing to use third-party repositories. Since Yubico publish everything we need under free software licences, Colin packaged yubihsm-connector, yubihsm-shell, and python-yubihsm from https://developers.yubico.com/, in some cases based partly on the upstream packaging, and got them all into Debian unstable. Backports to bookworm will be forthcoming once they’ve all reached testing.

unschroot by Helmut Grohne

Following an in-person discussion at MiniDebConf Berlin, Helmut attempted splitting the containment functionality of sbuild --chroot-mode=unshare into a dedicated tool interfacing with sbuild as a variant of --chroot-mode=schroot providing a sufficiently compatible interface.

While this seemed technically promising initially, a discussion on debian-devel indicated a desire to rely on an existing container runtime such as podman instead of using another Debian-specific tool with unclear long term maintenance. None of the existing container runtimes meet the specific needs of sbuild, so further advancing this matter implies a compromise one way or another.

Linux live-patching, by Santiago Ruano Rincón

In collaboration with Emmanuel Arias, Santiago is working on the development of linux live-patching for Debian. For the moment, this is in an exploratory phase that includes how to handle the different patches that will need to be provided. kpatch could help significantly in this regard. However, kpatch was removed from unstable because there are some RC bugs affecting the version that was present in Debian unstable. Santiago packaged the most recent upstream version (0.9.9) and filed an Intent to Salvage bug. Santiago is waiting for an ACK by the maintainer, and will upload to unstable after July 10th, following the package salvaging rules. While kpatch 0.9.9 fixes the main issues, it still needs some work to properly support Debian and the Linux kernel versions packaged in our distribution. More on this in the report next month.

Salsa CI, by Santiago Ruano Rincón

The work by Santiago in Salsa CI this month includes a merge request to ease testing how the production images are built from the changes introduced by future merge requests. By default, the pipelines triggered by a merge request build a subset of the images built for production, to reduce the use of resources, and because most of the time the subset of staging images is enough to test the proposed modifications. However, sometimes it is needed to test how the full set of production images is built, and the above mentioned MR helps to do that. The changes include documentation, so hopefully this will make it easier to test future contributions.

Also, to be able to include support for RISC-V, Salsa CI needs to replace kaniko as the tool used to build the images. Santiago tested buildah, but there are some issues when pushing built images for non-default platform architectures (i386, armhf, armel) to the container registry. Santiago will continue to work on this to find a solution.

Miscellaneous contributions
  • Stefano Rivera prepared updates for a number of Python modules.
  • Stefano uploaded the latest point release of Python 3.12 and the latest Python 3.13 beta. Both uncovered upstream regressions that had to be addressed.
  • Stefano worked on preparations for DebConf 24.
  • Stefano helped SPI to reconcile their financial records for DebConf 23.
  • Colin did his usual routine work on the Python team, upgrading 36 packages to new upstream versions (including fixes for four CVEs in python-aiohttp), fixing RC bugs in ipykernel, ipywidgets, khard, and python-repoze.sphinx.autointerface, and packaging zope.deferredimport which was needed for a new upstream version of python-persistent.
  • Colin removed the user_readenv option from OpenSSH’s PAM configuration (#1018260), and prepared a release note.
  • Thorsten Alteholz uploaded a new upstream version of cups.
  • Nicholas Skaggs updated xmacro to support reproducible builds (#1014428), DEP-3 and DEP-5 compatibility, along with utilizing hardening build flags. Helmut supported this work and uploaded the package.
  • As a result of login having become non-essential, Helmut uploaded debvm to unstable and stable and fixed a crossqa.debian.net worker.
  • Santiago worked on the Content Team activities for DebConf24. Together with other DebConf25 team members, Santiago wrote a document for the head of the venue to describe the project of the conference.
Categories: FLOSS Project Planets

Simon Josefsson: Towards Idempotent Rebuilds?

GNU Planet! - Tue, 2024-07-09 18:16

After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel come from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-build.org’s effort to track reproducibility bugs in Debian (and other systems) is amazing. However, as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-build.org’s build and you will see that the SHA256 checksum does not match, but still they declare it as a reproducible package. As far as I can tell, the purpose of their rebuilds is not to say anything about the official binary build; instead the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.

I have felt that something is lacking, and months have passed and I haven’t found any project that addresses the problem I am interested in. During my earlier work I created a project called debdistreproduce which performs rebuilds of the difference between two distributions in a GitLab pipeline, and displays diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born and today I’m happy to bless it as version 1.0 and to announce the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64; see the summary status page for links. This is intended as a proof of concept, to allow people to experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command.

Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn’t want to waste resources building the rest of the archive until I understand whether the differences are due to consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, and 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%.

So what causes my rebuilds to be different from the official builds? Some differences are trivial, like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID causing a mismatch. Some are a bit strange, like a subtle difference in one of perl’s header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don’t make sense, likely due to bugs in my build scripts, especially for Ubuntu, which appears to strip translations and do other build variations that I don’t do. In general, the classes of reproducibility problems are the expected ones. Some are assembler differences for GnuPG’s gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same versions of build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences.

The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an “idempotent rebuild”. This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions.

Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same versions of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of the build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However, it begs the question: can we rebuild that earlier version of the build dependency? This eventually circles back to really old versions and bootstrappable builds.

While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or that the source code is no longer publicly distributable.

I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status.

One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the last version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the last versions. Idempotent rebuilds are different from reproducible builds (where we try to reproduce the build using the same inputs), and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could be circular and infinite. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.

Our notion of rebuildability thus appears to be complementary to reproducible-builds.org’s definition and bootstrappable.org’s definition. Each to their own devices, and Happy Hacking!

Categories: FLOSS Project Planets


How I manage my KDE email

Planet KDE - Tue, 2024-07-09 17:13

Every once in a while people ask me about my email routine, so I thought I’d write about it here.

Everything I do starts with the philosophy that work and project email is a task queue. Therefore an email is a to-do list item someone else has assigned to me.

Ugh, how horrible! Better get that stuff done or rejected as soon as possible so I can move on to the stuff I want to do.

This means my target is inbox zero; achieving it means I got all my tasks done. Like everyone, I don’t always achieve it, but zero is the goal. How do I work towards it?

#0: Separate KDE and non-KDE emails

When I’m not in KDE mode, I want to be able to turn that stuff off in my own brain. To accomplish this, I have a home email account and a KDE email account. I adjust all my KDE accounts to only send email to my KDE address.

#1: Use an email client app

To manage multiple email accounts without going insane, I avoid webmail. In addition to not supporting multi-account workflows, it’s usually slow, lacking useful features, and has poor keyboard navigation.

I currently use Thunderbird, but I’m investigating moving to KDE’s KMail. Regardless, it has to be a desktop email client that offers mail rules.

#2: Automatic categorization (0 minutes)

I configure my email client with mail rules to automatically tag emails with colored labels according to what they are, and then mark them as read:

This results in almost all newly-arrived emails becoming colored and marked as read:

When I get a new kind of automated email that didn’t automatically receive a color label, I adjust the rules to match that new email so it gets categorized in the future, too.

#3: Manual categorization (1-3 minutes)

When I first open my email client in the morning, everything will be categorized except 5-15 emails sent by actual people. To see just these, I’ll filter the inbox by unread status, since all the auto-categorized colored emails got automatically marked as read.

Then it’s time to figure out what to do with them. For anything that needs a response or action today, I mark it as urgent by hitting the “1” key. For anything that needs a personal response in the next few days, I hit “9” to tag it as personal and it becomes green. And so on.

Any emails that don’t need a response get immediately deleted. I never miss them. It’s fast and painless. Put those emails out of their misery.

#4: Action all the urgent emails (5-15 minutes)

Urgent means urgent; first I’ll go through these one at a time, and action them somehow. This means one of the following:

  1. If it’s from a person, write a reply and then delete the email.
  2. If it’s from an automated system, open the link to the thing it’s about in a web browser and then delete the email.

The email always ends up deleted! For people like us emails are not historical records, they’re tasks. Do you need to remember what tasks you performed 8 years ago on Tuesday, May 11th? Of course not. Don’t be a digital hoarder; delete your emails. You won’t miss them.

At this point I may realize that I was overzealous in tagging something as urgent. That’s fine; I just re-tag it as something else, and then I’ll get to it later.

#5: Action all the merge request emails (5-10 minutes)

Since my day job is “quality assurance manager”, these are important. I’ll go through every automated email from invent.kde.org about merge requests for repos I’m subscribed to and action them somehow:

  1. Open the link to the merge request in my web browser, and then delete the email.
  2. Decide I don’t need to review this particular merge request, and just delete the email.

More deletion! I never keep these emails around; they’re temporal notifications of other people’s work. Nothing worth preserving.

#6: Action all the bug report emails (5-15 minutes)

My web browser is now filled up with tabs for merge requests to review. Now it’s time to do that for relevant bug reports. I follow the same process here: open the bug report in my web browser because it needs a comment or other action from me, and then delete the email — or else immediately delete the email because it’s not directly actionable. Delete, delete, delete. It’s the happiest word when it comes to email. Everyone hates emails; delete them! Show them you mean business.

#7: Do actual work

At this point I’ve spent between 15 and 40 minutes just on email, ugh. Time to do some actual work! So now I’ll spend the next several hours going through those tabs in my web browser, from left to right. First reviewing merge requests, then handling the relevant bug reports (closing, re-opening, replying to comments, changing metadata, marking as duplicate, CCing others, etc). During this step, I’ll also triage the day’s new bug reports.

Sometimes I’ll check email again while doing these, since more will be coming in. It’s easy to delete or action them individually.

After all these tabs are closed, hooray! I have some time to be proactive instead of reactive! Usually this amounts to 0-120 minutes a day during working hours. I try to spend this time on fixing small bugs I found throughout the day, opening and participating in discussion topics about important matters, working on the KDE HIG, and sometimes helping people out on http://www.discuss.kde.org or http://www.reddit.com/r/KDE.

#8: Action all the rest of the emails (10-25 minutes)

Towards the end of the day I’ll look at the emails marked as “Personal” and “KDE e.V./Akademy” and try to knock a few out. It’s okay if I’m too tired; these aren’t urgent and can wait until tomorrow. After a few days of sitting there, I’ll mark them as urgent.

And that’s pretty much it! This is just my workflow; it doesn’t need to be yours. But in case you want to try it, here are answers to some anticipated objections:

Ugh, that sounds like it takes forever!

It really doesn’t.

On a Monday maybe it takes more like 35 or 40 minutes since there are emails from the weekend to process. But on Tuesday through Friday, it’s closer to 15-20 minutes. Often 10 on Friday. Thanks to the automatic categorization, all of this is much faster than manually looking at every email one by one, and much more effective than getting depressed by hundreds of unread emails in the morning and ignoring them.

Deleting emails is too scary, what if I need them in the future?

You won’t.

But if that’s too scary or painful, set up your email account or client app to archive “deleted” messages in permanent storage rather than truly deleting them. Just keep in mind that you’ll eventually run out of storage space and have to deal with that problem in the future. Once it happens, consider it an opportunity to reconsider, asking yourself how many emails you actually did need to dig out of cold storage. I’m guessing the number will be very low, maybe even 0.

This might work for your workflow, but I get different types of emails!

Maybe so, but the general principle of automatically tagging (but not moving) emails applies to anyone. I firmly believe that anyone can benefit from this part. Make the software do the grunt work for you!

What do I do about all of those old emails in my inbox? There are too many, I’ll never get through them!

If you’re one of those people who has 50,000 emails in your inbox, select all and delete. You won’t miss any of them.

Seriously. All of them. Every single one. Right now. Just do it.

How do I know this is fine?

  • Old notifications about things like bug reports or merge requests are worthless because they already happened. Delete.
  • Old mailing list conversations long since dried up or got actioned without your input. Delete.
  • Old at-the-time urgent emails from important people are no longer relevant, because the people who sent them long ago concluded that you’re unreliable and decided to not contact you again. Because that’s what happens when you let emails pile up: you’re being rude to all the people whose messages you’ve ignored. Feel sad, resolve to do better, then delete.

The good news is that you can get better at this anytime, but it’s almost impossible without making a clean break with a messy past. You’ll be looking at old stuff forever and won’t have time for new stuff.

I just get too much email, it’s impossible to keep up no matter what I do!

You need to unsubscribe from some things. Maybe a lot of things. Longtime contributors to any project will have accumulated years worth of subscriptions to sources of emails that are no longer relevant. Prune them!

This may trigger Fear Of Missing Out. Recognize that and fight against it. You can almost always reduce your email load by unsubscribing from this stuff:

  • Activity in Git repos for projects you no longer contribute to.
  • Bug reports for products you aren’t involved in or responsible for anymore.
  • Medium to high traffic mailing lists that are mostly or entirely irrelevant to your present interests and activities.
  • Almost all the spam from LinkedIn.
  • All the spam from online stores, newspapers, political campaigns. The “unsubscribe” button will work, don’t give up!

Resist the temptation to filter these emails into folders that you tell yourself you’ll remember to look at once in a while. You probably won’t, and by the time you do, everything in them won’t be actionable anymore — if it ever was in the first place. Unsubscribe and delete!

Categories: FLOSS Project Planets

Steve Kemp: The CP/M emulator is good enough, I think.

Planet Debian - Tue, 2024-07-09 16:00

My previous post mentioned that I'd added some custom syscalls to my CP/M emulator, and that led to some more updates, embedding a bunch of binaries within the emulator so that the settings can be tweaked at run-time, for example running:

!DEBUG 1 !CTRLC 1 !CCP ccpz !CONSOLE adm-3a

Those embedded binaries show up on A: even if they're not in the pwd when you launch the emulator.

Other than the custom syscalls I've updated the real BDOS/BIOS syscalls a bit more, so that now I can run things like the Small C compiler, BBC BASIC, and more. (BBCBasic.com used to launch just fine, but it turned out that the SAVE/LOAD functions didn't work. Ooops!)

I think I've now reached a point where all the binaries I care about run, and barring issues I will slow down/stop development. I can run Turbo Pascal, WordStar, various BASIC interpreters, and I have a significantly improved understanding of how CP/M works - a key milestone in that understanding was getting SUBMIT.COM to execute, and understanding the split between the BDOS and the BIOS.

I'd kinda like to port CP/M to a new (Z80-based) system - but I don't have such a thing to hand, and I guess there's no real need for it. Perhaps I can say I'm "done" with retro stuff, and go back to playing Super Mario Bros (1985) with my boy!

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #637 (July 9, 2024)

Planet Python - Tue, 2024-07-09 15:30

#637 – JULY 9, 2024
View in Browser »

Python Grapples With Apple App Store Rejections

A string that is part of the urllib parser module in Python references a scheme for apps that use the iTunes feature to install other apps, which is disallowed. Auto scanning by Apple is rejecting any app that uses Python 3.12 underneath. A solution has been proposed for Python 3.13.
JOE BROCKMEIER

Python’s Built-in Functions: A Complete Exploration

In this tutorial, you’ll learn the basics of working with Python’s numerous built-in functions. You’ll explore how you can use these predefined functions to perform common tasks and operations, such as mathematical calculations, data type conversions, and string manipulations.
REAL PYTHON

Python Error and Performance Monitoring That Doesn’t Suck

With Sentry, you can trace issues from the frontend to the backend—detecting slow and broken code, to fix what’s broken faster. Installing the Python SDK is super easy and PyCoder’s Weekly subscribers get three full months of the team plan. Just use code “pycoder” on signup →
SENTRY sponsor

Constraint Programming Using CP-SAT and Python

Constraint programming is the process of looking for solutions based on a series of restrictions, like employees over 18 who have worked the cash before. This article introduces the concept and shows you how to use open source libraries to write constraint solving code.
PHILIPPE OLIVIER
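As a taste of what this looks like in practice, here is a minimal sketch using Google's ortools package (not code from the linked article): find two integers that sum to 10, where the first is at least twice the second.

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    x = model.NewIntVar(0, 10, "x")
    y = model.NewIntVar(0, 10, "y")
    model.Add(x + y == 10)   # constraint 1: the values sum to 10
    model.Add(x >= 2 * y)    # constraint 2: x is at least twice y

    solver = cp_model.CpSolver()
    status = solver.Solve(model)
    if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print(solver.Value(x), solver.Value(y))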

Register for PyOhio, July 27-28

PYOHIO.ORG • Shared by Anurag Saxena

Psycopg 3.2 Released

PSYCOPG

Polars 1.0 Released

POLARS

Quiz: Python’s Magic Methods

REAL PYTHON

Discussions

Any Web Devs Successfully Pivoted to AI/ML Development?

HACKER NEWS

Articles & Tutorials

A Search Engine for Python Packages

Finding the right Python package on PyPI can be a bit difficult, since PyPI isn’t really designed for discovering packages easily. PyPiScout.com solves this problem by allowing you to search using descriptions like “A package that makes nice plots and visualizations.”
PYPISCOUT.COM • Shared by Florian Maas

Programming Advice I’d Give Myself 15 Years Ago

Marcus writes in depth about things he has learned in his coding career and wishes he had known earlier in his journey. Thoughts include fixing foot guns, understanding the pace-quality trade-off, sharpening your axe, and more. Associated HN Discussion.
MARCUS BUFFETT

Keeping Things in Sync: Derive vs Test

Don’t Repeat Yourself (DRY) is generally a good coding philosophy, but it shouldn’t be adhered to blindly. There are other alternatives, like using tests to make sure that duplication stays in sync. This article outlines the why and how of just that.
LUKE PLANT

8 Versions of UUID and When to Use Them

RFC 9562 outlines the structure of Universally Unique IDentifiers (UUIDs) and includes eight different versions. In this post, Nicole gives a quick intro to each kind so you don’t have to read the docs, and explains why you might choose each.
NICOLE TIETZ-SOKOLSKAYA
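For quick reference, the standard library's uuid module currently implements versions 1, 3, 4, and 5; the newer versions defined in RFC 9562 (6 through 8) still need a third-party package at the time of writing:

    import uuid

    print(uuid.uuid1())  # version 1: timestamp plus node (MAC) based
    print(uuid.uuid4())  # version 4: random
    print(uuid.uuid5(uuid.NAMESPACE_DNS, "example.com"))  # version 5: name-based (SHA-1)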

Defining Python Constants for Code Maintainability

In this video course, you’ll learn how to properly define constants in Python. By coding a bunch of practical examples, you’ll also learn how Python constants can improve your code’s readability, reusability, and maintainability.
REAL PYTHON course

Django: Test for Pending Migrations

The makemigrations --check command tells you if there are missing migrations in your Django project, but you have to remember to run it. Adam suggests calling it from a test so it gets triggered as part of your CI/CD process.
ADAM JOHNSON
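A common way to wire this up (a sketch of the general pattern, not necessarily Adam's exact code): makemigrations --check exits non-zero when migrations are missing, so catching SystemExit in a test turns that into a CI failure.

    from io import StringIO

    from django.core.management import call_command
    from django.test import TestCase


    class PendingMigrationsTests(TestCase):
        def test_no_pending_migrations(self):
            out = StringIO()
            try:
                # --check exits with a non-zero status if model changes
                # are missing migrations; --dry-run avoids writing files.
                call_command("makemigrations", "--check", "--dry-run", stdout=out)
            except SystemExit:
                self.fail(f"Model changes without migrations:\n{out.getvalue()}")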

How to Maximize Your Experience at EuroPython 2024

Conferences can be overwhelming, with lots going on and lots of choices. This post talks about how to get the best experience at EuroPython, or any conference.
SANGARSHANAN

Polars vs. pandas: What’s the Difference?

Explore the key distinctions between Polars and Pandas, two data manipulation tools. Discover which framework suits your data processing needs best.
JODIE BURCHELL

An Overview of the Sparse Array Ecosystem for Python

An overview of the different options available for working with sparse arrays in Python.
HAMEER ABBASI

Projects & Code

Get Space Weather Data

GITHUB.COM/BEN-N93 • Shared by Ben Nour

AI to Drop Hats

DROPOFAHAT.ZONE

amphi-etl: Low-Code ETL for Data

GITHUB.COM/AMPHI-AI

aurora: Static Site Generator Implemented in Python

GITHUB.COM/CAPJAMESG

pytest-edit: pytest --edit to Open Failing Test Code

GITHUB.COM/MRMINO

Events

Weekly Real Python Office Hours Q&A (Virtual)

July 10, 2024
REALPYTHON.COM

PyCon Nigeria 2024

July 10 to July 14, 2024
PYCON.ORG

PyData Eindhoven 2024

July 11 to July 12, 2024
PYDATA.ORG

Python Atlanta

July 11 to July 12, 2024
MEETUP.COM

PyDelhi User Group Meetup

July 13, 2024
MEETUP.COM

DFW Pythoneers 2nd Saturday Teaching Meeting

July 13, 2024
MEETUP.COM

Happy Pythoning!
This was PyCoder’s Weekly Issue #637.
View in Browser »


Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Announces HeroDevs as Inaugural Partner for Drupal 7 Extended Security Support Provider Program

Planet Drupal - Tue, 2024-07-09 14:33

PORTLAND, Ore., 10 July 2024—The Drupal Association is pleased to announce HeroDevs as the inaugural partner for the new Drupal 7 Extended Security Support Provider Program. This initiative aims to support Drupal 7 users by carefully vetting providers to deliver extended security support services beyond the 5 January 2025 end-of-life (EOL) date.

The Drupal 7 Extended Security Support Provider Program allows organizations that cannot migrate from Drupal 7 to newer versions by the EOL date to continue using a version of Drupal 7 that is secure and compliant. This program complements the Association’s D7 Certified Migration Providers Program, which helps organizations find the right partner to transition their sites from Drupal 7 to Drupal 10.

HeroDevs has successfully met the stringent requirements established by the Drupal Association to become a certified provider with its secure, seamless drop-in replacement of Drupal 7 and core modules. 

“HeroDevs has demonstrated strong expertise in finding and fixing security and compatibility issues for major open-source libraries like Drupal,” Tim Doyle, CEO of the Drupal Association, said. “This program underscores the Drupal Association’s dedication to providing qualified options for organizations using Drupal 7 so they can stay secure while they figure out their next steps for upgrading. ”

As organizations prepare for the transition from Drupal 7, HeroDevs will provide the necessary support to keep their sites secure and operational.

Joe Eames, VP of Partnership at HeroDevs, added, “We are honored to be recognized as the inaugural partner of this important program. At HeroDevs, we are creating a more sustainable, secure web and Drupal is a major part of that. We aim to help organizations maintain a secure and compliant web presence – all while giving open source creators and maintainers the freedom to innovate.” 

For more information about the HeroDevs Drupal 7 Never-Ending Support (NES), click here.

About the Drupal Association

The Drupal Association is a non-profit organization that fosters and supports the Drupal software project, the community, and its growth. Our mission is to drive innovation and adoption of Drupal as a high-impact digital public good, hand-in-hand with our open source community. Through various initiatives, events, and programs, the Drupal Association helps ensure the ongoing development and success of the Drupal project.

Categories: FLOSS Project Planets

Python Engineering at Microsoft: Python in Visual Studio Code – July 2024 Release

Planet Python - Tue, 2024-07-09 11:45

We’re excited to announce the July 2024 release of the Python and Jupyter extensions for Visual Studio Code!

This release includes the following announcements:

  • Enhanced environment discovery with python-environment-tools
  • Improved support for reStructuredText docstrings with Pylance
  • Community contributed Pixi support

If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.

Enhanced environment discovery with python-environment-tools

We are excited to introduce a new tool, python-environment-tools, designed to significantly enhance the speed of detecting global Python installations and Python virtual environments.

This tool leverages Rust to ensure a rapid and accurate discovery process. It also minimizes the number of Input/Output operations by collecting all necessary environment information at once, significantly enhancing the overall performance.

We are currently testing this new feature in the Python extension, running it in parallel with the existing support, to evaluate the new discovery performance. Consequently, you will see a new logging channel called Python Locator that shows the discovery times with this new tool.

This enhancement is part of our ongoing efforts to optimize the performance and efficiency of Python support in VS Code. Visit the python-environment-tools repo to learn more about this feature, ongoing work, and provide feedback!

Improved support for reStructuredText docstrings with Pylance

Pylance has improved support for rendering reStructuredText documentation strings (docstrings) on hover! reStructuredText (RST) is a popular format for documentation, and its syntax is sometimes used for the docstrings of Python packages.
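For reference, an RST-style field list in a docstring looks like this hypothetical example; this is the notation the improved hover rendering targets:

    def moving_average(values, window):
        """Compute a simple moving average.

        :param values: Sequence of numbers to smooth.
        :type values: list[float]
        :param window: Number of samples per average.
        :type window: int
        :returns: The averaged values, one per full window.
        :rtype: list[float]
        """
        return [
            sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)
        ]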

This feature is in its early stages and is currently behind an experimental flag as we work to ensure it handles various Sphinx, Google Doc, and Epytext scenarios effectively. To try it out, you can enable the experimental setting python.analysis.supportRestructuredText.

Common packages where you might observe this change in their docstrings include pandas and scipy. Try this change out, and report any issues or feedback at the Pylance GitHub repository.

Note: This setting is currently experimental, but will likely be enabled by default in the future as it becomes more stabilized.

Community contributed Pixi support

Thanks to @baszalmstra, there is now support for Pixi environment detection in the Python extension! This work added a locator to detect Pixi environments in your workspace similar to other common environments such as Conda. Furthermore, if a Pixi environment is detected in your workspace, the environment will automatically be selected as your default environment.

We appreciate and look forward to continued collaboration with community members on bug fixes and enhancements to improve the Python experience!

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:

We would also like to extend special thanks to this month’s contributors:

Call for Community Feedback

As we are planning and prioritizing future work, we value your feedback! Below are a few issues we would love feedback on:

Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – July 2024 Release appeared first on Python.

Categories: FLOSS Project Planets

Real Python: Customize VS Code Settings

Planet Python - Tue, 2024-07-09 10:00

Visual Studio Code is an open-source code editor available on all platforms. It’s also a great platform for Python development. The default settings in VS Code present a somewhat cluttered environment.

This Code Conversation with instructor Philipp Acsany is about learning how to customize the settings within the interface of VS Code. Having a clean digital workspace is an important part of your work life. Removing distractions and making code more readable can increase productivity and even help you spot bugs.

In this Code Conversation, you’ll learn how to:

  • Work With User Settings
  • Create a VS Code Profile
  • Find and Adjust Specific Settings
  • Clean Up the VS Code User Interface
  • Export Your Profile to Re-use Across Installations

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Django Weblog: Django security releases issued: 5.0.7 and 4.2.14

Planet Python - Tue, 2024-07-09 10:00

In accordance with our security release policy, the Django team is issuing releases for Django 5.0.7 and Django 4.2.14. These releases address the security issues detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2024-38875: Potential denial-of-service in django.utils.html.urlize()

urlize() and urlizetrunc() were subject to a potential denial-of-service attack via certain inputs with a very large number of brackets.
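
For context, the snippet below is a minimal sketch, not code from the advisory, showing what the affected helper does in ordinary use; the patched releases harden it against pathological inputs with many brackets without changing this behavior:

from django.utils.html import urlize

# urlize() turns plain-text URLs (and email addresses) into HTML links.
text = "Release notes are at https://www.djangoproject.com/weblog/ today."
html = urlize(text, nofollow=True)
# html now contains an <a href="..."> tag wrapping the URL, with rel="nofollow".
print(html)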

Thanks to Elias Myllymäki for the report.

This issue has severity "moderate" according to the Django security policy.

CVE-2024-39329: Username enumeration through timing difference for users with unusable passwords

The django.contrib.auth.backends.ModelBackend.authenticate() method allowed remote attackers to enumerate users via a timing attack involving login requests for users with unusable passwords.
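
For background, "unusable passwords" are typically set on accounts that authenticate through other means (for example, single sign-on). The sketch below is a hypothetical illustration assuming a configured Django project, not code from the advisory:

from django.contrib.auth import authenticate
from django.contrib.auth.models import User

# An account that should never log in with a password gets an "unusable" one.
user = User.objects.create(username="sso-only-user")
user.set_unusable_password()
user.save()

assert not user.has_usable_password()
# Password authentication always fails for such accounts; the fix ensures the
# failure takes roughly as long as it does for accounts with real passwords.
assert authenticate(username="sso-only-user", password="guess") is None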

This issue has severity "low" according to the Django security policy.

CVE-2024-39330: Potential directory-traversal in django.core.files.storage.Storage.save()

Derived classes of the django.core.files.storage.Storage base class which override generate_filename() without replicating the file path validations existing in the parent class, allowed for potential directory-traversal via certain inputs when calling save().

Built-in Storage sub-classes were not affected by this vulnerability.
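
As a hedged illustration of the safe pattern (PrefixedStorage is a hypothetical example, not code from Django or the advisory), custom storages that override generate_filename() can delegate to the parent class so its file path validations still run:

from django.core.files.storage import FileSystemStorage

class PrefixedStorage(FileSystemStorage):
    def generate_filename(self, filename):
        # Let the parent class sanitise and validate the name first, then apply
        # the custom naming on top, instead of rebuilding the path handling
        # (and its security checks) from scratch.
        filename = super().generate_filename(filename)
        return f"uploads/{filename}"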

Thanks to Josh Schneier for the report.

This issue has severity "low" according to the Django security policy.

CVE-2024-39614: Potential denial-of-service in django.utils.translation.get_supported_language_variant()

get_supported_language_variant() was subject to a potential denial-of-service attack when used with very long strings containing specific characters.

To mitigate this vulnerability, the language code provided to get_supported_language_variant() is now parsed up to a maximum length of 500 characters.
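
For reference, here is a minimal sketch of how this helper is commonly used (assuming Django settings are configured); the fix simply caps how much of the input is parsed:

from django.utils.translation import get_supported_language_variant

# Resolve a language code to the closest entry in settings.LANGUAGES,
# e.g. "pt-br" may fall back to "pt"; raises LookupError when nothing matches.
try:
    language = get_supported_language_variant("pt-br")
except LookupError:
    language = "en"  # fall back to a sensible default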

Thanks to MProgrammer for the report.

This issue has severity "moderate" according to the Django security policy.

Affected supported versions
  • Django main branch
  • Django 5.1 (currently at beta status)
  • Django 5.0
  • Django 4.2
Resolution

Patches to resolve the issue have been applied to Django's main, 5.1, 5.0, and 4.2 branches. The patches may be obtained from the following changesets.

  • CVE-2024-38875: Potential denial-of-service in django.utils.html.urlize()
  • CVE-2024-39329: Username enumeration through timing difference for users with unusable passwords
  • CVE-2024-39330: Potential directory-traversal in django.core.files.storage.Storage.save()
  • CVE-2024-39614: Potential denial-of-service in django.utils.translation.get_supported_language_variant()

The following releases have been issued

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance, nor via the Django Forum, nor via the django-developers list. Please see our security policies for further information.

Categories: FLOSS Project Planets

drunomics: Custom Elements UI: quicker changes to your decoupled Drupal site

Planet Drupal - Tue, 2024-07-09 08:01
The latest version of the Custom Elements module empowers developers building headless Drupal solutions. With a user-friendly interface, it's now easier to modify output entities, adjust properties, and change formats. At Drupal Developer Days Burgas, attendees explored the Custom Elements UI and discussed Lupus Decoupled, an efficient stack for decoupled Drupal applications.

New Custom Elements module version

The Custom Elements module is an essential building block in the technology stack that drunomics uses to build headless Drupal solutions, facilitating output of pages in either 'custom elements' or JSON format as the front end requires it.

The newest version of the module features a user interface to modify any entity that is part of the output: any property can be included/excluded, and output format can be changed, without the need to write Drupal/PHP code. This allows a developer to more easily change both the backend API output and the decoupled frontend consuming the output at the same time, making for faster turnaround times in changes to your website.

Our talk at Drupal Developer Days Burgas

Roderik Muit and Alexandru Ieremia chaired an informal (Birds of a Feather) session at Drupal Developer Days in Burgas in June 2024 to present the new changes to any interested parties. They also prepared some information about the larger Lupus Decoupled stack for attendees who were not yet familiar with it.

After the presentation, an animated discussion followed. Some people were curious how the Custom Elements UI works, what the code behind it looks like, and how to write their own 'formatters'.

Another attendee said that Lupus Decoupled seems to exactly satisfy his need to address the resource-heavy JSON:API queries on his current main website. He was encouraged to try out a demo and ask any questions in our issue queue or on #lupus-decoupled on Drupal Slack. Attendees were assured that Lupus Decoupled is ready to use (by experienced developers) and completely open source.

The new Custom Elements version with the UI to alter output is currently available as a development version; we are working to finalize a beta release as soon as possible.

Categories: FLOSS Project Planets

Real Python: Quiz: Split Your Dataset With scikit-learn's train_test_split()

Planet Python - Tue, 2024-07-09 08:00

In this quiz, you’ll test your understanding of how to use train_test_split() from the sklearn library.

By working through this quiz, you’ll revisit why you need to split your dataset in supervised machine learning, which subsets of the dataset you need for an unbiased evaluation of your model, how to use train_test_split() to split your data, and how to combine train_test_split() with prediction methods.
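
As a quick refresher before you take the quiz, here is a minimal sketch of the kind of split the questions cover, using the bundled iris dataset as a stand-in for your own data:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 25% of the samples for an unbiased evaluation of the model,
# stratifying so both subsets keep the original class proportions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)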

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Drupal Association blog: Celebrating Success: DrupalCon Portland 2024 Event Impact Recap

Planet Drupal - Tue, 2024-07-09 07:46

Welcome to the Event Impact Recap of DrupalCon Portland 2024, a benchmark event in North America that not only marked a significant milestone for the Drupal community but also holds a special place in my journey. Having served as a contractor for DrupalCon Portland and now stepping into the role of the new Community Programs Director with the Drupal Association, I am thrilled to share the highlights and successes of this remarkable gathering. My goal is to share an Impact Report with the community after each DrupalCon that presents the data and feedback on the event. Please view the slides.

Key Highlights from DrupalCon Portland 2024:

  • Attendance and Engagement:
    • With 1,368 registered attendees and an impressive 97.8% check-in rate, DrupalCon Portland 2024 brought together a vibrant community of Drupal enthusiasts and professionals.
    • Of the 1,368 registered attendees, 438 (about one third) received comped registrations for volunteering, speaking, or other roles at the conference.
    • The event saw 3,249 hotel rooms booked in Portland, OR, highlighting its impact on the local economy and hospitality sector. It's worth noting that these were just the rooms booked through our block; many rooms were booked outside the block, making an even bigger impact on the local business community.
    • A post-event survey rated the overall experience at DrupalCon Portland 2024 at 4.21/5
    • 32% of attendees said this was their 1st DrupalCon 
    • 8 Scholarship grants were given out to the community
    • 95% of attendees said they would attend a future DrupalCon
  • Global Representation:
    • Attendees from 6 continents, 35 countries, and 46 states joined us, demonstrating Drupal's global reach and community diversity.
  • Specialized Summits:
    • Five Summits (Government, Higher Ed, Nonprofit, Healthcare, and Community) attracted 476 attendees, facilitating deep dives into crucial Drupal topics and fostering collaboration.
  • DriesNote and Starshot:
    • A highlight of the event was the DriesNote, attended by 950 people in person, eager to hear about Starshot, an exciting new initiative. (View the recording on the Drupal Association Youtube page). This session not only informed but also inspired attendees about the future of Drupal.
    • Two BOFs were hosted, providing platforms for continued discussions and community engagement beyond the main sessions.
  • Sponsorship:
    • DrupalCon Portland 2024 was made possible thanks to the generous support of our sponsors 
      • Presenting Sponsors: 2
      • Champion Sponsors: 6
      • Advocate Sponsors: 11
      • Exhibitors: 28
      • Total Sponsors: 47
    • The conference wouldn't have been possible without the dedication and partnership of these organizations. Their support underscores their commitment to the Drupal community and its ongoing success.
  • Volunteer Contributions:
    • The success of DrupalCon Portland 2024 was further bolstered by 28 dedicated volunteers, including local ambassadors, logistics contributors, and translation assistants, who collectively contributed 221.5 hours onsite.

I am deeply honored to now step into the role of the new Community Programs Director with the Drupal Association. With over 18 years of experience in event planning, field marketing, nonprofit management, and community engagement, I am excited to leverage my skills to enhance community programs and initiatives within the Drupal ecosystem. My goal is to foster even stronger connections, facilitate meaningful collaborations, and support the growth and inclusivity of the Drupal community.

As we reflect on the achievements and connections fostered at DrupalCon Portland 2024, I am filled with optimism about the future of Drupal and the potential for continued growth and innovation within our community, and am excited to be a part of DrupalCon Barcelona, DrupalCon Singapore, DrupalCon Atlanta and many more for years to come!

DrupalCon Portland 2024 was not just an event but a celebration of collaboration, knowledge sharing, and community spirit. I extend my heartfelt gratitude to everyone who contributed to its success, from attendees and volunteers to sponsors and organizers. Let's carry this momentum forward as we embark on the next chapter of Drupal's journey together.

- Meghan Harrell
Community Programs Director
Drupal Association

Categories: FLOSS Project Planets

Specbee: Simplifying content duplication with Quick Node Clone module in Drupal

Planet Drupal - Tue, 2024-07-09 07:18
If you’re a marketer, you know how much content cloning can simplify your life. It lets you duplicate blog posts, landing pages, articles, product listings, and forum posts effortlessly. If you’re familiar with Drupal, you should know that nodes are fundamental content entities that represent individual pieces of content on a site. Creating similar content nodes in Drupal can be time-consuming, especially when you have to duplicate them manually. Fortunately, there's a solution: the Quick Node Clone module. In this blog post, we'll explore how this handy module can streamline your content creation process in Drupal.

What is the Quick Node Clone Module

The Quick Node Clone module allows Drupal users to swiftly duplicate existing nodes with just a few clicks. This module can save you time and effort by eliminating the need to recreate content from scratch.

How to Install the module

Getting started with the Quick Node Clone module is straightforward. Simply follow these steps:

  • Download the module from Drupal.org or use Composer to install it.
  • Enable the module in the Drupal administration interface.
  • Clear the cache for the changes to take effect.

Configuring the module

Once the module is installed, you can customize its settings to suit your needs.

Text to prepend to title: The text you enter in this field is prepended to the title of the cloned node and is visible on the node clone page.

Clone publication status of original: If checked, the publication status is cloned from the original node. If unchecked, the publication status is taken from the default publish status of that node's content type.

Exclusion list: If you don't want certain field values to be cloned, you can choose the particular content type and exclude any of its fields. The module also supports Paragraphs, allowing you to exclude any paragraph field from being cloned, just like node fields.

How to Use Quick Node Clone

Using the Quick Node Clone module is simple:

  • Navigate to the node you want to duplicate.
  • Click on the "Clone" button, depending on your Drupal configuration.
  • Optionally, make any necessary changes to the cloned node.
  • Save the cloned node, and you're done!

Permissions

This module provides a set of permissions: for any content type, you can grant permission to clone its nodes. Additionally, there's the "Administer Quick Node Clone Settings" permission, which grants access to the module's configuration page at /admin/config/quick-node-clone.

Hooks provided by the module

1. hook_cloned_node_alter()

Let's consider a practical example where we want to modify certain properties of the cloned node:

/**
 * Implements hook_cloned_node_alter().
 */
function mymodule_cloned_node_alter($cloned_node, $original_node) {
  // Change the title of the cloned node.
  $cloned_node->setTitle('Modified Title');
  // Check if the cloned node has a specific field and update its value.
  if ($cloned_node->hasField('field_example')) {
    $cloned_node->set('field_example', 'New Field Value');
  }
}

Here, mymodule should be replaced with the machine name of your custom module. $cloned_node represents the cloned node object that you can modify, and $original_node refers to the original node being cloned, providing context for your alterations.

2. hook_cloned_node_paragraph_field_alter()

Let's consider an example scenario where we want to update the value of a specific paragraph field during the cloning process:

/**
 * Implements hook_cloned_node_paragraph_field_alter().
 */
function mymodule_cloned_node_paragraph_field_alter($paragraph, $field_name, $settings) {
  // Check if the paragraph has a field named 'field_place' and update its value.
  if ($paragraph->hasField('field_place')) {
    $paragraph->set('field_place', 'New Changed Place');
  }
}

Again, mymodule should be replaced with the machine name of your custom module. $paragraph represents the cloned paragraph entity that you can modify, $field_name indicates the name of the paragraph field being processed, and $settings provides additional information about the field.

Final thoughts

The Quick Node Clone module is a valuable tool for Drupal users looking to streamline their content creation process. This module can save you time and effort by simplifying the duplication of nodes, allowing you to focus on more important tasks. Give it a try on your Drupal site and experience the benefits firsthand!
Categories: FLOSS Project Planets

Python Bytes: #391 A weak episode

Planet Python - Tue, 2024-07-09 04:00
Topics covered in this episode:

  • Vendorize packages from PyPI
  • A Guide to Python's Weak References Using weakref Module
  • Making Time Speak
  • How Should You Test Your Machine Learning Project? A Beginner's Guide
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=c8RSsydIhhs

About the show

Sponsored by Code Comments, an original podcast from RedHat: pythonbytes.fm/code-comments

Connect with the hosts:

  • Michael: @mkennedy@fosstodon.org
  • Brian: @brianokken@fosstodon.org
  • Show: @pythonbytes@fosstodon.org

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Tuesdays at 10am PT. Older video versions are available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Michael #1: Vendorize packages from PyPI (https://github.com/mwilliamson/python-vendorize)

  • Allows pure-Python dependencies to be vendorized: that is, the Python source of the dependency is copied into your own package.
  • Best used for small, pure-Python dependencies.

Brian #2: A Guide to Python's Weak References Using weakref Module (https://martinheinz.dev/blog/112)

  • Martin Heinz
  • Very cool discussion of weakref
  • Quick garbage collection intro, and how references and weak references are used.
  • Using weak references to build data structures.
    • Example of two kinds of trees
  • Implementing the Observer pattern
  • How logging and OrderedDict use weak references

Michael #3: Making Time Speak (https://github.com/Proteusiq/saa)

  • By Prayson, a former guest and friend of the show
  • Translating time into human-friendly spoken expressions
  • Example: clock("11:15") # 'quarter past eleven'
  • Features:
    • Convert time into spoken expressions in various languages.
    • Easy-to-use API with a simple and intuitive design.
    • Pure Python implementation with no external dependencies.
    • Extensible architecture for adding support for additional languages using the plugin design pattern.

Brian #4: How Should You Test Your Machine Learning Project? A Beginner's Guide (https://towardsdatascience.com/how-should-you-test-your-machine-learning-project-a-beginners-guide-2e22da5a9bfc)

  • François Porcher
  • Using pytest and pytest-cov for testing machine learning projects
  • Lots of pieces can and should be tested just as normal functions.
    • Example of testing a clean_text(text: str) -> str function
  • Test larger chunks with canned input and expected output.
    • Example: test_tokenize_text()
  • Using fixtures for larger reusable components in testing.
    • Example fixture: bert_tokenizer() with pretrained data
  • Checking coverage

Extras

Michael:

  • Twilio Authy Hack (https://www.macrumors.com/2024/07/05/authy-app-hack-exposes-phone-numbers/)
    • Google Authenticator is the only option? Really?
    • Bitwarden to the rescue (https://bitwarden.com)
    • Requires (?) an update to their app, whose release notes (v26.1.0) only say "Bug fixes"
  • Introducing Docs in Proton Drive (https://9to5mac.com/2024/07/03/proton-drive-gets-collaborative-docs-end-to-end-encryption/)
    • This is what I called on Mozilla to do in "Unsolicited Advice for Mozilla and Firefox", but Proton got there first
  • Early bird pricing ending for the Code in a Castle course (https://www.codeinacastle.com/python-zero-to-hero-2024)

Joke: I Lied (https://devhumor.com/media/in-rust-i-trust)
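
As a small aside on Brian's second item, here is a minimal sketch (not taken from the article) of the kind of cache the weakref module makes easy to build, where entries disappear once the rest of the program stops referencing them:

import weakref

class Node:
    """A toy object so we can watch it disappear from the cache."""

class Cache:
    def __init__(self):
        # Values vanish from the mapping once no strong references remain,
        # so the cache never keeps objects alive on its own.
        self._items = weakref.WeakValueDictionary()

    def put(self, key, value):
        self._items[key] = value

    def get(self, key):
        return self._items.get(key)

cache = Cache()
node = Node()
cache.put("a", node)
print(cache.get("a") is node)  # True while a strong reference exists
del node
# On CPython, reference counting collects the object immediately.
print(cache.get("a"))          # None once the only strong reference is gone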
Categories: FLOSS Project Planets
