Feeds

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10: Syntax and structure of migration files

Planet Drupal - Wed, 2024-07-10 12:11

In the previous article, we saw what a migration file looks like. We made some changes without going too deep into explaining the syntax or structure of the file. Today, we are exploring the language in which migration files are written and the different sections it contains.

Categories: FLOSS Project Planets

GSoC '24 Progress: Week 3 - 6

Planet KDE - Wed, 2024-07-10 12:00

Hi there! The past few weeks have been really busy with my final exams, so I had to slow down my work. Here’s a brief status report on my progress over the past 4 weeks:

I created a SubtitleEvent class to help us better manage subtitle event information, which can replace the original SubtitledTime class. To distinguish subtitles from different layers, I also added basic display support for subtitle layers as multiple subtitle tracks.

Currently, I’m focused on refining these features. There are still some minor tasks to complete and bugs to fix. You can find more information at this MR.

Stay tuned!

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in June 2024

Planet Debian - Wed, 2024-07-10 11:08

Welcome to the June 2024 report from the Reproducible Builds project!

In our reports, we outline what we’ve been up to over the past month and highlight news items in software supply-chain security more broadly. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Next Reproducible Builds Summit dates announced
  2. GNU Guix patch review session for reproducibility
  3. New reproducibility-related academic papers
  4. Website updates
  5. Misc development news
  6. Reproducibility testing framework


Next Reproducible Builds Summit dates announced

We are very pleased to announce the upcoming Reproducible Builds Summit, set to take place from September 16th — 19th 2024 in Hamburg, Germany.

We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interested in joining us this year, please make sure to read the event page, which has more details about the event and location. We are very much looking forward to seeing many readers of these reports there.


GNU Guix patch review session for reproducibility

Vagrant Cascadian will be holding a Reproducible Builds session as part of the monthly Guix patch review series on July 11th at 17:00 UTC.

These online events are intended to encourage everyone to become a patch reviewer. The goal of reviewing patches is to help the Guix project accept contributions while maintaining its quality standards, and to learn how to do patch reviews together in a friendly hacking session.


New reproducibility-related academic papers

A total of three separate scholarly papers related to Reproducible Builds were published this month:

An Industry Interview Study of Software Signing for Supply Chain Security was published by Kelechi G. Kalu, Tanmay Singla, Chinenye Okafor, Santiago Torres-Arias and James C. Davis of the Electrical and Computer Engineering department at Purdue University, Indiana, USA, and is concerned with:

To understand software signing in practice, we interviewed 18 high-ranking industry practitioners across 13 organizations. We provide possible impacts of experienced software supply chain failures, security standards, and regulations on software signing adoption. We also study the challenges that affect an effective software signing implementation.


DiVerify: Diversifying Identity Verification in Next-Generation Software Signing was written by Chinenye L. Okafor, James C. Davis and Santiago Torres-Arias, also of Purdue University, and is interested in:

Code signing enables software developers to digitally sign their code using cryptographic keys, thereby associating the code to their identity. This allows users to verify the authenticity and integrity of the software, ensuring it has not been tampered with. Next-generation software signing such as Sigstore and OpenPubKey simplify code signing by providing streamlined mechanisms to verify and link signer identities to the public key. However, their designs have vulnerabilities: reliance on an identity provider introduces a single point of failure, and the failure to follow the principle of least privilege on the client side increases security risks. We introduce the Diverse Identity Verification (DiVerify) scheme, which strengthens the security guarantees of next-generation software signing by leveraging threshold identity validations and scope mechanisms.


Felix Lagnöhed published their thesis on the Integration of Reproducibility Verification with Diffoscope in GNU Make. This work, amongst some other results:

[…] resulted in an extension of GNU make which is called rmake, where diffoscope — a tool for detecting differences between a large number of file types — was integrated into the workflow of make. rmake was later used to answer the posed research questions for this thesis. We found that different build paths and offsets are a big problem as three out of three tested Free and Open Source Software projects all contained these variations. The results also showed that gcc’s optimisation levels did not affect reproducibility, but link-time optimisation embeds a lot of unreproducible information in build artefacts. Lastly, the results showed that build paths, build ID’s and randomness are the three most common groups of variations encountered in the wild and potential solutions for some variations were proposed.


Pol Dellaiera completed his master thesis on Reproducibility in Software Engineering at University of Mons (UMons) under the supervision of Dr. Tom Mens, full professor and director of the Software Engineering Lab.

The thesis serves as an introduction to the concept of reproducibility in software engineering, offering a comprehensive overview of formalizations using mathematical notations for key concepts and an empirical evaluation of several key tools. By exploring various case studies, methodologies and tools, the research aims to provide actionable insights for practitioners and researchers alike. In a commitment to fostering openness and collaboration, the full thesis has been made publicly available for free access. Additionally, the source files for the thesis are hosted on GitHub, promoting transparency and inviting further exploration and contributions from the global software engineering community.


Website updates

A number of improvements were made to our website this month. Akihiro Suda very helpfully made the <h4> elements more distinguishable from the <h3> level [][] and added a guide for Dockerfile reproducibility []. In addition, Fay Stegerman added two tools, apksigcopier and reproducible-apk-tools, to our Tools page.


Misc development news

In Debian, 4 reviews of Debian packages were added, 11 were updated and 14 were removed this month, adding to our knowledge about identified issues. Only one issue type was updated, though, explaining that we don’t vary the build path anymore.


On our mailing list this month, Bernhard M. Wiedemann wrote that whilst he had previously collected issues that introduce non-determinism, he has now moved on to discussing “mitigations”, in the sense of how we can avoid whole categories of problems “without patching an infinite number of individual packages”. In addition, Janneke Nieuwenhuizen announced the release of two versions of GNU Mes. [][]


In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


In NixOS, with the 24.05 release out, we have again validated that our minimal ISO reproduces by building it on a VM with software from the 2000s and no access to the binary cache.


What’s more, we continued to write patches in order to fix specific reproducibility issues, including Bernhard M. Wiedemann writing three patches (for qutebrowser, samba and systemd), Chris Lamb filing Debian bug #1074214 against the fastfetch package, and Arnout Engelen proposing fixes to refind and the Scala compiler.


Lastly, diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded two versions (270 and 271) to Debian, and made the following changes as well:

  • Drop Build-Depends on liblz4-tool in order to fix Debian bug #1072575. []
  • Update tests to support zipdetails version 4.004 that is shipped with Perl 5.40. []


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:

  • Marking the virt(32|64)c-armhf nodes as down. []
  • Granting a developer access to the osuosl4 node in order to debug a regression on the ppc64el architecture. []
  • Granting a developer access to the osuosl4 node. [][]

In addition, Mattia Rizzolo re-aligned the /etc/default/jenkins file with changes performed upstream [] and changed how configuration files are handled on the rb-mail1 host [], whilst Vagrant Cascadian documented the failure of the virt32c and virt64c nodes after initial investigation [].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via the contact channels listed there.

Categories: FLOSS Project Planets

Real Python: How Do You Choose Python Function Names?

Planet Python - Wed, 2024-07-10 10:00

One of the hardest decisions in programming is choosing names. Programmers often use this phrase to highlight the challenges of selecting Python function names. It may be an exaggeration, but there’s still a lot of truth in it.

There are some hard rules you can’t break when naming Python functions and other objects. There are also other conventions and best practices that don’t raise errors when you break them, but they’re still important when writing Pythonic code.

Choosing the ideal Python function names makes your code more readable and easier to maintain. Code with well-chosen names can also be less prone to bugs.

In this tutorial, you’ll learn about the rules and conventions for naming Python functions and why they’re important. So, how do you choose Python function names?

Get Your Code: Click here to download the free sample code that you’ll use as you learn how to choose Python function names.

In Short: Use Descriptive Python Function Names Using snake_case

In Python, the labels you use to refer to objects are called identifiers or names. You set a name for a Python function when you use the def keyword.

When creating Python names, you can use uppercase and lowercase letters, the digits 0 to 9, and the underscore (_). However, you can’t use digits as the first character. You can use some other Unicode characters in Python identifiers, but not all Unicode characters are valid. Not even 🐍 is valid!

Still, it’s preferable to use only the Latin characters present in ASCII. The Latin characters are easier to type and more universally found on most keyboards. Using other characters rarely improves readability and can be a source of bugs.

Here are some syntactically valid and invalid names for Python functions and other objects:

Name              Validity  Notes
number            Valid
first_name        Valid
first name        Invalid   No whitespace allowed
first_10_numbers  Valid
10_numbers        Invalid   No digits allowed at the start of names
_name             Valid
greeting!         Invalid   No ASCII punctuation allowed except for the underscore (_)
café              Valid     Not recommended
你好              Valid     Not recommended
hello⁀world       Valid     Not recommended; connector punctuation characters and other marks are valid characters
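
You can check the syntax rules programmatically with the built-in str.isidentifier() method, which reports whether a string is a valid Python name (it says nothing about conventions or keywords). A quick check of a few names from the table above:

    # Report which candidate names are syntactically valid identifiers.
    candidates = ["number", "first_name", "first name", "10_numbers", "_name", "greeting!", "café"]
    for name in candidates:
        status = "valid" if name.isidentifier() else "invalid"
        print(f"{name!r}: {status}")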

However, Python has conventions about naming functions that go beyond these rules. One of the core Python Enhancement Proposals, PEP 8, defines Python’s style guide, which includes naming conventions.

According to PEP 8 style guidelines, Python functions should be named using lowercase letters and with an underscore separating words. This style is often referred to as snake case. For example, get_text() is a better function name than getText() in Python.

Function names should also describe the actions being performed by the function clearly and concisely whenever possible. For example, for a function that calculates the total value of an online order, calculate_total() is a better name than total().
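
As a short illustration of both conventions, here is what a snake case, descriptively named function might look like (the body is made up for the example):

    # PEP 8 style: lowercase words separated by underscores, named after what the function does.
    def calculate_total(prices, tax_rate=0.0):
        """Return the total cost of an order, including tax."""
        return sum(prices) * (1 + tax_rate)

    # Names the text advises against:
    #   getText()  -> camelCase rather than snake case
    #   total()    -> too vague about what is being calculated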

You’ll explore these conventions and best practices in more detail in the following sections of this tutorial.

What Case Should You Use for Python Function Names?

Several character cases, such as snake case and camel case, are used in programming to name identifiers for the various entities a program defines. Programming languages have their own preferences, so the right style for one language may not be suitable for another.

Python functions are generally written in snake case. When you use this format, all the letters are lowercase, including the first letter, and you use an underscore to separate words. You don’t need to use an underscore if the function name includes only one word. The following function names are examples of snake case:

  • find_winner()
  • save()

Both function names include lowercase letters, and one of them has two English words separated by an underscore. You can also use the underscore at the beginning or end of a function name. However, there are conventions outlining when you should use the underscore in this way.

You can use a single leading underscore, such as with _find_winner(), to indicate that a function is meant only for internal use. An object with a leading single underscore in its name can be used internally within a module or a class. While Python doesn’t enforce private variables or functions, a leading underscore is an accepted convention to show the programmer’s intent.
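
A minimal sketch of that convention (the helper’s body is invented for illustration):

    # The leading underscore marks _find_winner() as internal: Python won't stop callers,
    # but the convention tells readers it isn't part of the module's public interface.
    def _find_winner(scores):
        return max(scores, key=scores.get)

    def announce_winner(scores):
        print(f"The winner is {_find_winner(scores)}!")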

A single trailing underscore is used by convention when you want to avoid a conflict with existing Python names or keywords. For example, you can’t use the name import for a function since import is a keyword. You can’t use keywords as names for functions or other objects. You can choose a different name, but you can also add a trailing underscore to create import_(), which is a valid name.

You can also use a single trailing underscore if you wish to reuse the name of a built-in function or other object. For example, if you want to define a function that you’d like to call max, you can name your function max_() to avoid conflict with the built-in function max().

Unlike the case with the keyword import, max() is not a keyword but a built-in function. Therefore, you could define your function using the same name, max(), but it’s generally preferable to avoid this approach to prevent confusion and ensure you can still use the built-in function.
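
Here is a small sketch covering both trailing-underscore cases (the function bodies are hypothetical):

    # import is a keyword, so import_ is the conventional workaround.
    def import_(path):
        """Hypothetical loader for a plugin found at the given path."""
        ...

    # max is a built-in, so max_ avoids shadowing it while keeping the intent obvious.
    def max_(values, floor=0):
        """Return the largest value, but never less than floor; the built-in max() is still usable."""
        return max(max(values), floor)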

Double leading underscores are also used for attributes in classes. This notation invokes name mangling, which makes it harder for a user to access the attribute and prevents subclasses from accessing them. You’ll read more about name mangling and attributes with double leading underscores later.
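
As a quick preview of name mangling, consider this sketch (the class and attribute names are made up):

    class Account:
        def __init__(self, balance):
            self.__balance = balance  # double leading underscore triggers name mangling

    acct = Account(100)
    print(acct._Account__balance)  # 100: the attribute is stored under its mangled name
    # print(acct.__balance)        # AttributeError: the unmangled name isn't accessible from outside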

Read the full article at https://realpython.com/python-function-names/ »


Categories: FLOSS Project Planets

Mer Joyce: voices of the Open Source AI Definition

Open Source Initiative - Wed, 2024-07-10 09:36

The Open Source Initiative (OSI) is running a series of stories about a few of the people involved in the Open Source AI Definition (OSAID) co-design process. We’ll be featuring the voices of the volunteers who have helped shape and are shaping the Definition.

The OSI started researching the topic in 2022, and in 2023 began the co-design process of a new definition of Open Source that applies to AI. The OSI hired Mer Joyce, founder and principal of Do Big Good, as an independent consultant to lead the co-design process. She has worked for over a decade at the intersection of research, policy, innovation and social change.

Mer Joyce, process facilitator for the Open Source AI Definition

About co-design

Co-design, also called participatory or human-centered design, is a set of creative methods used to solve communal problems by sharing knowledge and power. The co-design methodology addresses the challenges of reaching an agreed definition within a diverse community (Costanza-Chock, 2020; Escobar, 2018; Creative Reaction Lab, 2018; Friedman et al., 2019).

As noted in MIT Technology Review’s article about the OSAID, “[t]he open-source community is a big tent… encompassing everything from hacktivists to Fortune 500 companies…. With so many competing interests to consider, finding a solution that satisfies everyone while ensuring that the biggest companies play along is no easy task.” (Gent, 2024). 

The co-design method allows for the integration of diverging perspectives into one just, cohesive and feasible standard. Support from such a significant and broad group of people also creates a tension to be managed between moving swiftly enough to deliver outputs that can be used operationally and taking the time to consult widely to understand the big issues and garner community buy-in. Having Mer as facilitator of the OSAID co-design, with her in-depth experience, has been important in ensuring the integrity of the process. 

The OSAID co-design process

The first step of the OSAID co-design process was to identify the freedoms needed for Open Source AI. After various online and in-person activities and discussions, including five workshops across the world, the community adopted the four freedoms for software, now adapted for AI systems:

  • Freedom to Use the system for any purpose and without having to ask for permission.
  • Freedom to Study how the system works and inspect its components.
  • Freedom to Modify the system for any purpose, including to change its output.
  • Freedom to Share the system for others to use with or without modifications, for any purpose.

The next step was the formation of four working groups to initially analyze four different AI systems and their components. To achieve better representation, special attention was given to diversity, equity and inclusion. Over 50% of the working group participants are people of color, 30% are black, 75% were born outside the US, and 25% are women, trans or nonbinary.

These working groups discussed and voted on which AI system components should be required to satisfy the four freedoms for AI. The components adopted are described in the Model Openness Framework developed by the Linux Foundation.

The vote compilation was performed based on the mean total votes per component (μ). Components that received over 2μ votes were marked as “required,” and between 1.5μ and 2μ were marked “likely required.” Components that received between 0.5μ and μ were marked as “likely not required,” and less than 0.5μ were marked “not required.”
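
As a rough sketch of how such a banding rule could be applied (illustrative code only, not the actual tally used by the working groups; votes falling between μ and 1.5μ are left unclassified because the bands quoted above do not cover them):

    from statistics import mean

    def classify_components(votes):
        """Map component name -> label, based on each vote count relative to the mean (mu)."""
        mu = mean(votes.values())
        labels = {}
        for component, count in votes.items():
            if count > 2 * mu:
                labels[component] = "required"
            elif count >= 1.5 * mu:
                labels[component] = "likely required"
            elif count >= mu:
                labels[component] = "unclassified"  # band not specified in the text above
            elif count >= 0.5 * mu:
                labels[component] = "likely not required"
            else:
                labels[component] = "not required"
        return labels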

After the working groups evaluated legal frameworks and legal documents for each component, each working group published a recommendation report. The end result is the OSAID with a comprehensive definition checklist encompassing a total of 17 components. More working groups are being formed to evaluate how well other AI systems align with the Definition.

OSAID multi-stakeholder co-design process: from component list to a definition checklist

Meet Mer Joyce

Video recorded by Ezequiel Lanza, Open Source AI Evangelist at Intel

I am the process facilitator for the Open Source AI Definition, the Open Source Initiative project creating a definition of Open Source AI that will be a part of the stable public infrastructure of Open Source technology that everyone can benefit from, similar to the Open Source Definition that OSI currently stewards. The co-design of the Open Source AI Definition involves consulting with global stakeholders to ensure their vast range of needs are represented while integrating and weaving together the variety of different perspectives on what Open Source AI should mean.

If you would like to participate in the process, we’re currently on version 0.0.7. We will have a release candidate in June and a stable version in October. There is a public forum at discuss.opensource.org where anyone can create an account and make comments. As different versions are created, updates about our process are released here as well. I am available, as is the executive director of the OSI, to answer questions at bi-weekly town halls that are open for anyone to attend.

How to get involved

The OSAID co-design process is open to everyone interested in collaborating. There are many ways to get involved:

  • Join the working groups: be part of a team to evaluate various models against the OSAID.
  • Join the forum: support and comment on the drafts, record your approval or concerns to new and existing threads.
  • Comment on the latest draft: provide feedback on the latest draft document directly.
  • Follow the weekly recaps: subscribe to our newsletter and blog to be kept up-to-date.
  • Join the town hall meetings: participate in the online public town hall meetings to learn more and ask questions.
  • Join the workshops and scheduled conferences: meet the OSI and other participants at in-person events around the world.
One of the many OSAID workshops organized by Mer Joyce around the world
Categories: FLOSS Research

Petter Reinholdtsen: Some notes from the 2024 LinuxCNC Norwegian developer gathering

Planet Debian - Wed, 2024-07-10 08:45

The Norwegian LinuxCNC developer gathering 2024 is over. It was a great and productive weekend, and I am sad that it is over.

Regular readers probably still remember what LinuxCNC is, but here is a quick summary for those that forgot: LinuxCNC is a free software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It eats G-code and produces motor movement and other changes to the physical world, while reading sensor input.

I am not quite sure about the total head count, as not all people were present the entire weekend, but I believe close to 10 people showed their faces at the gathering. The "hard core" of the group, who stayed the entire weekend, were two from Norway, two from Germany and one from England. I am happy with the outcome from the gathering. We managed to wrap up a new stable LinuxCNC release 2.9.3 and even tested it on real hardware within minutes of the release. The release notes for 2.9.3 are still being written, but should show up on the project site in the next few days. We managed to go through around twenty pull requests and merge them into either the stable release (2.9) or the development branch (master). There are still around thirty pull requests left to process, so we are not out of work yet. We even managed to fix/improve a slightly worn lathe, and experiment with running a mechanical clock using G-code.

The evening barbeque worked well both on Saturday and Sunday. It is quite fun to light up a charcoal grill using compressed air. Sadly the weather was not the best, so we stayed indoors most of the time.

This gathering was made possible partly with sponsoring from Redpill Linpro, Debian and the NUUG Foundation, and we are most grateful for the support. I would also like to thank the local school for lending us some furniture, and of course the rest of the organizer team, Asle and Bosse, for their countless contributions. The gathering was such a success that we want to do it again next year.

We plan to organize the next Norwegian LinuxCNC developer gathering at the end of June next year, the weekend of Friday 27th to Sunday 29th of June 2025. I recommend you reserve the dates on your calendar today. Other related communities are also welcome to join in, for example those working on systems like FreeCAD and opencamlib, as I am sure we have much in common and sharing experiences would be very useful to all involved. We are of course already looking for sponsors for this gathering. The total budget for this year's gathering was around NOK 25.000 (around EUR 2.300), so our needs are quite modest. Perhaps a machine or tools company would like to help out the free software manufacturing community by sponsoring food, lodging and transport for such a gathering?

Categories: FLOSS Project Planets

Real Python: Quiz: Choosing the Best Font for Programming

Planet Python - Wed, 2024-07-10 08:00

In this quiz, you’ll test your understanding of how to choose the best font for your daily programming. You’ll get questions about the technicalities and features to consider when choosing a programming font and refresh your knowledge about how to spot a high-quality coding font.


Categories: FLOSS Project Planets

Russell Coker: Computer Advances in the Last Decade

Planet Debian - Wed, 2024-07-10 02:55

I wrote a comment on a social media post where someone claimed that there’s no computer advances in the last 12 years which got long so it’s worth a blog post.

In the last decade or so new laptops have become cheaper than new desktop PCs. USB-C has taken over for phones and for laptop charging so all recent laptops support USB-C docks and monitors with USB-C docks built in have become common. 4K monitors have become cheap and common and higher than 4K is cheap for some use cases such as ultra wide. 4K TVs are cheap and TVs with built-in Android computers for playing internet content are now standard. For most use cases spinning media hard drives are obsolete, SSDs large enough for all the content most people need to store are cheap. We have gone from gigabit Ethernet being expensive to 2.5 gigabit being cheap.

12 years ago, smart phones were very limited, and every couple of years there would be significant improvements. Since about 2018 phones have been capable of doing most things most people want. 5yo Android phones can run the latest apps and take high quality pics. Any phone that supports VoLTE will be good for another 5+ years if it has security support. Phones without security support still work and are quite usable apart from being insecure. Google and Samsung have significantly increased their minimum security support for their phones, and the GKI project from Google makes it easier for smaller vendors to give longer security support. There are a variety of open Android projects like LineageOS which give longer security support on a variety of phones. If you deliberately choose a phone that is likely to be well supported by projects like LineageOS (which pretty much means just Pixel phones) then you can expect to be able to actually use it when it is 10 years old. Compare this to the Samsung Galaxy S3, released in 2012, which was a massive improvement over the original Galaxy S (the S2 felt closer to the S than the S3). The Samsung Galaxy S4, released in 2013, was one of the first phones to have FullHD resolution, which is high enough that most people can’t easily recognise the benefits of higher resolution. It wasn’t until 2015 that phones with 4G of RAM became common, which is enough to remain adequate for most phone use today.

Now that 16G of RAM is affordable in laptops, running more secure OSs like Qubes is viable for more people. Even without Qubes, OS security has been improving a lot with better compiler features, new languages like Rust, and changes to software design and testing. Containers are being used more, but we still aren’t getting all the benefits of that. TPM has become usable in the last few years and we are only starting to take advantage of what it can offer.

In 2012 BTRFS was still at an early stage of development and not many people wanted to use it in production. I was using it in production then, and while I didn’t lose any data from bugs, I did have some downtime because of BTRFS issues. Now BTRFS is quite solid for server use.

DDR4 was released in 2014 and gave significant improvements over DDR3 for performance and capacity. My home workstation now has 256G of DDR4 which wasn’t particularly expensive while the previous biggest system I owned had 96G of DDR3 RAM. Now DDR5 is available to again increase performance and size while also making DDR4 cheap on the second hand market.

This isn’t a comprehensive list of all advances in the computer industry over the last 12 years or so, it’s just some things that seem particularly noteworthy to me.

Please comment about what you think are the most noteworthy advances I didn’t mention.

Related posts:

  1. Long-term Device Use It seems to me that Android phones have recently passed...
  2. Returning the Aldi Tablet I have decided to return the 7″ Android tablet I...
  3. Standardising Android Don Marti wrote an amusing post about the lack of...
Categories: FLOSS Project Planets

amazee.io: amazee.io Launches New Tokyo Cloud Region on AWS

Planet Drupal - Tue, 2024-07-09 20:00
Discover our new Tokyo Cloud Region on AWS, offering flexible, scalable, and secure PaaS solutions for optimized application delivery and hosting in Japan.
Categories: FLOSS Project Planets

Freexian Collaborators: Debian Contributions: YubiHSM packaging, unschroot, live-patching, and more! (by Stefano Rivera)

Planet Debian - Tue, 2024-07-09 20:00
Debian Contributions: 2024-06

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

YubiHSM packaging, by Colin Watson

Freexian is starting to use YubiHSM devices (hardware security modules) as part of some projects, and we wanted to have the supporting software directly in Debian rather than needing to use third-party repositories. Since Yubico publish everything we need under free software licences, Colin packaged yubihsm-connector, yubihsm-shell, and python-yubihsm from https://developers.yubico.com/, in some cases based partly on the upstream packaging, and got them all into Debian unstable. Backports to bookworm will be forthcoming once they’ve all reached testing.

unschroot, by Helmut Grohne

Following an in-person discussion at MiniDebConf Berlin, Helmut attempted splitting the containment functionality of sbuild --chroot-mode=unshare into a dedicated tool interfacing with sbuild as a variant of --chroot-mode=schroot providing a sufficiently compatible interface.

While this seemed technically promising initially, a discussion on debian-devel indicated a desire to rely on an existing container runtime such as podman instead of using another Debian-specific tool with unclear long term maintenance. None of the existing container runtimes meet the specific needs of sbuild, so further advancing this matter implies a compromise one way or another.

Linux live-patching, by Santiago Ruano Rincón

In collaboration with Emmanuel Arias, Santiago is working on the development of linux live-patching for Debian. For the moment, this is in an exploratory phase, that includes how to handle the different patches that will need to be provided. kpatch could help significantly in this regard. However, kpatch was removed from unstable because there are some RC bugs affecting the version that was present in Debian unstable. Santiago packaged the most recent upstream version (0.9.9) and filed an Intent to Salvage bug. Santiago is waiting for an ACK by the maintainer, and will upload to unstable after July 10th, following the package salvaging rules. While kpatch 0.9.9 fixes the main issues, it still needs some work to properly support Debian and the Linux kernel versions packaged in our distribution. More on this in the report next month.

Salsa CI, by Santiago Ruano Rincón

The work by Santiago in Salsa CI this month includes a merge request to ease testing how the production images are built from the changes introduced by future merge requests. By default, the pipelines triggered by a merge request build a subset of the images built for production, to reduce the use of resources, and because most of the time the subset of staging images is enough to test the proposed modifications. However, sometimes it is needed to test how the full set of production images is built, and the above mentioned MR helps to do that. The changes include documentation, so hopefully this will make it easier to test future contributions.

Also, for being able to include support for RISC-V, Salsa CI needs to replace kaniko as the tool used to build the images. Santiago tested buildah, but there are some issues when pushing built images for non-default platform architectures (i386, armhf, armel) to the container registry. Santiago will continue to work on this to find a solution.

Miscellaneous contributions
  • Stefano Rivera prepared updates for a number of Python modules.
  • Stefano uploaded the latest point release of Python 3.12 and the latest Python 3.13 beta. Both uncovered upstream regressions that had to be addressed.
  • Stefano worked on preparations for DebConf 24.
  • Stefano helped SPI to reconcile their financial records for DebConf 23.
  • Colin did his usual routine work on the Python team, upgrading 36 packages to new upstream versions (including fixes for four CVEs in python-aiohttp), fixing RC bugs in ipykernel, ipywidgets, khard, and python-repoze.sphinx.autointerface, and packaging zope.deferredimport which was needed for a new upstream version of python-persistent.
  • Colin removed the user_readenv option from OpenSSH’s PAM configuration (#1018260), and prepared a release note.
  • Thorsten Alteholz uploaded a new upstream version of cups.
  • Nicholas Skaggs updated xmacro to support reproducible builds (#1014428), DEP-3 and DEP-5 compatibility, along with utilizing hardening build flags. Helmut supported the work and uploaded the package.
  • As a result of login having become non-essential, Helmut uploaded debvm to unstable and stable and fixed a crossqa.debian.net worker.
  • Santiago worked on the Content Team activities for DebConf24. Together with other DebConf25 team members, Santiago wrote a document for the head of the venue to describe the project of the conference.
Categories: FLOSS Project Planets

Simon Josefsson: Towards Idempotent Rebuilds?

GNU Planet! - Tue, 2024-07-09 18:16

After rebuilding all added/modified packages in Trisquel, I have been circling around the elephant in the room: 99% of the binary packages in Trisquel come from Ubuntu, which to a large extent are built from Debian source packages. Is it possible to rebuild the official binary packages identically? Does anyone make an effort to do so? Does anyone care about going through the differences between the official package and a rebuilt version? Reproducible-build.org’s effort to track reproducibility bugs in Debian (and other systems) is amazing. However, as far as I know, they do not confirm or deny that their rebuilds match the official packages. In fact, typically their rebuilds do not match the official packages, even when they say the package is reproducible, which had me surprised at first. To understand why that happens, compare the buildinfo file for the official coreutils 9.1-1 from Debian bookworm with the buildinfo file for reproducible-build.org’s build and you will see that the SHA256 checksum does not match, but still they declare it as a reproducible package. As far as I can tell, the purpose of their rebuilds is not to say anything about the official binary build; instead, the purpose is to offer a QA service to maintainers by performing two builds of a package and declaring success if both builds match.

I have felt that something is lacking, and months have passed and I haven’t found any project that addresses the problem I am interested in. During my earlier work I created a project called debdistreproduce which performs rebuilds of the difference between two distributions in a GitLab pipeline, and displays diffoscope output for further analysis. A couple of days ago I had the idea of rewriting it to perform rebuilds of a single distribution. A new project debdistrebuild was born and today I’m happy to bless it as version 1.0 and to announce the project! Debdistrebuild has rebuilt the top-50 popcon packages from Debian bullseye, bookworm and trixie, on amd64 and arm64, as well as Ubuntu jammy and noble on amd64; see the summary status page for links. This is intended as a proof of concept, to allow people to experiment with the concept of doing GitLab-based package rebuilds and analysis. Compare how Guix has the guix challenge command.

Or I should say debdistrebuild has attempted to rebuild those distributions. The number of identically built packages is fairly low, so I didn’t want to waste resources building the rest of the archive until I understand whether the differences are due to consequences of my build environment (plain apt-get build-dep followed by dpkg-buildpackage in a fresh container), or due to some real difference. Summarizing the results, debdistrebuild is able to rebuild 34% of Debian bullseye on amd64, 36% of bookworm on amd64, and 32% of bookworm on arm64. The results for trixie and Ubuntu are disappointing, below 10%.

So what causes my rebuilds to be different from the official rebuilds? Some are trivial, like the classical problem of varying build paths, resulting in a different NT_GNU_BUILD_ID causing a mismatch. Some are a bit strange, like a subtle difference in one of perl’s header files. Some are due to embedded version numbers from a build dependency. Several of the build logs and diffoscope outputs don’t make sense, likely due to bugs in my build scripts, especially for Ubuntu which appears to strip translations and do other build variations that I don’t do. In general, the classes of reproducibility problems are the expected ones. Some are assembler differences for GnuPG’s gpgv-static, likely triggered by the upload of a new version of gcc after the original package was built. There are at least two ways to resolve that problem: either use the same version of build dependencies that were used to produce the original build, or demand that all packages that are affected by a change in another package are rebuilt centrally until there are no more differences.

The current design of debdistrebuild uses the latest version of a build dependency that is available in the distribution. We call this an “idempotent rebuild”. This is usually not how the binary packages were built originally; they are often built against earlier versions of their build dependencies. That is the situation for most binary distributions.

Instead of using the latest build dependency version, higher reproducibility may be achieved by rebuilding using the same version of the build dependencies that were used during the original build. This requires parsing buildinfo files to find the right version of the build dependency to install. We believe doing so will lead to a higher number of reproducibly built packages. However it begs the question: can we rebuild that earlier version of the build dependency? This circles back to really old versions and bootstrappable builds eventually.
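
A rough illustration of that buildinfo-parsing step, assuming the usual layout of the Installed-Build-Depends field (one "package (= version)" entry per continuation line); this is a standalone sketch, not code from debdistrebuild:

    import re

    def pinned_build_deps(buildinfo_path):
        """Extract package -> version pairs from a .buildinfo Installed-Build-Depends field."""
        text = open(buildinfo_path, encoding="utf-8").read()
        match = re.search(r"^Installed-Build-Depends:\n((?: .+\n?)+)", text, re.MULTILINE)
        deps = {}
        if match:
            for line in match.group(1).splitlines():
                entry = line.strip().rstrip(",")
                m = re.match(r"(\S+) \(= (\S+)\)", entry)
                if m:
                    deps[m.group(1)] = m.group(2)
        return deps

    # These exact versions could then be installed (e.g. "apt-get install pkg=version")
    # before running dpkg-buildpackage, instead of taking the latest available versions.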

While rebuilding old versions would be interesting on its own, we believe that is less helpful for trusting the latest version and improving a binary distribution: it is challenging to publish a new version of some old package that would fix a reproducibility bug in another package when used as a build dependency, and then rebuild the later packages with the modified earlier version. Those earlier packages were already published, and are part of history. It may be that ultimately it will no longer be possible to rebuild some package, because proper source code is missing (for packages using build dependencies that were never part of a release); hardware to build a package could be missing; or that the source code is no longer publicly distributable.

I argue that getting to 100% idempotent rebuilds is an interesting goal on its own, and to reach it we need to start measuring idempotent rebuild status.

One could conceivably imagine a way to rebuild modified versions of earlier packages, and then rebuild later packages using the modified earlier packages as build dependencies, for the purpose of achieving a higher level of reproducible rebuilds of the last version, and to reach for bootstrappability. However, it may still be that this is insufficient to achieve idempotent rebuilds of the last versions. Idempotent rebuilds are different from a reproducible build (where we try to reproduce the build using the same inputs), and also from bootstrappable builds (in which all binaries are ultimately built from source code). Consider a cycle where package X influences the content of package Y, which in turn influences the content of package X. These cycles may involve several packages, and it is conceivable that a cycle could be circular and infinite. It may be difficult to identify these chains, and even more difficult to break them up, but this effort helps identify where to start looking for them. Rebuilding packages using the same build dependency versions as were used during the original build, or rebuilding packages using a bootstrappable build process, both seem orthogonal to the idempotent rebuild problem.

Our notion of rebuildability appears thus to be complementary to reproducible-builds.org’s definition and bootstrappable.org’s definition. Each to their own devices, and Happy Hacking!

Categories: FLOSS Project Planets


How I manage my KDE email

Planet KDE - Tue, 2024-07-09 17:13

Every once in a while people ask me about my email routine, so I thought I’d write about it here.

Everything I do starts with the philosophy that work and project email is a task queue. Therefore an email is a to-do list item someone else has assigned to me.

Ugh, how horrible! Better get that stuff done or rejected as soon as possible so I can move on to the stuff I want to do.

This means my target is inbox zero; achieving it means I got all my tasks done. Like everyone, I don’t always achieve it, but zero is the goal. How do I work towards it?

#0: Separate KDE and non-KDE emails

When I’m not in KDE mode, I want to be able to turn that stuff off in my own brain. To accomplish this, I have a home email account and a KDE email account. I adjust all my KDE accounts to only send email to my KDE address.

#1: Use an email client app

To manage multiple email accounts without going insane, I avoid webmail. In addition to not supporting multi-account workflows, it’s usually slow, lacks useful features, and has poor keyboard navigation.

I currently use Thunderbird, but I’m investigating moving to KDE’s KMail. Regardless, it has to be a desktop email client that offers mail rules.

#2: Automatic categorization (0 minutes)

I configure my email client with mail rules to automatically tag emails with colored labels according to what they are, and then mark them as read:

This results in almost all newly-arrived emails becoming colored and marked as read:

When I get a new kind of automated email that didn’t automatically receive a color label, I adjust the rules to match that new email so it gets categorized in the future, too.
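
For readers who prefer scripting, the same idea can be sketched as a tiny rule table. The patterns and labels below are hypothetical stand-ins for whatever your own automated senders look like, and a real setup would live inside your mail client's filter rules rather than run standalone:

    # Hypothetical rule table: substring to look for -> label to apply.
    RULES = [
        ("invent.kde.org", "Merge requests"),
        ("bugzilla", "Bug reports"),
        ("lists.kde.org", "Mailing lists"),
    ]

    def categorize(sender, subject):
        """Return (label, mark_as_read); (None, False) means leave it for manual triage."""
        for needle, label in RULES:
            if needle in sender or needle in subject:
                return label, True
        return None, False

    print(categorize("noreply@invent.kde.org", "MR !42 updated"))  # ('Merge requests', True)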

#3: Manual categorization (1-3 minutes)

When I first open my email client in the morning, everything will be categorized except 5-15 emails sent by actual people. To see just these, I’ll filter the inbox by unread status, since all the auto-categorized colored emails got automatically marked as read.

Then it’s time to figure out what to do with them. For anything that needs a response or action today, I mark it as urgent by hitting the “1” key. For anything that needs a personal response in the next few days, I hit “9” to tag it as personal and it becomes green. And so on.

Any emails that don’t need a response get immediately deleted. I never miss them. It’s fast and painless. Put those emails out of their misery.

#4: Action all the urgent emails (5-15 minutes)

Urgent means urgent; first I’ll go through these one at a time, and action them somehow. This means one of the following:

  1. If it’s from a person, write a reply and then delete the email.
  2. If it’s from an automated system, open the link to the thing it’s about in a web browser and then delete the email.

The email always ends up deleted! For people like us emails are not historical records, they’re tasks. Do you need to remember what tasks you performed 8 years ago on Tuesday, May 11th? Of course not. Don’t be a digital hoarder; delete your emails. You won’t miss them.

At this point I may realize that I was overzealous in tagging something as urgent. That’s fine; I just re-tag it as something else, and then I’ll get to it later.

#5: Action all the merge request emails (5-10 minutes)

Since my day job is “quality assurance manager”, these are important. I’ll go through every automated email from invent.kde.org about merge requests for repos I’m subscribed to and action them somehow:

  1. Open the link to the merge request in my web browser, and then delete the email.
  2. Decide I don’t need to review this particular merge request, and just delete the email.

More deletion! I never keep these emails around; they’re temporal notifications of other people’s work. Nothing worth preserving.

#6: Action all the bug report emails (5-15 minutes)

My web browser is now filled up with tabs for merge requests to review. Now it’s time to do that for relevant bug reports. I follow the same process here: open the bug report in my web browser because it needs a comment or other action from me, and then delete the email — or else immediately delete the email because it’s not directly actionable. Delete, delete, delete. It’s the happiest word when it comes to email. Everyone hates emails; delete them! Show them you mean business.

#7: Do actual work

At this point I’ve spent between 15 and 40 minutes just on email, ugh. Time to do some actual work! So now I’ll spend the next several hours going through those tabs in my web browser, from left to right. First reviewing merge requests, then handling the relevant bug reports (closing, re-opening, replying to comments, changing metadata, marking as duplicate, CCing others, etc). During this step, I’ll also triage the day’s new bug reports.

Sometimes I’ll check email again while doing these, since more will be coming in. It’s easy to delete or action them individually.

After all these tabs are closed, hooray! I have some time to be proactive instead of reactive! Usually this amounts to 0-120 minutes a day during working hours. I try to spend this time on fixing small bugs I found throughout the day, opening and participating in discussion topics about important matters, working on the KDE HIG, and sometimes helping people out on http://www.discuss.kde.org or http://www.reddit.com/r/KDE.

#8: Action all the rest of the emails (10-25 minutes)

Towards the end of the day I’ll look at the emails marked as “Personal” and “KDE e.V./Akademy” and try to knock a few out. It’s okay if I’m too tired; these aren’t urgent and can wait until tomorrow. After a few days of sitting there, I’ll mark them as urgent.

And that’s pretty much it! This is just my workflow; it doesn’t need to be yours. But in case you want to try it, here are answers to some anticipated objections:

Ugh, that sounds like it takes forever!

It really doesn’t.

On a Monday maybe it takes more like 35 or 40 minutes since there are emails from the weekend to process. But on Tuesday through Friday, it’s closer to 15-20 minutes. Often 10 on Friday. Thanks to the automatic categorization, all of this is much faster than manually looking at every email one by one, and much more effective than getting depressed by hundreds of unread emails in the morning and ignoring them.

Deleting emails is too scary, what if I need them in the future?

You won’t.

But if that’s too scary or painful, set up your email account or client app to archive “deleted” messages in permanent storage rather than truly deleting them. Just keep in mind that you’ll eventually run out of storage space and have to deal with that problem in the future. Once it happens, consider it an opportunity to reconsider, asking yourself how many emails you actually did need to dig out of cold storage. I’m guessing the number will be very low, maybe even 0.

This might work for your workflow, but I get different types of emails!

Maybe so, but the general principle of automatically tagging (but not moving) emails applies to anyone. I firmly believe that anyone can benefit from this part. Make the software do the grunt work for you!
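
If your mail client’s built-in filters can’t do this, the same tag-but-don’t-move idea can even be scripted. Here’s a purely illustrative Python sketch using the standard imaplib module; the server, credentials, header name, and keyword are all placeholders, not a recommendation of any particular setup:

    import imaplib

    # Illustrative only: tag GitLab notification mail with an IMAP keyword
    # so it stays in the inbox instead of being moved to a folder.
    conn = imaplib.IMAP4_SSL("imap.example.org")
    conn.login("me@example.org", "app-password")
    conn.select("INBOX")

    # Find messages whose GitLab project header matches a repo I care about
    status, data = conn.search(None, '(HEADER "X-GitLab-Project" "plasma-desktop")')
    for num in data[0].split():
        conn.store(num, "+FLAGS", "MergeRequests")  # add a keyword, keep the message in place

    conn.logout()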

What do I do about all of those old emails in my inbox? There are too many, I’ll never get through them!

If you’re one of those people who has 50,000 emails in your inbox, select all and delete. You won’t miss any of them.

Seriously. All of them. Every single one. Right now. Just do it.

How do I know this is fine?

  • Old notifications about things like bug reports or merge requests are worthless because they already happened. Delete.
  • Old mailing list conversations long since dried up or got actioned without your input. Delete.
  • Old at-the-time urgent emails from important people are no longer relevant, because the people who sent them long ago concluded that you’re unreliable and decided to not contact you again. Because that’s what happens when you let emails pile up: you’re being rude to all the people whose messages you’ve ignored. Feel sad, resolve to do better, then delete.

The good news is that you can get better at this anytime, but it’s almost impossible without making a clean break with a messy past. You’ll be looking at old stuff forever and won’t have time for new stuff.

I just get too much email, it’s impossible to keep up no matter what I do!

You need to unsubscribe from some things. Maybe a lot of things. Longtime contributors to any project will have accumulated years’ worth of subscriptions to sources of emails that are no longer relevant. Prune them!

This may trigger Fear Of Missing Out. Recognize that and fight against it. You can almost always reduce your email load by unsubscribing from this stuff:

  • Activity in Git repos for projects you no longer contribute to.
  • Bug reports for products you aren’t involved in or responsible for anymore.
  • Medium to high traffic mailing lists that are mostly or entirely irrelevant to your present interests and activities.
  • Almost all the spam from LinkedIn.
  • All the spam from online stores, newspapers, political campaigns. The “unsubscribe” button will work, don’t give up!

Resist the temptation to filter these emails into folders that you tell yourself you’ll remember to look at once in a while. You probably won’t, and by the time you do, everything in them won’t be actionable anymore — if it ever was in the first place. Unsubscribe and delete!

Categories: FLOSS Project Planets

Steve Kemp: The CP/M emulator is good enough, I think.

Planet Debian - Tue, 2024-07-09 16:00

My previous post mentioned that I'd added some custom syscalls to my CP/M emulator and that led to some more updates, embedding a bunch of binaries within the emulator so that the settings can be tweaked at run-time, for example running:

!DEBUG 1 !CTRLC 1 !CCP ccpz !CONSOLE adm-3a

Those embedded binaries show up on A: even if they're not in the pwd when you launch the emulator.

Other than the custom syscalls I've updated the real BDOS/BIOS syscalls a bit more, so that now I can run things like the Small C compiler, BBC BASIC, and more. (BBCBasic.com used to launch just fine, but it turned out that the SAVE/LOAD functions didn't work. Ooops!)

I think I've now reached a point where all the binaries I care about run, and barring issues I will slow down/stop development. I can run Turbo Pascal, WordStar, various BASIC interpreters, and I have a significantly improved understanding of how CP/M works - a key milestone in that understanding was getting SUBMIT.COM to execute, and understanding the split between the BDOS and the BIOS.
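
To make that BDOS/BIOS split concrete, here's a toy Python sketch of the idea (this is not the emulator's code; the cpu and bios objects are invented for illustration): the BDOS entry point dispatches on the function number in register C, and calls into the BIOS for the low-level console output.

    # Toy illustration of a BDOS dispatcher; `cpu` and `bios` are hypothetical objects.
    def bdos_call(cpu, bios):
        fn = cpu.regs["C"]
        if fn == 0x02:                      # C_WRITE: output the character in E
            bios.console_out(chr(cpu.regs["E"]))
        elif fn == 0x09:                    # C_WRITESTR: print the '$'-terminated string at DE
            addr = cpu.regs["DE"]
            while (ch := cpu.memory[addr]) != ord("$"):
                bios.console_out(chr(ch))
                addr += 1
        else:
            raise NotImplementedError(f"BDOS function {fn:#04x} not implemented")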

I'd kinda like to port CP/M to a new (Z80-based) system - but I don't have such a thing to hand, and I guess there's no real need for it. Perhaps I can say I'm "done" with retro stuff, and go back to playing Super Mario Bros (1985) with my boy!

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #637 (July 9, 2024)

Planet Python - Tue, 2024-07-09 15:30

#637 – JULY 9, 2024
View in Browser »

Python Grapples With Apple App Store Rejections

A string in Python’s urllib parser module references a scheme used by the iTunes feature to install other apps, which Apple disallows. Apple’s automated scanning is therefore rejecting any app that uses Python 3.12 underneath. A solution has been proposed for Python 3.13.
JOE BROCKMEIER

Python’s Built-in Functions: A Complete Exploration

In this tutorial, you’ll learn the basics of working with Python’s numerous built-in functions. You’ll explore how you can use these predefined functions to perform common tasks and operations, such as mathematical calculations, data type conversions, and string manipulations.
REAL PYTHON
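
As a taste of what the tutorial covers, a few of the built-ins need nothing but a Python prompt:

    # No imports needed: arithmetic helpers, type conversions, and iterable tools.
    print(sum([3, 5, 8]), round(3.14159, 2))    # 16 3.14
    print(int("42"), float("1.5"), str(42))     # 42 1.5 42
    print(len("python"), sorted("python"))      # 6 ['h', 'n', 'o', 'p', 't', 'y']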

Python Error and Performance Monitoring That Doesn’t Suck

With Sentry, you can trace issues from the frontend to the backend—detecting slow and broken code, to fix what’s broken faster. Installing the Python SDK is super easy and PyCoder’s Weekly subscribers get three full months of the team plan. Just use code “pycoder” on signup →
SENTRY sponsor

Constraint Programming Using CP-SAT and Python

Constraint programming is the process of looking for solutions based on a series of restrictions, like employees over 18 who have worked the cash register before. This article introduces the concept and shows you how to use open source libraries to write constraint-solving code.
PHILIPPE OLIVIER
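
If you want a feel for the API before reading the article, here is a minimal sketch using the ortools package; the shift-scheduling data is invented for the example:

    from ortools.sat.python import cp_model

    # Pick exactly two people for a shift, and require that at least one of
    # them is over 18 and has worked the cash register before (toy data).
    model = cp_model.CpModel()
    people = ["alice", "bob", "carol"]
    qualified = {"alice": True, "bob": False, "carol": True}

    works = {p: model.NewBoolVar(f"works_{p}") for p in people}
    model.Add(sum(works[p] for p in people) == 2)
    model.AddBoolOr([works[p] for p in people if qualified[p]])

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print([p for p in people if solver.Value(works[p])])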

Register for PyOhio, July 27-28

PYOHIO.ORG • Shared by Anurag Saxena

Psycopg 3.2 Released

PSYCOPG

Polars 1.0 Released

POLARS

Quiz: Python’s Magic Methods

REAL PYTHON

Discussions Any Web Devs Successfully Pivoted to AI/ML Development?

HACKER NEWS

Articles & Tutorials A Search Engine for Python Packages

Finding the right Python package on PyPI can be a bit difficult, since PyPI isn’t really designed for discovering packages easily. PyPiScout.com solves this problem by allowing you to search using descriptions like “A package that makes nice plots and visualizations.”
PYPISCOUT.COM • Shared by Florian Maas

Programming Advice I’d Give Myself 15 Years Ago

Marcus writes in depth about things he has learned in his coding career and wishes he had known earlier in his journey. Thoughts include fixing foot guns, understanding the pace-quality trade-off, sharpening your axe, and more. Associated HN Discussion.
MARCUS BUFFETT

Keeping Things in Sync: Derive vs Test

Don’t Repeat Yourself (DRY) is generally a good coding philosophy, but it shouldn’t be adhered to blindly. There are other alternatives, like using tests to make sure that duplication stays in sync. This article outlines the why and how of just that.
LUKE PLANT
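
A minimal sketch of the “test the duplication” approach (the names here are invented): keep the two definitions separate, and let a test fail the build if they ever drift apart.

    # Two deliberately duplicated definitions that must agree, plus a test
    # (run by pytest) that fails if they ever get out of sync.
    SUPPORTED_FORMATS = ["csv", "json", "parquet"]            # e.g. shown in CLI help text
    EXPORTERS = {"csv": ..., "json": ..., "parquet": ...}     # e.g. used by the exporter


    def test_supported_formats_match_exporters():
        assert set(SUPPORTED_FORMATS) == set(EXPORTERS)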

8 Versions of UUID and When to Use Them

RFC 9562 outlines the structure of Universally Unique IDentifiers (UUIDs) and includes eight different versions. In this post, Nicole gives a quick intro to each kind so you don’t have to read the docs, and explains why you might choose each.
NICOLE TIETZ-SOKOLSKAYA
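
For reference, Python’s standard library currently generates versions 1, 3, 4, and 5; the newer time-ordered versions described in RFC 9562, such as v7, still need a third-party package at the time of writing:

    import uuid

    print(uuid.uuid1())                                   # v1: timestamp plus node ID
    print(uuid.uuid4())                                   # v4: random
    print(uuid.uuid5(uuid.NAMESPACE_DNS, "python.org"))   # v5: SHA-1 of namespace + name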

Defining Python Constants for Code Maintainability

In this video course, you’ll learn how to properly define constants in Python. By coding a bunch of practical examples, you’ll also learn how Python constants can improve your code’s readability, reusability, and maintainability.
REAL PYTHON course
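
The convention the course builds on fits in a few lines: module-level UPPER_SNAKE_CASE names, optionally marked as Final so a type checker flags accidental reassignment (the values below are placeholders).

    from typing import Final

    MAX_RETRIES: Final = 3
    TIMEOUT_SECONDS: Final[float] = 2.5
    BASE_URL: Final = "https://api.example.com"   # placeholder URL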

Django: Test for Pending Migrations

The makemigrations --check command tells you if there are missing migrations in your Django project, but you have to remember to run it. Adam suggests calling it from a test so it gets triggered as part of your CI/CD process.
ADAM JOHNSON
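
A sketch of the idea (not necessarily Adam’s exact code): makemigrations --check exits with an error when migrations are missing, so wrapping it in a test makes CI fail.

    from django.core.management import call_command
    from django.test import TestCase


    class PendingMigrationsTests(TestCase):
        def test_no_pending_migrations(self):
            # --check makes the command exit with an error (raising SystemExit)
            # if model changes lack a migration; --dry-run writes no files.
            call_command("makemigrations", "--check", "--dry-run")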

How to Maximize Your Experience at EuroPython 2024

Conferences can be overwhelming, with lots going on and lots of choices. This post talks about how to get the best experience at EuroPython, or any conference.
SANGARSHANAN

Polars vs. pandas: What’s the Difference?

Explore the key distinctions between Polars and Pandas, two data manipulation tools. Discover which framework suits your data processing needs best.
JODIE BURCHELL
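
The difference in flavor shows up even in a tiny filter; this sketch uses toy data and assumes both libraries are installed:

    import pandas as pd
    import polars as pl

    data = {"city": ["Berlin", "Porto", "Lyon"], "temp": [19, 24, 21]}

    print(pd.DataFrame(data).query("temp > 20"))            # pandas: index-based, eager
    print(pl.DataFrame(data).filter(pl.col("temp") > 20))   # Polars: expression API, with an optional lazy engine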

An Overview of the Sparse Array Ecosystem for Python

An overview of the different options available for working with sparse arrays in Python.
HAMEER ABBASI
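
SciPy’s containers are one of the options the overview covers; a minimal example:

    import numpy as np
    from scipy import sparse

    dense = np.eye(4)                      # mostly zeros
    csr = sparse.csr_matrix(dense)         # compressed sparse row storage
    print(csr.nnz)                         # 4 stored non-zeros instead of 16 values
    print(csr @ np.ones(4))                # sparse matrix-vector product -> [1. 1. 1. 1.]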

Projects & Code Get Space Weather Data

GITHUB.COM/BEN-N93 • Shared by Ben Nour

AI to Drop Hats

DROPOFAHAT.ZONE

amphi-etl: Low-Code ETL for Data

GITHUB.COM/AMPHI-AI

aurora: Static Site Generator Implemented in Python

GITHUB.COM/CAPJAMESG

pytest-edit: pytest --edit to Open Failing Test Code

GITHUB.COM/MRMINO

Events Weekly Real Python Office Hours Q&A (Virtual)

July 10, 2024
REALPYTHON.COM

PyCon Nigeria 2024

July 10 to July 14, 2024
PYCON.ORG

PyData Eindhoven 2024

July 11 to July 12, 2024
PYDATA.ORG

Python Atlanta

July 11 to July 12, 2024
MEETUP.COM

PyDelhi User Group Meetup

July 13, 2024
MEETUP.COM

DFW Pythoneers 2nd Saturday Teaching Meeting

July 13, 2024
MEETUP.COM

Happy Pythoning!
This was PyCoder’s Weekly Issue #637.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Drupal Association blog: Drupal Association Announces HeroDevs as Inaugural Partner for Drupal 7 Extended Security Support Provider Program

Planet Drupal - Tue, 2024-07-09 14:33

PORTLAND, Ore., 10 July 2024—The Drupal Association is pleased to announce HeroDevs as the inaugural partner for the new Drupal 7 Extended Security Support Provider Program. This initiative aims to support Drupal 7 users by carefully vetting providers to deliver extended security support services beyond the 5 January 2025 end-of-life (EOL) date.

The Drupal 7 Extended Security Support Provider Program allows organizations that cannot migrate from Drupal 7 to newer versions by the EOL date to continue using a version of Drupal 7 that is secure and compliant. This program complements the Association’s D7 Certified Migration Providers Program, which helps organizations find the right partner to transition their sites from Drupal 7 to Drupal 10.

HeroDevs has successfully met the stringent requirements established by the Drupal Association to become a certified provider with its secure, seamless drop-in replacement of Drupal 7 and core modules. 

“HeroDevs has demonstrated strong expertise in finding and fixing security and compatibility issues for major open-source libraries like Drupal,” Tim Doyle, CEO of the Drupal Association, said. “This program underscores the Drupal Association’s dedication to providing qualified options for organizations using Drupal 7 so they can stay secure while they figure out their next steps for upgrading.”

As organizations prepare for the transition from Drupal 7, HeroDevs will provide the necessary support to keep their sites secure and operational.

Joe Eames, VP of Partnership at HeroDevs, added, “We are honored to be recognized as the inaugural partner of this important program. At HeroDevs, we are creating a more sustainable, secure web and Drupal is a major part of that. We aim to help organizations maintain a secure and compliant web presence – all while giving open source creators and maintainers the freedom to innovate.” 

For more information about the HeroDevs Drupal 7 Never-Ending Support (NES), click here.

About the Drupal Association

The Drupal Association is a non-profit organization that fosters and supports the Drupal software project, the community, and its growth. Our mission is to drive innovation and adoption of Drupal as a high-impact digital public good, hand-in-hand with our open source community. Through various initiatives, events, and programs, the Drupal Association helps ensure the ongoing development and success of the Drupal project.

Categories: FLOSS Project Planets

Beyond SPDX: expanding licenses identified by ClearlyDefined

Open Source Initiative - Tue, 2024-07-09 14:26

ClearlyDefined is an Open Source project that helps organizations with supply chain compliance. Until recently, ClearlyDefined’s tooling only supported licenses that were part of the standardized SPDX license list. Any component identified by a license that was not part of this list resulted in NOASSERTION, which introduced uncertainty about the permissible use of such a component, potentially hindering collaboration and creating legal complexities and security concerns for developers.

Fortunately, Scancode, which is an integral part of how ClearlyDefined detects and normalizes origin, dependencies and licensing metadata of components, already supports non-SPDX licenses thanks to its use of LicenseDB. LicenseDB is the largest free and open database of software licenses, in particular all the Open Source software licenses, with over 2000 community-curated license texts and their metadata.

Philippe Ombredanne, the lead author of Scancode and LicenseDB, argued that ClearlyDefined should leverage this capability already provided by Scancode:

As one of many examples, common public domain dedications are not tracked nor supported by SPDX and are not approved as OSI licenses. Not a single lawyer I know is treating these as proprietary licenses. They are carefully cataloged and properly detected by ScanCode (at least 850+ variants of these at last count plus an infinity of variations detected approximately)…

Collecting data is not endorsing nor promoting anything in particular be it proprietary, open source, free software, source available or else. But rather, just accepting that the world of actual licenses is what it is in all its glorious messy diversity and capturing what these licenses are, without discarding valuable information detected by ScanCode. Discarding and losing data has been the problem until now and has been making ClearlyDefined data mostly harmless and useless at scale as you get better and more information out of a straight ScanCode scan.

You are welcome to use anything you like, but I think it would be better to adopt the de-facto industry standard of ScanCode license data, rather than to reinvent the wheel, especially since ClearlyDefined is kinda using ScanCode rather heavily.

We use a suffix as LicenseRef-scancode in https://scancode-licensedb.aboutcode.org/ and guarantee stability of these with the track record to prove this.

After a healthy discussion on the topic, the ClearlyDefined community agreed that supporting non-SPDX licenses was important. Scancode already provides this functionality, and it offers a mapping from these non-SPDX licenses to SPDX LicenseRef identifiers. Organizations using ClearlyDefined now have the option to decide how to handle non-SPDX licenses based on their own needs. The work to have ClearlyDefined use the latest version of Scancode and support non-SPDX licenses was led by Lukas Spieß from GitHub, with stewardship from Qing Tomlinson (from SAP) and E. Lynette Rayle (also from GitHub). We would like to thank them and all those involved in the development and testing of this implementation.

Categories: FLOSS Research

Python Engineering at Microsoft: Python in Visual Studio Code – July 2024 Release

Planet Python - Tue, 2024-07-09 11:45

We’re excited to announce the July 2024 release of the Python and Jupyter extensions for Visual Studio Code!

This release includes the following announcements:

  • Enhanced environment discovery with python-environment-tools
  • Improved support for reStructuredText docstrings with Pylance
  • Community contributed Pixi support

If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.

Enhanced environment discovery with python-environment-tools

We are excited to introduce a new tool, python-environment-tools, designed to significantly enhance the speed of detecting global Python installations and Python virtual environments.

This tool leverages Rust to ensure a rapid and accurate discovery process. It also minimizes the number of Input/Output operations by collecting all necessary environment information at once, significantly enhancing the overall performance.

We are currently testing this new feature in the Python extension, running it in parallel with the existing support, to evaluate the new discovery performance. Consequently, you will see a new logging channel called Python Locator that shows the discovery times with this new tool.

This enhancement is part of our ongoing efforts to optimize the performance and efficiency of Python support in VS Code. Visit the python-environment-tools repo to learn more about this feature, ongoing work, and provide feedback!

Improved support for reStructuredText docstrings with Pylance

Pylance has improved support for rendering reStructuredText documentation strings (docstrings) on hover! reStructuredText (RST) is a popular format for documentation, and its syntax is sometimes used for the docstrings of Python packages.

This feature is in its early stages and is currently behind an experimental flag as we work to ensure it handles various Sphinx, Google Doc, and Epytext scenarios effectively. To try it out, you can enable the experimental setting python.analysis.supportRestructuredText.

Common packages where you might observe this change in their docstrings include pandas and scipy. Try this change out, and report any issues or feedback at the Pylance GitHub repository.

Note: This setting is currently experimental, but will likely be enabled by default in the future as it becomes more stabilized.

Community contributed Pixi support

Thanks to @baszalmstra, there is now support for Pixi environment detection in the Python extension! This work added a locator to detect Pixi environments in your workspace similar to other common environments such as Conda. Furthermore, if a Pixi environment is detected in your workspace, the environment will automatically be selected as your default environment.

We appreciate and look forward to continued collaboration with community members on bug fixes and enhancements to improve the Python experience!

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:

We would also like to extend special thanks to this month’s contributors:

Call for Community Feedback

As we are planning and prioritizing future work, we value your feedback! Below are a few issues we would love feedback on:

Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – July 2024 Release appeared first on Python.

Categories: FLOSS Project Planets
