Feeds

Explaining the concept of Data information

Open Source Initiative - Fri, 2024-06-14 09:53

There seems to be some confusion caused by the concept of Data information included in the draft v0.0.8 of the Open Source AI Definition. Some readers may have seen the original dataset included in the list of optional components and quickly jumped to the wrong conclusions. This post clarifies how the draft arrived at its current state, the design principles behind the Data information concept and the constraints (legal and technical) it operates under.

The objective of the Open Source AI Definition

The objective of the Open Source AI Definition is to replicate in the context of artificial intelligence (AI) the principles of autonomy, transparency, frictionless reuse, and collaborative improvement for end users and developers of AI systems. These are described in the preamble.

Following the preamble is the definition of Open Source AI, an adaptation of the definition of Free Software (also known as “the four freedoms”) to AI nomenclature. The preamble and the four freedoms have been co-designed over several meetings and public discussions, online and in-person, and have not recently received significant comments. 

The Free Software definition specifies that a precondition to the freedom to study and modify a program is to have access to the source code. Source code is defined as “the preferred form of the program for making changes in.” Draft v0.0.8 contains a description of what’s necessary to enjoy the freedoms to study and modify an AI system. This new section titled Preferred form to make modifications to machine-learning systems has generated a heated debate. 

What is the preferred form to make modifications

The concept of “preferred form to make modifications” focuses on machine learning systems because these systems require data and training to produce a working system. Other AI systems are more easily classifiable as software and don’t require a special definition. 

The system analysis phase of the co-design process revealed that studying and modifying machine learning systems requires data, code for training and inference and model parameters. For the parameters, there’s no ambiguity: an Open Source AI must make them available under terms that respect the Open Source principles (no field-of-use restrictions, no discrimination against people, etc). For the data and code requirements, the text in the “preferred form to make modifications” section is longer and harder to parse, generating some confusion. 

The intent of the code and data requirements is to ensure that end users, deployers and developers of an Open Source AI system have all the tools and instructions to recreate that AI system from scratch, satisfying the freedoms to study and modify the system. At a high level, it seems reasonable to require that training datasets be released under permissive licenses for a system to qualify as Open Source AI.

However, on closer examination, it became clear that sharing the original datasets is full of traps. It actually puts Open Source at a disadvantage compared to opaque and proprietary AI systems.

The issue with data

Data is not software: The legal landscape for data is much wider than copyright. Aggregating large datasets and distributing them internationally is an endless nightmare that includes privacy laws, copyright, sui-generis rights, patents, secrets and more. Without diving deeper into legal issues, let’s focus on practical examples to clarify why the distribution of the training dataset is not spelled out as a requirement in the concept of Data information.

  • The Pile, the open dataset used to train the very open Pythia models, was taken down after an allegation of copyright infringement that is currently being litigated in the United States. However, the Pile appears to be legal to share in Japan. It’s also unclear whether it can be legally shared in the European Union. 
  • DOLMA, the open dataset used to train the very open OLMo models, was initially released with a restrictive license and later switched to a permissive one. On closer inspection, DOLMA appears to suffer from the same legal uncertainties as the Pile; however, the Allen Institute has not been sued yet.
  • Training techniques that preserve privacy like federated learning don’t create datasets. 

All these cases show that requiring the original datasets creates vagueness and uncertainty in applying the Open Source AI Definition:

  • If a dataset is only legal in Japan, is that AI Open Source only in Japan?
  • If a dataset is initially legally available but later retracted, does the AI go from being Open Source to not?
    • If so, what happens to the applications that use such AI?
  • If no dataset is created, then will any AI trained with such techniques ever be Open Source?

Additionally, there are reasons to believe that OpenAI, Anthropic and other proprietary systems have been trained on the same questionable data found in the Pile and DOLMA; proving that, however, is much harder and more expensive. This is clearly a disincentive to being open and transparent about data sources, adding a burden to the organizations that try to do the right thing.

To resolve these questions, draft v0.0.8 contains the concept of Data information, coupled with code requirements, to obtain the expected result: for end users, developers and deployers of AI systems to be able to reproduce an Open Source AI.

Understanding the concept of Data information

Data information, in the draft Open Source AI Definition, is defined as: 

Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data.

Read that from the end: the intention of Data information is to allow developers to recreate a substantially equivalent system using the same or similar data. In other words, an Open Source AI must disclose all the ingredients, where they were bought, and all the instructions to prepare the dish.

This solution came out of the co-design process, where reviewers didn’t rank access to the training datasets as highly as they ranked the training code and data transparency requirements.

Data information and the code requirements also address all of the questions around the legality of distributing data and datasets, or their absence.

If a dataset is only legal in Japan or becomes illegal later, one should still be able to recreate a dataset suitable to train an equivalent system replacing the illegal or unavailable pieces with similar ones.

AI systems trained with federated learning (where a dataset isn’t created) can still be Open Source AI if all instructions and code are released so that a new training with different data can generate an equivalent system.

The Data information concept also solves an example (raised on the forum) of an AI system trained on data licensed directly from Reddit. In this case, if the original developers released enough information to allow another AI developer to recreate a substantially equivalent system with Reddit data taken from an existing dataset, like CommonCrawl, it would be considered Open Source AI.

The proposed alternatives

While generally well received, draft v0.0.8 has been criticized by a few people on the forum for putting the training dataset in the “optional requirements”. Some suggestions and pushback we’ve received:

  • Require the use of synthetic data when the training dataset cannot be legally shared: This technique may work in some corner cases, if the technology evolves to be reliable enough. It’s expensive and untested at scale.
  • Classify as Open Source AI systems where all their components are “open source”: This approach is not rooted in the longstanding practice of the GNU project to accept system library exceptions and other compromises in exchange for more Open Source tools.
  • Datasets built by crawling the internet are the equivalent of theft; they shouldn’t be allowed at all, let alone allowed in Open Source AI: This pushback ignores the reality that large data aggregators have already legally acquired the rights to accumulate that same data (through scraping and terms of use) and are trading it, exclusively capturing the economic value of what should be in the commons. Read Towards a Books Data Commons for AI Training for more details. There is no general agreement that text and data mining is equivalent to theft.

These demands and suggestions are hard to accept. We need an Open Source AI Definition that can effectively guide users and developers to make the right choice. We need one that doesn’t put developers of Open Source AI at a disadvantage compared to proprietary ones. We need a Definition that contains positive examples from the start so we can practically demonstrate positive qualities to policymakers. 

The discussion about data, and how to create incentives for datasets that can be distributed internationally, safely, and in a way that preserves privacy, is extremely complex. It can be addressed separately from the Open Source AI Definition. In collaboration with the Open Future Foundation and others, OSI is designing a series of conferences to tackle the data governance issue. We’ll make an announcement soon.

Have your say now

The concept of Data information and code requirements is hard to grasp at first. But the preliminary results of the validation phase confirm that draft v0.0.8 works as expected: Pythia and OLMo would both be Open Source AI, while Falcon, Grok, Llama and Mistral would not (even if they used OSD-compatible licenses) because they don’t share Data information. BLOOM and StarCoder would fail because of field-of-use restrictions in their model licenses.

Data information can be improved, but it’s better than the other solutions proposed so far. As we get closer to the release of the stable version of the Open Source AI Definition, we need to hear from you: if you support this concept, please comment on the forum today. If you don’t support it, please try to propose an alternative that at least covers the practical examples of the Pile, DOLMA and federated learning above. Help the community move the conversation forward.

Continue the conversation in the forum

Categories: FLOSS Research

Web Review, Week 2024-24

Planet KDE - Fri, 2024-06-14 09:14

Let’s go for my web review for the week 2024-24.

Microsoft Will Switch Off Recall by Default After Security Backlash

Tags: tech, microsoft, privacy

Unsurprisingly, they had to adjust under the pressure. The most blatant issues might be gone, but it is still a bad idea at its core.

https://www.wired.com/story/microsoft-recall-off-default-security-concerns/


AI chatbots are intruding into online communities where people are trying to connect with other humans

Tags: tech, ai, machine-learning, gpt, criticism, ethics

Chatbots can be useful in some cases… but definitely not when people expect to connect with other humans.

https://theconversation.com/ai-chatbots-are-intruding-into-online-communities-where-people-are-trying-to-connect-with-other-humans-229473


Malicious VSCode extensions with millions of installs discovered

Tags: tech, vscode, security, ide

How trustworthy are the extensions you get in your editor or IDE? I’d expect most marketplaces not to be well hardened against such attacks.

https://www.bleepingcomputer.com/news/security/malicious-vscode-extensions-with-millions-of-installs-discovered/


HTTP/3 needs us (and other people) to make firewall changes

Tags: tech, http, quic, firewall

Good reminder that firewalls need to be adjusted for proper HTTP/3 support.

https://utcc.utoronto.ca/~cks/space/blog/sysadmin/HTTP3AndOurFirewalls


HTTP/3 in curl mid 2024 | daniel.haxx.se

Tags: tech, http, quic

Interesting status report about HTTP/3 support in curl. Shows quite well the various alternatives and how special HTTP/3 can be.

https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-2024/


What is PID 0? · blog.dave.tf

Tags: tech, unix, linux, kernel, system, processes

Interesting deep dive into where the PIDs seen in user space come from. And yes, there is something matching PID 0, which can be traced back to early UNIX systems.

https://blog.dave.tf/post/linux-pid0/


Scan HTML faster with SIMD instructions: Chrome edition – Daniel Lemire’s blog

Tags: tech, cpu, performance, SIMD

SIMD keeps providing interesting performance boosts for parsing workloads.

https://lemire.me/blog/2024/06/08/scan-html-faster-with-simd-instructions-chrome-edition/


Rolling your own fast matrix multiplication: loop order and vectorization – Daniel Lemire’s blog

Tags: tech, c++, compiler, performance, matrix

The ordering used for matrix multiplications definitely matters.

https://lemire.me/blog/2024/06/13/rolling-your-own-fast-matrix-multiplication-loop-order-and-vectorization/


You’ll regret using natural keys

Tags: tech, databases, design

Good advice on designing your database tables. The comments are good too; they help complete the picture.

https://blog.ploeh.dk/2024/06/03/youll-regret-using-natural-keys/


Brain dump – Pagination for database objects

Tags: tech, backend, databases

The right and wrong approaches for paginating results coming from a database.

https://www.n16f.net/blog/pagination-for-database-objects/


Optimal SQLite settings for Django

Tags: tech, django, databases, sqlite

A short, to-the-point reference on safer SQLite use. I should check whether some of this applies to, or is already used by, Akonadi as well.

https://gcollazo.com/optimal-sqlite-settings-for-django/


the Gilbert–Johnson–Keerthi algorithm explained as simply as possible

Tags: tech, geometry, mathematics, algorithm

Need to know if two shapes overlap? Good explanation of an elegant algorithm to do it.

https://computerwebsite.net/writing/gjk


Feynman’s Razor - by Defender of the Basic

Tags: tech, documentation, communication, gui

Nice reminder that even though we try to make things simpler for people to understand, there is a point where we can go too far.

https://defenderofthebasic.substack.com/p/feynmans-razor


Foreword for Fuzz Testing Book

Tags: tech, fuzzing, tests, history

Ever wondered where fuzz testing came from? This is an important bit of history.

https://pages.cs.wisc.edu/~bart/fuzz/Foreword1.html


Post-Architecture: An Open Approach to Software Engineering

Tags: tech, software, architecture

Indeed, this is not for every environment or project, so take it with a grain of salt. That said, I think this piece has a core truth to it which is more general: software architectures shouldn’t be considered fixed as soon as they are planned; they need to be validated through use and be prepared to evolve over time as needed.

https://arendjr.nl/blog/2024/06/post-architecture/


Bye for now!

Categories: FLOSS Project Planets

EuroPython: Humble Data workshop for beginners - Pythonistas and data scientists

Planet Python - Fri, 2024-06-14 08:38

Among the many wonderful workshops at EuroPython this year, we are pleased to announce we will be running the Humble Data workshop in person on Tuesday 9th July 2024, at the Prague Congress Centre (PCC). This follows successful deliveries of this workshop at PyCon US, Ghana, Namibia, Africa, Germany, Italy, PyData Global and, of course, EuroPython 2022 and 2023!

What is the workshop?

Curious about the event? Read on.

Humble Data workshops are designed to get those from underrepresented groups started in both Python and data science, in an inclusive, laid-back and empathic environment. The workshops are designed to help people with zero experience with coding to learn some of the most fundamental operations in Python, and in turn, use these to get started with reading, transforming and visualizing data.

Humble Data at EuroPython 2023

The workshop will happen on 9th July 2024, running for 6 hours between 09:30 and 17:00, at the Prague Congress Centre (PCC), Room Club C.

As part of the workshop, participants will work through a series of approachable tutorials with the help of a mentor. For 3 hours (breaks included) we will have teams that will work together with mentors to do plenty of exercises, quizzes and games, to go from Zero to Hero in Python data science. All that participants will need to bring is a laptop with internet access - we will help them get started with the rest!

Let us help you get started on your Python data science journey. You can read more about the workshop here.

How can I get involved?

Like the idea? Join us as a mentor or mentee!

If you’re new to coding or data science, and want to learn more in a supportive environment, apply to join us at the Humble Data workshop by filling in this form (attendees) or this form (mentors). Participation is free for anyone with a EuroPython Conference Ticket or Combined Ticket. Please note that you will need a conference ticket to participate in this workshop - we thank you for your understanding!

If you’re interested in mentoring, we would love to have your help! It is no issue if you’re not the most experienced programmer or data scientist: rather, we are looking for people who are respectful, patient, friendly, curious, and able to explain technical concepts in a way that is approachable for beginners. In return, you will receive the eternal gratitude of the organisers and attendees, the chance to meet people outside of your bubble, and the opportunity to show that you don’t need to fit a certain mold to “look like” a developer or data scientist.

Our wonderful Humble Data mentor at EuroPython 2023

Finally, if you know anyone attending EuroPython this year who you think would like to either attend or mentor Humble Data, please encourage them to apply!

If you’re interested in attending as a mentee, please fill in this form, or as a mentor, please fill in this form, by July 1st, 2024.

We can’t wait to see you all in Prague this July!

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #208: Detecting Outliers in Your Data With Python

Planet Python - Fri, 2024-06-14 08:00

How do you find the most interesting or suspicious points within your data? What libraries and techniques can you use to detect these anomalies with Python? This week on the show, we speak with author Brett Kennedy about his book "Outlier Detection in Python."


Categories: FLOSS Project Planets

mark.ie: Setting up a local development environment with DDEV to contribute to Drupal core

Planet Drupal - Fri, 2024-06-14 07:42

Contributing to Drupal core is a little different to contributing to a contrib module. This blog post was written during my Drupal core contribution time, sponsored by Code Enigma.

Categories: FLOSS Project Planets

Qt Creator 14 Beta released

Planet KDE - Fri, 2024-06-14 06:21

We are happy to announce the release of Qt Creator 14 Beta!

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Python's security model after the xz-utils backdoor

Planet Python - Fri, 2024-06-14 06:05

Pablo Galindo Salgado describing the xz-utils backdoor
(Photo credit: Hugo van Kemenade)
 

The backdoor of the popular compression project xz-utils was discovered on Friday, March 29th 2024, by Andres Freund. Andres is an engineer at Microsoft who noticed performance issues with SSH while contributing to the Postgres project. Andres wasn't looking for security issues, but after digging into the problem further he discovered an attempt to subvert SSH logins across multiple Linux distros.

This was a social engineering attack to gain elevated access to a project, also known as an "insider threat". An account named "Jia Tan" had begun contributing to the xz-utils project soon after the original maintainer had announced on the mailing list that they were struggling with maintenance of the project. Through the use of multiple sock-puppet accounts pressuring the maintainer and over a year of high-quality contributions, eventually Jia Tan was made a release manager for the project.

"Jia Tan may have a bigger role in the project in the future. He has been helping a lot off-list and is practically a co-maintainer already. :-)"

— xz-utils maintainer, Lasse Collin

Over time a series of small subversive changes were made to the project all culminating in a tainted release artifact that put the backdoor in motion. Luckily for all of us, Andres discovered the attack before the new version was deployed more widely.

How is Python similar to xz-utils?

Pablo Galindo Salgado, Steering Council member and the release manager for Python 3.10 and 3.11, brought this topic to the Language Summit to discuss what could be done to improve Python's security model in the wake of the xz-utils backdoor.

Pablo noted the similarities shared between CPython and xz-utils, referencing the previous Language Summit's talk on core developer burnout, the number of modules in the standard library that have one or zero maintainers, the low ratio of maintainers to source code, and the use of autotools for configuration. Autotools was used by Jia Tan as part of the backdoor, specifically to obscure the changes to tainted release artifacts.

Pablo confirmed along with many nods of agreement that indeed, CPython could be vulnerable to a contributor or core developer getting secretly malicious changes merged into the project.

"Could this happen in CPython? Yes!" -- Pablo

For multiple reasons, like the need to fix bugs quickly and the existence of single-maintainer modules, CPython doesn't require reviewers on the pull requests of core developers. This can lead to "unilateral action", meaning that a change is introduced into CPython without the review of someone besides the author. Other situations, like release managers backporting fixes to other branches without review, are common.

There was also an emphasis on "binary files", like wheels, images, certificates, and test data, that are checked into the CPython repository. Today some of this data doesn't have a known "upstream" or source that it was generated from, making introspection difficult. Part of the xz-utils backdoor utilized binary test data in order to smuggle code into the release artifacts without being reviewed by other developers.

So what can be done?

There aren't any silver bullets when it comes to social engineering and insider threats. Barry Warsaw and Carol Willing both emphasized the importance of having an action plan in advance for what to do if something similar to the xz-utils backdoor were to happen, in order to promptly fix the issue and alert the community.

Thomas Wouters asked the group whether the xz-utils backdoor was a serious enough event to force a new workflow to be adopted by core developers. Thomas noted that mandatory review of all pull requests had been discussed previously and wasn't adopted at the time, but also wasn't discussed as a security issue like it is today. There's been a hesitance to break people's workflows or make it impossible to get bugs fixed. To be effective, this change would also require a cultural shift toward making asking for code reviews more common amongst core developers.

Carol Willing concurred, noting that almost every other project she's contributing to requires reviews for all pull requests.

Guido van Rossum was less convinced that having additional review would help much for security. Guido was more concerned about who is given "commit bit" (write access) in the first place, asking for a higher bar such as whether someone had met the person in real life, at a conference, or over a video call.

Mariatta agreed with verifying the identities of core developers, including requiring periodic reconfirmation of individuals' identities, noting that this is commonplace for employment. Mariatta noted that the contributions made by CPython core developers are of equal or greater importance than any individual's employment.

Some doubt was thrown on verifying identities, especially via video call, as it's now not unheard of for someone being interviewed for employment over a video call to be different from the person who shows up on the first day of work.

Hugo van Kemenade remarked on removing inactive core developers, noting that it's already documented in the CPython developer guide that inactive or unreachable core developers can be removed with or without notice. There was agreement within the group that this should be done more actively to reduce the chances that unattended privileged accounts are resurrected by malicious actors.

There was some discussion about removing modules from the standard library, especially modules which are not used or have no maintainers. Toshio Kuratomi cautioned that moving modules out of the standard library only pushes the problem outwards to one or more projects on PyPI. Łukasz Langa concurred on this point referencing specifically the "chunk" module removed via PEP 594 and feeling unsure whether the alternative project on PyPI should be recommended to users given the author not being reachable.

Overall it was clear there is more discussion and work to be done in this rapidly changing area.

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024

Planet Python - Fri, 2024-06-14 05:27

The Python Language Summit occurs every year just before PyCon US begins, this year occurring on May 15th, 2024 in Pittsburgh, Pennsylvania. The summit is attended by core developers, triagers, and Python implementation maintainers for a full day of talks and discussions on the future direction of Python.

This year's summit included talks on the C API, free-threading, the security model of Python post-xz, and Python on mobile platforms.

This year's summit was attended by around 45 people and was covered by Seth Larson.

Attendees of the Python Language Summit 2024
(Photo credit: Kushal Das)

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: PyREPL -- New default REPL written in Python

Planet Python - Fri, 2024-06-14 05:26

Lysandros showing the mistake we've all made, no longer a problem in the new REPL
(Photo credit: Hugo van Kemenade)

One of the headline features of Python 3.13 is the new interactive interpreter, sometimes known as a "REPL" (Read-Evaluate-Print-Loop) which was contributed by Pablo Galindo Salgado, Łukasz Langa, and Lysandros Nikolaou and based on the PyPy project's own interactive interpreter, PyREPL. Pablo, Łukasz, and Lysandros all were at the Language Summit 2024 to present about this new feature coming to Python.

Why does Python need a new interpreter?

Python already has an interactive interpreter, so why do we need a new one? Lysandros explained that the existing interpreter is "deeply tangled" to Python's tokenizer which means adding new features or making changes is extremely difficult.

To lend further color to this point, Lysandros dug into how the tokenizer had changed since Python was first developed. Lysandros noted that "for the first 12 years [of Python], Guido was the only one who touched the tokenizer" and only later after the parser was replaced did anyone else meaningfully contribute to the tokenizer.

Terse example code for Python's tokenizer

Meanwhile, there are other REPLs for Python that "have many new features that [Python's] interpreter doesn't have that users have grown to expect", Lysandros explained. Some of the examples listed included the lack of color support (meaning no syntax highlighting), the ergonomics issues around exit versus exit(), no support for multi-line editing and buffer history, and poor ergonomics around pasting code into the interpreter.

Why PyREPL?

"We've settled on starting our solution around PyREPL", Pablo explained, "our reasoning being that maintaining terminal applications is hard. Starting from scratch would have a much higher risk for users". Pablo also noted that "most people who would interact with the REPL wouldn't test in betas", because Python pre-releases are generally used for running automated tests in continuous integration and not interactively tested manually.

Pablo explained that there are many different terminals and platforms which are all sources of behaviors and bugs that are hard to get right the first time. "[PyREPL] provided us with a solid base that we know works and we can start modifying".

Tasteful modern art or bug in the REPL?

Another major contributing factor was that PyREPL is written in Python. Pablo emphasized that "now people that want to start contributing to the REPL can actually contribute because it's written in Python".

Finally, Pablo pointed out that because the implementation is now partially shared between CPython and PyPy that both implementations can benefit from bug fixes to the shared parts of the codebase. Support for Chinese characters in the REPL was fixed in CPython and is being contributed back to PyPy.

Łukasz noted that adopting PyREPL wasn't a straightforward copy-paste job; there were multiple ideas in PyPy's PyREPL that don't make sense for CPython. Notably, PyPy is written to also support Python 2, so the code was simplified to only handle Python 3 code. PyREPL for PyPy also came with support for PyGame which wasn't necessary for CPython.

Type hints and strict type checking using mypy were also added to PyREPL, making the PyREPL module the first in the Python standard library to be type-checked on pull requests. Adding type hints to the code immediately found bugs which were fixed and reported back to PyPy.

What are the new features in 3.13?

Pablo gave a demonstration of the new features of PyREPL, including:

  • Colored prompts
  • F1 for help, F3 for bracketed paste
  • Multi-line editing and history
  • Better support for pasting blocks of code

Below are some recreated highlights from the demo. Pasting code samples that contain multiple newlines into the old REPL would often result in SyntaxErrors, because multiple newlines in a row cause the statement so far to be evaluated. Multi-line editing also helps with modifying code all in one place, rather than having to piece a snippet together line-by-line, modifying what you want as you go:

Demo of multi-line paste in Python 3.13 

And the "exit versus exit()" paper-cut has been bothering Python users for long enough. This error was especially taunting because the REPL clearly knows what your intent is with it's helpful message to "Use exit() to exit":

"exit" without parenthesis just works, finally!
Windows and terminals

Support is already available for Unix consoles (Linux and macOS) in Python 3.13.0-beta1, and the standout feature request so far for PyREPL has been Windows support. Windows was left out because "historically the console on Windows was way different than Unix consoles". Łukasz continued, saying that the old console is something "they don't intend to support right now", offering a "yes, but..." for users asking for Windows support.

Windows has two consoles today, cmd.exe of yore and the new "Windows Terminal" which supports many of the same features as Unix consoles including VT100 escape codes. The team's plan is to support the new Windows Terminal, and "to use our sprints here in Pittsburgh to finish". Windows support will also require removing CPython dependencies on the curses and readline libraries.

What's next for PyREPL?

The team already has plans cooking up for what to add to the REPL in Python 3.14. Łukasz commented that "syntax highlighting is an obvious idea to tackle". Łukasz also referenced an idea from Tania Allard for accessibility improvements similar to those in IPython.

Łukasz reiterated that the goal isn't to make an "uber REPL" or "replace IPython", but instead to make a REPL that core developers can use while testing development branches (where dependencies aren't working yet).

Łukasz continued that core developers aren't the only ones that these improvements benefit: "many teachers are using straight-up Python, IDLE, or the terminal because the computers they're using don't allow them to install anything else."

Given the applause from the room during the demos, it's safe to say that this work has been received well. There were only concerns about platform support and rollout for the new REPL.

Gregory Smith informed the team that functionality that requires a "Function" key (i.e. F1, F2, etc.) must also be supported without Function keys, due to some computers lacking them, like Chromebooks.

Carol Willing was concerned about releasing PyREPL without support for Windows Terminal, especially from a teaching perspective, describing that potential outcome as "painful". Carol wanted clear documentation on how to get the new REPL on Windows. "Positioning [the new REPL] for teaching without clear Windows instructions is a recipe for disaster".

Pablo assured that the team wants to add support for Windows Terminal in time for the first 3.13 release candidate. Pablo could not make guarantees due to a lack of Windows expertise among the three, saying "the reason I'm not saying 100% is because none of us are Windows experts. We understand what needs to be done... but we need some help."

Łukasz named Steve Dower, the Windows release expert for Python, who is "very motivated to help us get Windows Terminal support during sprints". Łukasz reiterated they're "not 100%, but we are very motivated to get it done".

Gregory Smith shared Carol's concern and framed the problem as one of communication strategy, proposing to "not promise too much until it works completely on Windows". By Python 3.14 the flashy features like syntax highlighting would have landed and the team would have a better understanding of what's needed for Windows. The team can revise the 3.13 "What's New in Python" depending on what gets implemented in the 3.13 timeline.

Ned Deily sought to clarify what the default experience would be for users of 3.13. Pablo said that "on Windows right now you will get the [same REPL] that you got before" and "on Linux and macOS, if your terminal supports the features which most of them do, you get the enhanced experience". "What we want in the sprints is to make Windows support the new one, if we get feature parity, then [Windows] will also get the new [REPL]".

Carol also asked to document how to opt-out of the new REPL in the case that support wasn't added in time for 3.13 to avoid differences between educational material and what students were seeing in their terminal. Kushal Das confirmed that differences across platforms is a source of problems for students, saying that "if all [students] have the same experience it's much better than just improving only macOS and Linux" to avoid students feeling bad just due to their operating system.

Pablo said that the opt-out mechanism was already in place with an environment variable and will discuss other opt-out mechanisms if needed for educators.
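For reference, here is a minimal sketch of launching the interpreter with the opt-out set; the variable name PYTHON_BASIC_REPL is the spelling used in the 3.13 betas, and the interpreter path is just an example:

import os
import subprocess

# PYTHON_BASIC_REPL falls back to the traditional line-based REPL
# instead of the new PyREPL-based one.
env = dict(os.environ, PYTHON_BASIC_REPL="1")
subprocess.run(["python3.13"], env=env)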

Emily Morehouse, speaking as a Steering Council member added that the Steering Council has requested an informational PEP on the new REPL. "Hearing concerns about how [the new REPL] might be rolled out... it sounds like we might need something that's more compatible and an easier rollout", leaving the final discussions to the 3.13 release manager, Thomas Wouters. Carol replied that she believes "we could do it in documentation".

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Lightning Talks

Planet Python - Fri, 2024-06-14 05:13

The Python Language Summit 2024 closed off with six lightning talks which were all submitted during the Language Summit. The talks were delivered by Petr Viktorin, David Hewitt, Emily Morehouse, Łukasz Langa, Pablo Galindo Salgado, and Yury Selivanov.

Petr Viktorin: Unsupported build warning

Do you know what happens when you build Python on an unsupported platform?

"... It works!" -- Thomas Wouters

Petr gave a short presentation on a warning that many folks using Python (and even developing Python!) may have never seen before: the unsupported build warning. This warning appears when building on a platform that's not officially supported by CPython, for example "riscv64-unknown-linux-gnu".

"The platform is not supported, use at your own risk"
(Photo credit: Hugo van Kemenade)
 

Just because a platform isn't officially supported by CPython doesn't mean it won't work on that platform, and indeed it's likely that CPython may work fine on the platform or a subset of features may be subtly or not-so-subtly broken or unavailable.

Petr wanted to get a temperature check from the group on whether this warning could be further improved or changed, such as by hiding the warning after the user had executed the test suite or showing the number of tests that had failed.

The room seemed mostly uninterested in exploring this topic further and was in favor of keeping the warning as-is.

David Hewitt: Rust in Python: panic!

David Hewitt maintains the project PyO3 which offers Rust bindings for the Python C API. David explained that these bindings require mapping concepts in the Rust programming language to Python and the topic of today's talk is the panic! macro.

In Rust, the panic! macro will generate a panic, unwind the stack, and then terminate the program while providing feedback to the caller of the program. David showed that there were two methods of handling errors in Rust programs, panic! and Result.

Python functions implemented in Rust use the PyResult type to contain the return value or raised exception which uses the Rust Result type. But what if a Rust function panics, what should PyO3 do?

Today PyO3 raises a separate exception for panics, pyo3_runtime.PanicException to be exact. This exception inherits from BaseException, typically reserved for exceptions that users won't want to catch like KeyboardInterrupt and SystemExit.

David has been receiving feedback from some users that the PanicException inheriting from BaseException is annoying to work with. This is because everywhere that exceptions are caught now needs to also catch PyO3's PanicException, giving the example case of logging exceptions.
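To make the annoyance concrete, here's a minimal sketch of the logging case; because PanicException derives from BaseException, a generic handler never sees it:

import logging

def call_logged(fn, *args):
    try:
        return fn(*args)
    except Exception:
        logging.exception("call failed")  # ordinary Python exceptions land here
    # pyo3_runtime.PanicException inherits from BaseException, so it is NOT
    # caught above: a Rust panic escapes this wrapper unless the wrapper
    # names the panic exception in its own except clause.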

David wanted feedback on whether the original choice to inherit from BaseException was appropriate or if there was a better answer.

Pablo Galindo Salgado asked whether an AssertionError or RuntimeError would be more appropriate. David replied that he felt that not inheriting from BaseException would "cheapen" the Rust aspect of a panic.

Guido van Rossum offered that he thinks "BaseException is the correct choice", to which there was much agreement from the room.

Emily Morehouse: Formalizing the PEP prototype process

Python Steering Council Member and Language Summit chair, Emily Morehouse, spoke to the group about the PEP prototype process and how formalizing can better support PEP authors.

(Photo credit: Hugo van Kemenade)

Emily started off the talk stating "We all agree that we should be doing more testing and prototyping outside of CPython". She also referenced prior talks like pdb improvements and subinterpreters where this approach was recommended.

Emily noted that the Steering Council has pronounced this as a requirement for PEP authors. She acknowledged that this "can feel a bit bad as a PEP author to be put out into the dark world of figuring out how to gather feedback from the community" and how to manage and distribute a project.

Emily's proposal for improving the PEP process borrows from the TC39 process, which is the process for making changes and improvements to JavaScript. The proposal would make the prototype process an "official optional step of the PEP process", which would then allow creating a separate GitHub repository within the "python" GitHub organization.

This would allow the project to house its own code, issue tracker, and packages would be distributed by the Python organization instead of on someone's personal account. Emily also suggested providing a template for the repository to handle distributions to PyPI.

Emily's theory is that PEPs would see more adoption and get more feedback if they came from an official channel. This approach provides additional support to PEP authors and lets an author start gathering community feedback sooner, without waiting for PEP pronouncement.

The room was in agreement for moving forward with the process improvement.

Carol Willing thought the improvement would be great but called back to pattern matching for CPython where the work was done in a feature branch of the CPython repository rather than a separate repository. Carol thought using a feature branch worked well for pattern matching and wanted to know how this process might work for future language changes.

Emily replied that the process would be case-by-case depending on the feature, whether it's a branch, fork or something else. Thomas Wouters agreed, saying that this proposal appears to be specifically for projects which could be distributed on PyPI, rather than for language features.

Łukasz Langa: Python for iOS, finally

Harking back to the previous talk on mobile support for Python, Łukasz wanted to know if the Python team should have a more official presence on phone application stores like the Apple App Store (and maybe the Google Play Store, but Łukasz declined to speak on it since he is an iOS user).

Łukasz noted that there already exist multiple applications on his phone today that "are Python". These applications are useful for trying out Python code, learning Python, and writing small programs.

However, Łukasz noted that not having an official Python application on mobile means the user experience today is sub-optimal. Some applications ship old versions of Python and their authors are unreachable when asked to upgrade to newer versions so users can take advantage of new features. Others have suddenly changed from being free to being paid applications. Should the Python development team do something about this?

The response from the room appeared positive, but acknowledged the amount of effort that creating and maintaining such an application would be.

Russell Keith-Magee, the author of BeeWare which is leading the charge to bring Python to mobile platforms, said "Sure, but I'm not building it". After much laughter from the room, Russell noted that the project is "an entirely achievable goal but not a small one".

Ned Deily, macOS release expert, agreed and offered that "implementing a terminal would get us most of the way there".

Pablo Galindo Salgado: Making asserts cooler in 3.14

You'll have to imagine the iconic Pablo "✨ woooooow ✨"
(Photo credit: Hugo van Kemenade)

Pablo took the term "lightning talk" to heart and gave a 90-second presentation (demo included!) on his plans to improve asserts in Python 3.14. The problem statement was summarized as "asserts are kinda sad", after which Pablo showed that when an assert statement fails, there isn't much indication in the error about why the condition failed.

Consider how an assertion error might look today:

Traceback (most recent call last):
  File "main.py", line 7, in <module>
    bar(x, y)
    ~~~^^^^^^
  File "main.py", line 3, in bar
    assert (x + 1) + z == y
           ^^^^^^^^^^^^^^^^^
AssertionError

Pretty opaque! In the above example you'll notice that we can't see the values of x, y, or z, which makes working out what went wrong difficult. With Pablo's proposed changes, the traceback would instead look like so:

Traceback (most recent call last):
  File "main.py", line 7, in <module>
    bar(x, y)
    ~~~^^^^^^
  File "main.py", line 3, in bar
    assert (x + 1) + z == y
           ^^^^^^^^^^^^^^^^^
AssertionError: assert ((1 + 1) + 11) == 2

With this change the values are visible for the asserted statement. Containers can similarly be inspected to show where their contents differ, a la pytest:

Traceback (most recent call last):
  File "main.py", line 2, in bar
    assert x == y
AssertionError: assert Lists differ: [1, 2, 3, [1, 2]] != [1, 2, 3, [1, 3]]

First differing element 3:
[1, 2]
[1, 3]

- [1, 2, 3, [1, 2]]
?               ^

+ [1, 2, 3, [1, 3]]

Pablo intends to put together a PEP for this feature, including asking questions like whether the process should be hookable and whether user code should be able to provide custom formatters. Stay tuned for that!

Yury Selivanov: Efficient data sharing between subinterpreters

The final talk of the Language Summit was from Yury Selivanov on Memhive, a new "highly experimental" project which adds support for structured data sharing between Python subinterpreters.

Per-interpreter GIL is a newer feature of Python that allows running multiple "interpreters" in a single instance of Python, each with their own Global Interpreter Lock (GIL). Per-interpreter GIL allows for true multi-core parallelism; previously, using threads in a Python process would only allow a single Python instruction to execute at a time, due to the GIL being shared across all threads.
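As a minimal sketch of the limitation that per-interpreter GIL lifts: CPU-bound work spread across threads gains roughly nothing, because a shared GIL serializes bytecode execution:

import time
from concurrent.futures import ThreadPoolExecutor

def spin(n):
    # Pure-Python CPU work; the running thread holds the GIL the whole time.
    while n:
        n -= 1

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(spin, [10_000_000] * 4))  # ~no speedup over running serially
print(f"{time.perf_counter() - start:.2f}s")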

Being in the same process means that each subinterpreter shares the same memory space, and if instructions are executing truly concurrently we run into problems with that shared memory space. PEP 684, which specifies per-interpreter GIL, calls out this issue and for now keeps memory allocators using global locking mechanisms.

Yury started the talk by discussing immutable data structures and their properties, the most interesting being how quickly they can be copied in memory. Deep copies are fast for immutable data structures because they are implemented as a single copy-by-reference.

An immutable mapping collection, specifically a hash-array mapped trie (HAMT), has already been implemented in Python for the contextvars module. Context variables need to be copied for every new asynchronous task, so being efficient is important to not impact performance of all async Python workloads.
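As a quick illustration using the public contextvars API, copying a context is cheap precisely because of the HAMT: the copy shares structure with the original rather than duplicating every entry:

import contextvars

request_id = contextvars.ContextVar("request_id", default=None)
request_id.set("abc123")

# copy_context() shares the underlying HAMT instead of deep-copying it,
# which is what keeps per-task context copies cheap in async code.
snapshot = contextvars.copy_context()
print(snapshot[request_id])  # abc123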

Yury explaining how to replant a trie 🌲
(Photo credit: Hugo van Kemenade)

HAMTs work by transparently updating the trie structure in the background of the mapping, allowing for structural sharing while minimizing the overhead of creating new copies.

The invariant for this to work across subinterpreters is that the immutable collection in the main interpreter must not be garbage collected. Maintaining this invariant will require reliable reference counting across subinterpreters ("remote IncRef"). The proposed implementation would have every subinterpreter maintain multiple queues for tracking local and remote reference counts.

Yury explained that similar to how HAMTs provide an immutable mapping collection there is another data structure for immutable list collections which is "just another 5,000 lines of C" (which received some chuckles) and "luckily we won't be the first ones to implement this collection".

Comparing the performance of pickling mappings against using mappings backed by immutable data structures showed that the immutable data structures were much more performant. The performance was better for immutable mappings for both small and large numbers of keys, between 6x and a "ridiculous" 150,000x faster.

"I believe these are the missing components for subinterpreters", Yury noted with many thanks to Eric Snow who has been working on subinterpreters and per-interpreter GIL for years. Yury concluded that this work is being done with a practical use-case in mind so will be completed and usable for others including CPython.

For folks looking for more on this topic, Yury also gave a talk at PyCon US 2024 about his work on Memhive.

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Annotations as Transformers

Planet Python - Fri, 2024-06-14 05:13

The final talk of the main schedule of the Python Language Summit was delivered by Jason R. Coombs on using annotations for transforms. The presentation was accompanied by a GitHub repository and Jupyter notebook illustrating the problem and proposed solution.

Jason is interested in a method for users to "transform their parameters in a reusable way". The motivation was to avoid imperative methods of transforming parameters to "increase reusability, composition, and separation of concerns". Jason imagined transformers which could be "packaged up in a library or used across multiple functions" and would "be applied at the scope of individual parameters".

Python already has a language feature that's similar to this concept with decorators, which allow wrapping a function or class with another function in a syntactically concise way.

Jason noted that "return values can be handled by decorators fairly easily, so [the proposal] is more concerned with input parameters". For a decorator to affect parameters, the decorator "would have to inspect the parameters" and "entangle itself with the function signature".

Diagram from Jason's presentation showing transforms being applied to individual parameters of a function.

Jason's proposal would use type annotations, since annotations already specify the desired type; the proposal adds the behavior "this is the type I want to make this" by performing transforms. Below is some example code of the proposal:

def transformer(val: float | None) -> float:
    return val if val is not None else 0

def make_str(val: float) -> str:
    return str(val)

def my_fn(
    p1: transformer,
    p2: transformer
) -> make_str:

    return (p1 ** 2) + p2

Jason went on to show that Pydantic was offering something similar to his proposal by having functions called on parameters and return values using the pydantic.BeforeValidator class in conjunction with typing.Annotated, though this use-case "wasn't being advertised by Pydantic":

from typing import Annotated
import pydantic

def transformer(val: float | None) -> float:
    return val if val is not None else 0

@pydantic.validate_call(validate_return=True)
def my_fn(
    p1: Annotated[float, pydantic.BeforeValidator(transformer)],
    p2: Annotated[float, pydantic.BeforeValidator(transformer)]
) -> Annotated[str, pydantic.BeforeValidator(str)]:

    return (p1 ** 2) + p2

Jason didn't like this approach though due to the verbosity, requiring to use a decorator and provide annotations, and needing an extra dependency.

Eric V. Smith asked if Jason had seen PEP 712, which Eric is the sponsor of, that describes a "converter" mechanism for dataclass fields. This mechanism was similar in that "the type you annotated something with became different to the type you passed". Eric remarked it was "pretty common thing that people want to pass in different types when they're constructing something than the internal types of the class".

Jason replied that he had seen the PEP but "hadn't incorporated it into a larger strategy yet". Steering council member Barry Warsaw noted that he "didn't know what the solution is, but it is interesting... that the problems are adjacent".

There was skepticism from the room, including from typing council member Guido van Rossum, on using type annotations as the mechanism for transformers. Type annotations today don't affect the runtime behavior of the code and this proposal would be a departure from that, Guido noting "process-wise, that's going to be a difficult hurdle".

If type annotations weren't the way forwards, Jason had also considered proposing new syntax or a new language feature and wanted feedback on whether "there's viability" in that approach and if so, "[he] could explore those options".

There were questions about why decorators weren't sufficient, citing the PEP 318 motivation section, which contains examples similar to the ones Jason had presented. Transformers could be assigned to parameters by name, passing the transformers as key-value parameters into the decorator like so:

def transformer(val: float | None) -> float:
    return val if val is not None else 0

@apply(p1=transformer, p2=transformer)
def my_fn(
    p1: float,
    p2: float
) -> float:

    return (p1 ** 2) + p2

Jason found this pattern "discouraging" and "less elegant" because the variable name needs to be mentioned in multiple places, and said he was "hoping for something that was more integrated into the language, to not feel like a second-class feature".

Łukasz Langa commented that one of the example cases, removing the "None" type from a union, could already be done with a type guard, and drew attention to work being done to allow more complicated type guards. Łukasz was "sympathetic to conciseness, but type checkers already handle this".

Steering Council member Gregory Smith was hesitant to make any change in this area. He agreed that "as a language, we're missing something", but "wasn't sure if we've got a way forward that doesn't make the language more complicated".

Categories: FLOSS Project Planets

Python Software Foundation: The Python Language Summit 2024: Limiting Yield in Async Generators

Planet Python - Fri, 2024-06-14 05:13

Zac Hatfield-Dodds came to the Language Summit to present on a fundamental incompatibility between the popular async programming paradigm "structured concurrency" and asynchronous generators, specifically when it comes to exception handling when the two are mixed together.

Structured Concurrency

Structured concurrency is becoming more popular for Python async programming like with Trio "nurseries" and in the Python standard library with the addition of asyncio.TaskGroup in Python 3.11.

When using structured concurrency, active tasks can be thought of as a tree-like structure where sub-tasks of a parent task have to exit before the parent task itself can proceed past a pre-defined scope. This exit can come through all the tasks completing successfully or from an exception being raised either internally or externally (for example, in the case of a timeout on time-bounded work).

The mechanism which allows a parent task and its sub-tasks to cooperate in this way is called a "cancel scope" which Trio makes a top-level concept but is implicitly used in asyncio.TaskGroup and asyncio.timeout.

Async programs that are structured with this paradigm can rely on exceptions behaving in a much more recognizable way. There's no more danger of a spawned sub-task silently swallowing an exception because all sub-tasks are guaranteed to be checked for their status before the parent task can exit.
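As a minimal illustration with the standard library (task names and sleep durations are just for the example), the parent cannot leave the TaskGroup block until every child has exited, and a child's exception propagates instead of being silently lost:

import asyncio

async def child(name, delay):
    await asyncio.sleep(delay)
    print(name, "finished")

async def main():
    # The TaskGroup acts as a cancel scope around both children.
    async with asyncio.TaskGroup() as tg:
        tg.create_task(child("a", 0.1))
        tg.create_task(child("b", 0.2))
    print("parent resumes only after both children exit")

asyncio.run(main())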

The problem with yields

The fundamental issue is that yields suspend the current call frame, in effect "returning" a value, and then the generator needs to be "called" again for execution to be resumed. This suspension doesn't play well with structured concurrency because execution can't be suspended in the same call frame as a cancel scope, otherwise that scope can't process exceptions from its child tasks.


Zac leading a "fun game of 'why is this code broken?'"
(Photo credit: Hugo van Kemenade)

Zac presented some innocuous looking code samples that suffered from the described issue:

async def iter_with_timeout(ait, max_time):
    try:
        while True:
            async with asyncio.timeout(max_time):
                yield await anext(ait)
    except StopAsyncIteration:
        return

async def fn():
    async for elem in iter_with_timeout(ait, max_time=1.0):
        await do_something_with(elem)

In this example, asyncio.timeout() could expire while the yield had suspended the generator, before the generator was resumed. This scenario would result in the cancellation exception being raised in the outer task, outside of the asyncio.timeout() cancel scope. If things had gone to plan and the generator wasn't suspended, the cancellation would instead be caught by asyncio.timeout() and execution would proceed.

Zac presented the following fix to the iter_with_timeout() function:

async def iter_with_timeout(ait, max_time):
    try:
        while True:
            async with asyncio.timeout(max_time):
                tmp = await anext(ait)
            yield tmp  # Move yield outside the cancel scope!
    except StopAsyncIteration:
        return

By moving the yield outside the cancellation scope, the frame is never suspended while execution is inside the cancellation scope, so the propagation of cancellation errors can't be subverted by a suspended call frame in this program.

If you're still having trouble understanding the problem: you are not alone. There was a refrain of "still with me?" coming from Zac throughout this talk. I recommend looking at the problem statement and motivating examples in the PEP for more information.

Where to go from here

Zac and Nathaniel Smith have coauthored PEP 789 with their proposed solution of disallowing yield statements within context managers that behave like cancel scopes. Attempting to yield within these scopes would instead raise a RuntimeError.

The mechanism would be a new function, "sys.prevents_yields()", which authors of async frameworks would use to annotate context managers that can't be suspended safely. Users of async frameworks wouldn't need to change their code unless it contained the unwanted behavior.

The language would need to support this feature by adding metadata to call frames to track whether the current frame should allow yields to occur.
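
To make that concrete: under the proposed semantics, the broken generator from earlier would fail loudly at the yield rather than silently misdirecting a cancellation. A sketch of the behavior as described, not the PEP's final wording:

async def iter_with_timeout(ait, max_time):
    while True:
        with asyncio.timeout(max_time):  # a cancel-scope-like context manager
            # Under the proposal this yield would raise RuntimeError, because
            # it would suspend the frame while inside the cancel scope.
            yield await anext(ait)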

Mark Shannon was concerned that the solution was "lots of machinery to handle the exception being raised in the wrong place" and sought clarification that there would be overhead added to every call and return. Zac confirmed this would be the case, but that it could be done with "one integer [member on call frames] that you increment and decrement, but it would do some operation on every frame call and return".

Irit Katriel asked why a "runtime error" was being used "instead of something static". Zac explained that users might define their own context managers which have a "cancel scope property" and the runtime "wouldn't know statically whether a given context manager should raise an error or not".

Łukasz Langa asked whether adding a type annotation to context managers would be sufficient to avoid adding runtime overhead. Zac responded that "there are still many users that don't use static type checking", and that "there's no intention to make it required by default". Łukasz was concerned that the proposal "would be contentious for runtime performance" due to the impact being "non-trivial".

Pablo Galindo Salgado wanted to explore other big ideas to avoid the performance penalty, like adding new syntax or a language feature such as "with noyield" to provide a static method of avoiding the issue. Zac agreed that changing the context manager protocol could also be a solution.

Guido van Rossum lamented that this was "yet another demonstration that async generators were a bridge too far. Could we have a simpler PEP that proposes to deprecate and eventually remove from the language asynchronous generators, just because they're a pain and tend to spawn more complexity".

Zac had no objections to a PEP deprecating async generators¹. Zac continued, "while static analysis is helpful in some cases, there are inevitably cases that it misses which kept biting us... until we banned all async generators in our codebase".

¹ Editor's note: after the summit, an update to PEP 789 would describe how the problem doesn't exist solely in async generators, and thus removal of the feature wouldn't solve the problem either.


Python Software Foundation: The Python Language Summit 2024: Should we make pdb better?

Planet Python - Fri, 2024-06-14 05:13

Tian Gao came to the Language Summit 2024 to talk about improving pdb, short for "Python debugger", a module and command line tool for debugging Python.

Tian Gao presenting on how to improve pdb

There are not many command-line debugger alternatives to pdb for Python. Tian mentioned a few, including PuDB, pdb++, and ipdb, but those alternatives are all themselves based on either pdb or another standard library module 'bdb'.

pdb is the only "standalone" command-line-based Python debugger

Tian presented a laundry list of desirable new features that could be added to pdb, including:

  • Showing more lines of code around the current breakpoint.
  • Colors in the terminal, syntax highlighting.
  • Customization, with defaults being safe.
  • Handling of more scenarios (threads, asyncio, bytecode, remote debugging)

Performance and backwards compatibility

The biggest issue according to Tian, which he noted had been discussed in the past, was the performance of pdb. "pdb is slow because sys.settrace is slow, which is something we cannot change", and the only way forward on making pdb faster is to switch to sys.monitoring to avoid triggering unnecessary events.

Switching to sys.monitoring would give a big boost to performance. According to Tian, "setting a breakpoint in your code in the worst case you get a 100x slowdown compared to almost zero overhead with sys.monitoring". Unfortunately, switching isn't so easy: Tian noted there are serious backwards compatibility concerns for the standard library module bdb if pdb were to start using sys.monitoring.
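
For a sense of why sys.monitoring (added by PEP 669 in Python 3.12) is so much cheaper: a tool only pays for the events it registers for, and can scope instrumentation to individual code objects instead of tracing every frame. A minimal sketch of the API, not pdb's actual implementation:

import sys

def demo() -> int:
    x = 1
    x += 1
    return x

mon = sys.monitoring
mon.use_tool_id(mon.DEBUGGER_ID, "sketch-debugger")

def on_line(code, line_number):
    print(f"{code.co_name}:{line_number}")

mon.register_callback(mon.DEBUGGER_ID, mon.events.LINE, on_line)
# Instrument only demo(); everything else keeps running at full speed.
mon.set_local_events(mon.DEBUGGER_ID, demo.__code__, mon.events.LINE)
demo()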

"If we're not ready to [switch to sys.monitoring] yet, would we ever do this in the future?", Tian asked the group, noting that an alternative is to create a third-party library and encourage folks to use that library instead.

Thomas Wouters started off saying that "bdb is a standard library module and it cannot break user code" and cautioned that core developers don't know who is depending on modules. bdb's interface can't have backwards incompatible changes without long deprecation periods. In Thomas' mind, "the answer is obvious, leave pdb as it is and build something else".

Thomas also noted "in the long-term, a debugger in the standard library is important" but that development doesn't need to happen in the standard library. Thomas listed the benefits for developing a new debugger outside the standard library like being able to publish outside the Python release schedule and to use the debugger with older Python versions. Once a debugger reaches a certain level of stability it can be added to the standard library and potentially replace pdb.

Tian agreed with Thomas' proposal in theory, but was concerned that a third-party debugger on PyPI wouldn't see the same level of adoption as a standard library debugger, and thus would struggle to reach a threshold of "stability" without a critical mass of users. Or worse yet, maintainers wouldn't be motivated to continue due to a lack of use, resulting in a "dead project". (Some foreshadowing: Steering Council member Emily Morehouse gave a lightning talk on this topic later in the Language Summit.)

Łukasz Langa noted that Python now has support for "breakpoint()" and that "what breakpoint() actually does, we can change. We can run another debugger if we decide to", referencing if a better debugger was added in the future to CPython that it could be made into a new default for breakpoints.
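
For context, breakpoint() has been pluggable since PEP 553 (Python 3.7): it delegates to sys.breakpointhook, which defaults to pdb.set_trace() and can be redirected without touching application code. A small sketch:

import sys
import pdb

def my_hook(*args, **kwargs):
    # Any callable works here; pdb keeps this sketch self-contained.
    return pdb.set_trace()

sys.breakpointhook = my_hook

# Equivalent without code changes, via an environment variable:
#   PYTHONBREAKPOINT=pdb.set_trace python app.py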

Russell Keith-Magee from BeeWare, was interested in what Tian had said about remote debugging, noting that "remote debugging is the only way you can debug [on mobile platforms]". Russell would be interested in pdb or a new debugger supporting this use-case. Tian noted that unfortunately remote debugging would be one of the more difficult features to implement.

Pablo Galindo Salgado, commenting on existing Python "attach-to-process" debuggers, said that the hacks in use today are "extremely unsafe". Pablo said that "we'd need something inside CPython [to be safe], but then you have another problem, you have to implement that feature on [all platforms]". Pablo also mentioned that attach-to-process debugging is usually a bad model because it can't be enabled by default for security reasons but "you won't know when you'll need to debug".

Anthony Shaw asked about the scope of the project and was interested in whether there could be a framework for debugging in CPython that pdb and others could build on. Anthony pointed out that many other debuggers "needed to do a bunch of hooks and tricks" to do debugging because it's "not provided out of the box by CPython".

Tian responded that "bdb is supposed to do that, but it was written 30 years ago so is too old to support new things that a debugger wants". Others mentioned that sys.monitoring (new in Python 3.12) was meant to be a framework for debuggers to build on.

Gregory Smith, Steering Council member, said he "wants all of these things" and agreed with Thomas to "develop this as much as you can... outside of the standard library", telling Tian that "you're going to end up in a better state that way". Greg's primary concern was whether CPython needed to do anything to enable Tian's proposal. He continued, "it sounds like we (CPython) have most of what we need, but if we don't let's get that planned so we can enable a successful separate project before we ship it with Python in the future".


Python Software Foundation: The Python Language Summit 2024: Python on Mobile

Planet Python - Fri, 2024-06-14 05:13

Malcolm Smith from BeeWare presented on the status and direction of Python on mobile platforms like iOS and Android. BeeWare has been working on bringing Python to mobile for a few years now. Previously Russell Keith-Magee gave a talk at the Language Summit in 2023 on BeeWare to announce plans for Tier 3 support for Python on Android and iOS in Python 3.13 along with Anaconda's funded support for the project.

Now we've arrived at Python 3.13 pre-releases, and things are going well! Malcolm reported that "the implementations are nearly complete" along with thank-yous to the core developers who helped with the project.

Overview of current Python mobile platform support

The other platforms listed in the table, iOS x86_64 and Android ARM32/x86, have no plans to be implemented. There aren't any actual physical devices for iOS on x86_64, as the architecture is only used for development simulators.

For Android, the ARM32 and x86 platforms are being phased out due to being 32-bit architectures, and today they represent less than 10% of devices. For these reasons, Malcolm and team have decided not to implement support for these architectures.

Malcolm also reported that there is a buildbot for iOS and in the coming weeks there will be buildbots added for Android ARM64 and x86_64 platforms.

Let's talk packages!

Python is well-known for its rich package ecosystem, and the BeeWare team is working on bringing Python packages to mobile Python, too. "It's not enough just to have support for CPython", Malcolm said on this topic, "we also need to support the packaging ecosystem". As with many new platforms for Python, pure Python packages work without much issue and "the difficulty comes in with anything which contains native compiled components".

The current and future approach for mobile-friendly Python packages

The BeeWare team's approach so far has been to bootstrap packages with native components on their own by creating tools and "building wheels for popular packages like numpy, cryptography, and Pillow". Malcolm reported that the current approach of rebuilding individual packages isn't scalable and the team would need to help upstream maintainers build their own mobile wheels. Malcolm said the team plans to focus this year on "making it as easy as possible to produce and release [mobile] wheels within existing workflows" and contributing to tools like cibuildwheel, setuptools, and PyO3.

Malcolm also hopes that "by the end of this year some of the major packages will be in position to start releasing mobile wheels to the Python Package Index". The team has already specified a format for the wheel tags for iOS (PEP 730) and Android (PEP 738). "The binary compatibility situation is pretty good", Malcolm noted that iOS and Android both come from a single source in Apple and Google respectively meaning "there's a fairly well-defined set of libraries available on each version".
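
For a flavor of what those tags encode, here are illustrative examples of the platform tag formats (my reading of the PEPs; see PEP 730 and PEP 738 for the authoritative grammar):

ios_13_0_arm64_iphoneos            # iOS device wheel: minimum iOS version, architecture, SDK
ios_13_0_x86_64_iphonesimulator    # iOS simulator wheel
android_21_arm64_v8a               # Android wheel: minimum API level and ABI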

Python today provides an embeddable package for the Windows platform. Malcolm requested from the group that more official Python embeddable packages be created for each of the mobile platforms with headers and libraries to ease building Python packages for those platforms. Having these artifacts available would provide a reference for binary compatibility on those platforms.

Ned Deily, the macOS release expert for CPython, agreed that having more binary releases for macOS and iOS is something we "should definitely do in the 3.14 timeframe".

Challenges with keeping mobile buildbots green

Malcolm provided the core developer team some tips on writing Python code with these new and constrained platforms in mind. He warned that there is little to no support for spawning subprocesses, but "multi-threading on the other hand is perfectly fine on both of these platforms".

Mobile platforms also tend to be constrained in terms of security. iOS only allows loading libraries from specific folders and Android has restrictions like not being able to read the root directory or create hard links.

Given these differences, "it's reasonable to expect that mobile platforms will have more frequent failures as development proceeds, so how do we go about testing them?" The full CPython test suite is running on both mobile platforms with buildbots, but today there's no testing done before a pull request is merged. This situation leads to mobile buildbots starting to fail without the contributing developer necessarily noticing.

This problem is exacerbated by limited continuous integration (CI) resources in GitHub Actions, especially for macOS which limits virtualization on ARM64 processors. Malcolm suggested evaluating GitHub's Merge Queue feature as a potential way to solve this issue by requiring a small amount of testing on mobile platforms without blocking development of features.

Malcolm's proposal for better visibility of test failures for mobile

Łukasz Langa agreed that CI was an issue, one that he's actively looking into improving, but wasn't convinced that using a merge queue would decrease the number of jobs required to run. Malcolm clarified that he is proposing only running a smaller subset of jobs per-commit in pull requests and the complete set, including some buildbots, as a part of pre-merge testing.

Many folks expressed concern about adding buildbots as a part of pre-merge or per-commit checks: buildbots have no high-availability SLA and suffer occasional outages, some buildbots are unreliable and would therefore block the merging of commits, and there were concerns about the security of running unreviewed changes on buildbots.

Thomas Wouters, Python 3.13 release manager, was "unconvinced" on adding pre-merge testing for Tier 3 platforms, something that is usually reserved for Tier 1 platforms.

Ned Deily recommended doing iOS builds as a part of existing macOS builds in GitHub Actions. This would catch build errors for the platform and would likely find some issues early without much additional investment.


Python Software Foundation: The Python Language Summit 2024: Free-threading ecosystems

Planet Python - Fri, 2024-06-14 05:13

Following years of excitement around the removal of the Global Interpreter Lock (GIL), Python without the GIL is coming soon. Python 3.13 pre-releases already have support for being built without the GIL using a new --disable-gil compile-time option:

# Download
wget https://www.python.org/ftp/python/3.13.0/Python-3.13.0b2.tgz
echo "c87c42aa8137230a15a02ed90a6600610ba680cb5b54c0fbc57581a0d032e0c4  ./Python-3.13.0b2.tgz" | sha256sum --check
tar -xzvf ./Python-3.13.0b2.tgz

# Build
cd Python-3.13.0b2/
./configure --disable-gil
make

# Run with no GIL!
./python -X gil=0 -c "import sys; print(sys._is_gil_enabled())"
False

But simply having GIL-less Python is not enough: code needs to be written that is safe and performant without the GIL, using both the C and Python APIs.

This year at the Language Summit, Daniele Parmeggiani gave a talk about ways Python can enable safe and performant concurrent code without locking CPython into a specific implementation or memory model.

Don't leak the details

Daniele started his talk, like many Python users, with cautious enthusiasm about the prospect of free-threading in Python:

"Given the acceptance notes to PEP 703, one should argue for caution in discussing the prospects of new multi-threading ecosystems after the release of Python 3.13 — with a hopeful spirit I will disregard this caution here."

-- Daniele Parmeggiani

Daniele detailed a feature request he had opened to create a public function for the private C API function "_Py_TRY_INCREF()". Daniele wanted to use this function to increment an object's reference count safely in a truly multi-threaded Python where a reference count might be decremented concurrently to an increment.

Daniele continued, "[Sam Gross] responded as thoughtfully and thoroughly as he usually does that the function shouldn't be public, and I agree with him".

The semantics of _Py_TRY_INCREF() today are tied to the specific implementation of free-threading and without a guarantee that the underlying implementation won't change Daniele does not think the function "should ever be made public".

But without this functionality Daniele's problem still stands, where do we go from here?

Higher-level APIs to the rescue

"At a higher-level it's possible to write further guarantees without constraining what's under the hood". Daniele started a single step up in abstraction, detailing an atomic reference API:

PyObject *AtomicRef_Get(AtomicRef *self)
{
    PyObject *reference;
    reference = self->reference;
    while (!_Py_TRY_INCREF(reference)) {
        reference = self->reference;
    }
    return reference;
}

This would be "trivial to implement" with the new garbage collection scheme in Python 3.13 ("quiescent state-based reclamation" or QSBR), "but what if [Python 3.14] were to change this scheme radically? Or what if 3.15 decides to do away with it entirely?"

Daniele eschewed making guarantees about low-level APIs at this stage of development, but concluded that "an API for atomically updating a reference to a PyObject seems like a high-level use-case worth guaranteeing, regardless of any implementation of reference counting".

Atomic data structures

Daniele continued exploring higher-level concepts that Python could provide at this stage of free-threading by looking to what other languages are doing.

Java provides a java.util.concurrent package containing some familiar faces for Python concurrency users like Semaphores, Locks, and Barriers, but also some other atomic primitives that map to Python classes like dicts, lists, booleans, and integers. Daniele asked whether Python should provide atomic variations for primitives like numbers and dictionaries.

Daniele explained that many atomic data structures use the "compare-and-set" model to synchronize read and write access to the same space in memory. Compare-and-set requires the caller to specify an expected value, if the value in memory matches the expected value then the value is updated to the passed value, and the call returns whether the operation was successful or not.
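
A minimal sketch of the compare-and-set pattern in Python terms, with a lock standing in for the hardware atomic (purely illustrative; a real atomic primitive needs no lock):

import threading

class AtomicCell:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        with self._lock:
            return self._value

    def compare_and_set(self, expected, new):
        # Identity comparison, mirroring the pointer comparison of a hardware
        # CAS: succeed only if no other thread swapped the value since we read it.
        with self._lock:
            if self._value is expected:
                self._value = new
                return True
            return False

# Typical lock-free-style usage: read, compute, attempt the swap, retry on failure.
cell = AtomicCell(0)
while True:
    current = cell.get()
    if cell.compare_and_set(current, current + 1):
        break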

Daniele explained that compare-and-set establishes a "happens-before" ordering between concurrent writes to the same memory location, joking that the phrase "happens-before" may spark thoughts of memory models which he wished to avoid.

Today Python doesn't perform any reordering of memory accesses that would require thinking about a memory model, though Daniele noted that may change one day with the new just-in-time compiler (JIT).

Daniele was already developing an atomic dictionary class and had seen performance gains over the existing standard library dictionary with the GIL disabled (with lower single-threaded performance):

Performance comparison of dict with and without the GIL and Daniele's AtomicDict

Daniele observed that the free-threading changes actually decreased the performance for write-heavy workloads on builtin types like dictionaries because "Python programs will now actually be subject to memory contention". When multiple threads attempt to mutate a list or dictionary, "it will be as if the GIL is still there, [the threads] will all be contending for one lock", offering that "new concurrent data structures would alleviate this performance issue".
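
A toy illustration of that contention pattern: each thread below updates its own key, so there's no logical race, yet on a free-threaded build every write still funnels through the single shared dict's internal lock:

import threading

counts = {i: 0 for i in range(4)}

def bump(key: int, n: int) -> None:
    for _ in range(n):
        counts[key] += 1  # every thread contends on the same shared dict

threads = [threading.Thread(target=bump, args=(i, 100_000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counts)  # {0: 100000, 1: 100000, 2: 100000, 3: 100000}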

Daniele wanted to know what primitives Python should offer for C extension developers targeting free-threaded builds, or asked if it's still too early to make guarantees:

"As the writer of a C extension looking to implement concurrent lock-free data structures for Python", Daniele asked of the room, "does CPython eventually wish to incorporate... either high-level atomics or low-level routines?"

Daniele continued, "if not the atomics, then new low-level APIs like _Py_TRY_INCREF() will be necessary in order not to force the abuse of locks in external efforts towards new free-threading ecosystems".

Discussion

Thomas Wouters, channeling the Steering Council's intent when accepting PEP 703 last October, said, "we don't know yet what users will actually need", and that the Steering Council didn't want to "prematurely optimize" and mandate features be implemented without that knowledge.

Thomas recommended building solutions to "production use-cases" as PyPI packages or separate projects before deciding to pull those solutions into Python, summarizing the sentiment with, "we need to take our time and make sure we're doing the right thing".

Steering Council member Barry Warsaw agreed with Thomas on strategy, also adding that "[atomic references] might be something [Python] needs to make sure the interpreter doesn't crash with some of our own C code". Barry was interested in how to "ensure that the interpreter stays safe in the face of free-threading without necessarily thinking about the right APIs for the higher-level data structures".

Sam Gross, author and main implementer of PEP 703 to make the GIL optional in CPython, commented on making additional guarantees to standard library collections, saying "we're going to find situations that are ambiguous where no one's promised thread-safety or [the lack of thread-safety]".

Sam would also like to see "scalable collections" on PyPI (and "would love to see in Python eventually too") that are "designed not just to be thread-safe, but to scale well with certain workloads". Sam noted that builtin data classes like dict and list "can only make so many trade-offs" and tend to "focus on single-threaded performance" or "multi-threaded read-only access".

Eric Snow wanted to see immutable data structures be considered, too, noting the benefits to performance and shareability that Yury Selivanov was seeing when using them with sub-interpreters.

Gregory Smith sympathized with Daniele on wanting to avoid thinking about memory models, but "had a sneaking suspicion we kinda have to anyway". Greg was concerned about other stacks like data science and machine learning "re-interpreting Python code and transforming it into other things that run on other hardware". Without a clear definition, people "make their own assumptions" and get confused when code runs differently in different places.

Replying to Greg, Daniele offered that there's already a mechanism for determining whether an object is shared between threads "which might be a first-step", but that this "was a detail of the implementation, and not a part of the language".

Guido van Rossum began by being "wary of looking to Java for examples", stating that many APIs that Python borrowed from Java were eventually deprecated and removed.

Guido commented that "there will be other people with much higher-level ideas on concurrency" and recommended "to wait as long as we can before we build anything into the language explicitly or implicitly". Guido also felt it was "important that we have sub-interpreters as well as free-threading, so people can play with different models before we commit to anything".

Overall, the group seemed interested in Daniele's work on atomics but didn't seem willing to commit to exact answers for Python yet. It's clear that more experimentation will be needed in this area.

Python Software Foundation: The Python Language Summit 2024: Native Interface and Limited C API

Planet Python - Fri, 2024-06-14 05:13

Back in October 2023, PEP 731 proposed a new C API working group charged with overseeing and coordinating the development and maintenance of the Python C API. This working group spawned from a series of discussions on the C API from the Language Summit in 2023 and creation of an inventory of problems with the C API at the 2023 core developer sprint.

Two inaugural C API working group members, Petr Viktorin and Victor Stinner, presented back-to-back talks on the C API and gave context on what's been happening in the past year.

What does the C API working group do?

The first of the two C API talks was given by Petr Viktorin on the "Native Interface" and some of the first steps towards an idealized C API.

Petr started off by explaining that the C API working group makes two types of decisions: what functionality to expose via the C API and how to expose it. Petr also explained that the C API working group keeps two separate issue trackers, one for incremental "evolution" of the C API and another for "revolution", a place where more "radical" ideas are discussed.

The existing C API wasn't designed with the knowledge, context, and needs of today (like free-threading), but there are many good parts of the C API. Petr explained that one of the more impactful things the working group has done is to formalize "guidelines to get consistency with the good parts of the existing API".

Petr gave an example of what can go wrong with the PyLong_GetSign() function. This API has a baked-in type check that can't be avoided due to its function signature and thus incurs a performance penalty even when the caller has already checked whether the object is the correct type.

This extra performance penalty means that CPython itself uses its own private API which avoids the type check, but a faster API that's private to CPython isn't a great experience. Other languages and projects want access to the more performant API, too.

Petr went on to reference Mark Shannon's proposal for a New C API which Petr called "close to perfect" with caveats around not dropping existing APIs and the name, instead suggesting "Native Interface" for the name of the new C API.

"Unfortunately we need to keep the old API around. We can't just remove a chunk of the existing API just because it's old", Petr lamented. Petr also noted that not being able to remove parts of the existing API might mean that the Faster CPython project loses some incentive to work on the new C API.

C API decisions are made on three axes: performance, safety, and convenience. Petr argued that of the three, "performance should be prioritized", because a convenient and safe layer can be built on top of a performant API with the right amount of context.

Annotating the existing C API

Petr noted that Python already has experience adding a safety layer on top of APIs: type hints! Type hints provide context about an API's inputs and outputs that can be checked using external tooling, without incurring a runtime performance penalty.

Petr proposed adding annotations to C function signatures for function behaviors like "returns a null pointer on error" or "never returns a null pointer" which can then be used in other contexts like documentation or borrow checking. Among the proposed annotations were some about whether references were borrowed, stolen, or a new reference, which can be used to check consistency of references.

List of possible annotations for C API functions

Petr also noted that many of these annotations apply not only to new APIs but to existing APIs as well. Implementing these annotations as empty C macros means that behavior and performance isn't impacted but can be parsed from header files.

Petr's slides showing the annotations in use as C macros

To go along with these new annotations, Petr proposed writing a tool similar to Argument Clinic. Argument Clinic is a tool maintained by the CPython team which automatically generates boilerplate code like function signatures and argument unpacking based on input instructions.

Mark Shannon asked to clarify whether the priority was to improve the C API or document existing behavior. Petr's plan was to add annotation information to the existing API and to wait on implementing the new Native Interface until later. This plan wouldn't change the behavior of any existing API, but APIs which aren't conforming would receive a new variant that conforms to the new C API standards.

Victor Stinner asked whether the annotation information would be stored in a separate file. Petr noted that a separate file is the plan to make it easier to wrap the API and to avoid needing to parse header files directly.

PyO3 maintainer David Hewitt asked whether the plan was to include variants that avoid type checks for all C API functions, to dodge the performance penalty for C API wrappers. David noted that PyO3 implements many C API function calls as methods on wrapped objects, so the type check is implicit and the type doesn't need to be checked again by the C API function. David also clarified that these extra checks "aren't a major performance drag", but that it would be great to remove the inefficiency if possible.

Petr answered that wrappers will need to wait for the Native Interface to be implemented to expose the underlying C API functions which don't include type checks.

There was enthusiastic agreement from the room about using annotation information for documentation and automatically generating boilerplate code and checks along with being able to do borrow checking using annotation information.

Limited C API

The second C API talk was given by Victor Stinner on the status of the Limited C API. The Limited C API is a subset of the Python C API that's consistent across different versions of Python. It's opted into with #define Py_LIMITED_API; once defined, only public functions of the Limited C API can be used.

Victor started off by listing his long-term goals for the Python C API, which mostly focused on reducing friction both for maintainers of the Python C API and for third parties using the API or updating to support new Python versions. One possibility to achieve this would be to "move to using the Limited C API by default and use the Stable ABI for everybody" but Victor noted this is a "very long term goal".

Getting to this goal is challenging because it's difficult to know how a given change will affect the ecosystem of Python projects, both for finding affected projects and how widespread breakage would be for users. Victor explained that each change typically only requires "1-10 lines of code changed per impacted project" to fix issues.

Trying to move all functions from private to either public or internal

Victor's biggest project currently is to remove private functions from the C API, specifically functions which begin with an underscore "_" by convention. Victor explained that he removed all 300 private functions starting with "_Py" for 3.13.0-alpha1 to discover how and where private APIs are used by downstream projects. Victor and team anticipated that this mass-removal would cause breakages, so after the initial round of discovery the removed functions causing the most issues have been re-added in 3.13.0-alpha2.

As of 3.13.0-beta1, 264 of the over 300 functions are still removed. The functions which have been added back are not simply left as-is either: once a private function is discovered, the C API working group gets a chance to design a new public C API function for projects to use instead.

"The goal isn't to annoy people, the goal is to provide better functions for everybody" -- Victor Stinner

These new public C API functions would have documentation, tests, backwards compatibility guarantees, and can benefit from the new C API working group guidelines around API design. Victor gave an example with the PyDict_Pop() API, which previously required a call to PyErr_Occurred() to disambiguate between a key not being in the dictionary and any other error occurring.

The new PyDict_Pop() function returns -1, 0, and 1 for the "error", "not found", and "found" cases respectively, in accordance with the new C API guidelines, meaning a call to PyErr_Occurred() is avoided.

New PyDict_Pop() public function with improvements

The pythoncapi-compat project, which Victor is a maintainer of, provides backfills for these new 3.13 APIs for Python 3.12 and older. This means that projects can immediately start taking advantage of new APIs which are better designed and return strong references. Victor highlighted in particular PyDict_GetItemRef() and others which are new in 3.13 and are important for free-threading due to PyDict_GetItem() returning a borrowed reference instead of a new strong reference.

Slide from Victor's presentation on current Limited C API adoption

The biggest users of the Python C API like Cython, PyO3, pybind, and more are at various stages of supporting the Limited C API, most of which require an opt-in for builds.

Victor's top project in coming months and years will be to move the C API away from using structures ("C structs") like PyFrameObject, PyThreadState, and PyTypeObject. Victor noted that projects like Cython, greenlet, gevent, and more have to access directly into structure members which can cause breakages when upgrading to new Python versions. Victor explained that there is no way to handle this with the Limited C API today. "We already provide many helper functions like getters and setters, but we need to provide even more" said Victor as a way forwards on this issue.

Petr questioned the approach of "breaking current projects so that future Python versions don't break them", saying that it'd be better to warn projects about using private API functions that aren't supported and wait to introduce breaking changes when it's necessary to progress the C API.

Victor replied that he'd already started work on a PEP to opt-in for build errors when a project is using deprecated functions, "like a strict mode for the C API". Victor agreed that the current plan isn't great in this way: "we ask people to update their code and the timeline is very short, we expect people to update in one year's time", noting the circumstances where this can be difficult, such as unmaintained projects or solo maintainers.

Petr also added here that the opt-in would need to be versioned per Python version, so users can have control over when they want to do the work to move to new C API functions.

Eric Snow and Mark Shannon remarked on a more incremental strategy. This strategy would see deprecated functions moved structurally into a separate file ("legacy.c" and "legacy.h") but with the behavior preserved to have a clearer idea of what functions Python developers want to remove. After being moved the functions would be implemented using newly designed APIs where possible. Others noted that this would only be a convenience for core developers and projects that are interested in internals like PyO3 and Cython.

David Hewitt commented on the long feedback cycles, as downstream projects of the Stable ABI are still using Python 3.7 as a target, so any changes to the Stable ABI may not receive feedback until many years later. Victor responded that he's working on a new project that implements new functions of Python for old Python versions.

Overall, the work and proposals presented by both Petr and Victor were well-received by the room. It's clear that the Python C API is in good hands with the C API working group and is moving in the right direction to solve tomorrow's problems.


Python Software Foundation: The Python Language Summit 2024: Should Python adopt Calendar Versioning?

Planet Python - Fri, 2024-06-14 05:12


Hugo van Kemenade, the newly announced Release Manager for Python 3.14 and 3.15, started the Language Summit with a proposal to change Python's versioning scheme.

Hugo's view of kicking off the language summit!
(Photo credit: Hugo van Kemenade)

The goal of Hugo's proposal was to make expectations around versioning, backwards compatibility, and support timelines clearer for Python users.

On the surface, Python's versioning might appear to be Semantic Versioning (SemVer) due to its three-part version and infamous set of backwards incompatible changes known as Python 3. Hugo noted that the publication of Python 1.0.0 (1994) and what would become the Python versioning scheme predates the publication of SemVer (2009) by around 15 years.

The perception of Python using semantic versioning is a source of confusion for users who don't expect backwards incompatible changes when upgrading to new versions of Python. In reality almost all new feature releases of Python include backwards incompatible changes such as the removal of "dead batteries" where PEP 594 marked 19 modules for removal in Python 3.13.

Calendar Versioning (CalVer) encompasses a wide array of different versioning schemes that have one property in common: using the release date as part of a release's version. Calendar-based versions vary quite widely, but typically include a two or four digit year (YY or YYYY) and sometimes a month or day (MM and DD).

Using years in versions is quite common amongst other programming languages, operating systems like Ubuntu, and tools like Black, pip, and PyCharm.

Slide from Hugo's presentation showing programming languages using calendar-based versioning like Ada, Algol, C, C++, Fortran, and JavaScript

Since 2019, Python has made releases according to the new yearly cadence from PEP 602. Moving to annual releases made it possible for downstream distributors to rely on when a new Python version appears, which brings newer Python versions to users faster.

Each minor release receives 5 years of security fixes. Using the release year of 2026 as an example, users could add 5 years and know they'll receive security fixes on that minor release until 2031. Figuring out this information from "3.15" in the existing versioning scheme would require another lookup, typically to the release schedule PEP.

If the year were baked into the version, one wouldn't need to see the release schedule to know when support was ending, instead one could add 5 years to the year encoded in the version (e.g. for "3.26", 26 + 5 = 31, therefore security support ends in 2031).
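
In code form, that lookup becomes trivial arithmetic. A toy sketch, assuming the year-as-minor-version scheme and the current five years of security fixes:

def security_support_ends(version: str) -> int:
    """E.g. "3.26" ends security support in 2031."""
    minor = int(version.split(".")[1])  # the release year, e.g. 26
    return 2000 + minor + 5

assert security_support_ends("3.26") == 2031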

Hugo offered multiple proposed versioning schemes, including:

  • Using the release year as minor version (3.YY.micro, "3.26.0")
  • Using the release year as major version (YY.0.micro, "26.0.0")
  • Using the release year and month as major and minor version (YY.MM.micro, "26.10.0")

There were discussions about other options beyond these amongst attendees.

Thomas Wouters, release manager for 3.12 and 3.13, questioned the value-add for adopting a new versioning system. Thomas noted that while the current system is confusing, changing the system in any way also adds confusion for users. Hugo responded that clarity, especially support for security fix and end-of-life dates, was the biggest motivation.

Barry Warsaw wondered if there was a way to test potential new versioning scheme ahead of time to find potential problems. Hugo referenced the deadsnakes project which builds distributions of CPython for Ubuntu. The deadsnakes project previously created a build of Python 3.9 that modified the version to be "3.10" to help discover breakages in projects assuming a single-digit minor version. Hugo also had experience using static code analysis to find other version assumptions in Python projects.

"Python 3 is a brand at this point, and we should stick to it" said Guido van Rossum after sharing concerns that changes to the major version would break the ecosystem more than changes to the minor version. Others voiced concerns about changing the major version "3" including in the "python3" binary and for packaging such as "abi3" tag.

Carol Willing noted that many projects rely on Python's versioning system and already have those versions "baked in" to warnings in existing releases. Hugo confirmed this is a problem, including for Python itself, which has a few deprecation warnings and messages that reference future Python versions like 3.15. Hugo's plan would be to update these versions for Python and to give plenty of time before the new versioning scheme took effect.

Donghee Na offered up Rust's use of "yearly editions" in the branding of their releases, where the version number is completely separate from the branding of the release. Hugo was concerned that this would add another layer of confusion and would mostly repeat information already found in the release schedule.

Overall, the proposal to use the current year as the minor version was well-received; Hugo mentioned that he'd be drafting a PEP for this change.

Carl Meyer cautioned against making any changes to the version scheme before 2026 in order to preserve the 3.14 "π"-thon release which received approval and laughter from the room. Sounds like whatever happens we'll get to have our pie and eat it too. 🥧


Talk Python to Me: #466: Pydantic Performance Tips

Planet Python - Fri, 2024-06-14 04:00
You're using Pydantic and it seems pretty straightforward, right? But could you adopt some simple changes to your code that would make it a lot faster and more efficient? Chances are, you'll find a couple of the tips from Sydney Runkle that will do just that. Join us to talk about Pydantic performance tips here on Talk Python.

Links from the show:

  • Sydney Runkle: https://www.linkedin.com/in/sydney-runkle-105a35190/
  • Pydantic: https://pydantic.dev/opensource
  • Performance docs: https://docs.pydantic.dev/latest/concepts/performance/
  • Union tips: https://docs.pydantic.dev/latest/concepts/unions/
  • Sydney's presentation slides: https://docs.google.com/presentation/d/183bn9ecIzOOqfxanrESu7rBaKCI70CX0/edit?usp=sharing&ouid=117072411264002710561&rtpof=true&sd=true
  • JSON to Pydantic: https://jsontopydantic.com
  • Samuel talking FastUI: https://talkpython.fm/episodes/show/449/building-uis-in-python-with-fastui
  • CodeFlash: https://www.codeflash.ai
  • Codspeed: https://codspeed.io
  • Watch this episode on YouTube: https://www.youtube.com/watch?v=R8PL1snHgzY
  • Episode transcripts: https://talkpython.fm/episodes/transcript/466/pydantic-performance-tips

Paul Tagliamonte: Reverse Engineering a Restaurant Pager system 🍽️

Planet Debian - Fri, 2024-06-14 01:07

It’s been a while since I played with something new – been stuck in a bit of a rut with radios recently - working on refining and debugging stuff I mostly understand for the time being. The other day, I was out getting some food and I idly wondered how the restaurant pager system worked. Idle curiosity gave way to the realization that I, in fact, likely had the means and ability to answer this question, so I bought the first set of the most popular looking restaurant pagers I could find on eBay, figuring it’d be a fun multi-week adventure.

Order up!

I wound up buying a Retekess brand TD-158 Restaurant Pager System (they looked like ones I’d seen before and seemed to be low-cost and popular), and quickly after, had a pack of 10 pagers and a base station in-hand. The manual stated that the radios operated at 433 MHz (cool! can do! Love a good ISM band device), and after taking an initial read through the manual for tips on the PHY, I picked out a few interesting things. First is that the base station ID was limited to 0-999, which is weird because it means the limiting factor is likely the base-10 display on the base station, not the protocol – we need enough bits to store 999 – at least 10 bits. Nothing else seemed to catch my eye, so I figured may as well jump right to it.

Not being the type to mess with success, I did exactly the same thing as I did in my christmas tree post, and took a capture at 433.92MHz since it was in the middle of the band, and immediately got deja-vu. Not only was the signal at 433.92MHz, but throwing the packet into inspectrum gave me the identical plot of the OOK encoding scheme.

Not just similar – identical. The only major difference was the baud rate and bit structure of the packets, and the only minor difference was the existence of what I think is a wakeup preamble packet (of all zeros), rather than a preamble symbol that lasted longer than usual PHY symbol (which makes this pager system a bit easier to work with than my tree, IMHO).

Getting down to work, I took some measurements over the course of a few packets and determined the symbol duration was somewhere around 858 microseconds (0.000858 seconds per symbol), which is a weird number, but maybe I’m slightly off or there’s some larger math I’m missing that makes this number satisfyingly round (internal low cost crystal clock or something? I assume this is some hardware constraint with the pager?)

Anyway, good enough. Moving along, let’s try our hand at a demod – let’s just assume it’s all the same as the christmas tree post and demod ones and zeros the same way here. That gives us 26 bits:

00001101110000001010001000

Now, I know we need at least 10 bits for the base station ID, some number of bits for the pager ID, and some bits for the command. This was a capture of me hitting “call” from a base station ID of 55 to a pager with the ID of 10, so let’s blindly look for 10 bit chunks with the numbers we’re looking for:

0000110111 0000001010 001000

Jeez. First try. 10 bits for the base station ID (55 in binary is 0000110111), 10 bits for the pager ID (10 in binary is 0000001010), which leaves us with 6 bits for a command (and maybe something else too?) – which is 8 here. Great, cool, let’s work off that being the case and revisit it if we hit bugs.
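
The author isn’t publishing his code, but the structure inferred above is simple enough to sketch. Here’s a hypothetical encoder for the 26-bit payload (my own illustration, not the author’s implementation):

def encode_packet(base_id: int, argument: int, command: int) -> str:
    """Pack a 10-bit base station ID, 10-bit argument, and 6-bit command."""
    assert 0 <= base_id <= 999 and 0 <= argument < 1024 and 0 <= command < 64
    return f"{base_id:010b}{argument:010b}{command:06b}"

# The capture above: base station 55 calling pager ID 10 with command 8.
assert encode_packet(55, 10, 8) == "00001101110000001010001000"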

Besides our data packet, there’s also a “preamble” packet that I’ll add in, in case it’s used for signal detection or wakeup or something – which is fairly easy to do since it’s the same packet structure as the above, just all zeros. Very kind of them to leave it with the same number of bits and encoding scheme – it’s nice that it can live outside the PHY.

Once I got here, I wrote a quick and dirty modulator, and was able to ring up pagers! Unmitigated success and good news – only downside was that it took me a single night, and not the multi-week adventure I was looking for. Well, let’s finish the job and document what we’ve found for the sake of completeness.

Boxing everything up

My best guess on the packet structure is as follows:

[ base id (10 bits) ][ argument (10 bits) ][ command (6 bits) ]

For a call or F2 operation, the argument is the Pager’s ID code, but for other commands it’s a value or an enum, depending. Here’s a table of my by-hand demodulation of all the packet types the base station produces:

Type  Cmd Id  Description
Call  8       Call the pager identified by the ID in argument
Off   60      Request any pagers on the charger power off when power is removed; argument is all zeros
F2    40      Program a pager to the specified Pager ID (in argument) and base station
F3    44      Set the reminder duration in seconds specified in argument
F4    48      Set the pager's beep mode to the one in argument (0 is disabled, 1 is slow, 2 is medium, 3 is fast)
F5    52      Set the pager's vibration mode to the one in argument (0 is disabled, 1 is enabled)

Kitchen’s closed for the night

I’m not going to be publishing this code since I can’t think of a good use anyone would have for this besides folks using a low cost SDR and annoying local restaurants; but there’s enough here for folks who find this interesting to try modulating this protocol on their own hardware if they want to buy their own pack of pagers and give it a shot, which I do encourage! It’s fun! Radios are great, and this is a good protocol to hack with – it’s really nice.

All in all, even though this wasn’t the multi-week adventure I was looking for, it was still a great exercise and a fun reminder that I’ve come a long way from when I started. It felt a lot like cheating since I was able to infer a lot about the PHY because I’d seen it before, but it was still a great time. I may grab a few more restaurant pagers and see if I can find one with a more exotic PHY to emulate next. I mean why not, I’ve already got the thermal printer libraries working 🖨️


Kirigami tutorial now ported to Qt6

Planet KDE - Thu, 2024-06-13 20:00
After three months, KDE’s Kirigami tutorial has been ported to Qt6. In case you are unaware of what Kirigami is:

  • Qt provides two GUI technologies to create desktop apps: QtWidgets and QtQuick
  • QtWidgets uses only C++, while QtQuick uses QML (plus optional C++ and JavaScript)
  • Kirigami is a library made by KDE that extends QtQuick and provides a lot of niceties and quality-of-life components

Strictly speaking, there weren’t that many API changes to Kirigami.
