Planet Python
Python⇒Speed: It's time to stop using Python 3.8
Upgrading to new software versions is work, and work that doesn’t benefit your software’s users. Users care about features and bug fixes, not how up-to-date you are.
So it’s perhaps not surprising how many people still use Python 3.8. As of September 2024, about 14% of packages downloaded from PyPI were for Python 3.8. This includes automated downloads as part of CI runs, so it doesn’t mean 3.8 is used in 14% of applications, but that’s still 250 million packages installed in a single day!
Still, there is only so much time you can delay upgrading, and for Python 3.8, the time to upgrade is as soon as possible. Python 3.8 is reaching its end of life at the end of October 2024.
No more bug fixes.
No more security fixes.
Still not convinced? Let’s see why you want to upgrade.
Read more...

Python⇒Speed: When should you upgrade to Python 3.13?
Python 3.13 will be out October 1, 2024—but should you switch to it immediately? And if you shouldn’t upgrade just yet, when should you?
Immediately after the release, you probably didn’t want to upgrade just yet. But from December 2024 and onwards, upgrading is definitely worth trying, though it may not succeed. To understand why, we need to consider Python packaging, the software development process, and take a look at the history of past releases.
Read more...

Glyph Lefkowitz: Python macOS Framework Builds
When you build Python, you can pass various options to ./configure that change aspects of how it is built. There is documentation for all of these options, and they are things like --prefix to tell the build where to install itself, --without-pymalloc if you have some esoteric need for everything to go through a custom memory allocator, or --with-pydebug.
One of these options only matters on macOS, and its effects are generally poorly understood. The official documentation just says “Create a Python.framework rather than a traditional Unix install.” But… do you need a Python.framework? If you’re used to running Python on Linux, then a “traditional Unix install” might sound pretty good; more consistent with what you are used to.
If you use a non-Framework build, most stuff seems to work, so why should anyone care? I have mentioned it as a detail in my previous post about Python on macOS, but even I didn’t really explain why you’d want it, just that it was generally desirable.
The traditional answer to this question is that you need a Framework build “if you want to use a GUI”, but this is demonstrably not true. At first it might not seem so, since the go-to Python GUI test is “run IDLE”; many non-Framework builds also omit Tkinter because they don’t ship a Tk dependency, so IDLE won’t start. But other GUI libraries work fine. For example, uv tool install runsnakerun / runsnake will happily pop open a GUI window, Framework build or not. So it bears some explaining.
Wait, what is a “Framework” anyway?

Let’s back up and review an important detail of the mac platform.
On macOS, GUI applications are not just an executable file, they are organized into a bundle, which is a directory with a particular layout, that includes metadata, that launches an executable. A thing that, on Linux, might live in a combination of /bin/foo for its executable and /share/foo/ for its associated data files, is instead on macOS bundled together into Foo.app, and those components live in specified locations within that directory.
A framework is also a bundle, but one that contains a library. Since they are directories, Applications can contain their own Frameworks and Frameworks can contain helper Applications. If /Applications is roughly equivalent to the Unix /bin, then /Library/Frameworks is roughly equivalent to the Unix /lib.
App bundles are contained in a directory with a .app suffix, and frameworks are a directory with a .framework suffix.
So what do you need a Framework for in Python?

The truth about Framework builds is that there is not really one specific thing that you can point to that works or doesn’t work, where you “need” or “don’t need” a Framework build. I was not able to quickly construct an example that trivially fails in a non-framework context for this post, but I didn’t try that many different things, and there are a lot of different things that might fail.
The biggest issue is not actually the Python.framework itself. The metadata on the framework is not used for much outside of a build or linker context. However, Python’s Framework builds also ship with a stub application bundle, which places your Python process into a normal application(-ish) execution context all the time, which allows for various platform APIs like [NSBundle mainBundle] to behave in the normal, predictable ways that all of the numerous, various frameworks included on Apple platforms expect.
Various Apple platform features might want to ask a process questions like “what is your unique bundle identifier?” or “what entitlements are you authorized to access” and even beginning to answer those questions requires information stored in the application’s bundle.
Python does not ship with a wrapper around the core macOS “cocoa” API itself, but we can use pyobjc to interrogate this. After installing pyobjc-framework-cocoa, I can do this:
```python
>>> import AppKit
>>> AppKit.NSBundle.mainBundle()
```

On a non-Framework build, it might look like this:
```
NSBundle </Users/glyph/example/.venv/bin> (loaded)
```

But on a Framework build (even in a venv in a similar location), it might look like this:
```
NSBundle </Library/Frameworks/Python.framework/Versions/3.12/Resources/Python.app> (loaded)
```

This is why, at various points in the past, GUI access required a framework build, since connections to the window server would just be rejected for Unix-style executables. But that was an annoying restriction, so it was removed at some point, or at least, the behavior was changed. As far as I can tell, this change was not documented. But other things like user notifications or geolocation might need to identify an application for preferences or permissions purposes, respectively. Even something as basic as “what is your app icon” for what to show in alert dialogs is information contained in the bundle. So if you use a library that wants to make use of any of these features, it might work, or it might behave oddly, or it might silently fail in an undocumented way.
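One concrete thing you can poke at is the bundle identifier mentioned earlier. This is a small pyobjc sketch under the same pyobjc-framework-cocoa setup as above; the values noted in the comments are what I would expect, not guaranteed output:

```python
# Small pyobjc sketch (same pyobjc-framework-cocoa setup as above); the values
# noted below are expectations, not guaranteed output.
import AppKit

bundle = AppKit.NSBundle.mainBundle()
print(bundle.bundleIdentifier())
# Framework build: likely something like 'org.python.python' (the stub Python.app)
# Non-Framework build: likely None, since the "bundle" is just the bin directory
```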
This might seem like undocumented, unnecessary cruft, but it is that way because it’s just basic stuff the platform expects to be there for a lot of different features of the platform.
/etc/ builds

Still, this might seem like a strangely vague description of this feature, so it might be helpful to examine it via a metaphor to something you are more familiar with. If you’re more used to Unix-style application development, consider a junior developer — let’s call him Jim — asking you whether he should use an “/etc build” or not as a basis for his Docker containers.
What is an “/etc build”? Well, base images like ubuntu come with a bunch of files in /etc, and Jim just doesn’t see the point of any of them, so he likes to delete everything in /etc just to make things simpler. It seems to work so far. More experienced Unix engineers that he has asked react negatively and make a face when he tells them this, and seem to think that things will break. But his app seems to work fine, and none of these engineers can demonstrate some simple function breaking, so what’s the problem?
Off the top of your head, can you list all the features that the files in /etc are needed for? Why not? Jim thinks it’s weird that all this stuff is undocumented, and it must just be unnecessary cruft.
If Jim were to come back to you later with a problem like “it seems like hostname resolution doesn’t work sometimes” or “ls says all my files are owned by 1001 rather than the user name I specified in my Dockerfile” you’d probably say “please, put /etc back, I don’t know exactly what file you need but lots of things just expect it to be there”.
This is what a framework vs. a non-Framework build is like. A Framework build just includes all the pieces of the build that the macOS platform expects to be there. What pieces do what features need? It depends. It changes over time. And the stub that Python’s Framework builds include may not be sufficient for some more esoteric stuff anyway. For example, if you want to use a feature that needs a bundle that has been signed with custom entitlements to access something specific, like the virtualization API, you might need to build your own app bundle. To extend our analogy with Jim, the fact that /etc exists and has the default files in it won’t always be sufficient; sometimes you have to add more files to /etc, with quite specific contents, for some features to work properly. But “don’t get rid of /etc (or your application bundle)” is pretty good advice.
Do you ever want a non-Framework build?

macOS does have a Unix subsystem, and many Unix-y things work, for Unix-y tasks. If you are developing a web application that mostly runs on Linux anyway and never care about using any features that touch the macOS-specific parts of your mac, then you probably don’t have to care all that much about Framework builds. You’re not going to be surprised one day by non-framework builds suddenly being unable to use some basic Unix facility like sockets or files. As long as you are aware of these limitations, it’s fine to install non-Framework builds. I have a dozen or so Pythons on my computer at any given time, and many of them are not Framework builds.
Framework builds do have some small drawbacks. They tend to be larger, they can be a bit more annoying to relocate, and they typically want to live in a location like /Library or ~/Library. You can move Python.framework into an application bundle according to certain rules, as any bundling tool for macOS will have to do, but it might not work in random filesystem locations. This may make managing a really large number of Python versions more annoying.
Most of all, the main reason to use a non-Framework build is if you are building a tool that manages a fleet of Python installations to perform some automation that needs to know about Python installs, and you want to write one simple tool that does stuff on Linux and on macOS. If you know you don’t need any platform-specific features, don’t want to spend the (not insignificant!) effort to cover those edge cases, and you get a lot of value from that level of consistency (for example, a teaching environment or interdisciplinary development team with a lot of platform diversity) then a non-framework build might be a better option.
Why do I care?

Personally, I think it’s important for Framework builds to be the default for most users, because I think that as much stuff should work out of the box as possible. Any user who sees a neat library that lets them get control of some chunk of data stored on their mac - map data, health data, game center high scores, whatever it is - should be empowered to call into those APIs and deal with that data for themselves.
Apple already makes it hard enough with their thicket of code-signing and notarization requirements for distributing software, aggressive privacy restrictions which prevent API access to some of this data in the first place, all these weird Unix-but-not-Unix filesystem layout idioms, sandboxing that restricts access to various features, and the use of esoteric abstractions like mach ports for communications behind the scenes. We don't need to make it even harder by making the way that you install your Python be a surprise gotcha variable that determines whether or not you can use an API like “show me a user notification when my data analysis is done” or “don’t do a power-hungry data analysis when I’m on battery power”, especially if it kinda-sorta works most of the time, but only fails on certain patch-releases of certain versions of the operating system, because an implementation detail of a proprietary framework changed in the meanwhile to require an application bundle where it didn’t before, or vice versa.
More generally, I think that we should care about empowering users with local computation and platform access on all platforms, Linux and Windows included. This just happens to be one particular quirk of how native platform integration works on macOS specifically.
Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. For this one, thanks especially to long-time patron Hynek who requested it specifically. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how can we set up our Mac developers’ laptops with Python”.
Real Python: How to Use Conditional Expressions With NumPy where()
The NumPy where() function is a powerful tool for filtering array elements in lists, tuples, and NumPy arrays. It works by using a conditional predicate, similar to the logic used in the WHERE or HAVING clauses in SQL queries. It’s okay if you’re not familiar with SQL—you don’t need to know it to follow along with this tutorial.
You would typically use np.where() when you have an array and need to analyze its elements differently depending on their values. For example, you might need to replace negative numbers with zeros or replace missing values such as None or np.nan with something more meaningful. When you run where(), you’ll produce a new array containing the results of your analysis.
You generally supply three parameters when using where(). First, you provide a condition against which each element of your original array is matched. Then, you provide two additional parameters: the first defines what you want to do if an element matches your condition, while the second defines what you want to do if it doesn’t.
If you think this all sounds similar to Python’s ternary operator, you’re correct. The logic is the same.
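To make the parallel concrete, here is a small sketch with made-up values, comparing a plain ternary expression on a scalar with the element-wise equivalent using where():

```python
# Made-up values: the ternary expression and np.where() apply the same
# condition / value-if-true / value-if-false logic, scalar vs element-wise.
import numpy as np

x = -7
positive = x * -1 if x < 0 else x                       # plain Python ternary
values = np.array([-7, 4, -1])
positives = np.where(values < 0, values * -1, values)   # element-wise equivalent

print(positive)   # 7
print(positives)  # [7 4 1]
```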
Note: In this tutorial, you’ll work with two-dimensional arrays. However, the same principles can be applied to arrays of any dimension.
Before you start, you should familiarize yourself with NumPy arrays and how to use them. It will also be helpful if you understand the subject of broadcasting, particularly for the latter part of this tutorial.
In addition, you may want to use the data analysis tool Jupyter Notebook as you work through the examples in this tutorial. Alternatively, JupyterLab will give you an enhanced notebook experience, but feel free to use any Python environment.
The NumPy library is not part of core Python, so you’ll need to install it. If you’re using a Jupyter Notebook, create a new code cell and type !python -m pip install numpy into it. When you run the cell, the library will install. If you’re working at the command line, use the same command, only without the exclamation point (!).
With these preliminaries out of the way, you’re now good to go.
Get Your Code: Click here to download the free sample code that shows you how to use conditional expressions with NumPy where().
Take the Quiz: Test your knowledge with our interactive “How to Use Conditional Expressions With NumPy where()” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
How to Use Conditional Expressions With NumPy where()

This quiz aims to test your understanding of the np.where() function. You won't find all the answers in the tutorial, so you'll need to do additional research. It's recommended that you make sure you can do all the exercises in the tutorial before tackling this quiz. Enjoy!
How to Write Conditional Expressions With NumPy where()

One of the most common scenarios for using where() is when you need to replace certain elements in a NumPy array with other values depending on some condition.
Consider the following array:
```python
>>> import numpy as np
>>> test_array = np.array(
...     [
...         [3.1688358, 3.9091694, 1.66405549, -3.61976783],
...         [7.33400434, -3.25797286, -9.65148913, -0.76115911],
...         [2.71053173, -6.02410179, 7.46355805, 1.30949485],
...     ]
... )
```

To begin with, you need to import the NumPy library into your program. It’s standard practice to do so using the alias np, which allows you to refer to the library using this abbreviated form.
The resulting array has a shape of three rows and four columns, each containing a floating-point number.
Now suppose you wanted to replace all the negative numbers with their positive equivalents:
```python
>>> np.where(
...     test_array < 0,
...     test_array * -1,
...     test_array,
... )
array([[3.1688358 , 3.9091694 , 1.66405549, 3.61976783],
       [7.33400434, 3.25797286, 9.65148913, 0.76115911],
       [2.71053173, 6.02410179, 7.46355805, 1.30949485]])
```

The result is a new NumPy array with the negative numbers replaced by positives. Look carefully at the original test_array and then at the corresponding elements of the new array, and you’ll see that the result is exactly what you wanted.
Note: The above example gives you an idea of how the where() function works. If you were doing this in practice, you’d most likely use either the np.abs() or np.absolute() functions instead. Both do the same thing because the former is shorthand for the latter:
```python
>>> np.abs(test_array)
array([[3.1688358 , 3.9091694 , 1.66405549, 3.61976783],
       [7.33400434, 3.25797286, 9.65148913, 0.76115911],
       [2.71053173, 6.02410179, 7.46355805, 1.30949485]])
```

Once more, all negative values have been removed.
Before moving on to other use cases of where(), you’ll take a closer look at how this all works. To achieve your aim in the previous example, you passed in test_array < 0 as the condition. In NumPy, this creates a Boolean array that where() uses:
```python
>>> test_array < 0
array([[False, False, False,  True],
       [False,  True,  True,  True],
       [False,  True, False, False]])
```

The Boolean array, often called the mask, consists only of elements that are either True or False. If an element matches the condition, the corresponding element in the Boolean array will be True. Otherwise, it’ll be False.
Read the full article at https://realpython.com/numpy-where-conditional-expressions/ »
Real Python: Quiz: Python Virtual Environments: A Primer
So you’ve been primed on Python virtual environments! Test your understanding of the tutorial here.
PyCoder’s Weekly: Issue #646 (Sept. 10, 2024)
#646 – SEPTEMBER 10, 2024
View in Browser »
Discover the power of Pydantic, Python’s most popular data parsing, validation, and serialization library. In this hands-on video course, you’ll learn how to make your code more robust, trustworthy, and easier to debug with Pydantic.
REAL PYTHON course
The PSF is introducing monthly office hours on the PSF Discord discussion board. This is a chance to connect with the board members and learn more about what they do. The schedule for the next 12 sessions is in the post.
PYTHON SOFTWARE FOUNDATION
Sounds tricky, right? Well that’s exactly what Kraken Technologies is doing. Learn how they manage 100s of deployments a day and how they handle errors when they crop up. Sneak peek: they use Sentry to reduce noise, prioritize issues, and maintain code quality–without relying on a dedicated QA team →
SENTRY sponsor
Ari is switching from pandas to Polars and surprisingly (even to himself) it isn’t because of the better performance. Read on for the reasons why.
ARI LAMSTEIN
Pre-commit hooks are a great way to help maintain code quality. However, some of your code quality standards may be specific to your project, and therefore, not covered by existing code linting and formatting tools. In this article, Stefanie shows you how to incorporate custom checks into your pre-commit setup.
STEFANIEMOLIN.COM • Shared by Stefanie Molin
Just how does one debug the tool one is using to find bugs? Python 3.13’s new REPL is implemented in Python and adding print statements means you get output in your output. This quick post talks about the environment variable PYREPL_TRACE and how to use it to capture debug information.
RODRIGO GIRÃO SERRÃO
Carlton has some strong opinions on how Django manages usernames and custom users through auth.User and how the current solution is daunting to folks new to Django. This article dives into why the current approach might be problematic and what could be done.
CARLTON GIBSON
Redowan keeps running into code that mucks with the root logger’s settings, which leaks into his own code. This post explains the problem and how to make sure you aren’t doing it in your own libraries.
REDOWAN DELOWAR
Polars 1.6 allows you to natively create beautiful plots without pandas, NumPy, or PyArrow. This is enabled by Narwhals, a lightweight compatibility layer between dataframe libraries.
POLA.RS • Shared by Marco Gorelli
Hynek often gets challenged when he suggests the use of virtual environments within Docker containers, and this post explains why he still does.
HYNEK SCHLAWACK
This tutorial covers how to write a Python web crawler using Scrapy to scrape and parse data, and then store the data in MongoDB.
REAL PYTHON
Once you’ve got Anaconda on macOS, using any other Python can be problematic. This article walks you through escaping Anaconda.
PAUL ROMER
Frak talks about how technical interviews often have false negatives and how this impacts your organization.
FRAK LOPEZ
September 11, 2024
REALPYTHON.COM
September 12 to September 13, 2024
MEETUP.COM
September 13 to September 16, 2024
PYTHON.ORG.BR
September 18 to September 21, 2024
PYDATA.ORG
September 19 to September 22, 2024
PYLATAM.ORG • Shared by David Sol
September 20 to September 24, 2024
PYCON.ORG
September 21 to September 23, 2024
PYCON.ORG
September 21 to September 23, 2024
BARCAMPS.EU
Happy Pythoning!
This was PyCoder’s Weekly Issue #646.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
ListenData: How to Integrate Gemini API with Python
In this tutorial, you will learn how to use Google's Gemini AI model through its API in Python.
Steps to Access Gemini API

Follow the steps below to access the Gemini API and then use it in Python.
- Visit Google AI Studio website.
- Sign in using your Google account.
- Create an API key.
- Install the Google AI Python library for the Gemini API using the command below:

```
pip install google-generativeai
```
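Once the key is ready, usage looks roughly like the sketch below. This is a minimal example rather than the article's own code: the model name and prompt are assumptions, and you should substitute the API key you created in Google AI Studio.

```python
# Minimal sketch of calling the Gemini API with the google-generativeai library.
# "gemini-1.5-flash" and the prompt are illustrative; replace "YOUR_API_KEY"
# with the key created in Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Explain Python decorators in one sentence.")
print(response.text)
```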
Real Python: When to Use .__repr__() vs .__str__() in Python
One of the most common tasks that a computer program performs is to display data. The program often displays this information to the program’s user. However, a program also needs to show information to the programmer developing and maintaining it. The information a programmer needs about an object differs from how the program should display the same object for the user, and that’s where .__repr__() vs .__str__() comes in.
A Python object has several special methods that provide specific behavior. There are two similar special methods that describe the object using a string representation. These methods are .__repr__() and .__str__(). The .__repr__() method returns a detailed description for a programmer who needs to maintain and debug the code. The .__str__() method returns a simpler description with information for the user of the program.
The .__repr__() and .__str__() methods are two of the special methods that you can define for any class. They allow you to control how a program displays an object in several common forms of output, such as what you get from the print() function, formatted strings, and interactive environments.
In this video course, you’ll learn how to differentiate .__repr__() vs .__str__() and how to use these special methods in the classes you define. Defining these methods effectively makes the classes that you write more readable and easier to debug and maintain. So, when should you choose Python’s .__repr__() vs .__str__()?
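As a quick illustration of the distinction (this class is not taken from the course; it's just a common pattern), .__repr__() targets the developer while .__str__() targets the end user:

```python
# Illustrative example: __repr__ for developers, __str__ for users.
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def __repr__(self):
        # Unambiguous, debugging-friendly representation
        return f"Book(title={self.title!r}, author={self.author!r})"

    def __str__(self):
        # Friendly text for program output
        return f"{self.title} by {self.author}"

book = Book("Fluent Python", "Luciano Ramalho")
print(repr(book))  # Book(title='Fluent Python', author='Luciano Ramalho')
print(book)        # Fluent Python by Luciano Ramalho
```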
Python Circle: Removing PDF pages using Python and PyPDF2
Python Anywhere: Issues after system maintenance on 2024-09-05
On Thursday 5 September 2024 we performed some system maintenance. It appeared to have gone well, and was completed at the scheduled time (06:20 UTC), but unfortunately there were unexpected knock-on effects that caused issues later on in the day, and further problems on Saturday 7 September. This post gives the details of why we needed to perform the maintenance, what happened, and what we will do to prevent a recurrence.
Real Python: Python News Roundup: September 2024
As the autumn leaves start to fall, signaling the transition to cooler weather, the Python community has warmed up to a series of noteworthy developments. Last month, a new maintenance release of Python 3.12.5 was introduced, reinforcing the language’s ongoing commitment to stability and security.
On a parallel note, Python continues its reign as the top programming language according to IEEE Spectrum’s annual rankings. This sentiment is echoed by the Python Developers Survey 2023 results, which reveal intriguing trends and preferences within the community.
Looking ahead, PEP 750 has proposed the addition of tag strings in Python 3.14, inspired by JavaScript’s tagged template literals. This feature aims to enhance string processing, offering developers more control and expressiveness.
Furthermore, EuroSciPy 2024 recently concluded in Poland after successfully fostering cross-disciplinary collaboration and learning. The event featured insightful talks and hands-on tutorials, spotlighting innovative tools and libraries that are advancing scientific computing with Python.
Let’s dive into the most significant Python news from the past month!
Python 3.12.5 Released

Early last month, Python 3.12.5 was released as the fifth maintenance update for the 3.12 series. Since the previous patch update in June, this release packs over 250 bug fixes, performance improvements, and documentation enhancements.
Here are the most important highlights:
- Standard Library: Many modules in the standard library received crucial updates, such as fixes for crashes in ssl when the main interpreter restarts, and various corrections for error-handling mechanisms.
- Core Python: The core Python runtime has several enhancements, including improvements to dictionary watchers, error messages, and fixes for edge-case crashes involving f-strings and multithreading.
- Security: Key security improvements include the addition of missing audit events for interactive Python use and socket connection authentication within a fallback implementation on platforms such as Windows, where Unix inter-process communication is unavailable.
- Tests: New test cases have been added and bug fixes have been applied to prevent random memory leaks during testing.
- Documentation: Python documentation has been updated to remove discrepancies and clarify edge cases in multithreaded queues.
Additionally, Python 3.12.5 comes equipped with pip 24.2 by default, bringing a slew of significant improvements to enhance security, efficiency, and functionality. One of the most notable upgrades is that pip now defaults to using system certificates, bolstering security measures when managing and installing third-party packages.
Read the full article at https://realpython.com/python-news-september-2024/ »
PyCharm: How to Use Jupyter Notebooks in PyCharm
PyCharm is one of the most well-known data science tools, offering excellent out-of-the-box support for Python, SQL, and other languages. PyCharm also provides integrations for Databricks, Hugging Face and many other important tools. All these features allow you to write good code and work with your data and projects faster.
PyCharm Professional’s support for Jupyter notebooks combines the interactive nature of Jupyter notebooks with PyCharm’s superior code quality and data-related features. This blog post will explore how PyCharm’s Jupyter support can significantly boost your productivity.
Watch this video to get a comprehensive overview of using Jupyter notebooks in PyCharm and learn how you can speed up your data workflows.
Speed up data analysis

Get acquainted with your data

When you start working on your project, it is extremely important to understand what data you have, including information about the size of your dataset, any problems with it, and its patterns. For this purpose, your pandas and Polars DataFrames can be rendered in Jupyter outputs in Excel-like tables. The tables are fully interactive, so you can easily sort one or multiple columns and browse and view your data. You can also choose how many rows will be shown in a table and perform many other operations.
The table also provides some important information, for example:
- You can find the size of a table in its header.
- You can find the data type symbols in the column headers.
- You can also use JetBrains AI Assistant to get information about your DataFrame by clicking on the icon.
After getting acquainted with your data, you need to clean it. This is an important step, but it is also extremely time-consuming because there are all sorts of problems you could find, including missing values, outliers, inconsistencies in data types, and so on. Indeed, according to the State of Developer Ecosystem 2023 report, nearly 50% of data professionals dedicate 30% of their time or more to data preparation. Fortunately, PyCharm offers a variety of features that streamline the data-cleaning process.
Some insights are already available in the column headers.
First, we can easily spot the amount of missing data for each column because it is highlighted in red. Also, we may be able to see at a glance whether some of our columns have outliers. For example, in the bath column, the maximum value is significantly higher than the ninety-fifth percentile. Therefore, we can expect that this column has at least one outlier and requires our attention.
Additionally, you might suspect there’s an issue with the data if the data type does not match the expected one. For example, the header of the total_sqft column below is marked with the symbol, which in PyCharm indicates that the column contains the Object data type. The most appropriate data type for a column like total_sqft would likely be float or integer, however, so we may expect there to be inconsistencies in the data types within the column, which could affect data processing and analysis. After sorting, we notice one possible reason for the discrepancy: the use of text in data and ranges instead of numerical values.
So, our suspicion that the column had data-type inconsistencies was proven correct. As this example shows, small details in the table header can provide important information about your data and alert you to issues that need to be addressed, so it’s always worth checking.

You can also use no-code visualizations to gather information about whether your data needs to be cleaned. Simply click on the icon in the top-left corner of the table. There are many available visualization options, including histograms, that can be used to see where the peaks of the distribution are, whether the distribution is skewed or symmetrical, and whether there are any outliers.
Of course, you can use code to gather information about your dataset and fix any problems you’ve identified. However, the mentioned low-code features often provide valuable insights about your data and can help you work with it much faster.
Code faster

Code completion and quick documentation

A significant portion of a data professional’s job involves writing code. Fortunately, PyCharm is well known for its features that allow you to write code significantly faster. For example, local ML-powered full line code completion can provide suggestions for entire lines of code.
Another useful feature is quick documentation, which appears when you hover the cursor over your code. This allows you to gather information about functions and other code elements without having to leave the IDE.
Refactorings

Of course, working with code and data is an interactive process, and you may often decide to make some changes in your code – for example, to rename a variable. Going through the whole file or, in some cases, the entire project, would be cumbersome and time-consuming. We can use PyCharm’s refactoring capabilities to rename a variable, introduce a constant, and make many other changes in your code. For example, in this case, I want to rename the DataFrame to make it shorter. I simply use the Rename refactoring to make the necessary changes.
PyCharm offers a vast number of different refactoring options. To dive deeper into this functionality, watch this video.
Fix problems

It is practically impossible to write code without there being any mistakes or typos. PyCharm has a vast array of features that allow you to spot and address issues faster. You will notice the Inspection widget in the top-right corner if it finds any problems.
For example, I forgot to import a library in my project and made several typos in the doc, so let’s take a look at how PyCharm can help here.
First of all, the problem with the library import:
Additionally, with Jupyter traceback, you can see the line where the error occurred and get a link to the code. This makes the bug-fixing process much easier. Here, I have a typo in line 3. I can easily navigate to it by clicking on the blue text.
Additionally, if you would like more information and suggestions on how to fix the problem, you can use JetBrains AI Assistant by clicking on Explain with AI.
Of course, that is just the tip of the iceberg. We recommend reading the documentation to better understand all the features PyCharm offers to help you maintain code quality.
Navigate easily

For the majority of cases, data science work involves a lot of experimentation, with the journey from start to finish rarely resembling a straight line.
During this experimentation process, you have to go back and forth between different parts of your project and between cells in order to find the best solution for a given problem. Therefore, it is essential for you to be able to navigate smoothly through your project and files. Let’s take a look at how PyCharm can help in this respect.
First of all, you can use the classic CMD+F (Mac) or CTRL+F (Windows) shortcut for searching in your notebook. This basic search functionality offers some additional filters like Match Case or Regex.
You can use Markdown cells to structure the document and navigate it easily.
If you would like to highlight some cells so you can come back to them later, you can mark them with #TODO or #FIXME, and they will be made available for you to dissect in a dedicated window.
Or you can use tags to highlight some cells so you’ll be able to spot them more easily.
In some cases, you may need to see the most recently executed cell; in this case, you can simply use the Go To option.
Save your work

Because teamwork is essential for data professionals, you need tooling that makes sharing the results of your work easy. One popular solution is Git, which PyCharm supports with features like notebook versioning and version comparison using the Diff view. You can find an in-depth overview of the functionality in this tutorial.
Another useful feature is Local History, which automatically saves your progress and allows you to revert to previous steps with just a few clicks.
Use the full power of AI Assistant

JetBrains AI Assistant helps you automate repetitive tasks, optimize your code, and enhance your productivity. In Jupyter notebooks, it also offers several unique features in addition to those that are available in any JetBrains tool.
Click the icon to get insights regarding your data. You can also ask additional questions regarding the dataset or ask AI Assistant to do something – for example, “write some code that solves the missing data problem”.
AI data visualization

Pressing the icon will suggest some useful visualizations for your data. AI Assistant will generate the proper code in the chat section for your data.
AI cell

AI Assistant can create a cell based on a prompt. You can simply ask it to create a visualization or do something else with your code or data, and it will generate the code that you requested.
Debugger

PyCharm offers advanced debugging capabilities to enhance your experience in Jupyter notebooks. The integrated Jupyter debugger allows you to set breakpoints, inspect variables, and evaluate expressions directly within your notebooks. This powerful tool helps you step through your code cell by cell, making it easier to identify and fix issues as they arise. Read our blog post on how you can debug a Jupyter notebook in PyCharm for a real-life example.
Get started with PyCharm Professional

PyCharm’s Jupyter support enhances your data science workflows by combining the interactive aspects of Jupyter notebooks with advanced IDE features. It accelerates data analysis with interactive tables and AI assistance, improves coding efficiency with code completion and refactoring, and simplifies error detection and navigation. PyCharm’s seamless Git integration and powerful debugging tools further boost productivity, making it essential for data professionals.
Download PyCharm Professional to try it out for yourself! Get an extended trial today and experience the difference PyCharm Professional can make in your data science endeavors.

Use the promo code “PyCharmNotebooks” at checkout to activate your free 60-day subscription to PyCharm Professional. The free subscription is available for individual users only.
Activate your 60-day trial

Explore our official documentation to fully unlock PyCharm’s potential for your projects.
Mike Driscoll: Adding Terminal Effects with Python
The Python programming language has thousands of wonderful third-party packages available on the Python Package Index. One of those packages is TerminalTextEffects (TTE), a terminal visual effects engine.
Here are the features that TerminalTextEffects provides, according to their documentation:
- Xterm 256 / RGB hex color support
- Complex character movement via Paths, Waypoints, and motion easing, with support for quadratic/cubic bezier curves.
- Complex animations via Scenes with symbol/color changes, layers, easing, and Path synced progression.
- Variable stop/step color gradient generation.
- Path/Scene state event handling changes with custom callback support and many pre-defined actions.
- Effect customization exposed through a typed effect configuration dataclass that is automatically handled as CLI arguments.
- Runs inline, preserving terminal state and workflow.
Note: This package may be somewhat slow in Windows Terminal, but it should work fine in other terminals.
Let’s spend a few moments learning how to use this neat package.
Installation

The first step to using any new package is to install it. You can use pip or pipx to install TerminalTextEffects. Here is the typical command you would run in your terminal:
```
python -m pip install terminaltexteffects
```

Now that you have TerminalTextEffects installed, you can start using it!
Usage

Let’s look at how you can use TerminalTextEffects to make your text look neat in the terminal. Open up your favorite Python IDE and create a new file with the following contents:
```python
from terminaltexteffects.effects.effect_slide import Slide

text = ("PYTHON" * 10 + "\n") * 10

effect = Slide(text)
effect.effect_config.merge = True
with effect.terminal_output() as terminal:
    for frame in effect:
        terminal.print(frame)
```

This code will cause the string “PYTHON” to appear one hundred times, with ten strings concatenated on each of ten rows. You use a Slide effect to make the text slide into view. TerminalTextEffects will also style the text too.
When you run this code, you should see something like the following:
TerminalTextEffects has many different built-in effects that you can use as well. For example, you can use Beams to make the output even more interesting. For this example, you will use the Zen of Python text along with the Beams effects:
```python
from terminaltexteffects.effects.effect_beams import Beams

TEXT = """
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
"""

effect = Beams(TEXT)
with effect.terminal_output() as terminal:
    for frame in effect:
        terminal.print(frame)
```

Now try running this code. You should see something like this:
That looks pretty neat! You can see a whole bunch of other effects you can apply on the package’s Showroom page.
Wrapping Up

TerminalTextEffects provides lots of neat ways to jazz up your text-based user interfaces with Python. According to the documentation, you should be able to use TerminalTextEffects in other TUI libraries, such as Textual or Asciimatics, although it doesn’t specifically state how to do that. Even if you do not do that, you could use TerminalTextEffects with the Rich package to create a really interesting application in your terminal.
Links
The post Adding Terminal Effects with Python appeared first on Mouse Vs Python.
Python Bytes: #400 Celebrating episode 400
Zato Blog: Service-oriented API task scheduling
An integral part of Zato, its scalable, service-oriented scheduler makes it possible to execute high-level API integration processes as background tasks. The scheduler runs periodic jobs which in turn trigger services, and services are what is used to integrate systems.
Integration process

In this article we will check how to use the scheduler with three kinds of jobs, one-time, interval-based and Cron-style ones.
What we want to achieve is a sample yet fairly common use-case:
- Periodically consult a remote REST endpoint for new data
- Store data found in Redis
- Push data found as an e-mail attachment
Instead of, or in addition to, Redis or e-mail, we could use SQL and SMS, or MongoDB and AMQP or anything else - Redis and e-mail are just example technologies frequently used in data synchronisation processes that we use to highlight the workings of the scheduler.
No matter the input and output channels, the scheduler works always the same - a definition of a job is created and the job's underlying service is invoked according to the schedule. It is then up to the service to perform all the actions required in a given integration process.
Python codeOur integration service will read as below:
```python
# -*- coding: utf-8 -*-

# Zato
from zato.common.api import SMTPMessage
from zato.server.service import Service

class SyncData(Service):
    name = 'api.scheduler.sync'

    def handle(self):

        # Which REST outgoing connection to use
        rest_out_name = 'My Data Source'

        # Which SMTP connection to send an email through
        smtp_out_name = 'My SMTP'

        # Who the recipient of the email will be
        smtp_to = 'hello@example.com'

        # Who to put on CC
        smtp_cc = 'hello.cc@example.com'

        # Now, let's get the new data from a remote endpoint ..

        # .. get a REST connection by name ..
        rest_conn = self.out.plain_http[rest_out_name].conn

        # .. download newest data ..
        data = rest_conn.get(self.cid).text

        # .. construct a new e-mail message ..
        message = SMTPMessage()
        message.subject = 'New data'
        message.body = 'Check attached data'

        # .. add recipients ..
        message.to = smtp_to
        message.cc = smtp_cc

        # .. attach the new data to the message ..
        message.attach('my.data.txt', data)

        # .. get an SMTP connection by name ..
        smtp_conn = self.email.smtp[smtp_out_name].conn

        # .. send the e-mail message with newest data ..
        smtp_conn.send(message)

        # .. and now store the data in Redis.
        self.kvdb.conn.set('newest.data', data)
```

Now, we just need to make it run periodically in the background.
Mind the timezone

In the next steps, we will use the Zato Dashboard to configure new jobs for the scheduler.
Keep in mind that any date and time that you enter in web-admin is always interpreted to be in your web-admin user's timezone, and this applies to the scheduler too - by default the timezone is UTC. You can change it by clicking Settings and picking the right timezone to make sure that the scheduled jobs run as expected.
It does not matter what timezone your Zato servers are in - they may be in different ones than the user that is configuring the jobs.
Endpoint definitions

First, let's use web-admin to define the endpoints that the service uses. Note that Redis does not need an explicit declaration because it is always available under "self.kvdb" in each service.
- Configuring outgoing REST APIs
- Configuring SMTP e-mail
Now, we can move on to the actual scheduler jobs.
Three types of jobs

To cover different integration needs, three types of jobs are available:
- One-time - fires once only at a specific date and time and then never runs again
- Interval-based - for periodic processes, can use any combination of weeks, days, hours, minutes and seconds for the interval
- Cron-style - similar to interval-based but uses the syntax of Cron for its configuration
One-time

Select one-time if the job should not be repeated after it runs once.
Interval-based

Select interval-based if the job should be repeated periodically. Note that such a job will by default run indefinitely but you can also specify after how many times it should stop, letting you express concepts such as "Execute once per hour but for the next seven days".
Cron-style

Select cron-style if you are already familiar with the syntax of Cron or if you have some Cron tasks that you would like to migrate to Zato.
Running jobs manually

At times, it is convenient to run a job on demand, no matter what its schedule is and regardless of what type a particular job is. Web-admin lets you always execute a job directly. Simply find the job in the listing, click "Execute" and it will run immediately.
Extra context

It is very often useful to provide additional context data to a service that the scheduler runs - to achieve it, simply enter any arbitrary value in the "Extra" field when creating or editing a job in web-admin.
Afterwards, that information will be available as self.request.raw_request in the service's handle method.
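For instance, a minimal sketch of a service that reads the "Extra" data could look like the snippet below; only self.request.raw_request comes from the article, while the service name and the logging call are illustrative.

```python
# Minimal sketch of reading the job's "Extra" field inside a service's handle method.
# Only self.request.raw_request is taken from the article; the rest is illustrative.
from zato.server.service import Service

class SyncDataWithExtra(Service):
    name = 'api.scheduler.sync.extra'

    def handle(self):
        extra = self.request.raw_request  # whatever was typed into the "Extra" field
        self.logger.info('Extra data received: %s', extra)
```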
Reusability

There is nothing else required - all is done and the service will run in accordance with a job's schedule.
Yet, before concluding, observe that our integration service is completely reusable - there is nothing scheduler-specific in it despite the fact that we currently run it from the scheduler.
We could now invoke the service from the command line. Or we could mount it on a REST, AMQP, or WebSocket channel, or trigger it from any other channel - exactly the same Python code will run in exactly the same fashion, without any new programming effort needed.
More resources

➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
Python Morsels: Commenting in Python
Python's comments start with an octothorpe character.
Table of contents
- Writing a comment in Python
- Inline comments in Python
- Best practices for commenting in Python
- Comment as needed, but not too much
We have a Python program that prints out Hello!, pauses for a second, and then prints Goodbye! on the same line:
```python
from time import sleep

print("Hello!", end="", flush=True)
sleep(1)

# ANSI code to clear current line
print("\r\033[K", end="")

print("Goodbye!")
```

It prints Hello!:
```
~ $ python3 hello.py
Hello!
```

And then one second later it overwrites Hello! with Goodbye!:
```
~ $ python3 hello.py
Goodbye!
```

It does this using an ANSI escape code (that \033[K string).
The line above the print call in our code is called a comment:
```python
# ANSI code to clear current line
print("\r\033[K", end="")
```

Python's comments all start with the # character.
I call this character an octothorpe, though it goes by many names. Some of the more common names for # are hashmark, number sign, and pound sign.
You can write a comment in Python by putting an octothorpe character (#) at the beginning of a line, and then writing your comment. The comment stops at the end of the line, meaning the next line is code... unless you write another octothorpe character!
Here we've written more details and added an additional line to note that this code doesn't yet work on Windows:
```python
# ANSI code to clear current line: \r moves to beginning, \033[K erases to end.
# Note: This will not work on Windows without code to enable ANSI escape codes.
print("\r\033[K", end="")
```

This is sometimes called a block comment because it's a way to write a block of text that represents a comment.
Unlike some programming languages, Python has no multiline comment syntax. If you think you've seen a multiline comment, it may have been a docstring or a multiline string. More on that in multiline comments in Python.
Inline comments in Python

Comments don't need to be …
Read the full article: https://www.pythonmorsels.com/commenting-in-python/

Armin Ronacher: Multiversion Python Thoughts
Now that uv is rapidly advancing I have started to dive back into making multi-version imports for Python work. The goal here is to enable multiple resolutions from the solver in uv so that two incompatible versions of a library can be installed and used simultaneously.
Simply put, it should be possible for a library to depend on both pydantic 1.x and 2.x simultaneously.
I have not made it work yet, but I think I have found all of the pieces that stand in the way. This post mostly exists to share how it could be done with the least amount of changes to Python.
Basic Operation

Python's import system places modules in a module cache. This cache is exposed via sys.modules. Every module that is imported is placed in that container prior to initialization. The key is the import path of the module. This in some ways presents the first issue.
Note on Terms for Packages, Modules and Distributions

Python's terms for packages are super confusing. Here is what I will use in this article:
- foo.py: this is a python “module”. It gets registered in sys.modules as 'foo' and has an attribute __name__ set to 'foo'.
- foo/__init__.py: declares also a Python “module” named 'foo' but it is simultaneously a “package”. Unlike a normal module it also has two extra attributes: __path__ which is set to ['./foo'] so that sub modules can be found and it has an attribute __package__ which is also set to 'foo' which marks it as package.
- Additionally on PyPI one can register things. These things were called packages at one point and are now mostly called "projects". Within Python however they are not called Projects but “distribution packages”. For instance this is what you see when you try to use the importlib.metadata API. For now I will just call this a “distribution”.
Note that a distribution can ship both modules and packages, and more than one of each, at once. You could have a package called whatever and it reports a foo.py file and a bar/baz.py file which in turn would make foo and bar.baz be importable.
Say you have two Python distributions, both of which provide the same toplevel package. In that case they are going to clash in sys.modules. As there is no actual relationship between the distribution name and the entry in sys.modules, this is a problem that does not just exist with multi-version imports, but it's one that does not happen all that much.
So let's say we have two distributions: foo@1.0.0 and foo@2.0.0. Both expose a toplevel module called foo which is a true Python package with a single __init__.py file. The installer would already fail to place these because one fully overrides the other.
So step 1 would be to place these modules in different places. So where they normally would be in site-packages, in this case we might want to not have these packages there. That solves us the file system clashes.
So we might place them in some extra cache that looks like this:
```
.venv/
    multi-version-packages/
        foo@1.0.0/
            foo/
                __init__.py
        foo@2.0.0/
            foo/
                __init__.py
```

Now that package is entirely non-importable since nothing looks at multi-version-packages. We will need a custom import hook to get them imported. That import hook will also need to change the name of what's stored in sys.modules.
So instead of registering foo as sys.modules['foo'] we might want to try to register it as sys.modules['foo@1.0.0'] and sys.modules['foo@2.0.0'] instead. There is however a catch and that is this very common pattern:
```python
import sys

def import_module(name):
    __import__(name)
    return sys.modules[name]
```

That poses a bit of a problem because someone is probably going to call this as import_module('foo') and now we would not find the entry in sys.modules.
This means that in addition to the new entries in sys.modules we would also need to register some proxies that “redirect” us to the real names. These proxies however would need to know if they point to 1.0.0 or 2.0.0.
MetadataSo let's deal with this problem first. How do we know if we need 1.0.0 or 2.0.0? The answer is most likely a package's dependenices. Instead of allowing a package to depend simultaniously on two different versions of the same dependency we can start with a much simpler problem and say that each package can only depend on one version. So that means if I have a myapp package it would have to pick between foo@1.0.0 or foo@2.0.0. However if it were to depended on another package (say slow-package) that one could depend on a different version of foo than myapp:
```
myapp v0.1.0
├── foo v2.0.0
└── slow-package v0.1.0
    └── foo v1.0.0
```

In that case when someone tries to import foo we would be consulting the package metadata of the calling package to figure out which version is attempted.
There are two challenges with this today and they come from the history of Python:
- the import hook does not (always) know which module triggered the import
- python modules do not know their distribution package
Let's look at these in detail.
Import Context

The goal is that when slow_package/__init__.py imports foo we get the foo@1.0.0 version, and when myapp/__init__.py imports foo we get the foo@2.0.0 version. What is needed for this to work is that the import system understands not just what is imported, but who is importing. In some sense Python has that. That's because __import__ (which is the entry point to the import machinery) gets the module globals. Here is what an import statement roughly maps to:
```python
# highlevel import
from foo import bar

# under the hood
_rv = __import__('foo', globals(), locals(), ['bar'])
bar = _rv.bar
```

The name of the package that is importing can be retrieved by inspecting the globals(). So in theory for instance the import system could utilize this information. globals()['__name__'] would tell us slow_package vs myapp. There however is a catch and that is that the import name is not the distribution name. The PyPI package could be called mycompany-myapp and it exports a python package just called myapp. This happens very commonly in all kinds of ways. For instance on PyPI one installs Scikit-learn but the python package installed is sklearn.
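To make the idea tangible, here is a purely hypothetical sketch (not the mechanism the post proposes, and ignoring the distribution-name problem just mentioned) of how a wrapped __import__ could use the caller's globals to pick a pinned, versioned entry from sys.modules:

```python
# Hypothetical sketch only: dispatch an import to a versioned sys.modules entry
# based on who is importing, using the caller's globals. The pin table and the
# 'foo@1.0.0' naming convention are illustrative, not an existing mechanism.
import builtins
import sys

_original_import = builtins.__import__

# Illustrative mapping: (importing top-level package, module name) -> versioned key.
_VERSION_PINS = {
    ('myapp', 'foo'): 'foo@2.0.0',
    ('slow_package', 'foo'): 'foo@1.0.0',
}

def _versioned_import(name, globals=None, locals=None, fromlist=(), level=0):
    importer = (globals or {}).get('__name__', '').split('.')[0]
    pinned = _VERSION_PINS.get((importer, name))
    if pinned is not None and pinned in sys.modules:
        return sys.modules[pinned]
    return _original_import(name, globals, locals, fromlist, level)

builtins.__import__ = _versioned_import
```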
There is however another problem and that is interpreter internals and C/Rust extensions. We have already established that Python packages will pass globals and locals when they import. But what do C extensions do? The most common internal import API is called PyImport_ImportModule and only takes a module name. Is this a problem? Do C extensions even import stuff? Yes they do. Here is an example from pygame:
    MODINIT_DEFINE (color)
    {
        PyObject *colordict;

        colordict = PyImport_ImportModule ("pygame.colordict");

        if (colordict) {
            PyObject *_dict = PyModule_GetDict (colordict);
            PyObject *colors = PyDict_GetItemString (_dict, "THECOLORS");
            /* TODO */
        }
        else {
            MODINIT_ERROR;
        }
        /* snip */
    }

And that makes sense. A sufficiently large python package will have interdependencies between the stuff written in C and Python. It's also complicated by the fact that the C module does initialize a module, but it does not have a natural module scope. The way the C extension initializes the module is with the PyModule_Create API:
    static struct PyModuleDef module_def = {
        PyModuleDef_HEAD_INIT,
        "foo",      /* name of module */
        NULL,
        -1,
        SpamMethods
    };

    PyMODINIT_FUNC
    PyInit_foo(void)
    {
        return PyModule_Create(&module_def);
    }

So both the name of the module created and the name of what is imported are entirely hardcoded. A C extension does not learn its intended name from the import system; it has to know it on its own.
In some sense this is already a bit of a disconnect between the Python and C world. Python for instance has relative imports (from .foo import bar). This is implemented by inspecting the globals. There is however no API to do these relative imports on the C layer.
The only workaround I know right now would be to perform stack walking. That way one would try to isolate the shared library that triggered the import to understand which module it comes from. An alternative would be to carry the current C extension module that is active on the interpreter state, but that would most likely be quite expensive.
The goal would be to find out which .so/.dylib file triggered the import. Stack walking is a rather expensive operation and it can be incredibly brittle, but there might not be a perfect way around it. Ideally Python would at any point know which C extension module is active.
Distributions from Modules

So let's say that we have the calling python module figured out: now we need to figure out the associated PyPI distribution name. Unfortunately such a mapping does not exist at all. Ideally when a sys.modules entry is created, we would either record a special attribute there (say __distribution__) which carries the PyPI distribution name, so we can call importlib.metadata.distribution(__distribution__).requires to get the requirements, or we would have some other API to map it.
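To spell out what that would buy us, here is a small sketch; the __distribution__ attribute is hypothetical and does not exist today, only the importlib.metadata call is real.

    from importlib.metadata import distribution

    def requirements_for(module):
        # __distribution__ is the hypothetical attribute an installer would set.
        dist_name = getattr(module, "__distribution__", None)
        if dist_name is None:
            return []
        return distribution(dist_name).requires or []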
In the absence of that, how could we get it? There is an expensive way to get a reverse mapping (importlib.metadata.packages_distributions) but unfortunately it has some limitations:
- it's very slow
- it has situations where it does not manage to reveal the distribution for a package
- it can reveal more than one distribution for a package
Because of namespace packages in particular, it can return more than one distribution that provides a package such as foo (e.g. foo-bar provides foo.bar and foo-baz provides foo.baz; in that case it will just return both foo-bar and foo-baz for foo).
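For reference, this is what using that reverse mapping looks like today (the sklearn lookup is only an example; the exact contents depend on what is installed):

    from importlib.metadata import packages_distributions

    # Builds the full reverse mapping of import names to distribution names,
    # which is why it is slow; namespace packages can map to several entries.
    mapping = packages_distributions()
    print(mapping.get("sklearn"))   # e.g. ['scikit-learn'] if it is installed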
The solution here might just be that installers like uv start materializing the distribution name onto the modules in one way or another.
Putting it Together

The end-to-end solution might be this:
- install multi-version packages outside of site-packages
- materialize a __distribution__ field onto modules, or provide an API that maps import names to their PyPI distribution name, so that metadata (requirements) can be discovered.
- patch __import__ to resolve packages to their fully-qualified, multi-version name based on who imports it
  - via globals() for python code
  - via stack-walking for C extensions (unless a better option is found)
- register proxy entries in sys.modules that have a dynamic __getattr__ which redirects to the fully qualified names if necessary. This would allow someone to access sys.modules['foo'] and automatically have it proxied to foo@1.0.0 or foo@2.0.0 respectively (a rough sketch of such a proxy follows this list).
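A rough sketch of what such a proxy entry could look like, assuming the versioned modules are already registered in sys.modules and leaving the actual resolution logic as a placeholder:

    import sys
    import types

    def _resolve_version(name):
        # Placeholder: a real implementation would pick the version based on the
        # calling package's metadata, as described above.
        return f"{name}@2.0.0"

    class _VersionProxy(types.ModuleType):
        def __getattr__(self, attr):
            real = sys.modules[_resolve_version(self.__name__)]
            return getattr(real, attr)

    sys.modules["foo"] = _VersionProxy("foo")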
There are lots of holes in this approach unfortunately. That's in part because people patch around in sys.modules. Interestingly enough, sys.modules can be manipulated but it can't be replaced, which might make it possible for future versions of Python to swap that dictionary for some more magical dictionary.
Test and Code: 222: Import within a Python package
In this episode we're talking about importing part of a package into another part of the same package.
We'll look at: `from . import module` and `from .module import something`
and also: `import package` to access the external API from within the package.
Why would we use `import package` if `from . import api` would work fine?
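As a quick illustration of the two styles (the mypkg layout and names below are invented for this example):

    # Layout assumed for this sketch:
    #
    #   mypkg/
    #       __init__.py
    #       api.py      # defines greet()
    #       cli.py      # the code below lives here
    #
    # Inside mypkg/cli.py:
    from . import api          # relative: import a sibling module
    from .api import greet     # relative: import a single name
    import mypkg                # absolute: go through the package's external API

    def run():
        return (api.greet(), greet(), mypkg.api.greet())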
Learn pytest
- pytest is the number one test framework for Python.
- Learn the basics super fast with Hello, pytest!
- Then later you can become a pytest expert with The Complete pytest Course
- Both courses are at courses.pythontest.com
TechBeamers Python: Pass by Reference vs Pass by Value in Python
Is Python pass by reference or pass by value? This question intrigues every Python programmer coming from a C or Java background. In this tutorial, you will get its answer and learn the meaning of “pass by reference” and “pass by value” along with their difference in Python. Python Pass by Reference vs Pass by […]
The post Pass by Reference vs Pass by Value in Python appeared first on TechBeamers.
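A tiny illustration of the behavior the tutorial asks about, often described as "pass by object reference" (the names here are arbitrary):

    def append_item(items):
        items.append(4)        # mutates the caller's list in place

    def rebind(items):
        items = [0]            # rebinding only changes the local name

    data = [1, 2, 3]
    append_item(data)
    rebind(data)
    print(data)                # [1, 2, 3, 4] - mutation is visible, rebinding is not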
Python Insider: Python 3.13.0RC2, 3.12.6, 3.11.10, 3.10.15, 3.9.20, and 3.8.20 are now available!
Hi there!
A big joint release today. Mostly security fixes but we also have the final release candidate of 3.13 so let’s start with that!
Final opportunity to test and find any show-stopper bugs before we bless and release 3.13.0 final on October 1st.
Get it here: Python Release Python 3.13.0rc2 | Python.org
Call to action

We strongly encourage maintainers of third-party Python projects to prepare their projects for 3.13 compatibility during this phase, and where necessary publish Python 3.13 wheels on PyPI to be ready for the final release of 3.13.0. Any binary wheels built against Python 3.13.0rc2 will work with future versions of Python 3.13. As always, report any issues to the Python bug tracker.
Please keep in mind that this is a preview release and while it’s as close to the final release as we can get it, its use is not recommended for production environments.
Core developers: time to work on documentation now

- Are all your changes properly documented?
- Are they mentioned in What’s New?
- Did you notice other changes you know of to have insufficient documentation?
As a reminder, until the final release of 3.13.0, the 3.13 branch is set up so that the Release Manager (@thomas) has to merge the changes. Please add him (@Yhg1s on GitHub) to any changes you think should go into 3.13.0. At this point, unless something critical comes up, it should really be documentation only. Other changes (including tests) will be pushed to 3.13.1.
New features in Python 3.13

- A new and improved interactive interpreter, based on PyPy’s, featuring multi-line editing and color support, as well as colorized exception tracebacks.
- An experimental free-threaded build mode, which disables the Global Interpreter Lock, allowing threads to run more concurrently. The build mode is available as an experimental feature in the Windows and macOS installers as well.
- A preliminary, experimental JIT, providing the ground work for significant performance improvements.
- The locals() builtin function (and its C equivalent) now has well-defined semantics when mutating the returned mapping, which allows debuggers to operate more consistently.
- The (cyclic) garbage collector is now incremental, which should mean shorter pauses for collection in programs with a lot of objects.
- A modified version of mimalloc is now included, optional but enabled by default if supported by the platform, and required for the free-threaded build mode.
- Docstrings now have their leading indentation stripped, reducing memory use and the size of .pyc files. (Most tools handling docstrings already strip leading indentation.)
- The dbm module has a new dbm.sqlite3 backend that is used by default when creating new files (a short example follows this list).
- The minimum supported macOS version was changed from 10.9 to 10.13 (High Sierra). Older macOS versions will not be supported going forward.
- WASI is now a Tier 2 supported platform. Emscripten is no longer an officially supported platform (but Pyodide continues to support Emscripten).
- iOS is now a Tier 3 supported platform, with Android on the way as well.
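A minimal taste of the new default dbm backend mentioned above; the file name is arbitrary, and on 3.13 dbm.open creates new files with the sqlite3 backend.

    import dbm

    db = dbm.open("example.db", "c")   # a new file on 3.13 uses dbm.sqlite3
    db[b"greeting"] = b"hello"
    print(db[b"greeting"])             # b'hello'
    db.close()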
Python 3.12.6

This is an expedited release for 3.12 due to security content. The schedule returns to regular programming in October.
One notable change for macOS users: as mentioned in the previous release of 3.12, this release drops support for macOS versions 10.9 through 10.12. Versions of macOS older than 10.13 haven’t been supported by Apple since 2019, and maintaining support for them has become too difficult. (All versions of Python 3.13 have already dropped support for them.)
Get it here: Python Release Python 3.12.6 | Python.org
92 commits.
Python 3.11.10

Python 3.11 joins the elite club of security-only versions with no binary installers.
Get it here: Python Release Python 3.11.10 | Python.org
28 commits.
Python 3.10.15

Get it here: Python Release Python 3.10.15 | Python.org
24 commits.
Python 3.9.20

Get it here: Python Release Python 3.9.20 | Python.org
22 commits.
Python 3.8.20

Python 3.8 is very close to End of Life (see the Release Schedule). Will this be the last release of 3.8 ever? We’ll see… but now I think I jinxed it.
Get it here: Python Release Python 3.8.20 | Python.org
22 commits.
Security content in today’s releases

- gh-123678 and gh-116741: Upgrade bundled libexpat to 2.6.3 to fix CVE-2024-28757, CVE-2024-45490, CVE-2024-45491 and CVE-2024-45492.
- gh-118486: os.mkdir() on Windows now accepts mode of 0o700 to restrict the new directory to the current user. This fixes CVE-2024-4030 affecting tempfile.mkdtemp() in scenarios where the base temporary directory is more permissive than the default.
- gh-123067: Fix quadratic complexity in parsing "-quoted cookie values with backslashes by http.cookies. Fixes CVE-2024-7592.
- gh-113171: Fixed various false positives and false negatives in IPv4Address.is_private, IPv4Address.is_global, IPv6Address.is_private, IPv6Address.is_global. Fixes CVE-2024-4032.
- gh-67693: Fix urllib.parse.urlunparse() and urllib.parse.urlunsplit() for URIs with path starting with multiple slashes and no authority. Fixes CVE-2015-2104.
- gh-121957: Fixed missing audit events around interactive use of Python, now also properly firing for python -i, as well as for python -m asyncio. The event in question is cpython.run_stdin.
- gh-122133: Authenticate the socket connection for the socket.socketpair() fallback on platforms where AF_UNIX is not available like Windows.
- gh-121285: Remove backtracking from tarfile header parsing for hdrcharset, PAX, and GNU sparse headers. That’s CVE-2024-6232.
- gh-114572: ssl.SSLContext.cert_store_stats() and ssl.SSLContext.get_ca_certs() now correctly lock access to the certificate store, when the ssl.SSLContext is shared across multiple threads.
- gh-102988: email.utils.getaddresses() and email.utils.parseaddr() now return ('', '') 2-tuples in more situations where invalid email addresses are encountered, instead of potentially inaccurate values. Add optional strict parameter to these two functions: use strict=False to get the old behavior, accepting malformed inputs. getattr(email.utils, 'supports_strict_parsing', False) can be used to check if the strict parameter is available. This improves the CVE-2023-27043 fix.
- gh-123270: Sanitize names in zipfile.Path to avoid infinite loops (gh-122905) without breaking contents using legitimate characters. That’s CVE-2024-8088.
- gh-121650: email headers with embedded newlines are now quoted on output. The generator will now refuse to serialize (write) headers that are unsafely folded or delimited; see verify_generated_headers. That’s CVE-2024-6923.
- gh-119690: Fixes data type confusion in audit events raised by _winapi.CreateFile and _winapi.CreateNamedPipe.
- gh-116773: Fix instances of the warning "<_overlapped.Overlapped object at 0xXXX> still has pending operation at deallocation, the process may crash".
- gh-112275: A deadlock involving pystate.c’s HEAD_LOCK in posixmodule.c at fork is now fixed.
Upgrading is highly recommended to all users of affected versions.
Thank you for your supportThanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.
–
Łukasz Langa @ambv
on behalf of your friendly release team,
Ned Deily @nad
Steve Dower @steve.dower
Pablo Galindo Salgado @pablogsal
Łukasz Langa @ambv
Thomas Wouters @thomas