Planet Python
Talk Python to Me: #478: When and how to start coding with kids
PyCoder’s Weekly: Issue #648 (Sept. 24, 2024)
#648 – SEPTEMBER 24, 2024
View in Browser »
Get a sneak peek at the upcoming features in Python 3.13 aimed at enhancing performance. In this tutorial, you’ll make a custom Python build with Docker to enable free threading and an experimental JIT compiler. Along the way, you’ll learn how these features affect the language’s ecosystem.
REAL PYTHON
Python code too slow? You can quickly create a Rust extension to speed it up. This post shows you how to re-implement some Python code as Rust, connect the Rust to Python, and optimize the Rust for even further performance improvements.
ITAMAR TURNER-TRAURING
Snyk is thrilled to announce DevSecCon 2024, Developing AI Trust, Oct 8-9, a FREE virtual summit designed for DevOps, developer, and security pros of all levels. Hear from industry pros John Hammond and Daniel Miessler for practical tips on a DevSecOps approach to secure development. Save your spot →
SNYK.IO sponsor
This post covers Python’s doctest which allows you to write tests within your code’s docstrings. This does two things: it gives you tests, but it also documents how your code can be used.
JUHA-MATTI SANTALA
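To give a flavor of the idea, here is a minimal sketch of a doctest; the add function is a made-up example of mine, not code from the article:

def add(a, b):
    """Return the sum of two numbers.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # runs the examples embedded in this module's docstrings

Running the module (or python -m doctest your_module.py -v) executes the examples in the docstring and reports any whose output does not match.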
A well-designed coding environment enhances your focus and productivity and makes coding sessions more enjoyable. In this Code Conversation, your instructor Philipp Ascany will guide you step-by-step through the process of finding, installing, and adjusting color themes in VS Code.
REAL PYTHON course
What are strategies for being a productive developer with ADHD? How can you help your team members with ADHD to succeed and complete projects? This week on the show, we speak with Chris Ferdinandi about his website and podcast “ADHD For the Win!”
REAL PYTHON podcast
Intel makes it easy to develop AI applications on open frameworks and libraries. Seamlessly integrate into an open ecosystem and build AI solutions with more flexibility and choice. Explore the possibilities available with Intel’s OpenVINO toolkit →
INTEL CORPORATION sponsor
Simon has been on the Python Software Foundation Board for two years now and has recently returned from a board retreat. This post talks about what he has learned along the way, including just what the PSF does, PyPI, PyCons, and more.
SIMON WILLISON
Goodhart’s law states: “When a measure becomes a target, it ceases to be a good measure.” Whether that’s test coverage, cyclomatic complexity, or code performance, all metrics are proxies and proxies can be gamed.
HILLEL WAYNE
In this post, Mark takes a deep dive into a large dataset of building footprints in Asia, using a variety of tools including DuckDB and Python's multiprocessing module.
MARK LITWINTSCHIK
“Django comes with a form library, and yet we wrote a total replacement library… Django forms were fundamentally not usable for what we wanted to do.”
KODARE.NET
This article shows how you can create a case-insensitive string class using some basic metaprogramming with the dunder method __new__.
RODRIGO GIRÃO SERRÃO
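As a rough sketch of the technique (my own illustration, not necessarily the article's implementation): subclass str, hook instance creation in __new__, and make comparisons case-insensitive.

class CaseInsensitiveStr(str):
    """A str subclass whose equality checks ignore case."""

    def __new__(cls, value):
        # str is immutable, so instances are set up in __new__ rather than __init__.
        return super().__new__(cls, value)

    def __eq__(self, other):
        if isinstance(other, str):
            return self.casefold() == str(other).casefold()
        return NotImplemented

    def __hash__(self):
        # Keep hashing consistent with the case-insensitive equality above.
        return hash(self.casefold())

print(CaseInsensitiveStr("Python") == "PYTHON")  # True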
Discover seven ways you can use Jupyter notebooks in PyCharm to explore and work with your data more quickly and effectively.
HELEN SCOTT
Talk Python interviews Charlie Marsh, the maintainer of ruff, and they talk about uv and other projects at Astral.
KENNEDY & MARSH podcast
Python 3.8 will stop getting security updates in November 2024. You really should upgrade!
ITAMAR TURNER-TRAURING
GITHUB.COM/FPGMAAS • Shared by Florian Maas
Django Content Settings: Advanced Admin Editable Settings
DJANGO-CONTENT-SETTINGS.READTHEDOCS.IO • Shared by oduvan
Events
Weekly Real Python Office Hours Q&A (Virtual)
September 25, 2024
REALPYTHON.COM
September 25 to September 27, 2024
PYDATA.ORG
September 26 to September 27, 2024
PITERPY.COM
September 26 to September 27, 2024
DURIANPY.ORG
September 27 to September 30, 2024
PYCON.JP
September 28 to September 30, 2024
PYCON.ORG
October 3 to October 5, 2024
PYCON.ORG
October 4 to October 6, 2024
PYCON.ORG
Happy Pythoning!
This was PyCoder’s Weekly Issue #648.
View in Browser »
[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]
Real Python: Advanced Python import Techniques
In Python, you use the import keyword to make code in one module available in another. Imports in Python are important for structuring your code effectively. Using imports properly will make you more productive, allowing you to reuse code while keeping your projects maintainable.
This video course provides a comprehensive overview of Python’s import statement and how it works. The import system is powerful, and this course will teach you how to harness this power. While you’ll cover many of the concepts behind Python’s import system, this video course is mostly example driven, so you’ll learn from the numerous code examples shared throughout.
In this video course, you’ll learn how to:
- Use modules, packages, and namespace packages
- Manage namespaces and avoid shadowing
- Avoid circular imports
- Import modules dynamically at runtime
- Customize Python’s import system
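To give a taste of the dynamic-import topic in the list above, here is a small sketch using the standard-library importlib module; the load_module helper is just an illustrative name:

import importlib

def load_module(dotted_name):
    """Import a module by its dotted name at runtime."""
    return importlib.import_module(dotted_name)

json_mod = load_module("json")  # any importable dotted name works here
print(json_mod.dumps({"dynamic": True}))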
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCharm: PyCharm vs. Jupyter Notebook
Jupyter notebooks are an important tool for data scientists, providing an easy option for conducting experiments and presenting results. According to our Developer Ecosystem Survey 2023, at least 35% of data professionals use Jupyter notebooks. Furthermore, over 40% of these users spend more than 20% of their working time using these resources.
There are several implementations of notebook technology available to data professionals. First, we’ll look at the well-known Jupyter Notebook platforms by Project Jupyter. For the purposes of this article, we’ll refer to the Project Jupyter implementations of notebooks as “vanilla Jupyter” in order to avoid confusion, since there are several other implementations of the tool.
While vanilla Jupyter notebooks can be sufficient for some tasks, there are other cases where it would be better to rely on another tool for working with data. In this article, we’ll outline the key differences between PyCharm Professional and vanilla Jupyter when it comes to data science applications.
What is Jupyter Notebook?
Jupyter Notebook is an open-source platform that allows users to create and share code, visualizations, and text. It’s primarily used for data analysis and scientific research. Although JupyterLab offers some plugins and tools, its capabilities and user experience are significantly more limited than PyCharm’s.
What is PyCharm Professional?
PyCharm is a comprehensive integrated development environment (IDE) that supports a wide range of technologies out of the box, offering deep integration between them. In addition to enhanced support for Jupyter notebooks, PyCharm Professional also provides superior database support, Python script editing, and GitHub integration, as well as support for AI Assistant, Hugging Face, dbt-Core, and much more.
Feature comparison: PyCharm Pro vs. Jupyter

Language support
While Jupyter notebooks claim to support over 40 programming languages, their usage is limited to the .ipynb format, which makes working with traditional file extensions like .py, .sql, and others less convenient. On the other hand, while PyCharm offers support for fewer languages – Python, JavaScript and TypeScript, SQL, and R (via plugin), along with several markup languages like HTML and CSS – the support is much more comprehensive.
Often, Jupyter notebooks and Python scripts serve different purposes. Notebooks are typically used for prototyping and experimentation, while Python scripts are more suitable for production. In PyCharm Professional, you can work with both of these formats and it’s easy to convert .ipynb files into .py files. See the video below for more information.
The smartest code completion
If you’ve ever written code in PyCharm Professional, you’ll have definitely noticed its code completion capabilities. In fact, the IDE offers several different types of code completion. In addition to the standard JetBrains completion, which provides suggestions based on an extensive understanding of your project and libraries, there’s also runtime completion, which can suggest names of data objects like columns in a pandas or Polars DataFrame, and ML-powered, full-line completion that suggests entire lines of code based on the current file. Additionally, you can enhance these capabilities with LLM-powered tools such as JetBrains AI Assistant, GitHub Copilot, Amazon CodeWhisperer, and others.
In contrast, code completion in Jupyter notebooks is limited. Vanilla Jupyter notebooks lack awareness of your project’s context, there’s no local ML-based code completion, and there’s no runtime code completion for database objects.
Code quality features and debugger
PyCharm offers several tools to enhance your code quality, including smart refactorings, quick-fixes, and AI Assistant – none of which are available in vanilla Jupyter notebooks.
If you’ve made a mistake in your code, PyCharm Professional will suggest several actions to fix it. These become visible when you click on the lightbulb icon.
PyCharm Professional also inspects code on the file and project level. To see all of the issues you have in your current file, you can click on the image in the top right-hand corner.
While vanilla Jupyter notebooks can highlight any issues after a code cell has been executed (as seen below), it doesn’t have features that allow you to analyze your entire file or project.
PyCharm provides a comprehensive and advanced debugging environment for both Python scripts and Jupyter notebooks. This debugger allows you to step into your code, running through the execution steps line by line, and pinpointing exactly where an error was made. If you’ve never used the debugger in PyCharm, you can learn how to debug a Jupyter notebook in PyCharm with the help of this blog by Dr. Jodie Burchell. In contrast, vanilla Jupyter offers basic debugging tools such as cell-by-cell execution and interactive %debug commands.
Refactorings
Web-based Jupyter notebooks lack refactoring capabilities. If you need to rename a variable, introduce a constant, or perform any other operation, you have to do it manually, cell by cell. In PyCharm Professional, you can access the Refactoring menu via Control + T and use it to make changes in your file faster. You can find more information about refactorings in PyCharm in the video.
Other code-related features
If you forget how to work with a library in vanilla Jupyter notebooks, you need to open another tab in a browser to look up the documentation, taking you out of your development environment and programming flow.
In PyCharm Professional, you can get information about a function or library you’re currently using right in the IDE by hovering over the code.
If you have a subscription to AI Assistant you can also use it for troubleshooting, such as asking it to explain code and runtime errors, as well as finding potential problems with your code before you run it.
Working with tables
DataFrames are one of the most important data formats for the majority of data professionals. In vanilla Jupyter notebooks, if you print a pandas or Polars DataFrame, you’ll see a static output with a limited number of columns and rows shown. Since the DataFrame outputs in Jupyter notebooks are static, this makes it difficult to explore your data without writing additional code.
In PyCharm Professional, you can use interactive tables that allow you to easily view, navigate, sort, and filter data. You can create charts and access essential data insights, including descriptive statistics and missing values – all without writing a single line of code.
What’s more, the interactive tables are designed to give you a lot of information about your data, including details of:
- Data type symbols in the column headers
- The size of your DataFrame (in our case it is 2390 rows and 82 columns).
- Descriptive statistics, missing values, and more.
If you want to get more information about how interactive tables work in PyCharm, check out the documentation.
Versioning and GitHub integration
In PyCharm Professional, you have several version control options, including Git.
With PyCharm’s GitHub integration, you can see and revert your changes with the help of the IDE’s built-in visual diff tool. This enables you to compare changes between different commits of your notebooks. You can find an in-depth overview of the functionality in this tutorial.
Another incredibly useful feature is the local history, which automatically saves a version history of your changes. This means that if you haven’t committed something, and you need to roll back to an earlier version, you can do so with the click of a button.
In vanilla Jupyter notebooks, you have to rely on the CLI git tool. In addition, Git is the only way of versioning your work, meaning there is no way to revert changes if you haven’t committed them.
Navigation
When you work on a project in Jupyter Notebook, you always need to navigate either within a given file or across the whole project, and the built-in options for doing so are limited. The navigation functionality in PyCharm, on the other hand, is significantly richer.
Beyond the Structure view that is also present in JupyterLab, you can also find some additional features to navigate your project in our IDEs. For example, double pressing Shift will help you find anything in your project or settings.
In addition to that, you can find the specific source of code in your project using PyCharm Professional’s Find Usages, Go to Implementation, Go to Declaration, and other useful features.
Check out this blog post for more information about navigation in Jupyter notebooks in PyCharm.
Visualizations
In addition to libraries that are available in vanilla Jupyter notebooks, PyCharm also provides further visualization possibilities with the help of interactive tables. This means you don’t have to remember or type boilerplate code to create graphs.
How to choose between PyCharm and vanilla Jupyter notebooks
Vanilla Jupyter notebooks are a lightweight tool. If you need to do some fast experiments, it makes sense to use this implementation.
On the other hand, PyCharm Professional is a feature-rich IDE that simplifies the process of working with Jupyter notebooks. If you need to work with complex projects with a medium or large codebase, or you want to add a significant boost to productivity, PyCharm Professional is likely to be more suitable, allowing you to complete your data project more smoothly and quickly.
Get started with PyCharm Professional
PyCharm Professional is a data science IDE that supports Python, rich databases, Jupyter, Git, Conda, and other technologies right out of the box. Work on projects located in local or remote development environments. Whether you’re developing data pipelines, prototyping machine learning models, or analyzing data, PyCharm equips you with all the tools you need.
The best way to understand the difference between tools is to try them for yourself. We strongly recommend downloading PyCharm and testing it in your real-life projects.
Download PyCharm Professional and get an extended 60-day trial by using the promo code “PyCharmNotebooks”. The free subscription is available for individual users only.
Activate your 60-day trial
Below are other blog posts that you might find useful for boosting your productivity with PyCharm Professional.
Python Software Foundation: Service Awards given by the PSF: what are they and how they differ
Do you know someone in the Python community who inspires you and whose contributions to the Python community are outstanding? Other than saying thank you (definitely do this too!), you can also nominate them to receive recognition given by the PSF. In this blog post, we will explain what each of the awards are and how they differ. We hope this will encourage you to nominate your favorite inspirational community member to receive an award!
PSF Community Service Awards
The most straightforward way to acknowledge someone’s volunteer effort serving the Python community is to nominate them for the PSF Community Service Awards (CSA). The awardee will receive:
A cash award of $599 USD
Free registration at all future PyCon US events
Recipients need not be PSF members and can receive multiple awards if their outstanding contributions continue. In addition to individuals, small organizational groups (e.g. the PyCon JP Association in 2021) can also receive the CSA.
The PSF Board reviews nominations quarterly. CSA recipients will be recognized at PyCon US every year.
CSA Award Winners
The PSF Community Service Awards are all about the wonderful and dedicated folks in our community, and we had to take this opportunity to show some of their faces! You can find all of the inspiring PSF CSA recipients on our CSA webpage.
PyCon JP Association CSA Recipients (left to right): Takayuki Shimizukawa, Shunsuke Yoshida, Jonas Obrist, Manabu Terada, Takanori Suzuki
PSF Distinguished Service Awards
As the highest award that the PSF bestows, the Distinguished Service Award is a level up from the CSA described above. Recipients of a DSA need to have made significant, sustained, and exemplary contributions with an exceptionally positive impact on the Python community. Recognition takes the form of an award certificate plus a cash award of $5000 USD. As of the writing of this blog post, there are only 7 DSA awardees in history.
Naomi Ceder is the latest Distinguished Service Award recipient; she received the award in 2022.
PSF Fellow Membership
Although it is also a form of recognition, the PSF Fellow Membership is different from the awards above and there’s no comparison of the level of recognition between fellowship and any of the awards above. Fellows are members who have been nominated for their extraordinary efforts and impact upon Python, the community, and the broader Python ecosystem. Fellows are nominated from the broader community and if they meet Fellow criteria, they are elevated by a vote of the Fellows Working Group. PSF Fellows are lifetime voting members of the PSF. That means Fellows are eligible to vote in PSF elections as well as follow the membership rules and Bylaws of the PSF.
Nominate someone!
We hope this makes the types of recognition given by the PSF clear, as well as gives you confidence in nominating folks in the Python community that you think should be recognized for a CSA, DSA, or as a PSF Fellow. We also hope that this will inspire you to become a Python community member that receives a service award!
Real Python: Python Virtual Environments: A Primer
In this tutorial, you’ll learn how to work with Python’s venv module to create and manage separate virtual environments for your Python projects. Each environment can use different versions of package dependencies and different versions of Python.
Once you’ve learned to work with virtual environments, you’ll be able to help other programmers reproduce your development setup and make sure that your projects never create dependency conflicts.
By the end of this tutorial, you’ll know how to:
- Create and activate a Python virtual environment
- Explain why you want to isolate external dependencies
- Visualize what Python does when you create a virtual environment
- Customize your virtual environments using optional arguments to venv
- Deactivate and remove virtual environments
- Choose additional tools for managing your Python versions and virtual environments
Working with virtual environments is a common and effective technique used in Python development. Gaining a better understanding of how they work, why you need them, and what you can do with them will help you master your Python programming workflow.
Throughout the tutorial, you can select code examples for either Windows, Ubuntu Linux, or macOS. Pick your platform at the top right of the relevant code blocks to get the commands that you need, and feel free to switch between them if you want to learn how to work with virtual environments on other operating systems.
Free Bonus: Click here to download a free cheat sheet that summarizes the main venv commands you’ll learn about in this tutorial.
Take the Quiz: Test your knowledge with our interactive “Python Virtual Environments: A Primer” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
Python Virtual Environments: A Primer
In this quiz, you'll test your understanding of Python virtual environments. With this knowledge, you'll be able to avoid dependency conflicts and help other developers reproduce your development environment.
How Can You Work With a Python Virtual Environment?
If you just need to get a virtual environment up and running to continue working on your favorite project, then this section is for you.
This tutorial uses Python’s venv module to create virtual environments. This module is part of Python’s standard library, and it’s been the officially recommended way to create virtual environments since Python 3.5.
Note: There are other great third-party tools for creating virtual environments, such as conda and virtualenv, that you’ll learn more about later in this tutorial. Either of these tools can help you set up a virtual environment and also go beyond just that.
For basic usage, venv is an excellent choice because it already comes packaged with your Python installation. With that in mind, you’re ready to create your first virtual environment.
Create It
Any time you’re working on a Python project that uses external dependencies you’re installing with pip, it’s best to first create a virtual environment:
Windows PowerShell

PS> py -m venv venv\

This command allows the Python launcher for Windows to select an appropriate version of Python to execute. It comes bundled with the official installation and is the most convenient way to execute Python on Windows.
You can bypass the launcher and run the Python executable directly using the python command, but if you haven’t configured the PATH and PATHEXT variables, then you might need to provide the full path:
Windows PowerShell

PS> C:\Users\Name\AppData\Local\Programs\Python\Python312\python -m venv venv\

The system path shown above assumes that you installed Python 3.12 using the Windows installer provided by the Python downloads page. The path to the Python executable on your system might be different. Working with PowerShell, you can find the path using the where.exe python command.
Note: You don’t need to include the backslash (\) at the end of the name of your virtual environment, but it’s a helpful reminder that you’re creating a folder.
Shell

$ python3 -m venv venv/

Many Linux operating systems ship with a version of Python 3. If python3 doesn’t work, then you’ll have to first install Python and you may need to use the specific name of the executable version that you installed, for example, python3.12 for Python 3.12.x. If that’s the case for you, remember to replace mentions of python3 in the code blocks with your specific version number.
Note: You don’t need to include the slash (/) at the end of the name of your virtual environment, but it’s a helpful reminder that you’re creating a folder.
Shell

$ python3 -m venv venv/

Older versions of macOS come with a system installation of Python 2.7.x that you should never use to run your scripts. If you’re working on macOS < 12.3 and invoke the Python interpreter with python instead of python3, then you might accidentally start up the outdated system Python interpreter.
If running python3 doesn’t work, then you’ll have to first install a modern version of Python.
Note: You don’t need to include the slash (/) at the end of the name of your virtual environment, but it’s a helpful reminder that you’re creating a folder.
This command creates a new virtual environment named venv using Python’s built-in venv module. The first venv that you use in the command specifies the module, and the second venv/ sets the name for your virtual environment. You could name it differently, but calling it venv is a good practice for consistency.
Activate It
Great! Your project now has its own virtual environment. Generally, before you start to use it, you’ll activate the environment by executing a script that comes with the installation:
Windows PowerShell

PS> venv\Scripts\activate
(venv) PS>

If your attempt to run this command produces an error, then you’ll first have to loosen the execution policy.
Shell

$ source venv/bin/activate
(venv) $

Before you run this command, make sure that you’re in the folder containing the virtual environment you just created. If you’ve named your virtual environment something other than venv, then you’ll have to use that name in the path instead of venv when you source the activation script.
Note: You can also work with your virtual environment without activating it. To do this, you provide the full path to its Python interpreter when executing a command. However, you’ll likely want to activate the virtual environment after you create it to save yourself the effort of having to repeatedly type long pathnames.
Once you can see the name of your virtual environment in your command prompt—in this case (venv)—then you’ll know that your virtual environment is active. Now you’re all set and ready to install your external packages!
Read the full article at https://realpython.com/python-virtual-environments-a-primer/ »
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Django Weblog: PyCharm & Django Campaign 2024 - encore
The Django Software Foundation's biggest fundraising event of the year is here!
Get 30% off PyCharm, Support Django
Each year, our friends at JetBrains, the creators of PyCharm, run an incredible deal. You get a 30% discounted year of PyCharm, AND the DSF gets 100% of the money. Yes, 100%! It’s making a donation and directly getting a great product in return! This is available for new users and for those who used PyCharm in the past, stopped, and want to try again.
The fundraiser
The fundraiser started during DjangoCon Europe in June, and is now back on from September 22nd to October 6th. Buy PyCharm and support Django!
In the past, JetBrains has provided approximately one quarter of the Django Software Foundation’s budget through the PyCharm fundraiser!
Donations like this fundraiser allow the DSF to function. Our two wonderful Fellows, Natalia Bidart and Sarah Boyce keep Django running smoothly, picking up pieces that would otherwise not happen.
The other side of the DSF is our support for Django groups across the globe. We supported every DjangoCon, particularly by donating funding towards opportunity grants so that more people can attend these conferences. The DSF also supports smaller events around the world, including DjangoGirls events.
PyCharm
Finally, I want to tell you about PyCharm itself.
PyCharm is an integrated development environment (IDE) that helps professional Python web developers be more productive, be more confident, and write better code. It supports the full Python web workflow out of the box, including popular Python web frameworks, such as Django, frontend technologies, and databases.
Here are the main benefits of using PyCharm in your Django development:
- Django (including templates), Flask, FastAPI
- Database management (Postgres, Redis)
- JS, React, Node.js, TailwindCSS
- Built-in HTTP Client and endpoint tools
Get Django work done with PyCharm, a powerful IDE tailored for Django web development!
Consider this the easiest charitable donation you will ever make, when you get such a great product in return!
Get 30% off PyCharm, Support Django
Other ways to donate
If you would like to donate in another way, especially if you are already a PyCharm customer, here are other ways to donate to the DSF:
- On our website via credit card
- Via GitHub Sponsors
- For those able to make a larger donation, particularly corporate sponsors ($2000+), more information is here: Corporate membership
Python Bytes: #402 How to monetize your blog
Quansight Labs Blog: Multi-dimensional Sparse Arrays in SciPy
Hynek Schlawack: Python Project-Local Virtualenv Management Redux
One of my first TIL entries was about how you can imitate Node’s node_modules semantics in Python on UNIX-like operating systems. A lot has happened since then (for the better!) and it’s time for an update. direnv still rocks, though.
Armin Ronacher: FSL: A Better Business/Open Source Balance Than AGPL
subtext: in my opinion, and for companies (and their users) that want a good balance between protecting their core business and Open Source ideals.
Following up to my thoughts on the case for funding Open Source, there is a second topic I want to discuss in more detail: Open Source and commercialization. As our founder likes to say: Open Source is not a business model. And indeed it really isn't. However, this does not mean that Open Source and Open Source licenses aren't a critical consideration for a technology company and a fascinating interconnection between the business model and license texts.
As some of you might know I'm a strong proponent of the concept now branded as “Fair Source” which we support at Sentry. Fair Source is defined by a family of springing licenses that give you the right to read and modify code, while also providing an exclusivity period for the original creator to protect their core business. After a designated time frame, the code transitions into Open Source via a process called DOSP: Delayed Open Source Publication. This is not an entirely new idea, and I have been writing about it a few times before [1] [2].
A recurring conversation I have in this context is the AGPL (Affero General Public License) as an alternative vehicle for balancing business goals and Open Source ideals. This topic has also resurfaced recently because of Elasticsearch’s Open Source, Again post, where they announced that they will license Elasticsearch under the AGPL.
In my view, while AGPL is a true Open Source license, it is an inferior choice compared to the FSL (the Functional Source License, a Fair Source license) for many projects. Let me explain my reasoning.
The Single Vendor Model
When you take a project like Sentry, which started as an Open Source project and later turned into a VC funded company, its model revolves around a commercial entity being in charge. That model is often referred to as “single vendor.” This is also the case with companies like Clickhouse Inc. or Elastic and their respective projects.
Sentry today is no longer Open Source, it’s Fair Source (FSL licensed). Elastic, on the other hand, is indeed unquestionably Open Source (AGPL among others). What both projects have in common is that they value brand (including trademarks), that they have strong opinions on how the project should be run, and that they use a CLA to give themselves the right to re-license it under other terms.
In a "single vendor" setup, the company behind the project holds significant power (for ~150 years give or take).
The Illusion of Equality
When you look at the AGPL as a license it’s easy to imagine that everybody is equal. Every contributor to a project agrees with the underlying license assumptions of the AGPL and acts accordingly. However, in practice, things are more complicated — especially when it comes to commercial usage. Many legal departments are wary of the AGPL and the broader GPL family of licenses. Some challenges are also inherent to the licenses such as not being able to publish *GPL code to the app store.
You can see this also with Elasticsearch. The code is not just AGPL licensed, you can also retrieve it under the ELv2 and SSPL licensing terms. Something that Elastic can do due to the CLAs in place.
Compare this to Linux, which is licensed under GPLv2 with a syscall exception. This very specific license was chosen by Linus Torvalds to ensure the project's continued success while keeping it truly open. In Linux' case, no single entity has more rights than anyone else. There is not even a realistic option to relicense to a newer version of the GPL.
The FSL explicitly recognizes the reality that the single vendor holds significant power but balances it by ensuring that this power diminishes over time. This idea can also be found in copyright law, where a creator's work eventually enters the public domain. A key difference with software though is that it continuously evolves, making it hard to pinpoint when it might eventually become public domain as thousands of people contribute to it.
The FSL is much more aggressive in that aspect. If we run Sentry into the ground and the business fails, within two years anyone can pick up the pieces and revive it like a Phoenix from the ashes. This isn’t just hypothetical. Bryan Cantrill recently mentioned Oxide’s desire to fork CockroachDB once its BUSL change date kicks in. While that day hasn’t come yet, it’s a real possibility.
Dying Companies
Let’s face it: companies fail. I have no intentions for Sentry to be one of them, but you never know. Companies also don’t just die once; they can do so repeatedly. Xapian is an example I like to quote here. It started out as a GPL v2+ licensed search project called Muscat which was built at Cambridge. After several commercial acquisitions and transitions, the project eventually became closed source (which was possible because the creators held the copyright). Some of the original creators together with the community forked the last GPLv2 version into a project that eventually became known as Xapian.
What’s the catch? The catch is that the only people who could license it more liberally than GPLv2 are long gone from the project. Xapian refers to its current license as “a historical accident”. The license choice causes some challenges, specifically around how Xapian is embedded. There are three remaining entities that would need to agree to the relicensing. From my understanding, none of those entities commercially use Xapian’s original code today, but they also have no interest in actually supporting a potential relicensing.
Unlike trademark law, which has a concept of abandonment, the copyright situation is stricter. It would take two lifetimes for Xapian to enter the public domain, and at that point it will probably be mostly for archival purposes.
Equal Grounds Now or Later
If Xapian’s original code had been FSL licensed, it would have been Apache 2.0 (or MIT with the alternative model) many times over. You don’t need to hope that the original license holder still cares; by the time you get hold of the source code, you already have an irrevocable promise that it will eventually turn into Apache 2.0 (or MIT with the alternative license choice), which is about as no-strings-attached as it can get.
So in some ways a comparison is “AGPL now and forever” vs “FSL now, Apache 2.0/MIT in two years”.
That’s not to say that the AGPL (or SSPL) doesn’t have its merits. Xapian, as much as it might suffer from its accidental license choice, is also a successful Open Source project that has helped a lot of developers out there. Maybe the license did in fact work out well for them, and because everybody is in the same boat it has also created a community of equals.
I do believe however it's important to recognize that “single-vendor AGPL with a CLA” is absolutely not the same as “community driven AGPL project without the CLA”.
The title claims that FSL balances Open Source better than AGPL, and it's fair to question how a license that isn't Open Source can achieve that. The key lies in understanding that Fair Source is built on the concept of delayed Open Source. Yes, there's a waiting period, but it’s a relatively short one: just two years. Count to two and the code transitions to full, unshackled openness. And that transition to Open Source is a promise that can't be taken from you.
[1] Originally about the BUSL license which introduced the idea (Open Source, SaaS and Monetization)
[2] Later about our own DOSP based license, the FSL (FSL: A License For the Bazaar, Not the Cathedral)
Seth Michael Larson: PyCon Taiwan 2024 Keynote
Published 2024-09-22 by Seth Larson
Here are my slides and overview of my PyCon Taiwan 2024 Keynote titled "Bytes, Pipes, and People". The video will be published to YouTube, subscribe to the PyCon Taiwan YouTube channel to be notified when available.
Software security has historically been treated as extra or "nice-to-have", not a core feature that users expect. This means we have accumulated plenty of tech debt. Now there are growing incentives and requirements for producing secure software to meet user expectations.
Luckily for us, many of the tools, data, and systems already exist to help us build a culture of security for Python. These tools help relay messages between software creators and users so we can collaborate on this shared goal.
By actively participating you are starting the positive feedback loop of software security, making users safer faster!
Below is a list of actions you can take to build a culture of security for Python:
Maintainers
- Adopt Trusted Publishers if you use GitHub Actions, GitLab CI/CD, Google Cloud Build, or ActiveState to publish Python packages.
- Use lock files for the build and publish workflow, such as pip-tools, Poetry, or PDM.
- Adopt a lightweight security policy. Do not stress about CVEs: fix, release, publish a CVE.
- Contribute new insecure code detections to Bandit.
- Update dependencies that have vulnerabilities. Prioritize projects that are connected to the internet.
- Update software on a semi-regular basis to avoid out-of-date and end-of-life software. Staying up-to-date makes it easier to upgrade to fixed versions in the future.
- Run tests with PYTHONWARNINGS configured so that DeprecationWarning and PendingDeprecationWarning are treated as errors, so deprecated features don’t go unnoticed (see the sketch after this list).
- Create a secure open source usage policy, using verified data to evaluate open source projects. Do not install new projects without checking your policy first.
- If you need a Software Bill-of-Materials document there are tools available to generate one. Those tools will improve over time from new Python package SBOM standards.
- Add a vulnerability scanner like pip-audit, Grype, or Trivy.
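One possible way to act on the deprecation-warning item above is a few lines in your test bootstrap (for example, a conftest.py); this is a minimal sketch using the standard warnings module, and setting the PYTHONWARNINGS environment variable to "error::DeprecationWarning,error::PendingDeprecationWarning" before running your test command achieves the same effect:

import warnings

# Turn deprecation warnings into errors so the test run fails loudly
# instead of silently depending on features scheduled for removal.
warnings.filterwarnings("error", category=DeprecationWarning)
warnings.filterwarnings("error", category=PendingDeprecationWarning)

Note that some test runners apply their own warning filters, so you may need to configure them (for example, pytest's filterwarnings setting) rather than calling warnings.filterwarnings directly.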
- What is Software Bill-of-Materials ("SBOM")?
- Trusted Publishers
- PyPI blog
- Bandit
- Warnings in Python
- pip-audit
- Scientific Python SPEC-8: "Securing the Release Process"
- Supply chain security threats (SLSA)
- Grype
- Trivy
- Ecosystem.ms
- Libraries.io
- Deps.dev
- Trusty
- pip-tools
- Poetry
- PDM
- uv
- Sigstore Python
- Yanking Python packages
- HTTP Archive
- Sonatype 2023 Annual State of Software Supply Chain Report
- Kushal Das for Python Language Summit photos
- StackOverflow
Thanks for reading! ♡ Did you find this article helpful and want more content like it?
Get notified of new posts by subscribing to the RSS feed or the email newsletter.
This work is licensed under CC BY-SA 4.0
Python Morsels: Prompting a user for input
We can prompt our users for input with Python's built-in input function.
Table of contents
- Prompting for user input
- Customizing the prompt text
- Prompt users with Python's built-in input function
Python has a built-in input function that we can use to prompt a user of our program to enter some text:
>>> color = input("Favorite color:")
Favorite color:▯

Our Python REPL is now hanging and waiting for us (the user) to input text. Let's type purple:

>>> color = input("Favorite color:")
Favorite color:purple▯

After the user has typed the text they'd like to enter, they can hit the Enter key to lock in the value.

>>> color = input("Favorite color:")
Favorite color:purple
>>>

Now the color variable contains the string purple:

>>> color
'purple'

Customizing the prompt text
Note that the input function …
Read the full article: https://www.pythonmorsels.com/prompting-for-input/
Erik Marsja: Using Pandas to Read JSON from URL
When working with data in Python, Pandas is an excellent tool for reading JSON from a URL, letting you load JSON data from a web source directly into a Pandas dataframe. This tutorial will teach you the steps to accomplish this task, building upon our previous discussions on reading JSON with Python more generally.
How to use Pandas to Read JSON from URL
First, let us look at a simple example of using Pandas to read JSON from a URL.
import pandas as pd

# URL containing JSON data
url = "http://api.open-notify.org/astros.json"

# Read JSON data from URL into a DataFrame
df = pd.read_json(url)

# Display the dataframe
print(df)

In the code chunk above, we start by importing the Pandas library. The URL variable contains the web address where the JSON data is hosted. The pd.read_json(url) function is then used to read the JSON data from the URL and load it into a Pandas DataFrame, which is a two-dimensional labeled data structure with columns of potentially different types. Finally, print(df) displays the DataFrame, allowing us to see the imported data in tabular format.
Now that we have seen a basic example, let us learn more about the parameters of the pd.read_json() method to understand how we can customize the reading process.
The pd.read_json() method has several parameters that allow you to fine-tune how the JSON data is read and converted into a dataframe. Here is an overview of the most important parameters:
- path_or_buf: The string containing the URL or the path to the JSON file. This is the source of the JSON data that will be read.
- orient: Defines the expected JSON string format. Default is ‘columns’. This parameter specifies the orientation of the JSON data. Other options include ‘split’, ‘records’, ‘index’, and ‘values’.
- typ: Specifies the type of object to be returned. Default is ‘frame’. This parameter can be set to ‘series’ if you want to return a Series instead of a DataFrame.
- dtype: Determines whether to infer types of objects. Default is ‘None’. This parameter can be used to specify the data type for each column.
- convert_axes: Whether to convert the axes to another type. Default is ‘True’. This parameter allows you to convert the axes to a specified data type.
- convert_dates: List of columns to convert to dates. Default is ‘True’. This parameter can be used to specify which columns should be parsed as dates.
- keep_default_dates: Whether to include default date parsers. Default is ‘True’. This parameter determines whether to use the default date parsers provided by Pandas.
- precise_float: Whether to use a high precision floating point converter. Default is ‘False’. This parameter can be set to ‘True’ if you need high precision for float values.
- date_unit: Unit for encoding datetime. Default is ‘None’. This parameter can be used to specify the time unit for encoding datetime objects.
- encoding: Specifies the encoding to be used. Default is ‘utf-8’. This parameter determines the encoding for reading the JSON data.
- lines: Whether to read the JSON file as a JSON object per line. Default is ‘False’. This parameter can be set to ‘True’ if the JSON data is in a line-delimited format.
Working with these parameters allows you to better control how JSON data is read and processed, enabling you to tailor the DataFrame to your needs.
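For example, here is a small, hedged sketch combining a couple of these parameters for line-delimited JSON; the URL and the created_at column are placeholders of mine, not a real endpoint from this tutorial:

import pandas as pd

# Hypothetical endpoint that returns one JSON object per line (JSON Lines).
url = "https://example.com/data.jsonl"

df = pd.read_json(
    url,
    lines=True,                    # read one JSON object per line
    convert_dates=["created_at"],  # parse this column as datetime
)
print(df.dtypes)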
Summary
To summarize, we have learned how to use Pandas to read JSON data from a URL. We explored a practical example and detailed the parameters of the pd.read_json() method, enhancing our ability to customize the data reading process. Handling nested JSON data can be more challenging, but that will be covered in a future post.
I would appreciate it if you could share this post and leave your comments below. Your feedback is invaluable!
Here are some other reading data-related tutorials:
- How to Convert JSON to Excel in Python with Pandas
- How to use Pandas read_html to Scrape Data from HTML Tables
The post Using Pandas to Read JSON from URL appeared first on Erik Marsja.
Real Python: The Real Python Podcast – Episode #221: Thriving as a Developer With ADHD
What are strategies for being a productive developer with ADHD? How can you help your team members with ADHD to succeed and complete projects? This week on the show, we speak with Chris Ferdinandi about his website and podcast "ADHD For the Win!"
[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
PyCharm: What’s New in PyCharm 2024.2.2!
PyCharm 2024.2.2 is here with many key updates, including Python support improvements, new Django features, and enhancements to the Data View tool window!
Visit our What’s New page for more details on all these features and to explore many others. You can download the latest version from our download page or update your current version through our free Toolbox App.
Download PyCharm 2024.2.2

PyCharm 2024.2.2 highlights

Django enhancements PRO

New code completion suggestions
When working with models, PyCharm now offers field completion suggestions in a variety of cases, such as Model.save(update_fields=[…]), Model.refresh_from_db(fields=[…]), Model.clean_fields(exclude=[…]), and so on.
Quick-fix to create a method for an unresolved ViewSet
If a ViewSet has an unresolved reference, PyCharm suggests a quick-fix to introduce the missing method. Use Alt + Enter to call it.
Data View PRO
You can now look at n-dimensional NumPy arrays in the Data View tool window. Define the array you would like to inspect, along with a specific dimension or slice, in a special field at the bottom of the tool window, and PyCharm will display a table with the results.
Python support improvements

Support for default types for type parameters (PEP 696)
Improve typing with PyCharm’s support for the Python 3.13 ability to define the default types for type parameters. The IDE now incorporates default types for type parameters both for old-style and new-style generic classes, functions, and type aliases, and it takes them into account in type inference.
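As a quick illustration of what PEP 696 defaults look like with the new-style generic syntax (a minimal sketch of my own, requiring Python 3.13; the names are made up):

# The type parameter T falls back to int when no type argument is given.
class Queue[T = int]:
    def __init__(self) -> None:
        self.items: list[T] = []

    def push(self, item: T) -> None:
        self.items.append(item)

type Pair[T = str] = tuple[T, T]

q: Queue = Queue()    # type checkers treat bare Queue as Queue[int] via the default
p: Pair = ("a", "b")  # treated as tuple[str, str]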
Pattern matching: Foldable match statements
To improve the readability of code with large pattern-matching statements, you can now use folding for entire match statements or for separate cases inside them.
Download PyCharm 2024.2.2
Visit our What’s New page to learn about other useful features included in this release, or read the release notes for the full breakdown, including more details on the features mentioned here.
If you encounter any problems, please report them in our issue tracker so we can address them promptly.
Connect with us on X (formerly Twitter) to share your thoughts on PyCharm 2024.2.2!
Talk Python to Me: #477: Awesome Text Tricks with NLP and spaCy
Wingware: Wing Python IDE Version 10.0.6 - September 20, 2024
Wing 10.0.6 adds support for Python 3.13 and fixes some issues with AI development, code refactoring, and unit testing with pytest.
See the change log for details.
Download Wing 10 Now: Wing Pro | Wing Personal | Wing 101 | Compare Products
What's New in Wing 10
AI Assisted Development
Wing Pro 10 takes advantage of recent advances in the capabilities of generative AI to provide powerful AI assisted development, including AI code suggestion, AI driven code refactoring, description-driven development, and AI chat. You can ask Wing to use AI to (1) implement missing code at the current input position, (2) refactor, enhance, or extend existing code by describing the changes that you want to make, (3) write new code from a description of its functionality and design, or (4) chat in order to work through understanding and making changes to code.
Examples of requests you can make include:
"Add a docstring to this method" "Create unit tests for class SearchEngine" "Add a phone number field to the Person class" "Clean up this code" "Convert this into a Python generator" "Create an RPC server that exposes all the public methods in class BuildingManager" "Change this method to wait asynchronously for data and return the result with a callback" "Rewrite this threaded code to instead run asynchronously"Yes, really!
Your role changes to one of directing an intelligent assistant capable of completing a wide range of programming tasks in relatively short periods of time. Instead of typing out code by hand every step of the way, you are essentially directing someone else to work through the details of manageable steps in the software development process.
Support for Python 3.12, 3.13, and ARM64 Linux
Wing 10 adds support for Python 3.12 and 3.13, including (1) faster debugging with PEP 669 low impact monitoring API, (2) PEP 695 parameterized classes, functions and methods, (3) PEP 695 type statements, and (4) PEP 701 style f-strings.
Wing 10 also adds support for running Wing on ARM64 Linux systems.
Poetry Package Management
Wing Pro 10 adds support for Poetry package management in the New Project dialog and the Packages tool in the Tools menu. Poetry is an easy-to-use cross-platform dependency and package manager for Python, similar to pipenv.
Ruff Code Warnings & Reformatting
Wing Pro 10 adds support for Ruff as an external code checker in the Code Warnings tool, accessed from the Tools menu. Ruff can also be used as a code reformatter in the Source > Reformatting menu group. Ruff is an incredibly fast Python code checker that can replace or supplement flake8, pylint, pep8, and mypy.
Try Wing 10 Now!
Wing 10 is a ground-breaking new release in Wingware's Python IDE product line. Find out how Wing 10 can turbocharge your Python development by trying it today.
Downloads: Wing Pro | Wing Personal | Wing 101 | Compare Products
See Upgrading for details on upgrading from Wing 9 and earlier, and Migrating from Older Versions for a list of compatibility notes.