Planet Python

Planet Python - http://planetpython.org/

Matt Layman: WhiteNoise For Static Files - Building SaaS

Fri, 2023-12-08 19:00
This video is all about adding the popular WhiteNoise package into my Django app to serve static files (e.g., CSS, JavaScript, and images) directly from the app. I walk through the process from start to finish and deploy it live to show how things work.
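
For context, the standard WhiteNoise wiring in a Django settings module looks roughly like the sketch below (based on the WhiteNoise documentation, not necessarily the exact configuration used in the video; BASE_DIR is assumed to be the usual path defined by Django's startproject template):

# settings.py (sketch): serve static assets from the Django app via WhiteNoise
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # placed immediately after SecurityMiddleware
    # ... the rest of the middleware stack ...
]

STATIC_URL = "static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # collectstatic target that WhiteNoise serves from
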
Categories: FLOSS Project Planets

Django Weblog: 2023 Malcolm Tredinnick Memorial Prize awarded to Djangonaut Space

Fri, 2023-12-08 11:56

The Django Software Foundation Board is pleased to announce that the 2023 Malcolm Tredinnick Memorial Prize has been awarded to Djangonaut Space.

Djangonaut Space, run by organizers Dawn Wages, Rachell Calhoun, Sarah Abderemane, Sarah Boyce, and Tim Schilling, is a mentoring initiative dedicated to expanding contributions and diversifying contributors within the Django community. Drawing on their extensive experience as mentors and contributors, they've cultivated an inclusive universe for newcomers, emphasizing group learning, sustainability, leadership development and generous use of space puns. 🌌

Thanks to the fantastic support from a team of volunteer mentors, the program had a stellar pilot session, propelling 🎉 nine 🎉 pull requests (PRs) to Django and launching 🎊 five 🎊 new contributors into the Django community. 🥳 Given the community's enthusiastic interest and demand, the program is well-positioned to evolve and expand at warp speed, welcoming even more Djangonauts on future missions. 🚀

Each year we receive many nominations, and it is always hard to pick the winner. This year, as always, we received many nominations for the Malcolm Tredinnick Memorial Prize, with some people being nominated multiple times. Some have been nominated in multiple years. If your nominee didn't make it this year, you can always nominate them again next year.

Malcolm would be very proud of the legacy he has fostered in our community!

Congratulations Djangonaut Space on the well-deserved honor!

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #183: Exploring Code Reviews in Python and Automating the Process

Fri, 2023-12-08 07:00

What goes into a code review in Python? Is there a difference in how a large organization practices code review compared to a smaller one? What do you do if you're a solo developer? This week on the show, Brendan Maginnis and Nick Thapen from Sourcery return to talk about code review and automated code assistance.


Categories: FLOSS Project Planets

Glyph Lefkowitz: Annotated At Runtime

Thu, 2023-12-07 20:56

PEP 0593 added the ability to add arbitrary user-defined metadata to type annotations in Python.

At type-check time, such annotations are… inert. They don’t do anything. Annotated[int, X] just means int to the type-checker, regardless of the value of X. So the entire purpose of Annotated is to provide a run-time API to consume metadata, which integrates with the type checker syntactically, but does not otherwise disturb it.

Yet, the documentation for this central purpose seems, while not exactly absent, oddly incomplete.

The PEP itself simply says:

A tool or library encountering an Annotated type can scan through the annotations to determine if they are of interest (e.g., using isinstance()).

But it’s not clear where “the annotations” are, given that the PEP’s entire “consuming annotations” section does not even mention the __metadata__ attribute where the annotation’s arguments go, which was itself only recently added to CPython’s documentation. Its list of examples just shows the repr() of the relevant type.
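
For reference, here is a minimal illustration (mine, not from the PEP) of where those arguments actually live at runtime:

from typing import Annotated

A = Annotated[int, "hello"]
print(A.__metadata__)  # ('hello',)
print(A.__origin__)    # <class 'int'>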

There’s also a bit of an open question of what, exactly, we are supposed to isinstance()-ing here. If we want to find arguments to Annotated, presumably we need to be able to detect if an annotation is an Annotated. But isinstance(Annotated[int, "hello"], Annotated) is both False at runtime, and also a type-checking error, that looks like this:

Argument 2 to "isinstance" has incompatible type "<typing special form>"; expected "_ClassInfo"

The actual type of these objects, typing._AnnotatedAlias, does not seem to have a publicly available or documented alias, so that seems like the wrong route too.

Now, it certainly works to escape-hatch your way out of all of this with an Any, build some version-specific special-case hacks to dig around in the relevant namespaces, access __metadata__ and call it a day. But this solution is … unsatisfying.

What are you looking for?

Upon encountering these quirks, it is understandable to want to simply ask the question “is this annotation that I’m looking at an Annotated?” and to be frustrated that it seems so obscure to straightforwardly get an answer to that question without disabling all type-checking in your meta-programming code.

However, I think that this is a slight misframing of the problem. Code that is inspecting parameters for an annotation is going to do something with that annotation, which means that it must necessarily be looking for a specific set of annotations. Therefore the thing we want to pass to isinstance is not some obscure part of the annotations’ internals, but the actual interesting annotation type from your framework or application.

When consuming an Annotated parameter, there are 3 things you probably want to know:

  1. What was the parameter itself? (type: The type you passed in.)
  2. What was the name of the annotated object (i.e.: the parameter name, the attribute name) being passed the parameter? (type: str)
  3. What was the actual type being annotated? (type: type)

And the things that we have are the type of the Annotated we’re querying for, and the object with annotations we are interrogating. So that gives us this function signature:

def annotated_by(
    annotated: object,
    kind: type[T],
) -> Iterable[tuple[str, T, type]]:
    ...

To extract this information, all we need are get_args and get_type_hints; no need for __metadata__ or get_origin or any other metaprogramming. Here’s a recipe:

from typing import Iterable, TypeVar, get_args, get_type_hints

T = TypeVar("T")

def annotated_by(
    annotated: object,
    kind: type[T],
) -> Iterable[tuple[str, T, type]]:
    for k, v in get_type_hints(annotated, include_extras=True).items():
        all_args = get_args(v)
        if not all_args:
            continue
        actual, *rest = all_args
        for arg in rest:
            if isinstance(arg, kind):
                yield k, arg, actual

It might seem a little odd to be blindly assuming that get_args(...)[0] will always be the relevant type, when that is not true of unions or generics. Note, however, that we are only yielding results when we have found the instance type in the argument list; our arbitrary user-defined instance isn’t valid as a type annotation argument in any other context. It can’t be part of a Union or a Generic, so we can rely on it to be an Annotated argument, and from there, we can make that assumption about the format of get_args(...).
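
To make that shape concrete, here is a quick illustration of my own (not part of the original recipe):

from typing import Annotated, Optional, get_args

# For Annotated, the first argument is the underlying type; the rest are metadata.
print(get_args(Annotated[int, "meta", "more"]))  # (<class 'int'>, 'meta', 'more')

# A plain union never contains a user-defined metadata instance,
# so the isinstance(arg, kind) check in annotated_by() simply never matches it.
print(get_args(Optional[int]))  # (<class 'int'>, <class 'NoneType'>)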

This can give us back the annotations that we’re looking for in a handy format that’s easy to consume. Here’s a quick example of how you might use it:

from dataclasses import dataclass
from typing import Annotated

@dataclass
class AnAnnotation:
    name: str

def a_function(
    a: str,
    b: Annotated[int, AnAnnotation("b")],
    c: Annotated[float, AnAnnotation("c")],
) -> None:
    ...

print(list(annotated_by(a_function, AnAnnotation)))
# [('b', AnAnnotation(name='b'), <class 'int'>),
#  ('c', AnAnnotation(name='c'), <class 'float'>)]

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support me on Patreon as well! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how do I do Python metaprogramming, but, like, not super janky”.

Categories: FLOSS Project Planets

James Bennett: Use "pip install" safely

Thu, 2023-12-07 20:13

This is part of a series of posts I’m doing as a sort of Python/Django Advent calendar, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post for an introduction.

Managing dependencies should be boring

Last year I wrote a post about “boring” dependency management in Python, where I advocated a setup based entirely around standard Python packaging tools, in that …

Read full entry

Categories: FLOSS Project Planets

Python Engineering at Microsoft: Python in Visual Studio Code – December 2023 Release

Thu, 2023-12-07 16:02

We’re excited to announce the December 2023 release of the Python and Jupyter extensions for Visual Studio Code!

This release includes the following announcements:

  • Configurable debugging options added to Run button menu
  • Show Type Hierarchy with Pylance
  • Deactivate command support for automatically activated virtual environments in the terminal
  • Setting to turn REPL Smart Send on/off and a message when it is unsupported

If you’re interested, you can check the full list of improvements in our changelogs for the Python, Jupyter and Pylance extensions.

Configurable debugging options added to Run button menu

The Python Debugger extension now has configurable debug options under the Run button menu. When you select Python Debugger: Debug using launch.json and there is an existing launch.json in your workspace, it shows all available debug configurations you can pick from to start the debugger. If you do not have an existing launch.json, you will be prompted to select a debug configuration template to create a launch.json file for your Python application, and you can then run your application using this configuration.

Show Type Hierarchy with Pylance

You can now more conveniently explore and navigate through your Python projects’ types relationships when using Pylance. This can be helpful when working with large codebases with complex type relationships.

When you right-click on a symbol, you can select Show Type Hierarchy to open the type hierarchy view. From there you can navigate through the symbol’s subtypes as well as super-types.

Deactivate command support for automatically activated virtual environments in the terminal

The Python extension has a new activation mechanism that activates the selected environment in your default terminal without running any explicit activation commands. This is currently behind an experimental flag and can be enabled through the following User setting: "python.experiments.optInto": ["pythonTerminalEnvVarActivation"] as mentioned in our August 2023 release notes.

However, one problem with this activation mechanism was that it did not support the deactivate command, because there is no inherent activation script. We received feedback that this is an important part of some users’ workflows, so we have added support for deactivate when the selected default terminal is PowerShell or Command Prompt. We plan to add support for additional terminals in the future.

Setting to turn REPL Smart Send on/off and a message when it is unsupported

When attempting to use Smart Send via Shift+Enter on a Python file that contains unsupported Python code (e.g., Python 2 source code), there is now a warning message and a setting to deactivate REPL Smart Send. Users are also able to change their user- and workspace-specific behavior for REPL Smart Send via the python.REPL.enableREPLSmartSend setting.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python and Jupyter Notebooks in Visual Studio Code. Some notable changes include:

  • The Pylance extension has adjusted its release cadence to monthly stable releases and nightly pre-release builds, similar to the Python extension release cadence. These changes will allow for more extensive testing on stable builds and a more reliable user experience.
  • String inputs for numerical values are now supported in attach debug configurations with the Python Debugger extension (@vscode-python-debugger#115).
  • The Python test adapter rewrite experiment has been rolled out to 100% of users. For the time being, you can opt out by adding "python.experiments.optOutFrom": ["pythonTestAdapter"] in your settings.json, but we will soon drop this experimental flag and adopt this new architecture.

We would also like to extend special thanks to this month’s contributors:

Try out these new improvements by downloading the Python extension and the Jupyter extension from the Marketplace, or install them directly from the extensions view in Visual Studio Code (Ctrl + Shift + X or ⌘ + ⇧ + X). You can learn more about Python support in Visual Studio Code in the documentation. If you run into any problems or have suggestions, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – December 2023 Release appeared first on Python.

Categories: FLOSS Project Planets

CodersLegacy: Pytest Tutorial: Mastering Unit Testing in Python

Thu, 2023-12-07 14:54

Welcome to an ALL-IN-ONE tutorial designed to meet all your testing requirements. Whether you’re just starting with the fundamentals to build a solid conceptual foundation or aiming to craft professional-grade test cases for entire projects, this guide has got you covered. The focus of this tutorial will be on the popular “Pytest” library.

Table Of Contents
  1. Understanding Unit Testing
  2. Why Pytest?
  3. Getting Started with pytest
  4. Pytest Tutorial: Writing your First Test
  5. Pytest Tutorial: Parameterized Testing
  6. Pytest Tutorial: Command-Line Options
  7. How to use Pytest effectively in Larger Projects
  8. Conclusion
Understanding Unit Testing

In the world of programming, a unit is the smallest part of your code, like a single function or method. Unit testing involves examining these individual parts to ensure they’re doing what they’re supposed to. It’s like putting each piece of the puzzle under a microscope to make sure it fits perfectly and does its job without causing trouble for the whole picture.

Imagine you’re building a complex building. The traditional approach might be to wait for the entire building to be completed, then perform tests on it to check its integrity. Unit testing, on the other hand, would have you test the integrity of each floor as you build it.

Here is another scenario:

You have a function that’s supposed to add two numbers together. Unit testing for this function would involve giving it different pairs of numbers and checking if it consistently produces the correct sum. It’s like asking, “Hey, can you add 2 and 3? What about 0 and 0? Or even -1 and 1?” Each time, the unit test checks if the function gives the right answer. These different pairs of numbers must be chosen carefully to ensure that the function works under a variety of circumstances. For example, the first pair might use negative numbers, the second pair might use positive numbers, and a third pair might target an “error” case where a string and a number are used as inputs (with the expectation that the test fails).

Why bother with this meticulous process? Because it helps catch bugs early on, before they turn into big, tangled problems.

Unit testing ensures that each building block of your code functions as expected, creating a solid foundation for your software structure. It’s a practice that developers swear by because it not only saves time but also makes your code more reliable.

Why Pytest?

Let’s explore some of the key features that make pytest a popular choice for testing in Python.


1. Automatic Test Discovery

One of pytest‘s strengths is its ability to automatically discover and run tests in your project. By default, pytest identifies files with names starting with “test_” or ending with “_test.py” and considers them as test modules. It then discovers test functions or methods within these modules.


2. Concise Syntax

pytest uses a simplified and expressive syntax for writing tests. Test functions don’t need to be part of a class, and assertions can be made using the assert statement directly. This leads to more readable and concise test code.


3. Parameterized Testing

Pytest supports parameterized testing, enabling developers to run the same test with multiple sets of inputs. This feature is incredibly beneficial for testing a variety of scenarios without duplicating test code.


And many more such benefits (that we can’t explain without getting too technical).

Getting Started with pytest

To get started with pytest, you need to install it. You can do this using pip, the Python package installer, with the following command:

pip install pytest

Once installed, you can run tests using the pytest command.

Pytest Tutorial: Writing your First Test

Let’s start by creating a basic test using pytest. Create a file named test_example.py with the following content:

# test_example.py

def add(x, y):
    return x + y

def test_add():
    assert add(2, 3) == 5
    assert add(0, 0) == 0
    assert add(-1, 1) == 0

In this example, we define a simple add function and a corresponding test function using pytest‘s assert statement. The test checks whether the add function produces the expected results for different input values.

To execute the tests, run the following command in your terminal:

pytest test_example.py

pytest will discover and run all test functions in the specified file, looking for functions whose names start with “test”. If the tests pass, you’ll see an output indicating success. Otherwise, pytest will provide detailed information about the failures.

Pytest Tutorial: Parameterized Testing

pytest supports parameterized testing, enabling you to run the same test with multiple sets of inputs. This is achieved using the @pytest.mark.parametrize decorator.

import pytest

def add(x, y):
    return x + y

@pytest.mark.parametrize("input_a, input_b, expected", [
    (2, 3, 5),
    (0, 0, 0),
    (-1, 1, 0),
])
def test_add(input_a, input_b, expected):
    result = add(input_a, input_b)
    assert result == expected

In this example, the test_add function is executed three times with different input values, reducing code duplication and making it easier to cover various scenarios.

Pytest Tutorial: Command-Line Options

pytest provides a plethora of command-line options to customize test runs. For example, you can specify the directory or files to test, run specific tests or test classes, and control the verbosity of the output.

Here are key command-line options:


Specifying Test Directories or Files:

Use the -k option to specify a substring match for test names. For instance:

pytest -k test_module

This command runs all tests containing “test_module” in their names.

Specify a specific directory or file to test:

pytest tests/test_module.py

Execute tests from a specific file or directory, allowing targeted testing.


Controlling Verbosity:

Adjust the verbosity level with the -v option to get more detailed output:

pytest -v

Display test names and results. Useful for understanding test execution flow.

Increase verbosity for even more detailed information:

pytest -vv

Provide additional information about skipped tests and setup/teardown stages.


Marking and Selecting Tests:

Utilize custom markers to categorize and selectively run tests. For example:

pytest -m slow

This command runs tests marked with @pytest.mark.slow, allowing you to separate and focus on tests specifically categorized as slow-running.
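
Here is a hypothetical sketch of a test carrying such a marker (the module and test names are made up for illustration):

# test_reports.py (hypothetical example)
import time

import pytest

@pytest.mark.slow
def test_generate_large_report():
    time.sleep(2)  # stand-in for an expensive operation
    assert True

Custom markers should also be registered (for example via the markers option in pytest.ini or under [tool.pytest.ini_options] in pyproject.toml) so that pytest does not warn about unknown marks.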

Select tests based on their outcome, such as only running failed tests:

pytest --lf

Run only the tests that failed in the last test run.


Parallel Test Execution:

Speed up test runs by leveraging parallel execution:

pytest -n auto

This command runs tests in parallel, utilizing all available CPU cores. (Note that parallel execution requires the pytest-xdist plugin.)


Generating Detailed Reports:

Generate detailed reports in various formats, such as HTML or XML:

pytest --html=report.html

This command produces an HTML report (via the pytest-html plugin) for a more visual representation of test results, aiding in result analysis and sharing with stakeholders.


These command-line options empower developers to fine-tune their testing processes, making Pytest a flexible and customizable tool for projects of any scale. Whether you need to run specific tests, control output verbosity, or generate comprehensive reports, Pytest’s command-line options provide the versatility needed for efficient and effective testing.

How to use Pytest effectively in Larger Projects

As the size of your project grows, with the number of files and lines of code increasing significantly, the need to organize your code becomes even more important. While it may seem tempting to write all your “test” functions in the same file as your regular code, this is not an ideal solution.

Instead, it is recommended to create a separate file where all of your tests are written. Depending on the size of the project, you can even have multiple test files (e.g. one test file for each class).

Opting for this approach introduces potential complications. When test cases are written in a separate file, a common concern arises: How do we invoke the functions intended for testing?

This requires careful structuring of your project and code to ensure that individual functions and classes of your project can be imported by the pytest files.

Here is a good project structure to follow, where each of the files in the src folder represents an independent module (e.g. a single class), and each of the files in the tests folder corresponds to a file in the src folder.

project_root/
|-- src/
|   |-- __init__.py
|   |-- users.py
|   |-- services.py
|
|-- tests/
|   |-- __init__.py
|   |-- test_users.py
|   |-- test_services.py
|
|-- main.py

The __init__.py file is an important addition to the src folder, where all of our project files are stored (excluding the main driver code). When this file is created in a folder, Python recognizes that folder as a package, which enables other files to import modules from within it. You do not have to put anything in this file (leave it empty, though it can be customized with special statements).

It is also necessary to put an __init__.py file in the tests folder in order for imports between it and the src folder to succeed.

Example scenario: Importing the users.py file from test_users.py.

from src.users import *
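
In practice you would usually import the specific names you need. Here is a hypothetical sketch, assuming users.py defines a User class with a display_name() method (neither of which appears in the tutorial itself):

# tests/test_users.py (hypothetical sketch)
from src.users import User

def test_display_name():
    user = User(first="Ada", last="Lovelace")
    assert user.display_name() == "Ada Lovelace"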

Conclusion

This marks the end of the Pytest tutorial.

By incorporating unit testing into your development workflow, you can catch and fix bugs early, improve code maintainability, and ensure that your software functions as intended. With pytest, the journey of mastering unit testing in Python becomes not only effective but also enjoyable. So, go ahead, write those tests, and build robust, reliable Python applications with confidence!

The post Pytest Tutorial: Mastering Unit Testing in Python appeared first on CodersLegacy.

Categories: FLOSS Project Planets

Python Insider: Python 3.12.1 is now available

Thu, 2023-12-07 14:50

 

Python 3.12.1 is now available.

https://www.python.org/downloads/release/python-3121/

 

This is the first maintenance release of Python 3.12.

Python 3.12 is the newest major release of the Python programming language, and it contains many new features and optimizations. 3.12.1 is the latest maintenance release, containing more than 400 bugfixes, build improvements and documentation changes since 3.12.0.

Major new features of the 3.12 series, compared to 3.11

New features

Type annotations

Deprecations
  • The deprecated wstr and wstr_length members of the C implementation of unicode objects were removed, per PEP 623.
  • In the unittest module, a number of long deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2).
  • The deprecated smtpd and distutils modules have been removed (see PEP 594 and PEP 632). The setuptools package continues to provide the distutils module.
  • A number of other old, broken and deprecated functions, classes and methods have been removed.
  • Invalid backslash escape sequences in strings now warn with SyntaxWarning instead of DeprecationWarning, making them more visible. (They will become syntax errors in the future; see the short illustration after this list.)
  • The internal representation of integers has changed in preparation for performance enhancements. (This should not affect most users as it is an internal detail, but it may cause problems for Cython-generated code.)
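
As a small illustration of the escape-sequence change (my own example, not from the release notes):

import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # "\d" is not a valid escape sequence; compiling this source triggers the warning.
    compile(r'path = "C:\data"', "<example>", "exec")

print(caught[0].category)  # SyntaxWarning on 3.12, DeprecationWarning on older versions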

For more details on the changes to Python 3.12, see What’s new in Python 3.12.

More resources
Enjoy the new releases

Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.

Your release team,
Thomas Wouters
Ned Deily
Steve Dower
Łukasz Langa

Categories: FLOSS Project Planets

Christian Ledermann: 'Hypermodernize' your Python Package

Thu, 2023-12-07 06:28

In the original Hypermodern Python Blogpost, Poetry was recommended as the preferred tool.
There are quite a lot of packaging tools out there, which I do not want to go into in depth; instead I recommend Anna-Lena Popkes' An unbiased evaluation of environment management and packaging tools.
What has emerged as the preferred way to store packaging information is the pyproject.toml file.

Using pyproject.toml over setup.py has become a preferred choice for several reasons:

  1. Consistent Configuration: pyproject.toml is part of the PEP 518 and PEP 517 specifications, offering a standardized and consistent way to declare build configurations and dependencies.
  2. PEP 518 Support: pyproject.toml supports PEP 518, allowing the declaration of build system requirements, enabling better compatibility with modern build tools and providing more flexibility in the build process.
  3. Modern Tooling: Some modern Python tools rely on pyproject.toml for project configuration. Adopting pyproject.toml aligns with these tools and facilitates a smoother integration with them.
  4. Readability and Maintainability: The pyproject.toml format is often considered more readable and straightforward than the Python code used in setup.py. This can contribute to better maintainability, especially for more complex projects.
  5. Standardization: As Python packaging evolves, pyproject.toml is becoming the de facto standard for configuration, supported by tools like pip and build systems like flit and poetry.

The adoption of pyproject.toml aligns with modern best practices in the Python packaging ecosystem.

Converting a setup.py-based package to a pyproject.toml-based one turns out to be straightforward for most use cases.
While it is easy to write a pyproject.toml by hand, there are a number of tools to convert legacy package information into this format.

  • setuptools-py2cfg A script for converting setup.py to setup.cfg
  • ini2toml which automatically translates .ini/.cfg files into TOML
  • validate-pyproject for automated checks on pyproject.toml powered by JSON Schema definitions
  • pyprojectsort enforces consistent formatting of pyproject.toml files, reducing merge request conflicts and saving time otherwise spent on manual formatting.
  • pyroma Rate your Python package's package friendliness.
Example

Install the required packages

pip install setuptools-py2cfg ini2toml[full]

or, if you get an error like "no matches found: ini2toml[full]"

pip install setuptools-py2cfg ini2toml configupdater tomlkit

Create a temporary cfg file

setuptools-py2cfg > tmp.cfg

and convert it into a toml file:

ini2toml --output-file=tmp.toml --profile=setup.cfg tmp.cfg

Apart from the setup.cfg profile, ini2toml also has profiles to convert settings to their pyproject.toml equivalent for:

  • .coveragerc
  • .isort.cfg
  • mypy.ini
  • pytest.ini ('ini_options' table)

After you have converted the files, cut and paste them into a new (or existing) pyproject.toml file, then validate and format them.

pip install validate-pyproject pyprojectsort pyroma
pyprojectsort
validate-pyproject pyproject.toml

Now you can delete your old setup.py file and verify that your package still builds as expected with

rm setup.py
python -m build
pyroma .

Check the dist/[my-package-name].tar.gz file to ensure it builds like before.
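
As an extra sanity check (a sketch, assuming Python 3.11+ for the standard-library tomllib and PEP 621-style [project] metadata), you can also parse the new file and confirm the fields you expect actually survived the conversion:

import tomllib

with open("pyproject.toml", "rb") as f:  # tomllib requires a binary file handle
    config = tomllib.load(f)

print(config["project"]["name"])
print(config["project"].get("dependencies", []))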

Adjust and tweak the settings as outlined in the user guide

Additional information
Categories: FLOSS Project Planets

James Bennett: Compile your Python

Wed, 2023-12-06 20:32

This is part of a series of posts I’m doing as a sort of Python/Django Advent calendar, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post for an introduction.

You can compile Python?

Yes. And in a lot of ways!

For example, you can use tools like Cython or mypyc to write Python, or Python-like code, and turn that Python-like code automatically …

Read full entry

Categories: FLOSS Project Planets

Matt Layman: Operations, WhiteNoise, and Tailwind - Building SaaS with Python and Django #177

Wed, 2023-12-06 19:00
In this episode, I worked through a couple of issues discovered after having the site be operational for real use. From there, we moved onto some fundamental technology and integrated WhiteNoise to handle static files for the application. After adding WhiteNoise, we hooked up Tailwind CSS.
Categories: FLOSS Project Planets

Glyph Lefkowitz: Safer, Not Later

Wed, 2023-12-06 15:01

Facebook — and by extension, most of Silicon Valley — rightly gets a lot of shit for its old motto, “Move Fast and Break Things”.

As a general principle for living your life, it is obviously terrible advice, and it leads to a lot of the horrific outcomes of Facebook’s business.

I don’t want to be an apologist for Facebook. I also do not want to excuse the worldview that leads to those kinds of outcomes. However, I do want to try to help laypeople understand what software engineers—particularly those situated at the point in history where this motto became popular—actually meant by it. I would like more people in the general public to understand why, to engineers, it was supposed to mean roughly the same thing as Facebook’s newer, goofier-sounding “Move fast with stable infrastructure”.

Move Slow

In the bad old days, circa 2005, two worlds within the software industry were colliding.

The old world was the world of integrated hardware/software companies, like IBM and Apple, and shrink-wrapped software companies like Microsoft and WordPerfect. The new world was software-as-a-service companies like Google, and, yes, Facebook.

In the old world, you delivered software in a physical, shrink-wrapped box, on a yearly release cycle. If you were really aggressive you might ship updates as often as quarterly, but faster than that and your physical shipping infrastructure would not be able to keep pace with new versions. As such, development could proceed in long phases based on those schedules.

In practice what this meant was that in the old world, when development began on a new version, programmers would go absolutely wild adding incredibly buggy, experimental code to see what sorts of things might be possible in a new version, then slowly transition to less coding and more testing, eventually settling into a testing and bug-fixing mode in the last few months before the release.

This is where the idea of “alpha” (development testing) and “beta” (user testing) versions came from. Software in that initial surge of unstable development was extremely likely to malfunction or even crash. Everyone understood that. How could it be otherwise? In an alpha test, the engineers hadn’t even started bug-fixing yet!

In the new world, the idea of a 6-month-long “beta test” was incoherent. If your software was a website, you shipped it to users every time they hit “refresh”. The software was running 24/7, on hardware that you controlled. You could be adding features at every minute of every day. And, now that this was possible, you needed to be adding those features, or your users would get bored and leave for your competitors, who would do it.

But this came along with a new attitude towards quality and reliability. If you needed to ship a feature within 24 hours, you couldn’t write a buggy version that crashed all the time, see how your carefully-selected group of users used it, collect crash reports, fix all the bugs, have a feature-freeze and do nothing but fix bugs for a few months. You needed to be able to ship a stable version of your software on Monday and then have another stable version on Tuesday.

To support this novel sort of development workflow, the industry developed new technologies. I am tempted to tell you about them all. Unit testing, continuous integration servers, error telemetry, system monitoring dashboards, feature flags... this is where a lot of my personal expertise lies. I was very much on the front lines of the “new world” in this conflict, trying to move companies to shorter and shorter development cycles, and move away from the legacy worldview of Big Release Day engineering.

Old habits die hard, though. Most engineers at this point were trained in a world where they had months of continuous quality assurance processes after writing their first rough draft. Such engineers feel understandably nervous about being required to ship their probably-buggy code to paying customers every day. So they would try to slow things down.

Of course, when one is deploying all the time, all other things being equal, it’s easy to ship a show-stopping bug to customers. Organizations would do this, and they’d get burned. And when they’d get burned, they would introduce Processes to slow things down. Some of these would look like:

  1. Let’s keep a special version of our code set aside for testing, and then we’ll test that for a few weeks before sending it to users.
  2. The heads of every department need to sign off on every deployed version, so everyone needs to spend a day writing up an explanation of their changes.
  3. QA should sign off too, so let’s have an extensive sign-off process where each individual tester fills out a sign-off form.

Then there’s my favorite version of this pattern, where management decides that deploys are inherently dangerous, and everyone should probably just stop doing them. It typically proceeds in stages:

  1. Let’s have a deploy freeze, and not deploy on Fridays; don’t want to mess up the weekend debugging an outage.
  2. Actually, let’s extend that freeze for all of December, we don’t want to mess up the holiday shopping season.
  3. Actually why not have the freeze extend into the end of November? Don’t want to mess with Thanksgiving and the Black Friday weekend.
  4. Some of our customers are in India, and Diwali’s also a big deal. Why not extend the freeze from the end of October?
  5. But, come to think of it, we do a fair amount of seasonal sales for Halloween too. How about no deployments from October 10 onward?
  6. You know what, sometimes people like to use our shop for Valentine’s day too. Let’s just never deploy again.

This same anti-pattern can repeat itself with an endlessly proliferating list of “environments”, whose main role ends up being to ensure that no code ever makes it to actual users.

… and break things anyway

As you may have begun to suspect, there are a few problems with this style of software development.

Even back in the bad old days of the 90s when you had to ship disks in boxes, this methodology contained within itself the seeds of its own destruction. As Joel Spolsky memorably put it, Microsoft discovered that this idea that you could introduce a ton of bugs and then just fix them later came along with some massive disadvantages:

The very first version of Microsoft Word for Windows was considered a “death march” project. It took forever. It kept slipping. The whole team was working ridiculous hours, the project was delayed again, and again, and again, and the stress was incredible. [...] The story goes that one programmer, who had to write the code to calculate the height of a line of text, simply wrote “return 12;” and waited for the bug report to come in [...]. The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as “infinite defects methodology”.

Which led them to what is perhaps the most ironclad law of software engineering:

In general, the longer you wait before fixing a bug, the costlier (in time and money) it is to fix.

A corollary to this is that the longer you wait to discover a bug, the costlier it is to fix.

Some bugs can be found by code review. So you should do code review. Some bugs can be found by automated tests. So you should do automated testing. Some bugs will be found by monitoring dashboards, so you should have monitoring dashboards.

So why not move fast?

But here is where Facebook’s old motto comes in to play. All of those principles above are true, but here are two more things that are true:

  1. No matter how much code review, automated testing, and monitoring you have, some bugs can only be found by users interacting with your software.
  2. No bugs can be found merely by slowing down and putting the deploy off another day.

Once you have made the process of releasing software to users sufficiently safe that the potential damage of any given deployment can be reliably limited, it is always best to release your changes to users as quickly as possible.

More importantly, as an engineer, you will naturally have an inherent fear of breaking things. If you make no changes, you cannot be blamed for whatever goes wrong. Particularly if you grew up in the Old World, there is an ever-present temptation to slow down, to avoid shipping, to hold back your changes, just in case.

You will want to move slow, to avoid breaking things. Better to do nothing, to be useless, than to do harm.

For all its faults as an organization, Facebook did, and does, have some excellent infrastructure to avoid breaking their software systems in response to features being deployed to production. In that sense, they’d already done the work to avoid the “harm” of an individual engineer’s changes. If future work needed to be performed to increase safety, then that work should be done by the infrastructure team to make things safer, not by every other engineer slowing down.

The problem is that slowing down is not actually value neutral. To quote myself here:

If you can’t ship a feature, you can’t fix a bug.

When you slow down just for the sake of slowing down, you create more problems.

The first problem that you create is smashing together far too many changes at once.

You’ve got a development team. Every engineer on that team is adding features at some rate. You want them to be doing that work. Necessarily, they’re all integrating them into the codebase to be deployed whenever the next deployment happens.

If a problem occurs with one of those changes, and you want to quickly know which change caused that problem, ideally you want to compare two versions of the software with the smallest number of changes possible between them. Ideally, every individual change would be released on its own, so you can see differences in behavior between versions which contain one change each, not a gigantic avalanche of changes where any one of a hundred different features might be the culprit.

If you slow down for the sake of slowing down, you also create a process that cannot respond to failures of the existing code.

I’ve been writing thus far as if a system in a steady state is inherently fine, and each change carries the possibility of benefit but also the risk of failure. This is not always true. Changes don’t just occur in your software. They can happen in the world as well, and your software needs to be able to respond to them.

Back to that holiday shopping season example from earlier: if your deploy freeze prevents all deployments during the holiday season to prevent breakages, what happens when your small but growing e-commerce site encounters a catastrophic bug that has always been there, but only occurs when you have more than 10,000 concurrent users? The breakage is coming from new, never-before-seen levels of traffic. The breakage is coming from your success, not your code. You’d better be able to ship a fix for that bug real fast, because your only other option to a fast turn-around bug-fix is shutting down the site entirely.

And if you see this failure for the first time on Black Friday, that is not the moment where you want to suddenly develop a new process for deploying on Friday. The only way to ensure that shipping that fix is easy is to ensure that shipping any fix is easy. That it’s a thing your whole team does quickly, all the time.

The motto “Move Fast And Break Things” caught on with a lot of the rest of Silicon Valley because we are all familiar with this toxic, paralyzing fear.

After we have the safety mechanisms in place to make changes as safe as they can be, we just need to push through it, and accept that things might break, but that’s OK.

Some Important Words are Missing

The motto has an implicit preamble, “Once you have done the work to make broken things safe enough, then you should move fast and break things”.

When you are in a conflict about whether to “go fast” or “go slow”, the motto is not supposed to be telling you that the answer is an unqualified “GOTTA GO FAST”. Rather, it is an exhortation to take a beat and to go through a process of interrogating your motivation for slowing down. There are three possible things that a person saying “slow down” could mean about making a change:

  1. It is broken in a way you already understand. If this is the problem, then you should not make the change, because you know it’s not ready. If you already know it’s broken, then the change simply isn’t done. Finish the work, and ship it to users when it’s finished.
  2. It is risky in a way that you don’t have a way to defend against. As far as you know, the change works, but there’s a risk embedded in it that you don’t have any safety tools to deal with. If this is the issue, then what you should do is pause working on this change, and build the safety first.
  3. It is making you nervous in a way you can’t articulate. If you can’t describe a known defect as in point 1, and you can’t outline an improved safety control as in step 2, then this is the time to let go, accept that you might break something, and move fast.

The implied context for “move fast and break things” is only in that third condition. If you’ve already built all the infrastructure that you can think of to build, and you’ve already fixed all the bugs in the change that you need to fix, any further delay will not serve you; do not delay any further.

Unfortunately, as you probably already know,

This motto did a lot of good in its appropriate context, at its appropriate time. It’s still a useful heuristic for engineers, if the appropriate context is generally understood within the conversation where it is used.

However, it has clearly been taken to mean a lot of significantly more damaging things.

Purely from an engineering perspective, it has been reasonably successful. It’s less and less common to see people in the industry pushing back against tight deployment cycles. It’s also less common to see the basic safety mechanisms (version control, continuous integration, unit testing) get ignored. And many ex-Facebook engineers have used this motto very clearly under the understanding I’ve described here.

Even in the narrow domain of software engineering it is misused. I’ve seen it used to argue a project didn’t need tests; that a deploy could be forced through a safety process; that users did not need to be informed of a change that could potentially impact them personally.

Outside that domain, it’s far worse. It’s generally understood to mean that no safety mechanisms are required at all, that any change a software company wants to make is inherently justified because it’s OK to “move fast”. You can see this interpretation in the way that it has leaked out of Facebook’s engineering culture and suffused its entire management strategy, blundering through market after market and issue after issue, making catastrophic mistakes, making a perfunctory apology and moving on to the next massive harm.

In the decade since it has been retired as Facebook’s official motto, it has been used to defend some truly horrific abuses within the tech industry. You only need to visit the orange website to see it still being used this way.

Even at its best, “move fast and break things” is an engineering heuristic, it is not an ethical principle. Even within the context I’ve described, it’s only okay to move fast and break things. It is never okay to move fast and harm people.

So, while I do think that it is broadly misunderstood by the public, it’s still not a thing I’d ever say again. Instead, I propose this:

Make it safer, don’t make it later.

Acknowledgements

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support me on Patreon as well! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how do I make changes to my codebase, but, like, good ones”.

Categories: FLOSS Project Planets

Python Engineering at Microsoft: Python Linting in Visual Studio Code – Hinting and Linting Video Series

Wed, 2023-12-06 14:29

One of the most important parts of writing code is making sure your code is readable. There are so many positive downstream effects of clean code, from how easy it is to maintain and add features, to debugging subtle programming errors or finding uninitialized variables. Some people call this code hygiene. VS Code has linter extension support that enables you to develop faster, produce cleaner code, and can be tweaked to your setup.

How VS Code handles Python linters

Linters are development tools used to make sure your code is formatted consistently across your team. Add your linter to your requirements-dev.txt — or an otherwise-named file that stores your development-only requirements — and pip install -r requirements-dev.txt. The linter development experience can live entirely within VS Code if you’d like. This means you do not need to pip install pylint in your Python environment, for example. However, in most collaborative programming projects I prefer to install my linter in my virtual environment (old habits die hard), so if I want to use the local terminal features of Pylint in VS Code, I can.

PRO-TIP: Set your default importStrategy

importStrategy is a setting on all of our Python linter extensions that defines which linter should be automatically used in VS Code. I like to set it to fromEnvironment in my User-level settings so that it automatically uses the linter from the Python environment for any project I’m working on with a team, while still allowing VS Code to default to any Workspace-level settings my team is sharing.

settings.json

// use the pylint version shipped with the extension
"pylint.importStrategy": "useBundled"

// use the pylint version found in your requirements
"pylint.importStrategy": "fromEnvironment"

When you start your project, the first thing you will likely do is activate your virtual environment or open up a container to develop in (like Dev Containers). In VS Code, you’ll select your Python interpreter by using the shortcut key Ctrl+Shift+P to open up the Command Palette, and select from the dropdown menu which environment you’d like to use for your project. I select the interpreter associated with my project environment.

There are many packages for code quality. At the time of this post, VS Code and its active extension-contributing community supports flake8, ruff, pylint and the newly released mypy. The Ruff linter was created from the Python tools extension template by Charlie R. Marsh. The VS Code team encourages community contributed packages and you can learn more in the VS Code documentation.

PRO-TIP: VS Code extensions work as soon as they’re installed

VS Code linting is automatically enabled once the extension is installed. You no longer need python.linting.enabled set to true in settings.json.

Enable what you want and disable what you don’t

Navigate to the extensions tab in the activity bar to the far left and search for “Pylint.” I often like to enable pre-release so I can get the latest features and report bugs if I come across them, doing my part for the community.

GIF: Enable Pylint

PRO-TIP: Install isort

Install an import sorting extension like isort, then use the shortcut key Shift+Alt+O to sort your imports quickly.

PRO-TIP: Toggle the Problems tab

Use Ctrl+Shift+M to toggle open and close the Problems tab in VS Code to access any issues reported by linters.

GIF: Solving my first problems in VS Code for a Wagtail project

You can specify problems to consistently ignore in your projects by adding disable flags to your settings. You can do this either in your settings panel (Ctrl+,) or with your settings.json (Ctrl+P, then typing “Settings JSON” in the text bar). The following can be added via your Workspace or User settings depending on your desired scope.

Here’s what it looks like to disable a few docstring problems, among other arguments, in Pylint. “Args” are always lists in brackets:

settings.json

"pylint.args": [
    "--reports",
    "12",
    "--disable=C0114",
    "--disable=C0115",
    "--disable=C0116",
]

The same settings work for other linter extensions:

settings.json

"flake8.args": ["--ignore=E24,W504", "--verbose"],
"ruff.args": ["--config=/path/to/pyproject.toml"]

Want to dig more into the VS Code Linting documentation? https://code.visualstudio.com/docs/python/linting

Keep in touch!

I host The Python Pulse every second Friday of the month 11 AM PT / 2 PM ET / 7 PM UTC https://aka.ms/python-pulse-live

… or join me on the Microsoft Python Discord

More Reading…

The post Python Linting in Visual Studio Code – Hinting and Linting Video Series appeared first on Python.

Categories: FLOSS Project Planets
