Planet Python


Talk Python to Me: #438: Celebrating JupyterLab 4 and Jupyter 7 Releases

Thu, 2023-11-16 03:00
Jupyter Notebooks and JupyterLab have to be among the most important parts of Python when it comes to bringing new users into the Python ecosystem, and certainly for the day-to-day work of data scientists and general scientists who have made some of the biggest discoveries of recent times. That platform has recently gotten a major upgrade, with JupyterLab 4 released and Jupyter Notebook significantly reworked to be based on the changes from JupyterLab as well. We have an excellent panel of guests, Sylvain Corlay, Frederic Collonval, Jeremy Tuloup, and Afshin Darian, here to tell us what's new in these and other parts of the Jupyter ecosystem.

Links from the show:

  • Guests: Sylvain Corlay, Frederic Collonval, Jeremy Tuloup, Afshin Darian
  • JupyterLab 4.0 is Here
  • Announcing Jupyter Notebook 7
  • JupyterCon 2023 Videos
  • Jupyterlite
  • Download JupyterLab Desktop
  • Mythical Man Month Book
  • Blender in Jupyter
  • Watch this episode on YouTube
  • Episode transcripts
Categories: FLOSS Project Planets

Matt Layman: Parse Inbound Email - Building SaaS with Python and Django #175

Wed, 2023-11-15 19:00
In this episode, we switched to the inbound side and parsed an email to transform it into a journal entry. Along the way, we explored the dateutil library and Python's standard email module, using its EmailMessage class.
Categories: FLOSS Project Planets

Stack Abuse: Guide to Heaps in Python

Wed, 2023-11-15 14:21

Imagine a bustling airport with flights taking off and landing every minute. Just as air traffic controllers prioritize flights based on urgency, heaps help us manage and process data based on specific criteria, ensuring that the most "urgent" or "important" piece of data is always accessible at the top.

In this guide, we'll embark on a journey to understand heaps from the ground up. We'll start by demystifying what heaps are and their inherent properties. From there, we'll dive into Python's own implementation of heaps, the heapq module, and explore its rich set of functionalities. So, if you've ever wondered how to efficiently manage a dynamic set of data where the highest (or lowest) priority element is frequently needed, you're in for a treat.

What is a Heap?

The first thing you'd want to understand before diving into the usage of heaps is what a heap actually is. A heap stands out in the world of data structures as a tree-based powerhouse, particularly skilled at maintaining order and hierarchy. While it might resemble a binary tree to the untrained eye, the nuances in its structure and governing rules distinctly set it apart.

One of the defining characteristics of a heap is its nature as a complete binary tree. This means that every level of the tree, except perhaps the last, is entirely filled. Within this last level, nodes populate from left to right. Such a structure ensures that heaps can be efficiently represented and manipulated using arrays or lists, with each element's position in the array mirroring its placement in the tree.

The true essence of a heap, however, lies in its ordering. In a max heap, any given node's value surpasses or equals the values of its children, positioning the largest element right at the root. On the other hand, a min heap operates on the opposite principle: any node's value is either less than or equal to its children's values, ensuring the smallest element sits at the root.

Advice: You can visualize a heap as a pyramid of numbers. For a max heap, as you ascend from the base to the peak, the numbers increase, culminating in the maximum value at the pinnacle. In contrast, a min heap starts with the minimum value at its peak, with numbers escalating as you move downwards.

As we progress, we'll dive deeper into how these inherent properties of heaps enable efficient operations and how Python's heapq module seamlessly integrates heaps into our coding endeavors.

Characteristics and Properties of Heaps

Heaps, with their unique structure and ordering principles, bring forth a set of distinct characteristics and properties that make them invaluable in various computational scenarios.

First and foremost, heaps are inherently efficient. Their tree-based structure, specifically the complete binary tree format, ensures that operations like insertion and extraction of priority elements (maximum or minimum) can be performed in logarithmic time, typically O(log n). This efficiency is a boon for algorithms and applications that require frequent access to priority elements.

Another notable property of heaps is their memory efficiency. Since heaps can be represented using arrays or lists without the need for explicit pointers to child or parent nodes, they are space-saving. Each element's position in the array corresponds to its placement in the tree, allowing for predictable and straightforward traversal and manipulation.

The ordering property of heaps, whether as a max heap or a min heap, ensures that the root always holds the element of highest priority. This consistent ordering is what allows for quick access to the top-priority element without having to search through the entire structure.
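In Python's heapq module (introduced below), this means the top-priority element of a min heap stored in a list is always readable in constant time at index 0 — a minimal sketch:

```python
import heapq

data = [7, 2, 9, 4]
heapq.heapify(data)  # rearrange the list in place into a min heap

# The root (index 0) is always the smallest element -- no search needed
print(data[0])  # -> 2
```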

Furthermore, heaps are versatile. While binary heaps (where each parent has at most two children) are the most common, heaps can be generalized to have more than two children, known as d-ary heaps. This flexibility allows for fine-tuning based on specific use cases and performance requirements.

Lastly, heaps are self-adjusting. Whenever elements are added or removed, the structure rearranges itself to maintain its properties. This dynamic balancing ensures that the heap remains optimized for its core operations at all times.

Advice: These properties make the heap data structure a good fit for an efficient sorting algorithm, heap sort. To learn more about heap sort in Python, read our "Heap Sort in Python" article.

As we delve deeper into Python's implementation and practical applications, the true potential of heaps will unfold before us.

Types of Heaps

Not all heaps are created equal. Depending on their ordering and structural properties, heaps can be categorized into different types, each with its own set of applications and advantages. The two main categories are max heap and min heap.

The most distinguishing feature of a max heap is that the value of any given node is greater than or equal to the values of its children. This ensures that the largest element in the heap always resides at the root. Such a structure is particularly useful when there's a need to frequently access the maximum element, as in certain priority queue implementations.

The counterpart to the max heap, a min heap ensures that the value of any given node is less than or equal to the values of its children. This positions the smallest element of the heap at the root. Min heaps are invaluable in scenarios where the least element is of prime importance, such as in algorithms that deal with real-time data processing.

Beyond these primary categories, heaps can also be distinguished by their branching factor and internal structure:

While binary heaps are the most common, with each parent having at most two children, the concept of heaps can be extended to nodes having more than two children. In a d-ary heap, each node has at most d children. This variation can be optimized for specific scenarios, like decreasing the height of the tree to speed up certain operations.
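The array-index arithmetic used for binary heaps generalizes directly: in a d-ary heap stored in a list, the children of node i sit at indices d*i + 1 through d*i + d, and its parent at (i - 1) // d. A small illustrative sketch (the helper function names are ours, not from any library):

```python
def children(i, d):
    """Indices of node i's children in a d-ary heap stored as a list."""
    return [d * i + k for k in range(1, d + 1)]

def parent(i, d):
    """Index of node i's parent in a d-ary heap stored as a list."""
    return (i - 1) // d

# In a ternary (3-ary) heap, node 0's children are nodes 1, 2, and 3,
# and node 5 is one of node 1's children
print(children(0, 3))  # -> [1, 2, 3]
print(parent(5, 3))    # -> 1
```

For d = 2, these formulas reduce to the binary-heap relations that heapq itself relies on.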

A binomial heap is a collection of binomial trees that are defined recursively. Binomial heaps are used in priority queue implementations and offer efficient merge operations.

Named after the famous Fibonacci sequence, the Fibonacci heap offers better amortized running times for many operations than binary or binomial heaps. Fibonacci heaps are particularly useful in network optimization algorithms.

Python's Heap Implementation - The heapq Module

Python offers a built-in module for heap operations - the heapq module. This module provides a collection of heap-related functions that allow developers to transform lists into heaps and perform various heap operations without the need for a custom implementation. Let's dive into the nuances of this module and how it brings you the power of heaps.

The heapq module doesn't provide a distinct heap data type. Instead, it offers functions that work on regular Python lists, transforming and treating them as binary heaps.

This approach is both memory-efficient and integrates seamlessly with Python's existing data structures.

That means that heaps are represented as lists in heapq. The beauty of this representation is its simplicity - the zero-based list index system serves as an implicit binary tree. For any given element at position i, its:

  • Left Child is at position 2*i + 1
  • Right Child is at position 2*i + 2
  • Parent Node is at position (i-1)//2

This implicit structure ensures that there's no need for a separate node-based binary tree representation, making operations straightforward and memory usage minimal.
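A quick sketch verifying these index relations on a heapified list — in a min heap, every parent is less than or equal to both of its children:

```python
import heapq

data = [9, 4, 7, 1, -2, 6, 5]
heapq.heapify(data)

# Walk every node and check the min-heap invariant
# using the implicit index formulas
n = len(data)
for i in range(n):
    for child in (2 * i + 1, 2 * i + 2):
        if child < n:
            assert data[i] <= data[child]
print("heap invariant holds")
```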

Space Complexity: Heaps are typically implemented as binary trees but don't require storage of explicit pointers for child nodes. This makes them space-efficient with a space complexity of O(n) for storing n elements.

It's essential to note that the heapq module creates min heaps by default. This means that the smallest element is always at the root (or the first position in the list). If you need a max heap, you'd have to invert the ordering by multiplying elements by -1, or wrap elements in a class with a custom comparison — both approaches are shown later in this guide.

Python's heapq module provides a suite of functions that allow developers to perform various heap operations on lists.

Note: To use the heapq module in your application, you'll need to import it first with a simple import heapq statement.

In the following sections, we'll dive deep into each of these fundamental operations, exploring their mechanics and use cases.

How to Transform a List into a Heap

The heapify() function is the starting point for many heap-related tasks. It takes an iterable (typically a list) and rearranges its elements in-place to satisfy the properties of a min heap:

import heapq

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
heapq.heapify(data)
print(data)

This will output a reordered list that represents a valid min heap:

[1, 1, 2, 3, 3, 9, 4, 6, 5, 5, 5]

Time Complexity: Converting an unordered list into a heap using the heapify() function is an O(n) operation. This might seem counterintuitive, as one might expect it to be O(n log n), but due to the tree structure's properties, it can be achieved in linear time.

How to Add an Element to the Heap

The heappush() function allows you to insert a new element into the heap while maintaining the heap's properties:

import heapq

heap = []
heapq.heappush(heap, 5)
heapq.heappush(heap, 3)
heapq.heappush(heap, 7)
print(heap)

Running the code will give you a list of elements maintaining the min heap property:

[3, 5, 7]

Time Complexity: The insertion operation in a heap, which involves placing a new element in the heap while maintaining the heap property, has a time complexity of O(log n). This is because, in the worst case, the element might have to travel from a leaf to the root.

How to Remove and Return the Smallest Element from the Heap

The heappop() function extracts and returns the smallest element from the heap (the root in a min heap). After removal, it ensures the list remains a valid heap:

import heapq

heap = [1, 3, 5, 7, 9]
print(heapq.heappop(heap))
print(heap)

Note: The heappop() is invaluable in algorithms that require processing elements in ascending order, like the Heap Sort algorithm, or when implementing priority queues where tasks are executed based on their urgency.

This will output the smallest element and the remaining list:

1
[3, 7, 5, 9]

Here, 1 is the smallest element from the heap, and the remaining list has maintained the heap property, even after we removed 1.

Time Complexity: Removing the root element (which is the smallest in a min heap or largest in a max heap) and reorganizing the heap also takes O(log n) time.
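Repeatedly popping therefore yields the elements in ascending order, which is the essence of heap sort. A compact sketch (this simple version is ours, not from the heapq documentation):

```python
import heapq

def heapsort(iterable):
    """Sort by heapifying once, then popping the minimum n times."""
    heap = list(iterable)
    heapq.heapify(heap)                                  # O(n)
    return [heapq.heappop(heap) for _ in range(len(heap))]  # n pops, O(log n) each

print(heapsort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```

The total cost is O(n log n), dominated by the n pops.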

How to Push a New Item and Pop the Smallest Item

The heappushpop() function is a combined operation that pushes a new item onto the heap and then pops and returns the smallest item from the heap:

import heapq

heap = [3, 5, 7, 9]
print(heapq.heappushpop(heap, 4))
print(heap)

This will output 3, the smallest element, and print out the new heap list that now includes 4 while maintaining the heap property:

3
[4, 5, 7, 9]

Note: Using the heappushpop() function is more efficient than performing operations of pushing a new element and popping the smallest one separately.
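A small sketch showing that heappushpop() behaves like a push followed immediately by a pop, just done in a single cheaper step:

```python
import heapq

a = [3, 5, 7, 9]
b = list(a)

# The combined operation ...
x = heapq.heappushpop(a, 4)

# ... versus the two separate operations it replaces
heapq.heappush(b, 4)
y = heapq.heappop(b)

print(x, y)    # -> 3 3
print(a == b)  # -> True
```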

How to Replace the Smallest Item and Push a New Item

The heapreplace() function pops the smallest element and pushes a new element onto the heap, all in one efficient operation:

import heapq

heap = [1, 5, 7, 9]
print(heapq.heapreplace(heap, 4))
print(heap)

This prints 1, the smallest element, and the list now includes 4 and maintains the heap property:

1
[4, 5, 7, 9]

Note: heapreplace() is beneficial in streaming scenarios where you want to replace the current smallest element with a new value, such as in rolling window operations or real-time data processing tasks.
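For example, here is a common streaming pattern — keeping the k largest values seen so far in a size-k min heap and replacing the heap's minimum whenever a bigger value arrives. This is a sketch; the helper name k_largest is ours:

```python
import heapq

def k_largest(stream, k):
    """Track the k largest items of a stream using a size-k min heap."""
    heap = []
    for value in stream:
        if len(heap) < k:
            heapq.heappush(heap, value)
        elif value > heap[0]:
            # Evict the smallest of the current top-k in one step
            heapq.heapreplace(heap, value)
    return sorted(heap, reverse=True)

print(k_largest([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5], 3))  # -> [9, 6, 5]
```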

Finding Multiple Extremes in Python's Heap

nlargest(n, iterable[, key]) and nsmallest(n, iterable[, key]) functions are designed to retrieve multiple largest or smallest elements from an iterable. They can be more efficient than sorting the entire iterable when you only need a few extreme values. For example, say you have the following list and you want to find three smallest and three largest values in the list:

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]

Here, nlargest() and nsmallest() functions can come in handy:

import heapq

data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]
print(heapq.nlargest(3, data))   # Outputs [9, 6, 5]
print(heapq.nsmallest(3, data))  # Outputs [1, 1, 2]

This will give you two lists - one contains the three largest values and the other contains the three smallest values from the data list:

[9, 6, 5]
[1, 1, 2]

How to Build Your Custom Heap

While Python's heapq module provides a robust set of tools for working with heaps, there are scenarios where the default min heap behavior might not suffice. Whether you're looking to implement a max heap or need a heap that operates based on custom comparison functions, building a custom heap can be the answer. Let's explore how to tailor heaps to specific needs.

Implementing a Max Heap using heapq

By default, heapq creates min heaps. However, with a simple trick, you can use it to implement a max heap. The idea is to invert the order of elements by multiplying them by -1 before adding them to the heap:

import heapq

class MaxHeap:
    def __init__(self):
        self.heap = []

    def push(self, val):
        heapq.heappush(self.heap, -val)

    def pop(self):
        return -heapq.heappop(self.heap)

    def peek(self):
        return -self.heap[0]

With this approach, the largest number (in terms of absolute value) becomes the smallest, allowing the heapq functions to maintain a max heap structure.
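A quick usage sketch of the MaxHeap class (repeated here so the snippet runs on its own):

```python
import heapq

class MaxHeap:
    def __init__(self):
        self.heap = []

    def push(self, val):
        heapq.heappush(self.heap, -val)   # store negated values

    def pop(self):
        return -heapq.heappop(self.heap)  # negate back on the way out

    def peek(self):
        return -self.heap[0]

h = MaxHeap()
for n in (3, 7, 5):
    h.push(n)
print(h.peek())  # -> 7
print(h.pop())   # -> 7
print(h.pop())   # -> 5
```

Note that the negation trick only works for values that support unary minus, such as numbers; for arbitrary objects, you'd need a comparison wrapper like the one in the next section.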

Heaps with Custom Comparison Functions

Sometimes, you might need a heap that doesn't just compare based on the natural order of elements. For instance, if you're working with complex objects or have specific sorting criteria, a custom comparison function becomes essential.

To achieve this, you can wrap elements in a helper class that overrides the comparison operators:

import heapq

class CustomElement:
    def __init__(self, obj, comparator):
        self.obj = obj
        self.comparator = comparator

    def __lt__(self, other):
        return self.comparator(self.obj, other.obj)

def custom_heappush(heap, obj, comparator=lambda x, y: x < y):
    heapq.heappush(heap, CustomElement(obj, comparator))

def custom_heappop(heap):
    return heapq.heappop(heap).obj

With this setup, you can define any custom comparator function and use it with the heap.
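For instance, here is a sketch ordering strings by length rather than alphabetically (the helper class and functions are repeated from above so this runs standalone):

```python
import heapq

class CustomElement:
    def __init__(self, obj, comparator):
        self.obj = obj
        self.comparator = comparator

    def __lt__(self, other):
        return self.comparator(self.obj, other.obj)

def custom_heappush(heap, obj, comparator=lambda x, y: x < y):
    heapq.heappush(heap, CustomElement(obj, comparator))

def custom_heappop(heap):
    return heapq.heappop(heap).obj

# Order strings by length instead of alphabetically
by_length = lambda x, y: len(x) < len(y)
heap = []
for word in ("banana", "fig", "apple"):
    custom_heappush(heap, word, by_length)

print(custom_heappop(heap))  # -> fig (the shortest string comes out first)
```

Keeping a single comparator per heap is your responsibility; mixing comparators in one heap would produce inconsistent orderings.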


Heaps offer predictable performance for many operations, making them a reliable choice for priority-based tasks. However, it's essential to consider the specific requirements and characteristics of the application at hand. In some cases, tweaking the heap's implementation or even opting for alternative data structures might yield better real-world performance.

Heaps, as we've journeyed through, are more than just another data structure. They represent a confluence of efficiency, structure, and adaptability. From their foundational properties to their implementation in Python's heapq module, heaps offer a robust solution to a myriad of computational challenges, especially those centered around priority.

Categories: FLOSS Project Planets

Real Python: Embeddings and Vector Databases With ChromaDB

Wed, 2023-11-15 09:00

The era of large language models (LLMs) is here, bringing with it rapidly evolving libraries like ChromaDB that help augment LLM applications. You’ve most likely heard of chatbots like OpenAI’s ChatGPT, and perhaps you’ve even experienced their remarkable ability to reason about natural language processing (NLP) problems.

Modern LLMs, while imperfect, can accurately solve a wide range of problems and provide correct answers to many questions. But, due to the limits of their training and the number of text tokens they can process, LLMs aren’t a silver bullet for all tasks.

You wouldn’t expect an LLM to provide relevant responses about topics that don’t appear in their training data. For example, if you asked ChatGPT to summarize information in confidential company documents, then you’d be out of luck. You could show some of these documents to ChatGPT, but there’s a limited number of documents that you can upload before you exceed ChatGPT’s maximum number of tokens. How would you select documents to show ChatGPT?

To address these shortcomings and scale your LLM applications, one great option is to use a vector database like ChromaDB. A vector database allows you to store encoded unstructured objects, like text, as lists of numbers that you can compare to one another. You can, for example, find a collection of documents relevant to a question that you want an LLM to answer.

In this tutorial, you’ll learn about:

  • Representing unstructured objects with vectors
  • Using word and text embeddings in Python
  • Harnessing the power of vector databases
  • Encoding and querying over documents with ChromaDB
  • Providing context to LLMs like ChatGPT with ChromaDB

After reading, you’ll have the foundational knowledge to use ChromaDB in your NLP or LLM applications. Before reading, you should be comfortable with the basics of Python and high school math.

Get Your Code: Click here to download free sample code that shows you how to use ChromaDB to add context to an LLM.

Represent Data as Vectors

Before diving into embeddings and vector databases, you should understand what vectors are and what they represent. Feel free to skip ahead to the next section if you’re already comfortable with vector concepts. If you’re not or if you could use a refresher, then keep reading!

Vector Basics

You can describe vectors with variable levels of complexity, but one great starting place is to think of a vector as an array of numbers. For example, you could represent vectors using NumPy arrays as follows:

>>> import numpy as np
>>> vector1 = np.array([1, 0])
>>> vector2 = np.array([0, 1])
>>> vector1
array([1, 0])
>>> vector2
array([0, 1])

In this code block, you import numpy and create two arrays, vector1 and vector2, representing vectors. This is one of the most common and useful ways to work with vectors in Python, and NumPy offers a variety of functionality to manipulate vectors. There are also several other libraries that you can use to work with vector data, such as PyTorch, TensorFlow, JAX, and Polars. You’ll stick with NumPy for this overview.

You’ve created two NumPy arrays that represent vectors. Now what? It turns out you can do a lot of cool things with vectors, but before continuing on, you’ll need to understand some key definitions and properties:

  • Dimension: The dimension of a vector is the number of elements that it contains. In the example above, vector1 and vector2 are both two-dimensional since they each have two elements. You can only visualize vectors with three dimensions or less, but generally, vectors can have any number of dimensions. In fact, as you’ll see later, vectors that encode words and text tend to have hundreds or thousands of dimensions.

  • Magnitude: The magnitude of a vector is a non-negative number that represents the vector’s size or length. You can also refer to the magnitude of a vector as the norm, and you can denote it with ||v|| or |v|. There are many different definitions of magnitude or norm, but the most common is the Euclidean norm or 2-norm. You’ll learn how to compute this later.

  • Unit vector: A unit vector is a vector with a magnitude of one. In the example above, vector1 and vector2 are unit vectors.

  • Direction: The direction of a vector specifies the line along which the vector points. You can represent direction using angles, unit vectors, or coordinates in different coordinate systems.

  • Dot product (scalar product): The dot product of two vectors, u and v, is a number given by u ⋅ v = ||u|| ||v|| cos(θ), where θ is the angle between the two vectors. Another way to compute the dot product is to do an element-wise multiplication of u and v and sum the results. The dot product is one of the most important and widely used vector operations because it measures the similarity between two vectors. You’ll see more of this later on.

  • Orthogonal vectors: Vectors are orthogonal if their dot product is zero, meaning that they’re at a 90 degree angle to each other. You can think of orthogonal vectors as being completely unrelated to each other.

  • Dense vector: A vector is considered dense if most of its elements are non-zero. Later on, you’ll see that words and text are most usefully represented with dense vectors because each dimension encodes meaningful information.

While there are many more definitions and properties to learn, these six are most important for this tutorial. To solidify these ideas with code, check out the following block. Note that for the rest of this tutorial, you’ll use v1, v2, and v3 to name your vectors:

>>> import numpy as np
>>> v1 = np.array([1, 0])
>>> v2 = np.array([0, 1])
>>> v3 = np.array([np.sqrt(2), np.sqrt(2)])

>>> # Dimension
>>> v1.shape
(2,)

>>> # Magnitude
>>> np.sqrt(np.sum(v1**2))
1.0
>>> np.linalg.norm(v1)
1.0
>>> np.linalg.norm(v3)
2.0

>>> # Dot product
>>> np.sum(v1 * v2)
0
>>> v1 @ v3
1.4142135623730951

You first import numpy and create the arrays v1, v2, and v3. Calling v1.shape shows you the dimension of v1. You then see two different ways to compute the magnitude of a NumPy array. The first, np.sqrt(np.sum(v1**2)), uses the Euclidean norm that you learned about above. The second computation uses np.linalg.norm(), a NumPy function that computes the Euclidean norm of an array by default but can also compute other matrix and vector norms.

Lastly, you see two ways to calculate the dot product between two vectors. Using np.sum(v1 * v2) first computes the element-wise multiplication between v1 and v2 in a vectorized fashion, and you sum the results to produce a single number. A better way to compute the dot product is to use the at-operator (@), as you see with v1 @ v3. This is because @ can perform both vector and matrix multiplications, and the syntax is cleaner.

While all of these vector definitions and properties may seem straightforward to compute, you might still be wondering what they actually mean and why they’re important to understand. One way to better understand vectors is to visualize them in two dimensions. In this context, you can represent vectors as arrows, like in the following plot:

Representing vectors as arrows in two dimensions

The above plot shows the visual representation of the vectors v1, v2, and v3 that you worked with in the last example. The tail of each vector arrow always starts at the origin, and the tip is located at the coordinates specified by the vector. As an example, the tip of v1 lies at (1, 0), and the tip of v3 lies at roughly (1.414, 1.414). The length of each vector arrow corresponds to the magnitude that you calculated earlier.

From this visual, you can make the following key inferences:

  1. v1 and v2 are unit vectors because their magnitude, given by the arrow length, is one. v3 isn’t a unit vector, and its magnitude is two, twice the size of v1 and v2.

  2. v1 and v2 are orthogonal because their tails meet at a 90 degree angle. You see this visually but can also verify it computationally by computing the dot product between v1 and v2. By using the dot product definition, v1 ⋅ v2 = ||v1|| ||v2|| cos(θ), you can see that when θ = 90, cos(θ) = 0 and v1 ⋅ v2 = 0. Intuitively, you can think of v1 and v2 as being totally unrelated or having nothing to do with each other. This will become important later.

  3. v3 makes a 45 degree angle with both v1 and v2. This means that v3 will have a non-zero dot product with v1 and v2. This also means that v3 is equally related to both v1 and v2. In general, the smaller the angle between two vectors, the more they point toward a common direction.
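That third observation is the basis of cosine similarity, a standard way to score how related two vectors are by the angle between them; here is a sketch using the same vectors:

```python
import numpy as np

def cosine_similarity(u, v):
    """cos(theta) = (u . v) / (||u|| ||v||)"""
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

v1 = np.array([1, 0])
v2 = np.array([0, 1])
v3 = np.array([np.sqrt(2), np.sqrt(2)])

print(cosine_similarity(v1, v2))            # -> 0.0 (orthogonal, unrelated)
print(round(cosine_similarity(v1, v3), 3))  # -> 0.707 (45 degree angle)
```

Because it divides out the magnitudes, cosine similarity depends only on direction, which is why it's so common for comparing embedding vectors.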

Read the full article at »


Categories: FLOSS Project Planets

Mike Driscoll: Using CSS to Style a Python TUI with Textual

Wed, 2023-11-15 08:32

Textual is a Python framework for creating text-based user interfaces (TUIs). With Textual, you can create graphical-style user interfaces right in your terminal.

If you haven’t heard of Textual before, check out An Intro to Textual – Creating Text User Interfaces with Python

In this tutorial, you will learn how to create and style a form. The form won’t do anything, but this tutorial teaches how to add widgets, lay them out, and then give them some style.

Getting Started

If you don’t have Textual yet, you must install it. Textual is not built-in to Python, so you can use pip to get it on your machine.

Open up your terminal and run the following command to install Textual:

python -m pip install textual

Now you’re ready to rock!

Creating a Form in Textual

You are now ready to start coding with Textual. Open up your favorite Python editor and create a new file named

Then enter the following code:

from textual.app import App, ComposeResult
from textual.containers import Center
from textual.screen import Screen
from textual.widgets import Button, Footer, Header, Input, Static


class Form(Static):
    def compose(self) -> ComposeResult:
        """Creates the main UI elements"""
        yield Input(id="first_name", placeholder="First Name")
        yield Input(id="last_name", placeholder="Last Name")
        yield Input(id="address", placeholder="Address")
        yield Input(id="city", placeholder="City")
        yield Input(id="state", placeholder="State")
        yield Input(id="zip_code", placeholder="Zip Code")
        yield Input(id="email", placeholder="email")
        with Center():
            yield Button("Save", id="save_button")


class AddressBookApp(App):
    def compose(self) -> ComposeResult:
        """Lays out the main UI elements plus a header and footer"""
        yield Header()
        yield Form()
        yield Footer()


if __name__ == "__main__":
    app = AddressBookApp()
    app.run()

Here, you import all the bits and bobs you’ll need to create your form. You can use the Static class to group together multiple widgets. Think of it as a container-widget.

You create the Form() class to contain most of your form’s widgets. You will compose a series of text input widgets where users can fill in their name and address information. There is also a reference to something called Center(), an actual container in Textual that helps you align widgets.

Next, in the AddressBookApp() class, you create a header, the form, and a footer. Now you are ready to run your code.

Open up your terminal again and use the following command:


When you run your code, you will see something like the following:

The default colors work, but you may want to change them to give your application a different look.

You will learn how to do that by using CSS!

CSS Styling

Textual supports a limited subset of CSS that you can use to style your widgets. Create a new file and name it form.css.

Next, add the following code:

Input {
    background: white;
}

Button {
    background: blue;
}

The Input parameter tells Textual to style all the widgets that are of the Input type. In this example, you are setting the background color white.

The Button line item will set all the Button widget’s background color to blue. Of course, in this example, there is only one Button.

Now you need to update your code to tell Textual that you want to load a CSS file:

from textual.app import App, ComposeResult
from textual.containers import Center
from textual.screen import Screen
from textual.widgets import Button, Footer, Header, Input, Static


class Form(Static):
    def compose(self) -> ComposeResult:
        """Creates the main UI elements"""
        yield Input(id="first_name", placeholder="First Name")
        yield Input(id="last_name", placeholder="Last Name")
        yield Input(id="address", placeholder="Address")
        yield Input(id="city", placeholder="City")
        yield Input(id="state", placeholder="State")
        yield Input(id="zip_code", placeholder="Zip Code")
        yield Input(id="email", placeholder="email")
        with Center():
            yield Button("Save", id="save_button")


class AddressBookApp(App):
    CSS_PATH = "form.css"

    def compose(self) -> ComposeResult:
        """Lays out the main UI elements plus a header and footer"""
        yield Header()
        yield Form()
        yield Footer()


if __name__ == "__main__":
    app = AddressBookApp()
    app.run()

A one-line change is all you need, and that change is the first line in your AddressBookApp() class, where you set a CSS_PATH variable. You can supply either a relative or an absolute path to your CSS file here.

If you want to modify the style of any of the widgets in your TUI, you only need to go into the CSS file.

Try re-running the application and you’ll see an immediate difference:

If you’d like to be more specific about which widgets you want to style, change your CSS to the following:

Input {
    background: white;
}

#first_name {
    background: yellow;
    color: red;
}

#address {
    background: green;
}

#save_button {
    background: blue;
}

Here, you leave the Input widgets the same but add some hash-tag items to the CSS. These hash-tagged names must match the id you set for the individual widgets you want to style.

If you specify incorrect id names, those style blocks will be ignored. In this example, you explicitly modify the first_name and address Input widgets. You also call out the save_button Button. This doesn't really change the look of the button, since you didn't change its color, but if you add a second Button, it won't get any special styling.

Here is what it looks like when you run it now:

You may not like these colors, so feel free to try out some of your own. That’s part of the fun of creating a TUI.

Wrapping Up

Now you know the basics of using CSS with your Textual applications. CSS is not my favorite way of applying styling, but this seems to work pretty well with Textual. The other nice thing about Textual is that there is a developer mode that you can enable where you can edit the CSS and watch it change the user interface live.
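If you want to try that live-editing workflow, a minimal session might look like this (assuming your app lives in a file named my_app.py; the textual-dev package provides the textual command):

```shell
# Install the Textual developer tools (a separate package from textual itself)
pip install textual-dev

# Run the app in dev mode; edits to form.css are reflected live in the TUI
textual run --dev my_app.py
```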

Give Textual a try and see what you can make!

The post Using CSS to Style a Python TUI with Textual appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

PyCharm: Unveiling Python 3.12: What’s New in the World of Python?

Wed, 2023-11-15 06:15

Python 3.12 made its debut on October 2, 2023, in keeping with the annual tradition of releasing new versions every October.

This latest iteration introduces a range of new features and enhancements that we will delve into in this blog post. For a comprehensive list of changes, you can refer to the official documentation.


F-strings, also known as formatted string literals, were introduced in Python 3.6, providing a straightforward and concise method for string formatting. They allow the inclusion of expressions within string literals, simplifying the creation of strings with variables, expressions, or function call results. F-strings are identified by the prefix f before the string, and expressions within curly braces {} are computed and substituted with their values.

Due to their readability and versatility, f-strings have become the preferred choice for string formatting in Python, facilitating the creation of neatly formatted and dynamic strings in your code.

Issues addressed in Python 3.12:

  • Flexibility to use quotes
  • Improved handling of backslashes
  • Refined handling of comments
  • Enhanced support for nested f-strings


Error Messages

Python 3.12 has made significant enhancements in error messages compared to previous versions. While prior updates improved error messages, with the introduction of a PEG parser in Python 3.9 and “did you mean” semantics in Python 3.10, this release introduces further improvements:

  • Added stdlib as a source of places for “did you mean”
  • Class member “did you mean”
  • Import from syntax error “did you mean”
  • Import names “did you mean”

Another notable improvement is the increased intelligence of error messages when dealing with common developer mistakes. For example, the error message explicitly recommends the correct approach.

import a.y.z from b.y.z

Traceback (most recent call last):
  File "<stdin>", line 1
    import a.y.z from b.y.z
    ^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: Did you mean to use 'from ... import ...' instead?

Additionally, Python 3.12’s error messages are more astute in recognizing instances where you reference an object’s attribute but don’t include the self prefix.

If you use PyCharm, you probably won’t see much of a change, since the IDE handled such errors and provided a quick-fix suggestion even before running a script.

In the past, the check was limited to the built-ins, but it now includes support for the standard library.

Lastly, when you encounter an import error and receive an exception while trying to import something from a module, Python 3.12 automatically suggests potential corrections. These enhancements collectively contribute to a significantly improved coding experience in Python.
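As an illustration of the forgotten-import hint, consider the sketch below (the exact wording of the message varies by Python version; 3.12 adds the suggestion):

```python
# 'sys' is deliberately not imported here
try:
    print(sys.version)
except NameError as exc:
    message = str(exc)

print(message)
# On Python 3.12 this prints something like:
# name 'sys' is not defined. Did you forget to import 'sys'?
```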

Improvements in Type Annotations

PEP 698 Override Decorator

In this PEP, the suggestion is to introduce an @override decorator to Python’s type system. This addition aims to empower type checkers to proactively identify and prevent a specific category of errors arising when a base class modifies methods inherited by its derived classes.
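A minimal sketch of the decorator in use (typing.override ships with Python 3.12; the fallback below is only so the example also runs on older interpreters):

```python
try:
    from typing import override  # Python 3.12+
except ImportError:
    def override(func):  # no-op stand-in for older versions
        return func


class Base:
    def greet(self) -> str:
        return "hello from Base"


class Child(Base):
    @override  # a type checker now errors if Base.greet() is renamed or removed
    def greet(self) -> str:
        return "hello from Child"


print(Child().greet())
```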

PEP 695 Generic Types

Previously, we used to define generics using TypeVar syntax. TypeVar is a feature of the Python type hinting system that allows you to create a placeholder for a type that will be specified later when a function or class is used. It is primarily used to indicate that a particular type can be of any type, providing flexibility and generic type annotations in Python.

In Python 3.12, this has become much simpler.

You can also extend it to classes.

Previously we used to take help from TypeVar.

Now, in Python 3.12

Use the type keyword to define your own aliases.

Previously, we used TypeAlias from the typing module.

Now, in Python 3.12

PEP 709 Comprehension Inlining

In the past, dictionary, list, and set comprehensions were defined using a mechanism that involved creating functions. Essentially, the contents of comprehension were compiled into a separate function, which was then instantiated and immediately executed. This process incurred some overhead because it required the creation of a function object and the establishment of a stack frame when the function was called.

However, the implementation has been changed. Dictionary, list, and set comprehensions no longer rely on functions in the background. Instead, all comprehensions are now compiled directly within the context of the current function.

Previously, the comprehension’s bytecode was contained within an individual code object. Whenever inline_comprehension() was invoked, a new temporary function object was created via MAKE_FUNCTION, executed (resulting in the establishment and subsequent removal of a new frame on the Python stack), and promptly discarded.


This alteration means that there is no longer a separate stack frame associated with the comprehension.
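You can observe the change with the dis module: on Python 3.12, disassembling a function that contains a comprehension no longer shows a MAKE_FUNCTION opcode for it. A sketch:

```python
import dis


def squares(n):
    return [x * x for x in range(n)]


opnames = [ins.opname for ins in dis.get_instructions(squares)]
# On Python 3.11 this list contains MAKE_FUNCTION (the comprehension was a
# hidden function); on 3.12 the comprehension's bytecode is inlined instead.
print("MAKE_FUNCTION" in opnames)
print(squares(4))
```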

PEP 684 Per Interpreter GIL

If you’d like to learn more about the Global Interpreter Lock (GIL), I recommend watching this video where Guido discusses the Global Interpreter Lock and subinterpreters.

Python operates as an interpreted language, setting it apart from compiled languages that employ compilers to convert code into machine language. In contrast, Python reads and executes instructions directly within its interpreter. Performance enhancements in Python releases often translate to improvements in the CPython interpreter.

When you execute a Python program using CPython, it creates an interpreter instance. The initial instance is called the main interpreter and it is capable of generating subinterpreters. Most aspects of subinterpreters are distinct from one another, but not entirely. This subinterpreter concept isn’t new and has existed since Python 1.5, although it typically operates beneath the language’s surface.

Handling parallel execution can be tricky, especially when multiple processes attempt to modify a single value simultaneously, leading to consistency issues. Python employs the Global Interpreter Lock to mitigate such problems, but it’s been a source of frustration for developers seeking to write parallel code.

interp = interpreters.create()
print('before')
interp.run('print("during")')
print('after')

Efforts are underway to minimize the GIL’s impact and potentially eliminate it. 

PEP 684 and PEP 554 impact the structure of subinterpreters. PEP 684 relocates the GIL from the global level to a subinterpreter level, while PEP 554 is focused on enabling the fundamental capability of multiple interpreters, isolated from each other, in the same Python process.

It’s crucial to understand that these adjustments are largely behind the scenes, and Python users will not encounter them directly until Python 3.13 is released.

To learn more about PEP 684, visit

PEP 669 Low Impact Monitoring

PyCharm has added initial support for debugging based on PEP 669, improving overall debugger performance and making functionality such as tracing of raised exceptions and dropping into the debugger on a failed test almost penalty-less compared with the old sys.settrace based approach.

Credits: mCoding

import sys


def my_trace_call(code, instruction_offset, call, arg0):
    print("Event: call")


def my_trace_line(code, line_number):
    print("Event: line")


def setup_monitoring():
    mo = sys.monitoring
    events = mo.events
    mo.use_tool_id(0, "my_debugger")
    mo.set_events(0, events.CALL | events.LINE)
    mo.register_callback(0, events.CALL, my_trace_call)
    mo.register_callback(0, events.LINE, my_trace_line)


def main():
    for x in range(5):
        print(x)


if __name__ == "__main__":
    setup_monitoring()
    main()

In the past, Python debuggers used sys.settrace, which offered essentially the same functionality but in a less efficient manner. The new sys.monitoring namespace introduces a streamlined API for event registration, and its implementation details enable it to leverage the ongoing efforts to specialize instructions at runtime.

To learn more about PEP 669, visit

PEP 683 Immortal Objects

Meta, the company behind Instagram, utilizes Python (Django) for its front-end server. They implement a multi-process architecture with asyncio to handle parallelism. However, the high scale of operations and request volume can lead to memory inefficiency issues. To address this, they employ a pre-fork web server architecture to cache objects in shared memory, reducing private memory usage.

Upon closer examination, they found that the private memory of processes increased over time, while shared memory decreased. This issue was caused by Python objects, which although mostly immutable, still underwent modifications through reference counts and garbage collection (GC) operations, triggering a copy-on-write mechanism in server processes.

To resolve this problem, they introduced Immortal Objects (PEP-683), marking objects as truly immutable. This approach ensures that the reference count and GC header remain unchanged, reducing memory overhead.
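One visible consequence, sketched below (the exact numbers are interpreter-specific): on Python 3.12, creating new references to an immortal object such as None no longer changes its reference count, because the count is pinned at a sentinel value.

```python
import sys

before = sys.getrefcount(None)
extra_refs = [None] * 10_000  # ten thousand new references to None
after = sys.getrefcount(None)

# On Python 3.12+ (PEP 683) before == after; on earlier versions the
# count grows with each new reference.
print(before, after)
```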

To learn more about Immortal Objects, read the Meta Engineering Blog

Linux Perf Profiler

A profiler serves as a valuable instrument for observing and diagnosing the efficiency of your scripts and programs. Profiling your code allows you to obtain precise measurements, which can be utilized to refine your implementation.

Python has a history of supporting profiling through standard library tools such as timeit, cProfile, and memray from Bloomberg. Furthermore, there are third-party alternatives that provide more functionality.

Linux perf is a profiling and performance analysis tool that is integrated into the Linux kernel. It provides a wide range of features and capabilities for monitoring and analyzing the performance of a Linux system. Linux perf is a powerful utility that allows you to collect and analyze data on various aspects of system behavior, such as CPU utilization, memory usage, hardware events, and more. Some of its key features include:

1. CPU Profiling: Linux perf can be used to profile CPU usage, helping you identify hotspots in your code and understand how CPU time is distributed among different processes and functions.

2. Hardware Events: It can collect data on hardware events like cache misses, branch mispredictions, and instruction counts, which is valuable for optimizing code and understanding the impact of hardware on performance.

3. System-wide Profiling: Linux perf can capture system-wide data, enabling you to analyze the performance of all running processes and system components simultaneously.

4. Kernel Profiling: You can use Linux perf to analyze the performance of the Linux kernel itself, helping you pinpoint kernel-level bottlenecks and issues.

5. Tracing: It supports dynamic tracing of kernel and user-space events, allowing you to trace the execution of specific programs or system calls.

6. Performance Counters: Linux perf can access the performance monitoring counters available in modern CPUs, providing detailed information about processor behavior.

Linux perf is a versatile tool that is commonly used by developers, system administrators, and performance analysts to optimize software and diagnose performance problems on Linux systems. It provides a wealth of information that can help improve the efficiency and performance of applications and the overall system.
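Python 3.12 adds first-party support for perf: with the PYTHONPERFSUPPORT environment variable (or the -X perf interpreter option), the interpreter emits trampolines so that Python function names show up in perf's output. A hypothetical session (myscript.py is a placeholder):

```shell
# Record a profile with Python frames visible to perf (Linux, Python 3.12+)
PYTHONPERFSUPPORT=1 perf record -F 99 -g -- python3.12 myscript.py

# Equivalent, using the interpreter option instead of the env variable
perf record -F 99 -g -- python3.12 -X perf myscript.py

# Inspect the results
perf report
```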

This article, authored by Peter McConnell, explores the use of performance engineering with Python 3.12. It begins by introducing the Linux perf tool and the FlameGraph visualization tool. The goal is to reduce the runtime of a Python script from 36 seconds to 0.8 seconds, emphasizing the importance of Python 3.12’s performance profiling support.

The article explores the use of environment variables to enable perf support and repeats the profiling process with Python 3.12, generating an improved FlameGraph. The source code responsible for the performance issue is examined.


Python 3.12 comes with a bunch of welcome ergonomics improvements. Declaring generic classes, functions, and type aliases for type hinting is now as straightforward as in many statically typed languages, with first-class syntactic support provided by PEP 695. The already universally loved f-strings are now even easier to use, thanks to PEP 701 lifting former grammar restrictions, such as the ban on reusing quotes and on including escape sequences inside them. Low-overhead debugging features make using a debugger by default for all development tasks a no-brainer. Apart from that, there are new typing features, various performance improvements, and new standard library APIs.

Explore the capabilities of Python 3.12 with PyCharm 2023.3, now available in the Early Access Program (EAP). This version introduces a swifter debugging experience and enhanced code assistance tailored to Python 3.12’s new typing features. Unlock the potential of the new language features with the tool designed for it.

Try PyCharm 2023.3 EAP

Learn more about Python 3.12 Support in PyCharm:

For a detailed exploration of additional features, please refer to the official documentation at

Categories: FLOSS Project Planets

Python Software Foundation: It's time for our annual year-end PSF fundraiser and membership drive 🎉

Wed, 2023-11-15 05:30
Support Python in 2023!  
For the fifth year in a row, the PSF is partnering with JetBrains on our end-of-year fundraiser. Over that time, the partnership has raised a total of over $95,000. Amazing! Thank you, JetBrains, for all your support.

There are three ways to join in the drive this year:
  • Save on PyCharm! JetBrains is once again supporting the PSF by providing a 30% discount on PyCharm and all proceeds will go to the PSF! You can take advantage of this discount by clicking the button on the page linked here, and the discount will be automatically applied when you check out. The promotion will only be available through November 27th, so go grab the deal today!
  • Donate directly to the PSF! Every dollar makes a difference. (Does every dollar also make a puppy’s tail wag? We make no promises, but maybe you should try, just in case? 🐶)
  • Become a member! Sign up as a Supporting member of the PSF. Be a part of the PSF, and help us sustain what we do with your annual support.

Or, heck, why not do all three? 🥳

 Your Donations:

  • Keep Python thriving
  • Invest directly in CPython and PyPI progress
  • Bring the global Python community together
  • Make our community more diverse and robust every year

Let’s take a look back on 2023:

PyCon US - We held our 20th PyCon US, in Salt Lake City and online, which was an exhilarating success! For the online component, PyCon US OX, we added two moderated online hallway tracks (in Spanish and English) and saw a 33% increase in virtual engagement. It was great to see everyone again in 2023, and we’re grateful to all the speakers, volunteers, attendees, and sponsors who made it such a special event.

Security Developer in Residence - Seth Larson joined the PSF earlier this year as our first ever Security Developer-in-Residence. Seth is already well-known to the Python community – he was named a PSF Fellow in 2022 and has already written a lot about Python and security on his blog. This critical role would not be possible without funding from the OpenSSF Alpha-Omega Project.

PyPI Safety & Security Engineer - Mike Fiedler joined the PSF earlier this year as our first ever PyPI Safety & Security Engineer. Mike is already a dedicated member of the Python packaging community – he has been a Python user for some 15 years, maintains and contributes to open source, and became a PyPI Maintainer in 2022. You can see some of what he's achieved for PyPI already on the PyPI blog. This critical role would not be possible without funding from AWS.

Welcome, Marisa and Marie!
- In 2023 we were able to add two new full time staff members to the PSF. Marisa Comacho joined as Community Events Manager and Marie Nordin joined as Community Communications Manager. We are excited to add two full time dedicated staff members to the PSF to support PyCon US, our communications, and the community as a whole.  

CPython Developer in Residence
- Our CPython Developer in Residence, Łukasz Langa, continued to provide trusted support and advancement of the Python language, including oversight for the releases of Python 3.8 and 3.9, adoption of Sigstore, and stewardship of PEP 703 (to name a few of many!). Łukasz also engaged with the community by orchestrating the Python Language Summit and participating in events such as PyCon US 2023, EuroPython, and PyCon Colombia. This critical role would not be possible without funding from Meta.

Authorized as CVE Numbering Authority (CNA) - Being authorized as a CNA is one milestone in the Python Software Foundation's strategy to improve the vulnerability response processes of critical projects in the Python ecosystem. The Python Software Foundation CNA scope covers Python and pip, two projects which are fundamental to the rest of Python ecosystem.

Five new Fiscal Sponsorees
- Welcome to Bandit, BaPya, Twisted, PyOhio, and North Bay Python as new Fiscal Sponsorees of the PSF! The PSF provides 501(c)(3) tax-exempt status to fiscal sponsorees and provides back office support so they can focus on their missions.


Our Thanks:

Thank you for being a part of this drive and of the Python community! Keep an eye on this space and on our social media in the coming weeks for updates on the drive and the PSF 👀

Your support means the world to us. We’re incredibly grateful to be in community with you!

Categories: FLOSS Project Planets

PyCharm: PyCharm 2023.3 EAP 6 Is Out!

Tue, 2023-11-14 14:57

You can download the build from our website, get it from the free Toolbox App, or update to it using snaps if you’re an Ubuntu user.

Download PyCharm 2023.3 EAP

The sixth build of the Early Access Program for PyCharm 2023.3 brings improvements to:

  • Support for Type Parameter Syntax (PEP 695).
  • Django Structure view.
  • Django Live Preview.

These are the most important updates for this build. For the full list of changes in this EAP build, read the release notes.

We’re dedicated to giving you the best possible experience, and your feedback is vital. If you find any bugs, please report them via our issue tracker. And if you have any questions or comments, feel free to share them in the comments below or get in touch with us on X (formerly Twitter).

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #603 (Nov. 14, 2023)

Tue, 2023-11-14 14:30

#603 – NOVEMBER 14, 2023
View in Browser »

SciPy Builds on Windows Are a Minor Miracle

Moving SciPy to Meson meant finding a different Fortran compiler on Windows, which was particularly tricky to pull off for conda-forge. This blog tells the story about how things looked pretty grim for the Python 3.12 release, and how things ended up working out just in the nick of time. Associated HN discussion.

An Unbiased Evaluation of Environment and Packaging Tools

This detailed article covers the wide world of packaging in Python, how the different tools overlap, and how each has its own area of specialization. A great deep dive on all the choices out there that can help you pick the right tool for your project.

Automate LLM Backend Deployments Using Infrastructure as Code

New GitHub project to provision, update, and destroy the cloud infrastructure for a LLM backend using infrastructure as code (Python). Deployment options include deploying Hugging Face models to Docker (local), Runpod, and Azure →
PULUMI sponsor

Document Your Python Code and Projects With ChatGPT

Good documentation is a critical feature of any successful Python project. In practice, writing documentation is hard and can take a lot of time and effort. Nowadays, with tools like ChatGPT, you can quickly document your Python code and projects.

PSF Receives “Wonderfully Welcoming Award” From GitHub!


Discussions

Idea: Return a NamedTuple


Articles & Tutorials

Python Errors as Values

Error handling can be done in a variety of ways, and this article discusses why one organization decided to use returned error values instead of exceptions. Along the way, you’ll see comparisons between Python, Go, and Rust to better understand the different mechanisms.

Guide to Hash Tables in Python

Hash tables offer an efficient and flexible method of storing and retrieving data, making them indispensable for tasks involving large data sets or requiring rapid access to stored items. Python’s dict is a hash, learn how it works and how it can help your code.

Confusing git Terminology

Julia is working on a doc that explains git and in doing so polled some people about what git terminology they found confusing. This post covers the most common responses and attempts to clear up the confusion.

Check if a Python String Contains a Substring

In this video course, you’ll learn the best way to check whether a Python string contains a substring. You’ll also learn about idiomatic ways to inspect the substring further, match substrings with conditions using regular expressions, and search for substrings in pandas.

Building a Python Compiler and Interpreter

This article starts the journey of building a compiler and interpreter for the Python programming language, in Python. You’ll learn all about tokenizing, parsing, compiling, and interpreting.
RODRIGO GIRÃO SERRÃO • Shared by Rodrigo Girão Serrão

TIL: Django Constraints

Constraints in Django allow you to further restrict how data is managed in the database. This quick post covers how to use the CheckConstraint and UniqueConstraint classes in Django.

PEP 733: An Evaluation of Python’s Public C API

This is an informational PEP describing the shared public view of the C API in Python. It talks about why the C API exists, who the stakeholders are, and problems with the interface.

What Stage Startup Offers the Best Risk-Reward Tradeoff?

A deep dive on the success rate statistics of startups in the US with analysis on what joining at different stages means to a stock package payout.

Let’s Make a Silly JSON-like Parser

This article goes into deep detail on how you would construct a JSON parser in Python. If you’re new to parsing, this is a great place to start.

Rust vs. Go, Java, and Python in AWS Lambda Functions

A performance comparison of JSON parsing in AWS Lambda functions using Rust, Go, Java, and Python.

Everything You Can Do With Python’s bisect

Learn how to optimize search and keep your data sorted in Python with the bisect module.
MARTIN HEINZ • Shared by Martin Heinz

Projects & Code

uapi: Microframework for HTTP APIs


queryish: Data Queries Following Django’s QuerySet API


grablinks: Extract Links From a Remote HTML Resource


Bottle: Lightweight WSGI Micro Web Framework


FunASR: Speech Recognition Toolkit


Events

Weekly Real Python Office Hours Q&A (Virtual)

November 15, 2023

PyData Bristol Meetup

November 16, 2023

PyData Karlsruhe #8

November 16, 2023

PyLadies Dublin

November 16, 2023

Hamburg Python Pizza

November 17 to November 18, 2023

PyCon ID 2023

November 18 to November 20, 2023

PyCon Chile 2023

November 24 to November 27, 2023

Happy Pythoning!
This was PyCoder’s Weekly Issue #603.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

Real Python: Python Basics: Modules and Packages

Tue, 2023-11-14 09:00

As you gain experience writing code, you’ll eventually work on projects that are so large that keeping all the code in a single file becomes cumbersome.

Instead of writing a single file, you can put related code into separate files called modules. You can put individual modules together like building blocks to create a larger application.

In this video course, you’ll learn how to:

  • Create your own modules
  • Use modules in another file through the import statement
  • Organize several modules into a package
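As a taste of what the course covers, here is a minimal sketch of the module workflow (the module file is created on the fly so the snippet is self-contained; in practice you would just save greetings.py alongside your script, and the name is illustrative):

```python
import importlib
import pathlib
import sys
import tempfile

# Write a tiny module to a temporary directory
module_dir = tempfile.mkdtemp()
pathlib.Path(module_dir, "greetings.py").write_text(
    "def hello(name):\n"
    "    return f'Hello, {name}!'\n"
)

# Make the directory importable, then import the module
sys.path.insert(0, module_dir)
greetings = importlib.import_module("greetings")

print(greetings.hello("World"))
```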

This video course is part of the Python Basics series, which accompanies Python Basics: A Practical Introduction to Python 3. You can also check out the other Python Basics courses.

Note that you’ll be using IDLE to interact with Python throughout this course.

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Python Morsels: Solving programming exercises

Tue, 2023-11-14 09:00

How can you maximize the learning value from each coding challenge you solve?

Table of contents

  1. Outline an approach and walk away 💭
  2. Time-box yourself ⏲️
  3. Remove your distractions 🔕
  4. Write now, refactor later 📝
  5. Stuck? Stop! 🛑
  6. Flounder first, then seek help 🕵️
  7. Do it all over again 🔁
  8. Focus on the process, not the product ⛰️

Outline an approach and walk away 💭

Start by outlining your approach in a docstring or a comment. Be detailed, but use rough descriptions and pseudocode. You'll likely find yourself rereading the problem statement multiple times as you outline your approach.

For a challenging problem where you're likely to get stuck, time-box your outlining time. For example, set a timer for 15 minutes and then start outlining. When the timer goes off, walk away.

Walking away will let your brain work on the problem in the background. This will decrease the stress of getting stuck on a problem and allow your brain to be more creative because you're now unencumbered by the need to solve the problem quickly.

Ideally, after outlining the problem you might take a shower, make yourself a meal, or go for a walk. If you can, try to perform an activity that doesn't require intent focus, so your brain can wander.

When you walk away from an exercise before it's complete, you're likely to keep pondering it. You might realize your approach has a flaw or you might think of a completely different approach. The next time you sit down to solve your programming exercise, you'll likely find that you're a bit more eager to jump in than you would have if you'd kept coding right after outlining.

Time-box yourself ⏲️

Ready to sit down and …

Read the full article:
Categories: FLOSS Project Planets