Feeds

GNUnet News: libgnunetchat 0.3.0

GNU Planet! - Wed, 2024-03-06 18:00
libgnunetchat 0.3.0 released

We are pleased to announce the release of libgnunetchat 0.3.0.
This is a major new release bringing compatibility with the major changes in the Messenger service from the latest GNUnet release, 0.21.0: it adds new message kinds, adjusts message processing, and reworks key management. Because of that, this release requires your GNUnet to be at least 0.21.0.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links may be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Noteworthy changes in 0.3.0
  • This release requires the GNUnet Messenger Service 0.3!
  • It allows ticket management for tickets sent from contacts.
  • Deletions or other updates of messages result in separate event calls.
  • It is possible to tag messages or contacts.
  • Invitations can be rejected via tag messages.
  • Contacts can be blocked or unblocked, which results in their messages being filtered.
  • Processing of messages is ensured by enforcing a logical order of callbacks while querying old messages.
  • Private messages are readable to their sender.
  • Messages provide information about their recipients.
  • Logouts get processed on the application level on exit.
  • Message callbacks are delayed depending on message kind (deletions use a custom delay).
  • New debug tools are available to visualize the message graph.
  • A test case for message reception has been added.
  • Multiple issues have been fixed.

A detailed list of changes can be found in the ChangeLog.

Categories: FLOSS Project Planets

Data School: Get started with conda environments 🤝

Planet Python - Wed, 2024-03-06 12:58

In a previous post, I explained the differences between conda, Anaconda, and Miniconda.

I said that you can use conda to manage virtual environments:

If you’re not familiar with virtual environments, they allow you to maintain isolated environments with different packages and versions of those packages.

In this post, I’m going to explain the benefits of virtual environments and how to use virtual environments in conda.

Let’s go! 👇

Why use virtual environments?

A virtual environment is like a “workspace” where you can install a set of packages with specific versions. Each environment is isolated from all other environments, and also isolated from the base environment. (The base environment is created when you install conda.)

So, why use virtual environments at all?

  • Different packages can have conflicting requirements for their dependencies, meaning installing one may cause the other to stop working.
  • If you put them in separate environments instead, you can switch between the environments as needed, and both will continue to work.

Thus, by using environments, you won’t break existing projects when you install, update, or remove packages, since each project can have its own environment.

You can also delete environments once you’re done with them, and if you run into problems with an environment, it’s easy to start a new one!

Six commands you need to know

conda environments have a lot of complexity, but there are actually only six commands you need to learn in order to get most of the benefits:

1️⃣ conda create -n myenv jupyter pandas matplotlib scikit-learn

This tells conda to:

  • Create an environment named myenv (or whatever name you choose)
  • Install jupyter, pandas, matplotlib, and scikit-learn in myenv (or whatever packages you need installed)
  • Install all of their dependencies in myenv
  • Choose the latest version of every one of those packages (as long as it works with every other package being installed)

That last point is a mouthful, but it basically means that conda will try to avoid any conflicts between package dependencies.

Note: conda stores all of your environments in one location on your computer, so it doesn’t matter what directory you are in when you create an environment.

2️⃣ conda activate myenv

This activates the myenv environment, such that you are now working in the myenv “workspace”.

In other words:

  • If you now type python or jupyter lab (for example), you’ll be running the Python or JupyterLab that is installed in myenv.
  • If you then type import pandas, you’ll be importing the pandas that’s installed in myenv.

Note: Activating an environment does not change your working directory.
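If you're ever unsure which environment's Python you're actually running, you can check from inside Python itself (a quick sanity check using the standard library, not a conda-specific command):

```python
import sys

# The interpreter's path reveals which environment it lives in.
# After "conda activate myenv", this path should contain "myenv".
print(sys.executable)  # e.g. /home/you/miniconda3/envs/myenv/bin/python

# sys.prefix is the root directory of the active environment.
print(sys.prefix)
```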

3️⃣ conda list

This lists all of the packages that are installed in the active environment (along with their version numbers). If you followed my commands above, you’ll see python, jupyter, pandas, matplotlib, scikit-learn, and all of their dependencies.

4️⃣ conda env list

This lists all of the conda environments on your system, with an asterisk (*) next to the active environment.

5️⃣ conda deactivate

This exits the active environment, which will usually take you back to the “base” environment (which was created by Anaconda or Miniconda during installation).

6️⃣ conda env remove -n myenv

This permanently deletes the myenv environment. You can’t delete the active environment, so you have to deactivate myenv (or activate a different environment) first.

Going further

If you want to learn more about conda environments, check out this section of conda’s user guide:

🔗 Managing environments

If you want a broader view of conda and its capabilities, check out this section:

🔗 Common tasks

Or, share your question in the comments below and I’m happy to help! 👇

Categories: FLOSS Project Planets

Steinar H. Gunderson: Reverse Amdahl's Law

Planet Debian - Wed, 2024-03-06 11:39

Everybody working in performance knows Amdahl's law, and it is usually framed as a negative result; if you optimize (in most formulations, parallelize) a part of an operation, you gain diminishing results after a while. (When optimizing a given fraction p of the total time T by a speedup factor s, the new time taken is (1-p)T + pT/s.)
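The formula is easy to sanity-check in a few lines of Python (this just restates the arithmetic above, nothing more):

```python
def amdahl_time(T, p, s):
    """Total time after speeding up a fraction p of time T by a factor s."""
    return (1 - p) * T + (p * T) / s

# Speed up half of a 100-second job by 2x: the job now takes 75s, not 50s.
print(amdahl_time(100, 0.5, 2))  # 75.0
```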

However, Amdahl's law also works beautifully in reverse! When you optimize something, there's usually some limit where a given optimization isn't worth it anymore; I usually put this around 1% or so, although of course it varies with the cost of the optimization and such. (Most people would count 1% as ridiculously low, but it's usually how mature systems go; you rarely find single 30% speedups, but you can often find ten smaller speedups and apply them sequentially. SQLite famously tripled their speed by chaining optimizations so tiny that they needed to run in a simulator to measure them.) And as your total runtime becomes smaller, things that used to not be worth it now pop over that threshold! If you have enough developer resources and no real upper limit for how much performance you would want, you can keep going forever.

A different way to look at it is that optimizations give you compound interest; if measuring in terms of throughput instead of latency (i.e., items per second instead of seconds per item), which I contend is the only reasonable way to express performance percentages, you can simply multiply the factors together.[1] So 1% and then 1% means 1.01 * 1.01 = 1.0201, i.e. a 2.01% speedup and not 2%. Thirty 1% optimizations compound to 34.8%, not 30%.
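The compounding arithmetic is easy to check for yourself:

```python
import math

def compound_speedup(factors):
    """Multiply per-optimization speedup factors into one total factor."""
    return math.prod(factors)

# Two 1% wins compound to 2.01%, not 2%.
print(round((compound_speedup([1.01, 1.01]) - 1) * 100, 2))  # 2.01

# Thirty 1% wins compound to ~34.8%, not 30%.
print(round((compound_speedup([1.01] * 30) - 1) * 100, 1))  # 34.8
```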

So here's my formulation of Amdahl's law, in a more positive light: The more you speed up a given part of a system, the more impactful optimizations in the other parts will be. So go forth and fire up those profilers :-)

[1] Obviously throughput measurements are inappropriate if what you care about is e.g. 99p latency. It is still better to talk about a 50% speedup than removing 33% of the latency, though, especially as the speedup factor gets higher.

Categories: FLOSS Project Planets

Real Python: Build an LLM RAG Chatbot With LangChain

Planet Python - Wed, 2024-03-06 09:00

You’ve likely interacted with large language models (LLMs), like the ones behind OpenAI’s ChatGPT, and experienced their remarkable ability to answer questions, summarize documents, write code, and much more. While LLMs are remarkable by themselves, with a little programming knowledge, you can leverage libraries like LangChain to create your own LLM-powered chatbots that can do just about anything.

In an enterprise setting, one of the most popular ways to create an LLM-powered chatbot is through retrieval-augmented generation (RAG). When you design a RAG system, you use a retrieval model to retrieve relevant information, usually from a database or corpus, and provide this retrieved information to an LLM to generate contextually relevant responses.
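The retrieve-then-generate loop can be sketched in plain Python. Everything below -- the word-overlap retriever, the prompt builder, and the tiny corpus -- is invented for illustration; it is not the tutorial's code and not LangChain's API:

```python
def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(query, docs):
    """Stuff the retrieved documents into the LLM prompt as context."""
    context = "\n".join(docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Visits are recorded per hospital location.",
    "Physicians are linked to the visits they attend.",
    "Payers cover the cost of patient visits.",
]

query = "Which physicians attend visits?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)
```

In a real RAG system, the retriever would query a vector store or database, and the prompt would be sent to an LLM for generation.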

In this tutorial, you’ll step into the shoes of an AI engineer working for a large hospital system. You’ll build a RAG chatbot in LangChain that uses Neo4j to retrieve data about the patients, patient experiences, hospital locations, visits, insurance payers, and physicians in your hospital system.

In this tutorial, you’ll learn how to:

  • Use LangChain to build custom chatbots
  • Design a chatbot using your understanding of the business requirements and hospital system data
  • Work with graph databases
  • Set up a Neo4j AuraDB instance
  • Build a RAG chatbot that retrieves both structured and unstructured data from Neo4j
  • Deploy your chatbot with FastAPI and Streamlit

Click the link below to download the complete source code and data for this project:

Get Your Code: Click here to download the free source code for your LangChain chatbot.

Demo: An LLM RAG Chatbot With LangChain and Neo4j

By the end of this tutorial, you’ll have a REST API that serves your LangChain chatbot. You’ll also have a Streamlit app that provides a nice chat interface to interact with your API:

Under the hood, the Streamlit app sends your messages to the chatbot API, and the chatbot generates and sends a response back to the Streamlit app, which displays it to the user.

You’ll get an in-depth overview of the data that your chatbot has access to later, but if you’re anxious to test it out, you can ask questions similar to the examples given in the sidebar:


You’ll learn how to tackle each step, from understanding the business requirements and data to building the Streamlit app. There’s a lot to unpack in this tutorial, but don’t feel overwhelmed. You’ll get some background on each concept introduced, along with links to external sources that will deepen your understanding. Now, it’s time to dive in!

Prerequisites

This tutorial is best suited for intermediate Python developers who want to get hands-on experience creating custom chatbots. Aside from intermediate Python knowledge, you’ll benefit from having a high-level understanding of the following concepts and technologies:

Nothing listed above is a hard prerequisite, so don’t worry if you don’t feel knowledgeable in any of them. You’ll be introduced to each concept and technology along the way. Besides, there’s no better way to learn these prerequisites than to implement them yourself in this tutorial.

Next up, you’ll get a brief project overview and begin learning about LangChain.

Project Overview

Throughout this tutorial, you’ll create a few directories that make up your final chatbot. Here’s a breakdown of each directory:

  • langchain_intro/ will help you get familiar with LangChain and equip you with the tools that you need to build the chatbot you saw in the demo, and it won’t be included in your final chatbot. You’ll cover this in Step 1.

  • data/ has the raw hospital system data stored as CSV files. You’ll explore this data in Step 2. In Step 3, you’ll move this data into a Neo4j database that your chatbot will query to answer questions.

  • hospital_neo4j_etl/ contains a script that loads the raw data from data/ into your Neo4j database. You have to run this before building your chatbot, and you’ll learn everything you need to know about setting up a Neo4j instance in Step 3.

  • chatbot_api/ is your FastAPI app that serves your chatbot as a REST endpoint, and it’s the core deliverable of this project. The chatbot_api/src/agents/ and chatbot_api/src/chains/ subdirectories contain the LangChain objects that comprise your chatbot. You’ll learn what agents and chains are later, but for now, just know that your chatbot is actually a LangChain agent composed of chains and functions.

  • tests/ includes two scripts that test how fast your chatbot can answer a series of questions. This will give you a feel for how much time you save by making asynchronous requests to LLM providers like OpenAI.

  • chatbot_frontend/ is your Streamlit app that interacts with the chatbot endpoint in chatbot_api/. This is the UI that you saw in the demo, and you’ll build this in Step 5.

All the environment variables needed to build and run your chatbot will be stored in a .env file. You’ll deploy the code in hospital_neo4j_etl/, chatbot_api, and chatbot_frontend as Docker containers that’ll be orchestrated with Docker Compose. If you want to experiment with the chatbot before going through the rest of this tutorial, then you can download the materials and follow the instructions in the README file to get things running:

Get Your Code: Click here to download the free source code for your LangChain chatbot.

With the project overview and prerequisites behind you, you’re ready to get started with the first step—getting familiar with LangChain.

Read the full article at https://realpython.com/build-llm-rag-chatbot-with-langchain/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Python GUIs: Drag & Drop Widgets with PySide6 — Sort widgets visually with drag and drop in a container

Planet Python - Wed, 2024-03-06 08:00

I had an interesting question from a reader of my PySide6 book, about how to handle dragging and dropping of widgets in a container showing the dragged widget as it is moved.

I'm interested in managing movement of a QWidget with mouse in a container. I've implemented the application with drag & drop, exchanging the position of buttons, but I want to show the motion of QPushButton, like what you see in Qt Designer. Dragging a widget should show the widget itself, not just the mouse pointer.

First, we'll implement the simple case which drags widgets without showing anything extra. Then we can extend it to answer the question. By the end of this quick tutorial we'll have a generic drag drop implementation which looks like the following.

Drag & Drop Widgets

We'll start with a simple application which creates a window using QWidget and places a series of QPushButton widgets into it.

You can substitute QPushButton for any other widget you like, e.g. QLabel. Any widget can have drag behavior implemented on it, although some input widgets will not work well as we capture the mouse events for the drag.

python
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget

class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = QPushButton(l)
            self.blayout.addWidget(btn)
        self.setLayout(self.blayout)

app = QApplication([])
w = Window()
w.show()
app.exec()

If you run this you should see something like this.

The series of QPushButton widgets in a horizontal layout.

Here we're creating a window, but the Window widget is subclassed from QWidget, meaning you can add this widget to any other layout. See later for an example of a generic object sorting widget.

QPushButton objects aren't usually draggable, so to handle the mouse movements and initiate a drag we need to implement a subclass. We can add the following to the top of the file.

python
from PySide6.QtCore import QMimeData, Qt
from PySide6.QtGui import QDrag
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget

class DragButton(QPushButton):
    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)
            drag.exec(Qt.DropAction.MoveAction)

We implement a mouseMoveEvent which accepts the single e parameter of the event. We check to see if the left mouse button is pressed on this event -- as it would be when dragging -- and then initiate a drag. To start a drag, we create a QDrag object, passing in self to give us access later to the widget that was dragged. We also must pass in mime data. This is used for including information about what is dragged, particularly for passing data between applications. However, as here, it is fine to leave this empty.

Finally, we initiate the drag by calling drag.exec(Qt.DropAction.MoveAction). As with dialogs, exec() starts a new event loop, blocking the main loop until the drag is complete. The parameter Qt.DropAction.MoveAction tells the drag handler what type of operation is happening, so it can show the appropriate icon tip to the user.

You can update the main window code to use our new DragButton class as follows.

python
class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = DragButton(l)
            self.blayout.addWidget(btn)
        self.setLayout(self.blayout)

If you run the code now, you can drag the buttons, but you'll notice the drag is forbidden.

Dragging of the widget starts but is forbidden.

What's happening? The mouse movement is being detected by our DragButton object and the drag started, but the main window does not accept drag & drop.

To fix this we need to enable drops on the window and implement dragEnterEvent to actually accept them.

python
class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.setAcceptDrops(True)
        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = DragButton(l)
            self.blayout.addWidget(btn)
        self.setLayout(self.blayout)

    def dragEnterEvent(self, e):
        e.accept()

If you run this now, you'll see the drag is now accepted and you see the move icon. This indicates that the drag has started and been accepted by the window we're dragging onto. The icon shown is determined by the action we pass when calling drag.exec().

Dragging of the widget starts and is accepted, showing a move icon.

Releasing the mouse button during a drag drop operation triggers a dropEvent on the widget you're currently hovering the mouse over (if it is configured to accept drops). In our case that's the window. To handle the move we need to implement the code to do this in our dropEvent method.

The drop event contains the position the mouse was at when the button was released & the drop triggered. We can use this to determine where to move the widget to.

To determine where to place the widget, we iterate over all the widgets in the layout until we find one whose x position is greater than that of the mouse pointer. If we find one, we insert the widget directly to its left and exit the loop.

If we get to the end of the loop without finding a match, we must be dropping past the end of the existing items, so we increment n one further (in the else: block below).

python
def dropEvent(self, e):
    pos = e.position()
    widget = e.source()
    self.blayout.removeWidget(widget)

    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()
        if pos.x() < w.x():
            # We didn't drag past this widget.
            # Insert to the left of it.
            break
    else:
        # We aren't on the left hand side of any widget,
        # so we're at the end. Increment 1 to insert after.
        n += 1

    self.blayout.insertWidget(n, widget)
    e.accept()

The effect of this is that if you drag 1 pixel past the start of another widget the drop will happen to the right of it, which is a bit confusing. To fix this we can adjust the cut off to use the middle of the widget using if pos.x() < w.x() + w.size().width() // 2: -- that is x + half of the width.

python
def dropEvent(self, e):
    pos = e.position()
    widget = e.source()
    self.blayout.removeWidget(widget)

    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()
        if pos.x() < w.x() + w.size().width() // 2:
            # We didn't drag past this widget.
            # Insert to the left of it.
            break
    else:
        # We aren't on the left hand side of any widget,
        # so we're at the end. Increment 1 to insert after.
        n += 1

    self.blayout.insertWidget(n, widget)
    e.accept()

The complete working drag-drop code is shown below.

python
from PySide6.QtCore import QMimeData, Qt
from PySide6.QtGui import QDrag
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget

class DragButton(QPushButton):
    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)
            drag.exec(Qt.DropAction.MoveAction)

class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.setAcceptDrops(True)
        self.blayout = QHBoxLayout()
        for l in ["A", "B", "C", "D"]:
            btn = DragButton(l)
            self.blayout.addWidget(btn)
        self.setLayout(self.blayout)

    def dragEnterEvent(self, e):
        e.accept()

    def dropEvent(self, e):
        pos = e.position()
        widget = e.source()
        self.blayout.removeWidget(widget)

        for n in range(self.blayout.count()):
            # Get the widget at each index in turn.
            w = self.blayout.itemAt(n).widget()
            if pos.x() < w.x() + w.size().width() // 2:
                # We didn't drag past this widget.
                # Insert to the left of it.
                break
        else:
            # We aren't on the left hand side of any widget,
            # so we're at the end. Increment 1 to insert after.
            n += 1

        self.blayout.insertWidget(n, widget)
        e.accept()

app = QApplication([])
w = Window()
w.show()
app.exec()

Visual Drag & Drop

We now have a working drag & drop implementation. Next we'll move onto improving the UX by showing the drag visually. First we'll add support for showing the button being dragged next to the mouse point as it is dragged. That way the user knows exactly what it is they are dragging.

Qt's QDrag handler natively provides a mechanism for showing dragged objects which we can use. We can update our DragButton class to pass a pixmap image to QDrag and this will be displayed under the mouse pointer as the drag occurs. To show the widget, we just need to get a QPixmap of the widget we're dragging.

python
from PySide6.QtCore import QMimeData, Qt
from PySide6.QtGui import QDrag, QPixmap
from PySide6.QtWidgets import QApplication, QHBoxLayout, QPushButton, QWidget

class DragButton(QPushButton):
    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)

            pixmap = QPixmap(self.size())
            self.render(pixmap)
            drag.setPixmap(pixmap)

            drag.exec(Qt.DropAction.MoveAction)

To create the pixmap we create a QPixmap object passing in the size of the widget this event is fired on with self.size(). This creates an empty pixmap which we can then pass into self.render to render -- or draw -- the current widget onto it. That's it. Then we set the resulting pixmap on the drag object.

If you run the code with this modification you'll see something like the following --

Dragging of the widget showing the dragged widget.

Generic Drag & Drop Container

We now have a working drag and drop behavior implemented on our window. We can take this a step further and implement a generic drag drop widget which allows us to sort arbitrary objects. In the code below we've created a new widget DragWidget which can be added to any window.

You can add items -- instances of DragItem -- which you want to be sorted, as well as setting data on them. When items are re-ordered the new order is emitted as a signal orderChanged.

python
from PySide6.QtCore import QMimeData, Qt, Signal
from PySide6.QtGui import QDrag, QPixmap
from PySide6.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QLabel,
    QMainWindow,
    QVBoxLayout,
    QWidget,
)

class DragItem(QLabel):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setContentsMargins(25, 5, 25, 5)
        self.setAlignment(Qt.AlignmentFlag.AlignCenter)
        self.setStyleSheet("border: 1px solid black;")
        # Store data separately from display label, but use label for default.
        self.data = self.text()

    def set_data(self, data):
        self.data = data

    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)

            pixmap = QPixmap(self.size())
            self.render(pixmap)
            drag.setPixmap(pixmap)

            drag.exec(Qt.DropAction.MoveAction)

class DragWidget(QWidget):
    """
    Generic list sorting handler.
    """

    orderChanged = Signal(list)

    def __init__(self, *args, orientation=Qt.Orientation.Vertical, **kwargs):
        super().__init__()
        self.setAcceptDrops(True)

        # Store the orientation for drag checks later.
        self.orientation = orientation

        if self.orientation == Qt.Orientation.Vertical:
            self.blayout = QVBoxLayout()
        else:
            self.blayout = QHBoxLayout()

        self.setLayout(self.blayout)

    def dragEnterEvent(self, e):
        e.accept()

    def dropEvent(self, e):
        pos = e.position()
        widget = e.source()
        self.blayout.removeWidget(widget)

        for n in range(self.blayout.count()):
            # Get the widget at each index in turn.
            w = self.blayout.itemAt(n).widget()
            if self.orientation == Qt.Orientation.Vertical:
                # Drag drop vertically.
                drop_here = pos.y() < w.y() + w.size().height() // 2
            else:
                # Drag drop horizontally.
                drop_here = pos.x() < w.x() + w.size().width() // 2

            if drop_here:
                break
        else:
            # We aren't on the left hand/upper side of any widget,
            # so we're at the end. Increment 1 to insert after.
            n += 1

        self.blayout.insertWidget(n, widget)
        self.orderChanged.emit(self.get_item_data())
        e.accept()

    def add_item(self, item):
        self.blayout.addWidget(item)

    def get_item_data(self):
        data = []
        for n in range(self.blayout.count()):
            # Get the widget at each index in turn.
            w = self.blayout.itemAt(n).widget()
            data.append(w.data)
        return data

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.drag = DragWidget(orientation=Qt.Orientation.Vertical)
        for n, l in enumerate(["A", "B", "C", "D"]):
            item = DragItem(l)
            item.set_data(n)  # Store the data.
            self.drag.add_item(item)

        # Print out the changed order.
        self.drag.orderChanged.connect(print)

        container = QWidget()
        layout = QVBoxLayout()
        layout.addStretch(1)
        layout.addWidget(self.drag)
        layout.addStretch(1)
        container.setLayout(layout)

        self.setCentralWidget(container)

app = QApplication([])
w = MainWindow()
w.show()
app.exec()

Generic drag-drop sorting in horizontal orientation.

You'll notice that when creating the item, you can set the label by passing it in as a parameter (just like for a normal QLabel which we've subclassed from). But you can also set a data value, which is the internal value of this item -- this is what will be emitted when the order changes, or if you call get_item_data yourself. This separates the visual representation from what is actually being sorted, meaning you can use this to sort anything not just strings.

In the example above we're passing in the enumerated index as the data, so dragging will output (via the print connected to orderChanged) something like:

python
[1, 0, 2, 3]
[1, 2, 0, 3]
[1, 0, 2, 3]
[1, 2, 0, 3]

If you remove the item.set_data(n) you'll see the labels emitted on changes.

python
['B', 'A', 'C', 'D']
['B', 'C', 'A', 'D']

We've also implemented orientation on the DragWidget using the built-in flags Qt.Orientation.Vertical and Qt.Orientation.Horizontal. This setting allows you to sort items either vertically or horizontally -- the calculations are handled for both directions.

Generic drag-drop sorting in vertical orientation.

Adding a Visual Drop Target

If you experiment with the drag-drop tool above you'll notice that it doesn't feel completely intuitive. When dragging you don't know where an item will be inserted until you drop it. If it ends up in the wrong place, you'll then need to pick it up and re-drop it again, using guesswork to get it right.

With a bit of practice you can get the hang of it, but it would be nicer to make the behavior immediately obvious for users. Many drag-drop interfaces solve this problem by showing a preview of where the item will be dropped while dragging -- either by showing the item in the place where it will be dropped, or showing some kind of placeholder.

In this final section we'll implement this type of drag and drop preview indicator.

The first step is to define our target indicator. This is just another label, which in our example is empty, with custom styles applied to make it have a solid "shadow" like background. This makes it obviously different to the items in the list, so it stands out as something distinct.

python
from PySide6.QtCore import QMimeData, Qt, Signal
from PySide6.QtGui import QDrag, QPixmap
from PySide6.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QLabel,
    QMainWindow,
    QVBoxLayout,
    QWidget,
)

class DragTargetIndicator(QLabel):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setContentsMargins(25, 5, 25, 5)
        self.setStyleSheet(
            "QLabel { background-color: #ccc; border: 1px solid black; }"
        )

We've copied the contents margins from the items in the list. If you change your list items, remember to also update the indicator dimensions to match.

The drag item is unchanged, but we need to implement some additional behavior on our DragWidget to add the target, control showing and moving it.

First we'll add the drag target indicator to the layout on our DragWidget. This is hidden to begin with, but will be shown during the drag.

python
class DragWidget(QWidget):
    """
    Generic list sorting handler.
    """

    orderChanged = Signal(list)

    def __init__(self, *args, orientation=Qt.Orientation.Vertical, **kwargs):
        super().__init__()
        self.setAcceptDrops(True)

        # Store the orientation for drag checks later.
        self.orientation = orientation

        if self.orientation == Qt.Orientation.Vertical:
            self.blayout = QVBoxLayout()
        else:
            self.blayout = QHBoxLayout()

        # Add the drag target indicator. This is invisible by default,
        # we show it and move it around while the drag is active.
        self._drag_target_indicator = DragTargetIndicator()
        self.blayout.addWidget(self._drag_target_indicator)
        self._drag_target_indicator.hide()

        self.setLayout(self.blayout)

Next we modify the DragWidget.dragMoveEvent to show the drag target indicator. We show it by inserting it into the layout and then calling .show() -- inserting a widget which is already in a layout will move it. We also hide the original item which is being dragged.

In the earlier examples we determined the position on drop by removing the widget being dragged, and then iterating over what is left. Because we now need to calculate the drop location before the drop, we take a different approach.

If we wanted to do it the same way, we'd need to remove the item on drag start, hold onto it, and implement re-inserting it at its old position if the drag fails. That's a lot of work.

Instead, the dragged item is left in place and hidden during move.

python
def dragMoveEvent(self, e):
    # Find the correct location of the drop target, so we can move it there.
    index = self._find_drop_location(e)
    if index is not None:
        # Inserting moves the item if it's already in the layout.
        self.blayout.insertWidget(index, self._drag_target_indicator)
        # Hide the item being dragged.
        e.source().hide()
        # Show the target.
        self._drag_target_indicator.show()
    e.accept()

The method self._find_drop_location finds the index where the drag target will be shown (or the item dropped when the mouse released). We'll implement that next.

The calculation of the drop location follows the same pattern as before. We iterate over the items in the layout and calculate whether our mouse drop location is to the left of each widget. If it isn't to the left of any widget, we drop on the far right.

python
def _find_drop_location(self, e):
    pos = e.position()
    spacing = self.blayout.spacing() / 2

    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()

        if self.orientation == Qt.Orientation.Vertical:
            # Drag drop vertically.
            drop_here = (
                pos.y() >= w.y() - spacing
                and pos.y() <= w.y() + w.size().height() + spacing
            )
        else:
            # Drag drop horizontally.
            drop_here = (
                pos.x() >= w.x() - spacing
                and pos.x() <= w.x() + w.size().width() + spacing
            )

        if drop_here:
            # Drop over this target.
            break

    return n

The drop location n is returned for use in the dragMoveEvent to place the drop target indicator.

Next we need to update the get_item_data handler to ignore the drop target indicator. To do this we check w against self._drag_target_indicator and skip it if they are the same. With this change the method will work as expected.

```python
def get_item_data(self):
    data = []
    for n in range(self.blayout.count()):
        # Get the widget at each index in turn.
        w = self.blayout.itemAt(n).widget()
        if w != self._drag_target_indicator:
            # The target indicator has no data.
            data.append(w.data)
    return data
```

If you run the code at this point the drag behavior will work as expected. But if you drag the widget outside of the window and drop it, you'll notice a problem: the target indicator will stay in place, but the item won't be dropped at that position (the drop will be cancelled).

To fix that we need to implement a dragLeaveEvent which hides the indicator.

```python
def dragLeaveEvent(self, e):
    self._drag_target_indicator.hide()
    e.accept()
```

With those changes, the drag-drop behavior should be working as intended. The complete code is shown below.

```python
from PySide6.QtCore import QMimeData, Qt, Signal
from PySide6.QtGui import QDrag, QPixmap
from PySide6.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QLabel,
    QMainWindow,
    QVBoxLayout,
    QWidget,
)


class DragTargetIndicator(QLabel):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.setContentsMargins(25, 5, 25, 5)
        self.setStyleSheet(
            "QLabel { background-color: #ccc; border: 1px solid black; }"
        )


class DragItem(QLabel):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setContentsMargins(25, 5, 25, 5)
        self.setAlignment(Qt.AlignmentFlag.AlignCenter)
        self.setStyleSheet("border: 1px solid black;")
        # Store data separately from display label, but use label for default.
        self.data = self.text()

    def set_data(self, data):
        self.data = data

    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)

            pixmap = QPixmap(self.size())
            self.render(pixmap)
            drag.setPixmap(pixmap)

            drag.exec(Qt.DropAction.MoveAction)


class DragWidget(QWidget):
    """
    Generic list sorting handler.
    """

    orderChanged = Signal(list)

    def __init__(self, *args, orientation=Qt.Orientation.Vertical, **kwargs):
        super().__init__()
        self.setAcceptDrops(True)

        # Store the orientation for drag checks later.
        self.orientation = orientation

        if self.orientation == Qt.Orientation.Vertical:
            self.blayout = QVBoxLayout()
        else:
            self.blayout = QHBoxLayout()

        # Add the drag target indicator. This is invisible by default,
        # we show it and move it around while the drag is active.
        self._drag_target_indicator = DragTargetIndicator()
        self.blayout.addWidget(self._drag_target_indicator)
        self._drag_target_indicator.hide()

        self.setLayout(self.blayout)

    def dragEnterEvent(self, e):
        e.accept()

    def dragLeaveEvent(self, e):
        self._drag_target_indicator.hide()
        e.accept()

    def dragMoveEvent(self, e):
        # Find the correct location of the drop target, so we can move it there.
        index = self._find_drop_location(e)
        if index is not None:
            # Inserting moves the item if it's already in the layout.
            self.blayout.insertWidget(index, self._drag_target_indicator)
            # Hide the item being dragged.
            e.source().hide()
            # Show the target.
            self._drag_target_indicator.show()
        e.accept()

    def dropEvent(self, e):
        widget = e.source()
        # Use drop target location for destination, then remove it.
        self._drag_target_indicator.hide()
        index = self.blayout.indexOf(self._drag_target_indicator)
        if index is not None:
            self.blayout.insertWidget(index, widget)
            self.orderChanged.emit(self.get_item_data())
            widget.show()
            self.blayout.activate()
        e.accept()

    def _find_drop_location(self, e):
        pos = e.position()
        spacing = self.blayout.spacing() / 2

        for n in range(self.blayout.count()):
            # Get the widget at each index in turn.
            w = self.blayout.itemAt(n).widget()

            if self.orientation == Qt.Orientation.Vertical:
                # Drag drop vertically.
                drop_here = (
                    pos.y() >= w.y() - spacing
                    and pos.y() <= w.y() + w.size().height() + spacing
                )
            else:
                # Drag drop horizontally.
                drop_here = (
                    pos.x() >= w.x() - spacing
                    and pos.x() <= w.x() + w.size().width() + spacing
                )

            if drop_here:
                # Drop over this target.
                break

        return n

    def add_item(self, item):
        self.blayout.addWidget(item)

    def get_item_data(self):
        data = []
        for n in range(self.blayout.count()):
            # Get the widget at each index in turn.
            w = self.blayout.itemAt(n).widget()
            if w != self._drag_target_indicator:
                # The target indicator has no data.
                data.append(w.data)
        return data


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.drag = DragWidget(orientation=Qt.Orientation.Vertical)
        for n, l in enumerate(["A", "B", "C", "D"]):
            item = DragItem(l)
            item.set_data(n)  # Store the data.
            self.drag.add_item(item)

        # Print out the changed order.
        self.drag.orderChanged.connect(print)

        container = QWidget()
        layout = QVBoxLayout()
        layout.addStretch(1)
        layout.addWidget(self.drag)
        layout.addStretch(1)
        container.setLayout(layout)

        self.setCentralWidget(container)


app = QApplication([])
w = MainWindow()
w.show()

app.exec()
```

If you run this example on macOS you may notice that the widget drag preview (the QPixmap created on DragItem) is a bit blurry. On high-resolution screens you need to set the device pixel ratio and scale up the pixmap when you create it. Below is a modified DragItem class which does this.

Update DragItem to support high resolution screens.

```python
class DragItem(QLabel):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.setContentsMargins(25, 5, 25, 5)
        self.setAlignment(Qt.AlignmentFlag.AlignCenter)
        self.setStyleSheet("border: 1px solid black;")
        # Store data separately from display label, but use label for default.
        self.data = self.text()

    def set_data(self, data):
        self.data = data

    def mouseMoveEvent(self, e):
        if e.buttons() == Qt.MouseButton.LeftButton:
            drag = QDrag(self)
            mime = QMimeData()
            drag.setMimeData(mime)

            # Render at x2 pixel ratio to avoid blur on Retina screens.
            pixmap = QPixmap(self.size().width() * 2, self.size().height() * 2)
            pixmap.setDevicePixelRatio(2)
            self.render(pixmap)
            drag.setPixmap(pixmap)

            drag.exec(Qt.DropAction.MoveAction)
```

That's it! We've created a generic drag-drop handler which can be added to any project where you need to be able to reposition items within a list. You should feel free to experiment with the styling of the drag items and targets, as this won't affect the behavior.

Categories: FLOSS Project Planets

The Drop Times: Sci-Fi to Software: James Shields' Evolution with Drupal

Planet Drupal - Wed, 2024-03-06 06:14
Discover the inspiring journey of James Shields in our latest interview, where he dives deep into his 15-year voyage within the Drupal ecosystem. Uncover how his passion for science fiction conventions led to significant contributions and the shaping of digital communities. Join us as James reflects on the transformative power of collaboration, continuous learning, and his commitment to fostering innovation in open-source development.
Categories: FLOSS Project Planets

Talking Drupal: Skills Upgrade - Episode #1

Planet Drupal - Tue, 2024-03-05 21:53

This is the first episode of Skills Upgrade, a Talking Drupal mini-series following the journey of a Drupal 7 developer learning Drupal 10.

Topics
  • Chad and Mike's first meeting
  • Chad's Background
  • Chad's goals
  • Tasks for the week
Resources

Chad's Drupal 10 Learning Curriculum & Journal Chad's Drupal 10 Learning Notes

Hosts

AmyJune Hineline - @volkswagenchick

Guests

Chad Hester - chadkhester.com @chadkhest Mike Anello - DrupalEasy.com @ultamike

Categories: FLOSS Project Planets

KDE Plasma 5.27.11, Bugfix Release for March

Planet KDE - Tue, 2024-03-05 19:00

Wednesday, 6 March 2024. Today KDE releases a bugfix update to KDE Plasma 5, versioned 5.27.11.

Plasma 5.27 was released in February 2023 with many feature refinements and new modules to complete the desktop experience.

The bugfixes are typically small but important and include:

  • Plasma Browser Integration: Download Job: Truncate excessively long URLs. Commit.
  • KWin: Tabbox: match Shift+Backtab against Shift+Tab. Commit. Fixes bug #438991
  • Powerdevil: Kbd backlight: Fix double brightness restore on LidOpen-resume. Commit.
View full changelog
Categories: FLOSS Project Planets

GNUnet News: GNUnet 0.21.0

GNU Planet! - Tue, 2024-03-05 18:00
GNUnet 0.21.0 released

We are pleased to announce the release of GNUnet 0.21.0.
GNUnet is an alternative network stack for building secure, decentralized and privacy-preserving distributed applications. Our goal is to replace the old insecure Internet protocol stack. Starting from an application for secure publication of files, it has grown to include all kinds of basic protocol components and applications towards the creation of a GNU internet.

This release marks a noteworthy milestone in that it includes a completely new transport layer. It lays the groundwork for fixing some major design issues and may also already alleviate a variety of issues seen in previous releases related to connectivity. This change also deprecates our testbed and ATS subsystem.

This is a new major release. It breaks protocol compatibility with the 0.20.x versions. Please be aware that Git master is thus henceforth (and has been for a while) INCOMPATIBLE with the 0.20.x GNUnet network, and interactions between old and new peers will result in issues. In terms of usability, users should be aware that there are still a number of known open issues in particular with respect to ease of use, but also some critical privacy issues especially for mobile users. Also, the nascent network is tiny and thus unlikely to provide good anonymity or extensive amounts of interesting information. As a result, the 0.21.0 release is still only suitable for early adopters with some reasonable pain tolerance.

Download links

The GPG key used to sign is: 3D11063C10F98D14BD24D1470B0998EF86F59B6A

Note that due to mirror synchronization, not all links might be functional early after the release. For direct access try http://ftp.gnu.org/gnu/gnunet/

Changes

A detailed list of changes can be found in the git log, the NEWS and the bug tracker.

Known Issues
  • There are known major design issues in the CORE subsystems which will need to be addressed in the future to achieve acceptable usability, performance and security.
  • There are known moderate implementation limitations in CADET that negatively impact performance.
  • There are known moderate design issues in FS that also impact usability and performance.
  • There are minor implementation limitations in SET that create unnecessary attack surface for availability.
  • The RPS subsystem remains experimental.

In addition to this list, you may also want to consult our bug tracker at bugs.gnunet.org which lists about 190 more specific issues.

Thanks

This release was the work of many people. The following people contributed code and were thus easily identified: Christian Grothoff, t3sserakt, TheJackiMonster, Pedram Fardzadeh, dvn, Sebastian Nadler and Martin Schanzenbach.

Categories: FLOSS Project Planets

Lullabot: The New Storybook Module for Drupal

Planet Drupal - Tue, 2024-03-05 16:05

Component-based web development has become the de facto approach for new Drupal projects. Componentizing your UI has multiple advantages, like modular development, improved automated testing, easier maintenance, code reusability, better collaboration, and more.

Categories: FLOSS Project Planets

PyCoder’s Weekly: Issue #619 (March 5, 2024)

Planet Python - Tue, 2024-03-05 14:30

#619 – MARCH 5, 2024
View in Browser »

Duck Typing in Python: Writing Flexible and Decoupled Code

In this tutorial, you’ll learn about duck typing in Python. It’s a typing system based on objects’ behaviors rather than on inheritance. By taking advantage of duck typing, you can create flexible and decoupled sets of Python classes that you can use together or individually.
REAL PYTHON
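As a minimal sketch of the idea (the Duck and Robot classes below are illustrative, not taken from the tutorial), two unrelated classes are interchangeable as long as they provide the same behavior:

```python
# Duck typing: no shared base class is needed -- only matching behavior.
class Duck:
    def quack(self):
        return "Quack!"


class Robot:
    def quack(self):
        return "Beep! (pretending to quack)"


def make_it_quack(thing):
    # We only care that `thing` *behaves* like a duck, not what type it is.
    return thing.quack()


print(make_it_quack(Duck()))   # Quack!
print(make_it_quack(Robot()))  # Beep! (pretending to quack)
```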

Using IPython Jupyter Magic Commands

“IPython Jupyter Magic commands (e.g. lines in notebook cells starting with % or %%) can decorate a notebook cell, or line, to modify its behavior.” This article shows you how to define them and where they can be useful.
STEFAN KRAWCZYK

Posit Connect - Make Deployment the Easiest Part of Your Data Science Workflow

Data scientists use Posit Connect to securely share insights. Automate time-consuming tasks & distribute custom-built tools & solutions across teams. Publish data apps, docs, notebooks, & dashboards. Deploy models as APIs, & configure reports to run & get distributed on a custom schedule →
POSIT sponsor

Monkeying Around With Python: A Guide to Monkey Patching

Monkey patching is the practice of modifying live code. This article shows you how its done and why and when to use the practice.
KARISHMA SHUKLA
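A minimal sketch of the practice (the Greeter class is illustrative, not from the article): replace a method on a class at runtime, and note that existing instances pick up the new behavior.

```python
class Greeter:
    def greet(self, name):
        return f"Hello, {name}"


g = Greeter()
print(g.greet("Ada"))  # Hello, Ada


# Monkey patch: rebind the method on the class while the program runs.
def shout_greet(self, name):
    return f"HELLO, {name.upper()}!"


Greeter.greet = shout_greet

# The already-created instance now uses the patched method too.
print(g.greet("Ada"))  # HELLO, ADA!
```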

DjangoCon US Call for Proposals

DJANGOCON

White House Recommends Use of Python

PYTHON SOFTWARE FOUNDATION

JupyterLab 4.1 and Notebook 7.1 Released

JUPYTER

Articles & Tutorials Requests Maintainer Reflects on the Project After a Decade

One of the oldest active maintainers of the popular requests libraries reflects on the project’s good and bad parts over the last several years. He posits things that would improve the project and maintenance thereof. He also talks about what’s holding the project back right now. Associated Hacker News discussion.
IAN STAPLETON CORDASCO • Shared by nah

Improve the Architecture of Your Python Using import-linter

For large Python projects, managing dependency relationships between modules can be challenging. Using import-linter, this task can be made easier. This article provides a simple introduction to the import-linter tool and presents 6 practical ways to fix inappropriate dependencies.
PIGLEI • Shared by piglei

We’re Building the Future of Humanity Using Python: No Other Language Will Give You Better Results

Today, you can build AI & data apps using only Python! This open-source Python end-to-end app builder helps you with it. Similar to Streamlit but designed to build production-ready apps, it offers some differences: scales as more users hit the app, can work with huge datasets, and is multi-user →
TAIPY sponsor

How to Read User Input From the Keyboard in Python

Reading user input from the keyboard is a valuable skill for a Python programmer, and you can create interactive and advanced programs that run on the terminal. In this tutorial, you’ll learn how to create robust user input programs, integrating error handling and multiple entries.
REAL PYTHON

Tracing System Calls in Python

This article shows you how to trace the system calls made when you run your Python program. It includes how you can use the Perfetto trace viewer to visualize the interactions.
MATT STUCHLIK

The Most Important Python News in 2023

Vita has put together a site using data from the PyCoder’s newsletter. The site itself is built using Python tools. If you missed something in 2023, you might find it here.
VITA MIDORI • Shared by Vita Midori

Django Login, Logout, Signup, Password Change and Reset

Learn how to implement a complete user authentication system in Django from scratch, consisting of login, logout, signup, password change, and password reset.
WILL VINCENT • Shared by Will Vincent

Why Python’s Integer Division Floors

This article on the Python History blog talks about why the decision was made to have integer division use floors, instead of truncation like C.
GUIDO VAN ROSSUM
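The difference is easiest to see with negative operands (a quick illustration, not taken from the article itself):

```python
# Python's // rounds toward negative infinity (floor), unlike C's
# truncation toward zero.
print(7 // 2)    # 3
print(-7 // 2)   # -4  (truncating division would give -3)

# A consequence: a == (a // b) * b + (a % b) always holds,
# and a % b takes the sign of b.
print(-7 % 2)    # 1
```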

Falsehoods Junior Developers Believe About Becoming Senior

This opinion piece by Vadim discusses how newer developers perceive what it means to be a senior developer, and how they’re often wrong.
VADIM KRAVCENKO

What’s in a Name?

An article about names in Python, and why they’re not the same as objects. The article discusses reference counts and namespaces.
STEPHEN GRUPPETTA • Shared by Stephen Gruppetta
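The core point can be shown in a few lines (an illustrative snippet, not from the article): assignment binds a name to an object rather than copying the object.

```python
a = [1, 2, 3]
b = a            # binds a second name to the same list object -- no copy

b.append(4)      # mutate the object through one name...
print(a)         # [1, 2, 3, 4] -- ...and the change shows through the other
print(a is b)    # True -- both names refer to a single object
```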

Reduce, Reuse, Recycle: McDonald’s Reusable Workflows

This post on McDonald’s technology blog talks about how they take advantage of reusable workflows with GitHub Actions.
MICHAEL GORELIK

Popular git Config Options

This post covers some of the more popular options you can use when configuring git, and why you might choose them.
JULIA EVANS

Projects & Code logot: Test Whether Your Code Is Logging Correctly

GITHUB.COM/ETIANEN

hypofuzz: Adaptive Fuzzing of Hypothesis Tests

GITHUB.COM/ZAC-HD

cantok: Implementation of the “Cancellation Token” Pattern

GITHUB.COM/POMPONCHIK • Shared by Evgeniy Blinov

xonsh: Python-Powered, Cross-Platform, Unix-Gazing Shell

GITHUB.COM/XONSH

django-queryhunter: Find Your Expensive Queries

GITHUB.COM/PAULGILMARTIN • Shared by Paul

Events Eclipse Insights: How AI Is Transforming Solar Astronomy

March 5, 2024
MEETUP.COM • Shared by Laura Stephens

Weekly Real Python Office Hours Q&A (Virtual)

March 6, 2024
REALPYTHON.COM

Canberra Python Meetup

March 7, 2024
MEETUP.COM

Sydney Python User Group (SyPy)

March 7, 2024
SYPY.ORG

PyCon Pakistan 2024

March 9 to March 11, 2024
PYCON.PK

Django Girls CDO Workshop 2024

March 9 to March 10, 2024
DJANGOGIRLS.ORG

PyCon SK 2024

March 15 to March 18, 2024
PYCON.SK

Happy Pythoning!
This was PyCoder’s Weekly Issue #619.
View in Browser »

[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

Categories: FLOSS Project Planets

ImageX: Unlocking Customer Management Efficiency: A Deep Dive into CRM Integration with Drupal Websites

Planet Drupal - Tue, 2024-03-05 13:15

Authored by: Nadiia Nykolaichuk.

Managing customer interactions touches on many aspects. Luckily, instead of using endless spreadsheets, notes, emails, chats, and form fills, you can rely on just one application — a CRM, or customer management system. A CRM will help you easily consolidate, manage, track, and analyze all customer interactions at every step of their journey throughout your sales funnel. 

Categories: FLOSS Project Planets

Chapter Three: Chapter Three is a Certified Drupal 7 Migration Partner!

Planet Drupal - Tue, 2024-03-05 12:17
Happy March! If you’re reading this blog post on the day of publication, you now have exactly eight months to migrate out of Drupal 7 before it reaches end of life on January 5, 2025. After numerous extensions, this date is final and there will be no further extensions.
Categories: FLOSS Project Planets

Matt Glaman: Improving the Drupal theme starterkit and theme generation experience

Planet Drupal - Tue, 2024-03-05 09:18

Wanting to follow up on our work at MidCamp 2023 with the Development Settings form, Mike Herchel wanted to jam on another issue to improve the life of frontend developers. Drupal provides a way to generate themes from starterkit themes. Drupal core provides the Starterkit theme, and contributed themes can also mark themselves as a starterkit theme. All a theme has to do is have starterkit: true in their info.yml file. And that's it! Optionally, a specifically named PHP file can be provided in the theme for specific processing to finish generating the new theme. Unfortunately, the experience of creating starterkit themes currently involves writing a lot of PHP code to make the required file alterations.

Categories: FLOSS Project Planets

Real Python: Creating Asynchronous Tasks With Celery and Django

Planet Python - Tue, 2024-03-05 09:00

You’ve built a shiny Django app and want to release it to the public, but you’re worried about time-intensive tasks that are part of your app’s workflow. You don’t want your users to have a negative experience navigating your app. You can integrate Celery to help with that.

Celery is a distributed task queue for UNIX systems. It allows you to offload work from your Python app. Once you integrate Celery into your app, you can send time-intensive tasks to Celery’s task queue. That way, your web app can continue to respond quickly to users while Celery completes expensive operations asynchronously in the background.

In this video course, you’ll learn how to:

  • Recognize effective use cases for Celery
  • Differentiate between Celery beat and Celery workers
  • Integrate Celery and Redis in a Django project
  • Set up asynchronous tasks that run independently of your Django app
  • Refactor Django code to run a task with Celery instead

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

January and February in KDE PIM

Planet KDE - Tue, 2024-03-05 07:00

Here's our bi-monthly update from KDE's personal information management applications team. This report covers progress made in the months of January and February 2024.

Since the last report, 34 people contributed approximately 1742 code changes, focusing on bugfixes and improvements for the coming 24.02 release based on Qt6.

Akonadi

The database migration tool to help users easily switch between different database engines has been completed and merged. The tool can be used like this to migrate from the current engine to SQLite; however, migration between any combination of the supported database engines works.

akonadi-db-migrator --newengine sqlite

Unfortunately, the migrator is not available in the 24.02 megarelease, as some big fixes came too late to be safely merged into a stable branch. It will be included in the next feature release.

We also tried to improve handling of a race condition when syncing a very large IMAP folder, where if user marked an email as read in KMail while the folder was syncing, the change got reverted by the sync.

There were also some additional code improvements, test fixes and some last-minute fixes due to changes between Qt5 and Qt6 that we missed.

Thanks to g10 Code GmbH for sponsoring part of Dan's work on Akonadi improvements.

KOrganizer

A long-standing issue with unanswered and declined invitations not showing in KOrganizer was fixed. It's common when using enterprise calendar solutions (Google, EWS, etc.) that when a user is invited to a meeting, the invitation is automatically added to their calendar. KOrganizer used to filter those out and instead implemented special "Open Invitations" and "Declined Invitations" meta-calendars that were supposed to show them. Unfortunately this feature never really worked properly, so users were confused about where their meetings were. This "feature" has been removed in 24.02 and KOrganizer will show all invites without any filter. In the 24.05 release, KOrganizer will also have a checkbox in its Settings to let users choose whether to hide declined invitations globally.

  • Fixed syncing of Google contacts not working if the sync token has expired.
  • Prepared for upcoming changes in Google Calendar API behavior.
  • Calendar colors not matching configured colors in Google or DAV has been fixed.
  • When creating a recurrence exception, KOrganizer now uses correct timezone #481305.
  • Fixed 'Create sub-todo' action in KOrganizer being always disabled #479351.
  • Fixed iCal resource getting stuck in a loop and consuming lots of CPU when syncing specific calendars #384309.

Dan's work on improving KOrganizer and calendaring was sponsored by g10 Code GmbH.

KMail

A big new feature is support for OAuth login for personal Outlook365 accounts via IMAP and SMTP. OAuth login is more reliable than standard username+password authentication, and it means that users with Two-Factor Authentication on their Outlook365 accounts no longer have to set up an app-specific password. Note that users who have accounts as part of their school or work organizations should use the EWS resource, which already supports OAuth login.

The account wizard rewrite using QML is progressing and aside from some edge cases, the automatic configuration of your account based on the Thunderbird online configuration should work again. The UI also received some improvements:

In addition a lot of bugs were fixed. This includes:

  • Fixed encrypted attachments not showing in the email header even after decrypting the email.
  • Fixed EWS configuration dialog asking for both a username and an email address
  • Changed the default encryption of new IMAP and SMTP connections to the more secure SSL/TLS transport
Itinerary

With the Qt 6 transition completed Itinerary has gotten a new journey map view, public transport data coverage for more countries and many more improvements. See Itinerary's bi-monthly status update for more details.

Kleopatra

For large files the performance of encryption, decryption, signing, and signature verification has been improved (by making gpg read and write the files directly). This is most noticeable on Windows. T6351

The usability was improved in a few places:

  • The configuration of certificate groups is now much easier because changes are applied directly. T6662
  • Users can now change the name of the decryption result if a file with the same name already exists. T6851
  • The workflow of moving a key to a smart card was streamlined. T6933
  • Kleopatra now reads certificates automatically from certain smart cards. T6846

Additionally, a few bugs affecting the Windows builds have been fixed:

  • A wrong permission check when writing the decryption result to a network drive was fixed. (Technical details: Qt allows to enable expensive ownership and permissions checking on NTFS file systems. Unfortunately, these checks seem to give wrong results for network drives.) T6917
  • Kleopatra now displays all icons also in dark mode. Previously, some icons, most notably the system tray icon, were missing when dark mode was enabled. T6926

Related to Kleopatra, pinentry-qt, the small program that asks for the password to unlock your OpenPGP keys also received some love:

  • pinentry-qt can now be built for Qt 6. T6883
  • pinentry-qt now supports external password managers (via libsecret), but only outside of KDE Plasma to avoid deadlocks with KWallets using GnuPG to encrypt the passwords. T6801
  • pinentry-qt now has better support for Wayland. T6887
Categories: FLOSS Project Planets

PyCharm: Deploying Django Apps in Kubernetes

Planet Python - Tue, 2024-03-05 05:55

As an open-source container orchestration platform that automates deployment, scaling, and load balancing, Kubernetes offers unparalleled resilience and flexibility in the management of your Django applications.

Whether you’re launching a small-scale project or managing a complex application, Kubernetes provides a robust environment to enhance your Django application, ensuring it’s ready to meet the demands of modern web development.

By automating the deployment, scaling, and operation of containerized applications, Kubernetes (or K8s) provides numerous benefits for organizations in the fast-paced tech industry.

Whether you’re a Django developer looking to enhance your deployment skills or a Kubernetes enthusiast eager to explore Django integration, this guide has something for everyone.

The introduction to this tutorial explores the symbiotic relationship between Django and Kubernetes, enabling you to seamlessly containerize your web application, distribute workloads across clusters, and ensure high availability.

Django

Django, a high-level Python web framework, stands as a beacon of efficiency and simplicity in the world of web development. Born out of the need to create rapid, robust, and maintainable web applications, it has become a go-to choice for developers and organizations.

At its core, Django embraces the “batteries-included” philosophy, offering an extensive array of built-in tools, libraries, and conventions that facilitate the development process. It simplifies complex tasks like URL routing, database integration, and user authentication, allowing developers to focus on building their applications.

One of Django’s foremost benefits is its adherence to the “Don’t repeat yourself” (DRY) principle, reducing redundancy and enhancing code maintainability. It also follows the model-view-controller (MVC) architectural pattern, making applications structured and easy to manage.

Django also prioritizes security, making it less prone to common web vulnerabilities. It includes features like cross-site scripting (XSS) and cross-site request forgery (CSRF) protection out of the box. Offering a potent combination of speed, simplicity, and security, Django is an ideal choice for developers looking to create robust, feature-rich web applications with minimal effort.

Container orchestrators

Container orchestrators are essential tools for managing and automating the deployment, scaling, and operation of containerized applications. There are several container orchestrators available on the market, the most popular of which include:

1. Kubernetes, the top open-source container orchestration platform, offers a robust and adaptable environment for handling containerized applications. It automates tasks such as scaling, load balancing, and container health checks, with a broad extension ecosystem.

2. Docker Swarm is a container orchestration solution provided by Docker. Designed to be simple to set up and use, it’s a good choice for smaller applications or organizations already using Docker as it uses the same command-line interface and API as Docker.

3. OpenShift is an enterprise Kubernetes platform developed by Red Hat. It adds developer and operational tools on top of Kubernetes, simplifying the deployment and management of containerized applications.

4. Nomad, developed by HashiCorp, is a lightweight and user-friendly orchestrator capable of managing containers and non-containerized applications.

5. Apache Mesos is an open-source distributed system kernel, and DC/OS (data center operating system) is an enterprise-grade platform built on Mesos. DC/OS extends Mesos with additional features for managing and scaling containerized applications.

Software teams primarily work with managed platforms offered by well-known cloud providers such as AWS, Google Cloud Platform, and Azure. These cloud providers offer services like Amazon EKS, Google GKE, and Azure AKS, all of which are managed Kubernetes solutions. These services streamline the setup, expansion, and administration of Kubernetes clusters and seamlessly integrate with the respective cloud environments, ensuring efficient container orchestration and application deployment.

Creating a Django Application in PyCharm

In this tutorial, we’ll start by generating a minimal Django application. We’ll then containerize the application and, in the final step, deploy it to a local Kubernetes cluster using Docker Desktop.

If you’re new to working with the Django framework but are eager to create a Django application from scratch, read this blog post.

You can access the source code used in this tutorial here.

Let’s begin by creating a new Django application in PyCharm.

To create your project, launch PyCharm and click New Project. If PyCharm is already running, select File | New Project from the main menu.

Furnish essential details such as project name, location, and interpreter type, utilizing either venv or a custom environment.

Then, click Create.

PyCharm will do the heavy lifting by setting up your project and creating the virtual environment.

Gunicorn

Once the project has been created, install Gunicorn – a popular Python web server gateway interface (WSGI) HTTP server. Gunicorn uses a pre-fork worker model and is often used in combination with web frameworks like Django, Flask, and others to deploy web applications and make them accessible over the internet.

The Python Packages tool window provides the quickest and easiest way to preview and install packages for the currently selected Python interpreter.

You can open the tool window via View | Tool Windows | Python Packages.

Psycopg 2

Psycopg 2 is a Python library used to connect to and interact with PostgreSQL databases. It provides a Python interface for working with PostgreSQL, one of the most popular open-source relational database management systems. Psycopg 2 allows Python developers to perform various database operations, such as inserting, updating, and retrieving data, as well as executing SQL queries, and managing database connections from within Python programs.

Before installing psycopg2, you need to install the system-level dependency libpq: run brew install libpq on macOS or apt-get install libpq-dev on Debian-based Linux.

Reference: postgresql.org/docs/16/libpq.html

libpq is the client library for PostgreSQL. It’s a C library that provides the necessary functionality for client applications to connect to, interact with, and manage PostgreSQL database servers. 

After completing the particular modifications, update the section within settings.py related to DATABASES.

Always make sure to pass your secret credentials through environment variables.
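As a sketch of what the updated DATABASES section in settings.py could look like, reading the credentials from environment variables – the variable names (DB_NAME, DB_USER, and so on) are assumptions; match them to whatever your deployment environment provides:

```python
import os

# Sketch of the DATABASES section in settings.py. The environment variable
# names below are assumptions -- align them with your ConfigMap/deployment.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app_db"),
        "USER": os.environ.get("DB_USER", "app_user"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "127.0.0.1"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```

The defaults are only fallbacks for local development; in the cluster, the values come from the environment.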

STATIC_ROOT

In Django, STATIC_ROOT is a configuration setting used to specify the absolute file system path where collected static files will be stored when you run the collectstatic management command. Static files typically include CSS, JavaScript, images, and other assets used by your web application. The STATIC_ROOT setting is an essential part of serving static files in a production environment.

Set an environment variable for STATIC_ROOT at line 127. This variable will point to a file path that leads to a Kubernetes persistent volume. I’ll explain later how to configure this setup.
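A minimal sketch of the corresponding settings.py lines, assuming the fallback path for local use (the environment variable will later point at the persistent volume path):

```python
import os

# Read STATIC_ROOT from the environment so it can point at the Kubernetes
# persistent volume later; the local fallback directory is an assumption.
STATIC_ROOT = os.environ.get("STATIC_ROOT", os.path.join(os.getcwd(), "staticfiles"))
STATIC_URL = "static/"
```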

To collect static files, run the following command:

python manage.py collectstatic

This command will gather the static files and place them in the STATIC_ROOT directory. You can then serve these assets directly through an NGINX or Apache web server – a more efficient approach for production environments.

Dockerfile

A Dockerfile is a straightforward textual document containing directives for constructing Docker images.

1. FROM python:3.11: This line specifies the base image for the Docker image using the official Python 3.11 image from Docker Hub. The application will be built and run on top of this base image, which already has Python pre-installed.

2. ENV PYTHONUNBUFFERED 1: This line sets an environment variable PYTHONUNBUFFERED to 1. It’s often recommended to set this environment variable when running Python within Docker containers to ensure that Python doesn’t buffer the output. This helps in getting real-time logs and debugging information from the application.

3. WORKDIR /app: This line sets the working directory within the Docker container to /app. All subsequent commands will be executed in this directory.

4. COPY . /app: This line copies the contents of the current directory (the directory where the Dockerfile is located) to the /app directory within the container. This includes your application code and any files needed for the Docker image.

5. RUN pip install -r requirements.txt: This line runs the pip install command to install the Python dependencies listed in a requirements.txt file located in the /app directory. This is a common practice for Python applications as it allows you to manage the application’s dependencies.

6. EXPOSE 8000: This line informs Docker that the container listens on port 8000. It doesn’t actually publish the port. It’s a metadata declaration to indicate which ports the container may use.

7. CMD ["gunicorn", "django_kubernetes_tutorial.wsgi:application", "--bind", "0.0.0.0:8000"]: This line specifies the default command to run when the container starts. It uses Gunicorn to serve the Django application and binds Gunicorn to listen on all network interfaces (0.0.0.0) on port 8000 using the WSGI application defined in django_kubernetes_tutorial.wsgi:application.
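Putting the directives described above together, the Dockerfile reads:

```dockerfile
FROM python:3.11

ENV PYTHONUNBUFFERED 1

WORKDIR /app

COPY . /app

RUN pip install -r requirements.txt

EXPOSE 8000

CMD ["gunicorn", "django_kubernetes_tutorial.wsgi:application", "--bind", "0.0.0.0:8000"]
```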

DockerHub

Visit hub.docker.com and proceed to either log in or sign up on the platform.

Click on Create repository.

Next, provide the repository name and make the visibility public. If you’re working with sensitive or confidential information, set the visibility to Private.

Once the repository has been created, you need to build the Docker image and then push it to the registry.

Before executing the command, ensure that your requirements.txt file is current by running the following command:

pip freeze > requirements.txt

To construct an image, execute the following command: 

docker build -t mukulmantosh/django-kubernetes:1.0 .

docker build -t <USERNAME>/django-kubernetes:1.0 .

This command will vary depending on your specific circumstances: replace <USERNAME> with your own Docker Hub username.

You then need to authenticate with Docker Hub to push the image to the registry.

Type the following command in the terminal:

docker login

Enter your username and password. Once they have been successfully authenticated, you can push the image by running:

docker push mukulmantosh/django-kubernetes:1.0 

docker push <USERNAME>/django-kubernetes:1.0 

Once the image has been successfully pushed, you can observe changes in Docker Hub.

If you don’t plan to push any more images to the registry, you can log out by running the following command:

docker logout

If you want to pull this image locally, visit hub.docker.com/r/mukulmantosh/django-kubernetes.

Kubernetes Configuration: Writing YAML Files

This section of the tutorial describes the deployment of applications to local Kubernetes clusters.

For this tutorial, we will be using Docker Desktop, but you could also use minikube or kind.

Namespaces

In Kubernetes, a namespace is a virtual partition within a cluster that is used to group and isolate resources and objects. Namespaces are a way to create multiple virtual clusters within a single physical cluster.

You can create and manage namespaces using kubectl, the Kubernetes command-line tool, or by defining them in YAML manifests when deploying resources. 

  • If you’ve chosen Docker Desktop as your preferred platform for running Kubernetes, be sure to enable Kubernetes in the settings by clicking the Enable Kubernetes checkbox.

Run the following command in the terminal to create a namespace:

kubectl create ns django-app

Deploying databases with K8s

To begin, let’s establish a PostgreSQL instance in our Kubernetes cluster on the local environment.

PersistentVolume

In Kubernetes, a persistent volume (PV) is a piece of storage in the cluster that an administrator has provisioned. By storing data in a way that is independent of a pod’s life cycle, PVs allow for a more decoupled and flexible management and abstraction of storage resources. This means data can persist even if the pod that uses it is deleted or rescheduled to a different node in the cluster.

Let’s create a persistent volume and name it pv.yml.

This YAML configuration defines a PersistentVolume resource named postgres-pv with a capacity of 1 gigabyte, mounted using the ReadWriteOnce access mode, and provided by a local path on the node’s file system located at /data/db. This PV can be used to provide persistent storage for pods that need access to a directory on the node’s file system and is suitable for applications like PostgreSQL or other stateful services that need persistent storage.
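Based on that description, pv.yml might look like the following sketch (the manual storage class is an assumption, chosen to match the PVC defined in the next step):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/db
```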

For production, we recommend either using cloud solutions like AWS RDS or Google Cloud SQL, or using Kubernetes StatefulSets.

PersistentVolumeClaim

In Kubernetes, a PersistentVolumeClaim (PVC) is a resource object used by a pod to request a specific amount of storage with certain properties from a PV. PVCs act as a way for applications to claim storage resources without needing to know the details of the underlying storage infrastructure.

By creating and using PVCs, Kubernetes provides a way to dynamically allocate and manage storage resources for applications while abstracting the underlying storage infrastructure, making it easier to work with and manage stateful applications in containerized environments.

Let’s create a PVC and name it pvc.yml.

A PVC requests storage resources and is bound to a PV that provides the actual storage. 

This YAML configuration defines a PersistentVolumeClaim named postgres-pvc within the django-app namespace. It requests a gigabyte of storage with the ReadWriteOnce access mode, and it explicitly specifies the manual StorageClass. This PVC is intended to be bound to an existing PV with the name postgres-pv, effectively reserving that volume for use by pods within the django-app namespace that reference this PVC.
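A sketch of pvc.yml matching that description:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: django-app
spec:
  storageClassName: manual
  volumeName: postgres-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```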

ConfigMap

In Kubernetes, a ConfigMap is an API object used to store configuration data in key-value pairs. ConfigMaps provide a way to decouple configuration data from the application code, making it easier to manage and update configurations without modifying and redeploying containers. They are especially useful for configuring applications, microservices, and other components within a Kubernetes cluster.

Let’s create a ConfigMap and name it cm.yml.

Although this tutorial uses ConfigMaps, for security considerations, it’s recommended to store sensitive credentials in Kubernetes Secrets or explore alternatives like Bitnami Sealed Secrets, AWS Parameter Store, or HashiCorp Vault.
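A possible cm.yml for the database credentials: the ConfigMap name db-secret-credentials comes from the Deployment description below, while the keys follow the standard environment variables of the official postgres image, and the values here are placeholders to replace with your own:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-secret-credentials
  namespace: django-app
data:
  POSTGRES_DB: app_db           # placeholder values -- use your own
  POSTGRES_USER: app_user
  POSTGRES_PASSWORD: app_password
```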

Deployment

In Kubernetes, a Deployment is a resource object used to manage the deployment and scaling of applications. It’s part of the Kubernetes API group and provides a declarative way to define and manage the desired state of your application.

A Deployment is a higher-level Kubernetes resource used for managing and scaling application pods. 

This YAML configuration defines a Deployment named postgres in the django-app namespace. It deploys a single replica of a PostgreSQL database (version 16.0) with persistent storage. The database pod is labeled as app: postgresdb, and the storage is provided by a PVC named postgres-pvc. Configuration and credentials for the PostgreSQL container are provided via a ConfigMap named db-secret-credentials.
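A sketch of that Deployment; the mount path is the postgres image's default data directory:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: django-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresdb
  template:
    metadata:
      labels:
        app: postgresdb
    spec:
      containers:
        - name: postgres
          image: postgres:16.0
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: db-secret-credentials
          volumeMounts:
            - name: db-storage
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: db-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
```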

Service

In Kubernetes, a Service is a resource object used to expose a set of pods as a network service. Services enable network communication between different parts of your application running in a Kubernetes cluster and provide a stable endpoint for clients to access those parts. Services abstract the underlying network infrastructure, making it easier to connect and discover the components of your application.

This YAML configuration defines a NodePort Service named postgres-service within the django-app namespace. It exposes the PostgreSQL service running in pods labeled with “app: postgresdb” on port 5432 within the cluster. External clients can access the Service on any node’s IP address using port 30004. This Service provides a way to make the PostgreSQL database accessible from outside the Kubernetes cluster.
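The corresponding Service manifest might read:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: django-app
spec:
  type: NodePort
  selector:
    app: postgresdb
  ports:
    - port: 5432
      targetPort: 5432
      nodePort: 30004
```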

Creating YAML configurations for a Django application PersistentVolume

This YAML defines a PersistentVolume named staticfiles-pv with a 1 GB storage capacity, allowing multiple pods to read and write to it simultaneously. The storage is provided by a local host path located at /data/static. The Django static files will be stored at this location.
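A sketch of that PersistentVolume (again assuming the manual storage class referenced by the claim below):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: staticfiles-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /data/static
```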

PersistentVolumeClaim

This YAML defines a PVC named staticfiles-pvc in the django-app namespace. It requests storage with a capacity of at least 1 GB from a PV with the “manual” StorageClass, and it specifies that it needs ReadWriteMany access. The claim explicitly binds to an existing PV named staticfiles-pv to satisfy its storage needs. This allows pods in the django-app namespace to use this PVC to access and use the storage provided by the associated PV.
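Expressed as YAML, the claim might look like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: staticfiles-pvc
  namespace: django-app
spec:
  storageClassName: manual
  volumeName: staticfiles-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```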

ConfigMap

This YAML defines a ConfigMap named app-cm in the django-app namespace, and it contains various key-value pairs that store configuration data. This ConfigMap can be used by pods or other resources within the django-app namespace to access configuration settings, such as database connection information and static file paths.
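The exact keys in app-cm aren't shown here, so the following is a hypothetical example covering the database connection details and the static files path assumed elsewhere in this tutorial:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-cm
  namespace: django-app
data:
  DB_NAME: app_db               # hypothetical keys and values
  DB_USER: app_user
  DB_PASSWORD: app_password
  DB_HOST: postgres-service
  DB_PORT: "5432"
  STATIC_ROOT: /data/static
```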

Deployment

This YAML defines a Deployment named django-app-deploy in the django-app namespace. It creates one replica (pod) running a container with a specific Docker image and configuration. The pod is associated with two volumes, postgres-db-storage and staticfiles, which are backed by PVCs. The container is configured to use environment variables from a ConfigMap named app-cm and listens on port 8000. The volumes are mounted at specific paths within the container to provide access to database storage and static files. This Deployment is a common way to run a Django application using Kubernetes.
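A hedged sketch of that Deployment; the pod label, mount paths, and volume wiring are assumptions based on the description, and the image should be replaced with your own:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django-app-deploy
  namespace: django-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: django-application
  template:
    metadata:
      labels:
        app: django-application
    spec:
      containers:
        - name: django-app
          image: mukulmantosh/django-kubernetes:1.0   # use your own image
          ports:
            - containerPort: 8000
          envFrom:
            - configMapRef:
                name: app-cm
          volumeMounts:
            - name: postgres-db-storage
              mountPath: /data/db
            - name: staticfiles
              mountPath: /data/static
      volumes:
        - name: postgres-db-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
        - name: staticfiles
          persistentVolumeClaim:
            claimName: staticfiles-pvc
```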

If you are interested in pulling images from a private registry, then read here

Service

The Service listens on port 8000 and directs incoming traffic to pods labeled with app: django-application. This is a common configuration for exposing and load-balancing web applications in a Kubernetes cluster where multiple instances of the same application are running. The Service ensures that traffic is distributed evenly among them.
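The Service's name isn't given in the description; assuming it is called django-service, the manifest might read:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: django-service
  namespace: django-app
spec:
  selector:
    app: django-application
  ports:
    - port: 8000
      targetPort: 8000
```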

NGINX

NGINX is a high-performance web and reverse proxy server known for its speed, reliability, and scalability. It efficiently handles web traffic, load balancing, and content delivery, making it a popular choice for serving websites and applications.

ConfigMap

This YAML defines a Kubernetes ConfigMap named nginx-cm in the django-app namespace, and it contains a key-value pair where the key is default.conf and the value is a multiline NGINX configuration file that proxies the request to the backend server. 
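A hypothetical nginx-cm; the upstream service name (django-service) and the static files alias are assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-cm
  namespace: django-app
data:
  default.conf: |
    server {
      listen 80;

      location /static/ {
        alias /data/static/;
      }

      location / {
        proxy_pass http://django-service:8000;
        proxy_set_header Host $host;
      }
    }
```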

Deployment

This YAML defines a Deployment that creates and manages pods running an NGINX container with specific volumes and configurations. The Deployment ensures that one replica of this pod is always running in the django-app namespace. It also overrides the default NGINX configuration by replacing it with the nginx-cm ConfigMap.
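One way that Deployment could be written, assuming the static files volume is mounted so NGINX can serve the collected assets directly:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: django-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-config           # overrides the default config
              mountPath: /etc/nginx/conf.d
            - name: staticfiles
              mountPath: /data/static
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-cm
        - name: staticfiles
          persistentVolumeClaim:
            claimName: staticfiles-pvc
```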

Service

This YAML configuration creates a Kubernetes Service named nginx-service in the django-app namespace. It exposes pods with the label app:nginx on port 80 within the cluster and also makes the Service accessible on NodePort 30005 on each cluster node. This allows external traffic to reach the pods running the NGINX application via the NodePort service.
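As YAML, that Service might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: django-app
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30005
```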

Managing batch workloads with Kubernetes jobs

In Kubernetes, a Job is a resource object that represents a single unit of work or a finite task. Jobs are designed to run tasks to completion, with a specified number of successful completions. 

Database migration

This YAML configuration defines a Kubernetes Job named django-db-migrations in the django-app namespace. The Job runs a container using a custom Docker image for Django migrations and mounts a PVC to provide storage for database-related files. If the Job fails, it can be retried up to 15 times, and it will retain its pod for 100 seconds after completion. This Job will create new tables in PostgreSQL.
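A sketch of that Job; reusing the application image with python manage.py migrate as the command, and wiring the environment through app-cm, are assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: django-db-migrations
  namespace: django-app
spec:
  backoffLimit: 15
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrations
          image: mukulmantosh/django-kubernetes:1.0   # use your own image
          command: ["python", "manage.py", "migrate"]
          envFrom:
            - configMapRef:
                name: app-cm
```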

Static files

This YAML configuration defines a Kubernetes Job named django-staticfiles in the django-app namespace. The Job runs a container using a custom Docker image for collecting static files for a Django application and mounts a PVC to provide storage for the static files. If the Job fails, it can be retried up to three times, and it will retain its pod for 100 seconds after completion for debugging purposes. This Job will copy the static files to the mountPath, which is /data/static.
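A sketch of that Job under the same assumptions (application image reused, environment from app-cm, static files written to the mounted volume):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: django-staticfiles
  namespace: django-app
spec:
  backoffLimit: 3
  ttlSecondsAfterFinished: 100
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: staticfiles
          image: mukulmantosh/django-kubernetes:1.0   # use your own image
          command: ["python", "manage.py", "collectstatic", "--noinput"]
          envFrom:
            - configMapRef:
                name: app-cm
          volumeMounts:
            - name: staticfiles
              mountPath: /data/static
      volumes:
        - name: staticfiles
          persistentVolumeClaim:
            claimName: staticfiles-pvc
```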

Launching the Application

To launch the application, navigate to the k8s directory and execute the following command:

kubectl apply -f .

After deploying the application, use the following command to verify the status of running pods:

kubectl get pods -n django-app -w

Creating a superuser in Django

Once all your applications are up and running, create a superuser to log in to the Django admin.

Run the following command to get the list of running pods:

kubectl get pods -n django-app

To get inside the container shell, run:

kubectl exec -it <POD_NAME> -n django-app -- sh

Then, run:

python manage.py createsuperuser

Once you’ve successfully created the superuser, open http://127.0.0.1:30005 in a browser. You will be directed to the default welcome page.

Then, head over to the Django admin via http://127.0.0.1:30005/admin.

Enter the username and password that you’ve just created. 

Once authenticated, you will be redirected to the Django administration page. 

If you try logging in through localhost:30005/admin, you might receive a 403 Forbidden (CSRF) error.

You can resolve this in the settings.py file under CSRF_TRUSTED_ORIGINS.

CSRF_TRUSTED_ORIGINS is a setting in Django that is used to specify a list of trusted origins for cross-site request forgery (CSRF) protection. CSRF is a security vulnerability that can occur when an attacker tricks a user into unknowingly making an unwanted request to a web application. To prevent this, Django includes built-in CSRF protection.

CSRF_TRUSTED_ORIGINS allows you to define a list of origins (websites) from which CSRF-protected requests are accepted. Any request originating from an origin not included in this list will be considered potentially malicious and blocked.

This setting can be used to allow certain cross-origin requests to your Django application while maintaining security against CSRF attacks. It’s particularly helpful in scenarios where your application needs to interact with other web services or APIs that are hosted on different domains.
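For example, to accept requests arriving via the NodePort used in this tutorial, settings.py could include the following (the origins listed are examples; add the hosts you actually serve from):

```python
# settings.py -- trusted origins for CSRF checks; adjust to your hosts.
CSRF_TRUSTED_ORIGINS = [
    "http://localhost:30005",
    "http://127.0.0.1:30005",
]
```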

If you are using GUI tools like Kubernetes Dashboard, you can easily visualize your running pods, deployments, persistent volumes, etc.

Kubernetes Support in PyCharm 

PyCharm offers an enhanced editor and runtime support tailored for Kubernetes, bringing a host of features to streamline your Kubernetes management, including:

  • Browsing cluster objects, extracting and editing their configurations, and describing them.
  • Viewing events.
  • Viewing and downloading pod logs.
  • Attaching the pod console.
  • Running shell in pods.
  • Forwarding ports to a pod.
  • Applying resource YAML configurations from the editor.
  • Deleting resources from the cluster.
  • Completion of ConfigMap and Secret entries from the cluster.
  • Configuring paths to kubectl.
  • Configuring custom kubeconfig files globally and per project.
  • Switching contexts and namespaces.
  • Using API schema (including CRD) from the active cluster to edit resource manifests.

Watch this video to learn more about working with Kubernetes in PyCharm Professional.

Try PyCharm for your Kubernetes tasks for free!

References

Already have a solid understanding of Kubernetes? Then, take the next step in your programming journey by exploring cloud solutions, and check out our tutorials on AWS EKS and Google Kubernetes Engine.
