FLOSS Project Planets
James Bennett: Easy HTTP status codes in Python
This is part of a series of posts I’m doing as a sort of Python/Django Advent calendar for Advent 2023, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post in the series for an introduction.
The most useful test

I could be misremembering, but I think Frank Wiles was the first person I ever heard explain that, for a web application, …
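One easy way to work with HTTP status codes in Python is the standard library's http.HTTPStatus enum; whether this is the approach the post goes on to describe is an assumption here:

```python
# Hedged sketch: the standard library's HTTPStatus enum gives HTTP status
# codes readable names, values, and reason phrases.
from http import HTTPStatus

print(HTTPStatus.NOT_FOUND.value)   # 404
print(HTTPStatus.NOT_FOUND.phrase)  # Not Found

# Members compare equal to plain integers, so they drop into code that
# expects numeric status codes.
print(HTTPStatus.OK == 200)         # True
```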
Talking Drupal: Talking Drupal #427 - Melissa Turns The Tables
On today’s show we are turning the tables and Nic and John will be interviewed by our guest host Melissa Bent. We’ll also cover Content Model Documentation as our module of the week.
For show notes visit: www.talkingDrupal.com/427
Topics

- What made you decide to start the podcast
- Who does what tasks
- The first episode was on May 30 2013. What do you know now that you wish you knew then
- When did the guest host start
- What has been your favorite episode
- How did you come to this format
- Where did the tagline come from
- What technology do you use in production
- The show is supported by multiple platforms, would you recommend this
- What advice would you give someone just starting
- GovCon Session for Content Model Documentation
- First Episode
- Three hundred
- The Night before DrupalCon
- Wade Wingler
- Jono Bacon
- Dries’ First Episode
- Needs Review Issue Queue
- Sustainability Guidelines
- A Website’s Carbon Footprint
- vdo.ninja
- kdenlive
- Audacity
- Patreon
- Youtube
- Libsyn
- Kid3
- Auphonic
- Midjourney
- Trello
- Hackmd.io
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi

Hosts

Melissa Bent - linkedin.com/in/melissabent merauluka

MOTW Correspondent

Martin Anderson-Clutz - @mandclu
Content Model & Site Documentation
- Brief description:
- Have you wanted to make your Drupal site self-documenting directly within the admin UI? There’s a module for that.
- Brief history
- How old: created in Jan 2023
- Versions available: 1.0.23, compatible with Drupal 9 and 10
- Maintainership
- Actively maintained, latest release was a week ago
- Test coverage
- Documentation: no official guide, but there was a recent talk about the module at GovCon, so you can watch that
- Number of open issues: 43 open, 9 of which are bugs
- Usage stats:
- 82 sites
- Maintainer(s):
- Steve Wirt (swirt) who works for Civic Actions
- Module features and usage
- Allows your Drupal site to generate its own documentation
- Has fieldable Content Model Document entities that allow you to customize what data will be stored
- Can optionally document things like your content types, taxonomies, block types, paragraph types, and more
- Documentation elements can also be associated with parts of your site configuration, and they become available within the relevant parts of your admin UI. For example, if you add documentation for a specific content type, when a site builder goes into edit that content type they will see a link to the documentation as a tab
- You can generate entity relationship diagrams using MermaidJS
- Also includes a field search capability originally developed as a separate module by Matthieu Scarset, who was a guest on this show back in episode #298
- Will also generate diagrams to illustrate your content workflows, showing the states defined and the transitions between them
Zato Blog: Smart IoT integrations with Akenza and Python
The Akenza IoT platform, on its own, excels in collecting and managing data from a myriad of IoT devices. However, it is integrations with other systems, such as enterprise resource planning (ERP), customer relationship management (CRM) platforms, workflow management or environmental monitoring tools that enable a complete view of the entire organizational landscape.
Complementing Akenza's capabilities, and enabling these smooth integrations, is the versatility of Python. Given how flexible the language is, it is a natural choice for bridging Akenza and the unique requirements of an organization looking to connect its intelligent infrastructure.
This article is about combining the two, Akenza and Python. At the end of it, you will have:
- A bi-directional connection to Akenza using Python and WebSockets
- A Python service subscribed to and receiving events from IoT devices through Akenza
- A Python service that will be sending data to IoT devices through Akenza
Since WebSocket connections are persistent, their usage enhances the responsiveness of IoT applications: data exchange occurs in real time, fostering a dynamic and agile integrated ecosystem.
Python and Akenza WebSocket connections

First, let's have a look at the full Python code - to be discussed later.
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import WSXAdapter

# ###############################################################################################

if 0:
    from zato.server.generic.api.outconn.wsx.common import OnClosed, \
        OnConnected, OnMessageReceived

# ###############################################################################################

class DemoAkenza(WSXAdapter):

    # Our name
    name = 'demo.akenza'

    def on_connected(self, ctx:'OnConnected') -> 'None':
        self.logger.info('Akenza OnConnected -> %s', ctx)

# ###############################################################################################

    def on_message_received(self, ctx:'OnMessageReceived') -> 'None':

        # Confirm what we received
        self.logger.info('Akenza OnMessageReceived -> %s', ctx.data)

        # This is an indication that we are connected ..
        if ctx.data['type'] == 'connected':

            # .. for testing purposes, use a fixed asset ID ..
            asset_id:'str' = 'abc123'

            # .. build our subscription message ..
            data = {'type': 'subscribe', 'subscriptions': [{'assetId': asset_id, 'topic': '*'}]}

            ctx.conn.send(data)

        else:
            # .. if we are here, it means that we received a message other than type "connected".
            self.logger.info('Akenza message (other than "connected") -> %s', ctx.data)

# ###############################################################################################

    def on_closed(self, ctx:'OnClosed') -> 'None':
        self.logger.info('Akenza OnClosed -> %s', ctx)

# ###############################################################################################

Now, deploy the code to Zato and create a new outgoing WebSocket connection.
Replace the API key with your own and make sure to set the data format to JSON.
Receiving messages from WebSockets

The WebSocket Python services that you author have three methods of interest, each reacting to specific events:
- on_connected - Invoked as soon as a WebSocket connection has been opened. Note that this is a low-level event and, in the case of Akenza, it does not mean yet that you are able to send or receive messages from it.
- on_message_received - The main method that you will be spending most time with. Invoked each time a remote WebSocket sends, or pushes, an event to your service. With Akenza, this method will be invoked each time Akenza has something to inform you about, e.g. that you subscribed to messages or that an event arrived from a device.
- on_closed - Invoked when a WebSocket has been closed. It is no longer possible to use a WebSocket once it has been closed.
Let's focus on on_message_received, which is where the majority of the action takes place. It receives a single parameter of type OnMessageReceived, which describes the context of the received message. That is, it is in the "ctx" that you will find both the current request as well as a handle to the WebSocket connection through which you can reply to the message.
The two important attributes of the context object are:
- ctx.data - A dictionary of data that Akenza sent to you
- ctx.conn - The underlying WebSocket connection through which the data was sent and through which you can send a response
Now, the logic in on_message_received is clear:
- First, we check if Akenza confirmed that we are connected (type=='connected'). You need to check the type of a message each time Akenza sends something to you and react to it accordingly.
- Next, because we know that we are already connected (e.g. our API key was valid), we can subscribe to events from a given IoT asset. For testing purposes, the asset ID is given directly in the source code but, in practice, this information would be read from a configuration file or database.
- Finally, for messages of any other type we simply log their details. Naturally, a full integration would handle them per what is required in given circumstances, e.g. by transforming and pushing them to other applications or management systems.
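The steps above can be condensed into a plain-Python sketch of the message-type dispatch pattern; the helper names here are illustrative and not part of Zato's API:

```python
# Illustrative sketch of the message-type dispatch described above;
# these helper names are not part of Zato's API.
from typing import Optional

def build_subscription(asset_id: str) -> dict:
    # The same subscription shape that the service sends to Akenza.
    return {'type': 'subscribe',
            'subscriptions': [{'assetId': asset_id, 'topic': '*'}]}

def dispatch(message: dict, asset_id: str) -> Optional[dict]:
    # React to each incoming message according to its type.
    if message.get('type') == 'connected':
        # Connected and authenticated -> subscribe to the asset's events.
        return build_subscription(asset_id)
    # Any other type: a full integration would transform and forward it;
    # returning None means no reply goes back over the WebSocket.
    return None
```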
A sample message from Akenza will look like this:
INFO - WebSocketClient - Akenza message (other than "connected") -> {'type': 'subscribed', 'replyTo': None, 'timeStamp': '2023-11-20T13:32:50.028Z', 'subscriptions': [{'assetId': 'abc123', 'topic': '*', 'tagId': None, 'valid': True}], 'message': None}

How to send messages to WebSockets

An aspect not to be overlooked is communication in the other direction, that is, sending messages to WebSockets. For instance, you may have services invoked through REST APIs, or perhaps from a scheduler, and their job will be to transform such calls into configuration commands for IoT devices.
Here is the core part of such a service, reusing the same Akenza WebSocket connection:
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

# ###############################################################################################

class DemoAkenzaSend(Service):

    # Our name
    name = 'demo.akenza.send'

    def handle(self) -> 'None':

        # The connection to use
        conn_name = 'Akenza'

        # Get a connection ..
        with self.out.wsx[conn_name].conn.client() as client:

            # .. and send data through it.
            client.send('Hello')

# ###############################################################################################

Note that responses to the messages sent to Akenza will be received using your first service's on_message_received method - WebSockets-based messaging is inherently asynchronous and the channels are independent.
Now, we have a complete picture of real-time, IoT connectivity with Akenza and WebSockets. We are able to establish persistent, responsive connections to assets, we can subscribe to and send messages to devices, and that lets us build intelligent automation and integration architectures that make use of powerful, emerging technologies.
Daniel Roy Greenfeld: TIL: Forcing pip to use virtualenv
Necessary because installing things into your base python causes false positives, true negatives, and other head bangers.
Set this environment variable, preferably in your rc file:
# ~/.zshrc
export PIP_REQUIRE_VIRTUALENV=true

Now if I try to use pip outside a virtualenv:
dj-notebook on main [$] is 📦 v0.6.1 via 🐍 v3.10.6
❯ pip install ruff
ERROR: Could not find an activated virtualenv (required).

This TIL is thanks to David Winterbottom.
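A hedged counterpart to this tip: pip reads its options from environment variables, so the guard can be overridden for a single command when a global install is genuinely intended. The sketch below only prints the effective value a child process would see rather than installing anything:

```shell
# With the guard exported globally, a one-off override is still possible
# because a per-command variable takes precedence in that command's env.
export PIP_REQUIRE_VIRTUALENV=true

# A real invocation would be: PIP_REQUIRE_VIRTUALENV=false pip install <pkg>
# Here we just show the value the child process sees.
PIP_REQUIRE_VIRTUALENV=false python3 -c 'import os; print(os.environ["PIP_REQUIRE_VIRTUALENV"])'
```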
Ian Jackson: Don’t use apt-get source; use dgit
tl;dr:
If you are a Debian user who knows git, don’t work with Debian source packages. Don’t use apt source, or dpkg-source. Instead, use dgit and work in git.
Also, don’t use: “VCS” links on official Debian web pages, debcheckout, or Debian’s (semi-)official gitlab, Salsa. These are suitable for Debian experts only; for most people they can be beartraps. Instead, use dgit.
Struggling with Debian source packages?

A friend of mine recently asked for help on IRC. They’re an experienced Debian administrator and user, and were trying to: make a change to a Debian package; build and install and run binary packages from it; and record that change for their future self, and their colleagues. They ended up trying to comprehend quilt.
quilt is an ancient utility for managing sets of source code patches, from well before the era of modern version control. It has many strange behaviours and footguns. Debian’s ancient and obsolete tarballs-and-patches source package format (which I designed the initial version of in 1993) nowadays uses quilt, at least for most packages.
You don’t want to deal with any of this nonsense. You don’t want to learn quilt, and suffer its misbehaviours. You don’t want to learn about Debian source packages and wrestle dpkg-source.
Happily, you don’t need to.
Just use dgit

One of dgit’s main objectives is to minimise the amount of Debian craziness you need to learn. dgit aims to empower you to make changes to the software you’re running, conveniently and with a minimum of fuss.
You can use dgit to get the source code to a Debian package, as a git tree, with dgit clone (and dgit fetch). The git tree can be made into a binary package directly.
The only things you really need to know are:
By default dgit fetches from Debian unstable, the main work-in-progress branch. You may want something like dgit clone PACKAGE bookworm,-security (yes, with a comma).
You probably want to edit debian/changelog to make your packages have a different version number.
To build binaries, run dpkg-buildpackage -uc -b.
Debian package builds are often disastrously messy: builds might modify source files; and the official debian/rules clean can be inadequate, or crazy. Always commit before building, and use git clean and git reset --hard instead of running clean rules from the package.
Don’t try to make a Debian source package. (Don’t read the dpkg-source manual!) Instead, to preserve and share your work, use the git branch.
dgit pull or dgit fetch can be used to get updates.
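Put together, a typical session following the steps above might look like this. The package name, suite, and version change are placeholders, and dch (from the devscripts package) is just one convenient way to edit debian/changelog:

```shell
# Fetch the source as a git tree (note the comma in the suite name).
dgit clone hello bookworm,-security
cd hello

# ... make your change, then record it in git ...
git commit -a -m 'Fix the thing'

# Give your build a distinguishing version, e.g. by editing
# debian/changelog with dch from devscripts.
dch --local +local 'Local rebuild with a fix'
git commit -a -m 'Bump version'

# Clean via git rather than debian/rules, then build binary packages.
git clean -xdf
git reset --hard
dpkg-buildpackage -uc -b
```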
There is a more comprehensive tutorial, with example runes, in the dgit-user(7) manpage. (There is of course complete reference documentation, but you don’t need to bother reading it.)
Objections

But I don’t want to learn yet another tool

One of dgit’s main goals is to save you from learning things you don’t need to. It aims to be straightforward, convenient, and (so far as Debian permits) unsurprising.
So: don’t learn dgit. Just run it and it will be fine :-).
Shouldn’t I be using “official” Debian git repos?

Absolutely not.
Unless you are a Debian expert, these can be terrible beartraps. One possible outcome is that you might build an apparently working program but without the security patches. Yikes!
I discussed this in more detail in 2021 in another blog post plugging dgit.
Gosh, is Debian really this bad?

Yes. On behalf of the Debian Project, I apologise.
Debian is a very conservative institution. Change usually comes very slowly. (And when rapid or radical change has been forced through, the results haven’t always been pretty, either technically or socially.)
Sadly this means that sometimes much needed change can take a very long time, if it happens at all. But this tendency also provides the stability and reliability that people have come to rely on Debian for.
I’m a Debian maintainer. You tell me dgit is something totally different!

dgit is, in fact, a general bidirectional gateway between the Debian archive and git.
So yes, dgit is also a tool for Debian uploaders. You should use it to do your uploads, whenever you can. It’s more convenient and more reliable than git-buildpackage and dput runes, and produces better output for users. You too can start to forget how to deal with source packages!
A full treatment of this is beyond the scope of this blog post.
comments
Real Python: Serialize Your Data With Python
Whether you’re a data scientist crunching big data in a distributed cluster, a back-end engineer building scalable microservices, or a front-end developer consuming web APIs, you should understand data serialization. In this comprehensive guide, you’ll move beyond XML and JSON to explore several data formats that you can use to serialize data in Python. You’ll explore them based on their use cases, learning about their distinct categories.
By the end of this tutorial, you’ll have a deep understanding of the many data interchange formats available. You’ll master the ability to persist and transfer stateful objects, effectively making them immortal and transportable through time and space. Finally, you’ll learn to send executable code over the network, unlocking the potential of remote computation and distributed processing.
In this tutorial, you’ll learn how to:
- Choose a suitable data serialization format
- Take snapshots of stateful Python objects
- Send executable code over the wire for distributed processing
- Adopt popular data formats for HTTP message payloads
- Serialize hierarchical, tabular, and other shapes of data
- Employ schemas for validating and evolving the structure of data
To get the most out of this tutorial, you should have a good understanding of object-oriented programming principles, including classes and data classes, as well as type hinting in Python. Additionally, familiarity with the HTTP protocol and Python web frameworks would be a plus. This knowledge will make it easier for you to follow along with the tutorial.
You can download all the code samples accompanying this tutorial by clicking the link below:
Get Your Code: Click here to download the free sample code that shows you how to serialize your data with Python.
Feel free to skip ahead and focus on the part that interests you the most, or buckle up and get ready to catapult your data management skills to a whole new level!
Get an Overview of Data Serialization

Serialization, also known as marshaling, is the process of translating a piece of data into an interim representation that’s suitable for transmission through a network or persistent storage on a medium like an optical disk. Because the serialized form isn’t useful on its own, you’ll eventually want to restore the original data. The inverse operation, which can occur on a remote machine, is called deserialization or unmarshaling.
Note: Although the terms serialization and marshaling are often used interchangeably, they can have slightly different meanings for different people. In some circles, serialization is only concerned with the translation part, while marshaling is also about moving data from one place to another.
The precise meaning of each term depends on whom you ask. For example, Java programmers tend to use the word marshaling in the context of remote method invocation (RMI). In Python, marshaling refers almost exclusively to the format used for storing the compiled bytecode instructions.
Check out the comparison of serialization and marshaling on Wikipedia for more details.
The name serialization implies that your data, which may be structured as a dense graph of objects in the computer’s memory, becomes a linear sequence—or a series—of bytes. Such a linear representation is perfect to transmit or store. Raw bytes are universally understood by various programming languages, operating systems, and hardware architectures, making it possible to exchange data between otherwise incompatible systems.
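That round trip from object graph to byte sequence and back can be illustrated with the standard-library pickle module; the Movie class is a made-up example:

```python
# Sketch of the serialize/deserialize round trip with the standard-library
# pickle module; the Movie class is a made-up example.
import pickle
from dataclasses import dataclass

@dataclass
class Movie:
    title: str
    year: int

original = Movie("Metropolis", 1927)

# Serialization: the in-memory object becomes a linear sequence of bytes ..
blob = pickle.dumps(original)
print(type(blob))            # <class 'bytes'>

# .. and deserialization reconstructs an equivalent object from those bytes.
restored = pickle.loads(blob)
print(restored == original)  # True
```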
When you visit an online store using your web browser, chances are it runs a piece of JavaScript code in the background to communicate with a back-end system. That back end might be implemented in Flask, Django, or FastAPI, which are Python web frameworks. Because JavaScript and Python are two different languages with distinct syntax and data types, they must share information using an interchange format that both sides can understand.
In other words, parties on opposite ends of a digital conversation may deserialize the same piece of information into wildly different internal representations due to their technical constraints and specifications. However, it would still be the same information from a semantic point of view.
Tools like Node.js make it possible to run JavaScript on the back end, including isomorphic JavaScript that can run on both the client and the server in an unmodified form. This eliminates language discrepancies altogether but doesn’t address more subtle nuances, such as big-endian vs little-endian differences in hardware.
Other than that, transporting data from one machine to another still requires converting it into a network-friendly format. Specifically, the format should allow the sender to partition and put the data into network packets, which the receiving machine can later correctly reassemble. Network protocols are fairly low-level, so they deal with streams of bytes rather than high-level data types.
Depending on your use case, you’ll want to pick a data serialization format that offers the best trade-off between its pros and cons. In the next section, you’ll learn about various categories of data formats used in serialization. If you already have prior knowledge about these formats and would like to explore their respective scenarios, then feel free to skip the basic introduction coming up next.
Compare Data Serialization Formats

There are many ways to classify data serialization formats. Some of these categories aren’t mutually exclusive, making certain formats fall under a few of them simultaneously. In this section, you’ll find an overview of the different categories, their trade-offs, and use cases, as well as examples of popular data serialization formats.
Later, you’ll get your hands on some practical applications of these data serialization formats under different programming scenarios. To follow along, download the sample code mentioned in the introduction and install the required dependencies from the included requirements.txt file into an active virtual environment by issuing the following command:
(venv) $ python -m pip install -r requirements.txt

This will install several third-party libraries, frameworks, and tools that will allow you to navigate through the remaining part of this tutorial smoothly.
Textual vs Binary

At the end of the day, all serialized data becomes a stream of bytes regardless of its original shape or form. But some byte values—or their specific arrangement—may correspond to Unicode code points with a meaningful and human-readable representation. Data serialization formats whose syntax consists purely of characters visible to the naked eye are called textual data formats, as opposed to binary data formats meant for machines to read.
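The distinction is easy to see with one record serialized both ways, letting JSON stand in for the textual camp and pickle for the binary one:

```python
# One record, two serializations: JSON is textual and human-readable,
# while pickle is a binary format meant for machines.
import json
import pickle

record = {"title": "Metropolis", "year": 1927}

print(json.dumps(record))    # {"title": "Metropolis", "year": 1927}
print(pickle.dumps(record))  # opaque bytes, e.g. b'\x80\x04\x95...'
```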
The main benefit of a textual data format is that people like you can read serialized messages, make sense of them, and even edit them by hand when needed. In many cases, these data formats are self-explanatory, with descriptive element or attribute names. For example, take a look at this excerpt from the Real Python web feed with information about the latest tutorials and courses published:
Read the full article at https://realpython.com/python-serialize-data/ »
mandclu: Translatable Inline Lists in Drupal
One of the things that got me started using Drupal was its powerful ability to work with multiple languages. I believe it's still a differentiator for Drupal in an increasingly crowded CMS space, particularly the degree to which all the text that appears on a Drupal site can be translated through the admin interface.
Django Weblog: Django 5.0 released
The Django team is happy to announce the release of Django 5.0.
The release notes cover a deluge of exciting new features in detail, but a few highlights are:
- Database-computed default values allow model fields to define defaults that are computed by the database itself.
- Continuing the trend of expanding the Django ORM, the new generated model field allows the creation of database-generated columns.
- The concept of a field group was added to the templates system to simplify form field rendering.
You can get Django 5.0 from our downloads page or from the Python Package Index. The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E.
With the release of Django 5.0, Django 4.2 has reached the end of mainstream support. The final minor bug fix release, 4.2.8, was issued today. Django 4.2 is an LTS release and will receive security and data loss fixes until April 2026. All users are encouraged to upgrade before then to continue receiving fixes for security issues.
Django 4.1 has reached the end of extended support. The final security release (4.1.13) was issued on November 1st. All Django 4.1 users are encouraged to upgrade to Django 4.2 or later.
See the downloads page for a table of supported versions and the future release schedule.
PyCharm: Django 5.0 Delight: Unraveling the Newest Features
LN Webworks: The Admin Toolbar - A Phenomenal Drupal Module for Website
Drupal is a cutting-edge content management system (CMS) with many marvelous features and functionalities. Some are centered around causing customer delight while others are focused on promoting the ease of administration. The admin toolbar module is a remarkable feature that has simplified the lives of site admins worldwide and has made a special place in the heart of every Drupal company. It can help you easily navigate to your desired destination, assign user permissions, and organize administrative links into submenus.
The best part is that you can use this Drupal module on all types of devices irrespective of their screen sizes. Are you already feeling captivated by the magic of the admin toolbar module? What we have discussed by far is just the tip of the iceberg. There is a lot more about this phenomenal Drupal module that we’ll discuss further in this blog.
qtatech.com blog: Choosing the Best Drupal Approach: Headless or Decoupled?
Choosing the best Drupal approach for your project can be a daunting task. Do you choose a headless or decoupled approach? Both approaches have their pros and cons and it is important to understand what they are before making a decision. In this blog post, we will compare the two approaches, explore the pros and cons of each, and help you determine which one is best for your project.
Golems GABB: The Future of Content Management with Decentralized Autonomous Organizations (DAOs)
Disclaimer: This blog is based on analyzing events that have taken place in the DAO market over the past few years. The DAO market is unpredictable, so we are not responsible for your sudden desire to start creating blockchain content right now.
This blog offers insights into Decentralized Autonomous Organizations (DAOs), which are often subject to innovation, cosmic disruptions, and quantum leaps.
Note: the future is uncertain, so no one can know exactly what will happen next minute. It can surprise you with an unexpected challenge. However, you must first get acquainted with DAO to learn how to make money on both. Please fasten your seat belts because we are going on an incredible journey.
But: Before acting independently, consult our expert to avoid any mistakes or spontaneous DAO-related endeavors.
Django Weblog: Django bugfix release: 4.2.8
Today we've issued the 4.2.8 bugfix release.
The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key ID used for this release is Mariusz Felisiak: 2EF56372BA48CD1B.
Russ Allbery: Cumulative haul
I haven't done one of these in quite a while, long enough that I've already read and reviewed many of these books.
John Joseph Adams (ed.) — The Far Reaches (sff anthology)
Poul Anderson — The Shield of Time (sff)
Catherine Asaro — The Phoenix Code (sff)
Catherine Asaro — The Veiled Web (sff)
Travis Baldree — Bookshops & Bonedust (sff)
Sue Burke — Semiosis (sff)
Jacqueline Carey — Cassiel's Servant (sff)
Rob Copeland — The Fund (nonfiction)
Mar Delaney — Wolf Country (sff)
J.S. Dewes — The Last Watch (sff)
J.S. Dewes — The Exiled Fleet (sff)
Mike Duncan — Hero of Two Worlds (nonfiction)
Mike Duncan — The Storm Before the Storm (nonfiction)
Kate Elliott — King's Dragon (sff)
Zeke Faux — Number Go Up (nonfiction)
Nicola Griffith — Menewood (sff)
S.L. Huang — The Water Outlaws (sff)
Alaya Dawn Johnson — The Library of Broken Worlds (sff)
T. Kingfisher — Thornhedge (sff)
Naomi Kritzer — Liberty's Daughter (sff)
Ann Leckie — Translation State (sff)
Michael Lewis — Going Infinite (nonfiction)
Jenna Moran — Magical Bears in the Context of Contemporary Political Theory (sff collection)
Ari North — Love and Gravity (graphic novel)
Ciel Pierlot — Bluebird (sff)
Terry Pratchett — A Hat Full of Sky (sff)
Terry Pratchett — Going Postal (sff)
Terry Pratchett — Thud! (sff)
Terry Pratchett — Wintersmith (sff)
Terry Pratchett — Making Money (sff)
Terry Pratchett — Unseen Academicals (sff)
Terry Pratchett — I Shall Wear Midnight (sff)
Terry Pratchett — Snuff (sff)
Terry Pratchett — Raising Steam (sff)
Terry Pratchett — The Shepherd's Crown (sff)
Aaron A. Reed — 50 Years of Text Games (nonfiction)
Dashka Slater — Accountable (nonfiction)
Rory Stewart — The Marches (nonfiction)
Emily Tesh — Silver in the Wood (sff)
Emily Tesh — Drowned Country (sff)
Valerie Valdes — Chilling Effect (sff)
Martha Wells — System Collapse (sff)
Martha Wells — Witch King (sff)