Feeds

Sahil Dhiman: Debian Mirrors Hierarchy

Planet Debian - Mon, 2024-12-23 10:32

After finding that AlmaLinux's sync capacity is around 140 Gbps at Tier 0 (or Tier 1, depending on how you look at it), I wanted to find the source and hierarchy of the Debian mirroring system.

There are two main types of mirrors in Debian - Debian package mirrors (for package installs and updates) and Debian CD mirrors (for ISOs and other media). Let’s talk about package mirrors (and their hierarchy) first.

Package mirror hierarchy

The trace file was a good starting point for checking the upstream of a package mirror in Debian. It resides at <URL>/debian/project/trace/_traces and shows the flow of data. See the sample trace file from jing.rocks’s mirror. It showed that the canonical source for packages is ftp-master.debian.org. Checking via https://db.debian.org/machines.cgi showed that this is fasolo.d.o, hosted at Brown University, US. It serves as the “Master Archive Server”, making it a Tier 0 mirror. Its entry mentions 1 Gbps shared LAN connectivity (dated information?), but it only has to push to 3 other machines/sites.
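A trace file is just a small plain-text file: a timestamp on the first line, typically followed by "Key: value" pairs. As a quick illustration, here is a sketch of parsing one in Python; the sample content and field names below are illustrative, not copied from a real mirror, and the format description is an assumption about typical ftpsync output:

```python
def parse_trace(text):
    """Parse an ftpsync-style trace file into (timestamp, fields).

    Assumes the first non-empty line is a free-form UTC timestamp
    and subsequent lines are "Key: value" pairs.
    """
    lines = [line for line in text.strip().splitlines() if line.strip()]
    timestamp = lines[0]
    fields = {}
    for line in lines[1:]:
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return timestamp, fields

# Illustrative sample, shaped like a typical trace file
sample = """\
Sat Dec 21 10:00:00 UTC 2024
Used ftpsync version: 20180513
Running on host: mirror.example.org
Upstream-mirror: ftp-master.debian.org
"""

ts, fields = parse_trace(sample)
print(fields["Upstream-mirror"])  # ftp-master.debian.org
```

Walking a chain of such files (each mirror naming its upstream) is essentially how the hierarchy below can be reconstructed.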

Side note - .d.o is short for .debian.org

As shown on https://mirror-master.debian.org/status/mirror-hierarchy.html, three other mirror sites are:

  • syncproxy2.eu.debian.org, i.e. smit.d.o, hosted by the University of Twente, Netherlands, with 2x10 Gbps connectivity.
  • syncproxy4.eu.debian.org, i.e. schmelzer.d.o, hosted by Conova in Austria with 2x10 Gbps connectivity.
  • syncproxy2.wna.debian.org - the https://db.debian.org/machines.cgi entry mentions it being hosted at UBC, but the IP seems to point to the OSUOSL IP range as of now. IIRC, a few months ago syncproxy2.wna.d.o was made to point to another host due to some issue (?). mirror-osuosl.d.o seems to be serving as syncproxy2.wna.d.o now. Bandwidth isn’t explicitly mentioned, but judging from the bandwidth other free software projects hosted at OSUOSL have, it would be at least 10 Gbps and maybe more for Debian.
                      /--> syncproxy2.eu.d.o (NL)  ---> to the world
  ftp-master.d.o (US) ---> syncproxy4.eu.d.o (AT)  ---> to the world
                      \--> syncproxy2.wna.d.o (US) ---> to the world

A visualisation of the flow of packages from ftp-master.d.o

These form the Debian Tier 1 mirror network, as all other mirrors sync from them. So Debian has at least 50 Gbps+ capacity at Tier 1. A normal Debian user might never directly interact with any of these 3 machines, but every Debian package they run/download/install flows through them. Though, I’m unsure what wna stands for (syncproxy2.wna.d.o). NA probably is North America and W is west (coast)? If you know, do let me know.

After Tier 1, there are a few more syncproxies (detailed below). There are at least 45 mirrors at Tier 2. Most country mirrors, i.e. ftp.<country code>.debian.org, are at Tier 2 too (barring a few like ftp.au.d.o, ftp.nz.d.o etc.).

Coming back to the sync proxies at Tier 2:

  • syncproxy3.wna.debian.org - gretchaninov.d.o, which is marked as syncproxy2 on db.d.o (dated information). It’s hosted at the University of British Columbia, Canada, where a lot of Debian infrastructure, including Salsa, is hosted.
  • syncproxy.eu.debian.org - a machine managed by the Croatian Academic and Research Network. CNAME/redirects to debian.carnet.hr. Seems to be directly managed by the hosting organization.
  • syncproxy.au.debian.org - mirror-anu.d.o, hosted by the Australian National University with 100 Mbps connectivity. The closest sync proxy for all Australian mirrors.
  • syncproxy4.wna.debian.org - syncproxy-aws-wna-01.d.o, hosted on AWS in the US (according to GeoIP). IPv6 only (CNAME to syncproxy-aws-wna-01.debian.org., which only has an AAAA record, no A record). An m6g.2xlarge instance, which has speeds of up to 10 Gbps.

Coming back to https://mirror-master.debian.org/status/mirror-hierarchy.html, one can see the chain extend to Tier 6, as in the case of this mirror in AU, which should add some latency for updates being pushed from ftp-master.d.o to them. Ideally, that shouldn’t be a problem, as https://www.debian.org/mirror/ftpmirror#when mentions “The main archive gets updated four times a day”.

In my case, I get my updates from the NITC mirror, so my updates flow from US > US > TW > IN > me in IN.

CDNs have to internally manage cache purging too, unlike normal mirrors, which directly serve static files. Both deb.debian.org (sponsored by Fastly) and cdn-aws.deb.debian.org (sponsored by Amazon CloudFront) sync from the following CDN backends:

See deb.d.o trace file and cdn-aws.deb.d.o trace file.

(Thanks to Philipp Kern for the heads up here.)

CD image mirrors Hierarchy

Till now, I have only talked about Debian package mirrors. The /debian directory you see on various mirrors is usually for package installs and updates. If you want to grab the latest (and greatest) Debian ISO, you go to a Debian CD (as they’re still called) mirror site.

casulana.d.o is mentioned as the CD builder site, hosted by Bytemark, while pettersson-ng.d.o is mentioned as the CD publishing server, hosted at the Academic Computer Club in Umeå, Sweden. The primary download site for Debian CDs, https://cdimage.debian.org/debian-cd/ (which is where the download button on the debian.org homepage takes you), is hosted here as well. This essentially becomes the Tier 0 mirror for Debian CDs. All Debian CD mirrors are downstream of it.

  pettersson-ng.d.o / cdimage.d.o (SE) ---> to the world

A visualisation of the flow of Debian CDs from cdimage.d.o

The Academic Computer Club’s mirror setup uses a combination of multiple machines (called frontends and offloading servers) to load balance requests. Their setup documentation is a highly recommended read. Also, in that document, they mention: “All machines are reachable via both IPv4 and IPv6 and connected with 10 or 25 gigabit Ethernet, external bandwidth available is 200 gigabit/s.”

For completeness’ sake, the following mirrors (or mirror systems) exist too for Debian:

Debian relies heavily on various organizations to distribute and update Debian. Compiling the above information made me thankful to all these organizations. Many thanks to the DSA and mirror teams as well for managing all of this.

I relied heavily on https://db.debian.org/machines.cgi, which seems to be manually updated, so things might have changed along the way. If anything looks amiss, feel free to ping me.

Categories: FLOSS Project Planets

Juri Pakaste: New Swift Package: tui-fuzzy-finder

Planet Python - Mon, 2024-12-23 10:25

Speaking of new Swift libraries, I released another one: tui-fuzzy-finder is a terminal UI library for Swift that provides an incremental search and selection UI that imitates the core functionality of fzf very closely.

I have a ton of scripts that wrap fzf. Some of them try to provide some kind of command line interface with options. Most of them work with pipes where I fetch data from somewhere, parse it with jq, feed it to fzf, use the selection again as part of a parameter for something else, etc. It's all great, except that I really don't love shell scripting.

With tui-fuzzy-finder I want to be able to write tools like that in a language I do actually enjoy a great deal. The package provides both a command line tool and a library, but the purpose of the command line tool is just to allow me to test the library, as writing automatic tests for terminal control is difficult. Competing with fzf in the general purpose CLI tool space is a non-goal.

I haven't implemented the preview features of fzf, nor key binding configuration. I'm not ruling either of those out, but I have not needed them yet and don't plan to work on them before a need arises.

Documentation at Swift Package Index.

Categories: FLOSS Project Planets

Juri Pakaste: New Swift Package: provision-info

Planet Python - Mon, 2024-12-23 10:20

I released a new Swift library! provision-info is a Swift package for macOS. Its purpose is to parse and show information about provisioning profile files. There's a command line tool and Swift library. The library part might work on iOS, too, but I have not tried. It relies on Apple's Security framework so no Linux.

It's not actually that new, but it's been sitting in a GitHub repo without any releases or changes for nearly three years. I needed the library in a tool at work a couple of weeks ago, so I added a couple of features and finally made the first releases.

The CLI tool allows you to print out the basic metadata fields, the entitlements, the device IDs and the certificates in a profile file. You get them in plain text or as JSON. The library exposes the same data as Swift types.

There's documentation for the Swift APIs at Swift Package Index's excellent documentation hosting service. The command line tool prints out help with --help.

Categories: FLOSS Project Planets

Freelock Blog: Automatically moderate comments using AI

Planet Drupal - Mon, 2024-12-23 10:00
Automatically moderate comments using AI Anonymous (not verified) Mon, 12/23/2024 - 07:00 Tags Content Management Drupal ECA Artificial Intelligence Drupal Planet

When you allow the general Internet to post comments, or any other kind of content, you're inviting spam and abuse. We see far more spam comments than anything relevant or useful -- but when there is something relevant or useful, we want to hear it!

With the AI module and the Events, Conditions, and Actions module, you can set up automatic comment moderation.

Like any use of AI, setting an appropriate prompt is crucial to getting a decent result. Here's the one we're trying out:

Categories: FLOSS Project Planets

Joey Hess: the twenty-fifth year of my free software career

Planet Debian - Mon, 2024-12-23 09:57

I've been lucky to be able to spend twenty! five! years! developing free software and making a living on it, and this was a banner year for that career.

To start with, there was the Distribits conference. There's a big ecosystem of tools and projects that are based on git-annex, especially in scientific data management, and this was the first conference focused on that. Basically every talk involved git-annex in some way. It's been a while since I was at a conference where my software was in the center like that -- reminded me of Debconf days.

I gave a talk on how git-annex was probably basically feature complete. I have been very busy ever since adding new features to it, because in mapping out git-annex's feature set, I discovered new possibilities.

Meeting people and getting a better feel for the shape of that ecosystem, both technically and funding-wise, led to several big developments in funding later in the year. Going into the year, I had an ongoing source of funding from several projects at Dartmouth that use git-annex, but after 10 years, some of that was winding up.

That all came together in my essentially writing a grant proposal to the OpenNeuro project at Stanford, to spend 6 months building out a whole constellation of features. The summer became a sprint to get it all done. Significant amounts of very productive design work were done while swimming in the river. That was great.

(Somehow in there, I ended up onstage at FOSSY in Portland, in a keynote panel on Open Source and AI. This required developing a nuanced understanding of the mess of the OSI's Open Source AI definition, but I was mostly on the panel as the unqualified guy.)

Capping off the year, I have a new maintenance contract with Forschungszentrum Jülich. This covers the typical daily grind kind of tasks, like bug triage, keeping on top of security, release preparation, and updating dependencies, which is the kind of thing I've never been able to find dedicated funding for before.

A career in free software is a succession of hurdles. How to do something new and worthwhile? How to make any income while developing it at all? How to maintain your independent vision when working on it for hire? How to deal with burn-out? How to grow a project to be more than a one developer affair? And on and on.

How does a free software project keep paying the bills once it's feature complete? Maybe I am starting to get a glimpse of an answer.

Categories: FLOSS Project Planets

The Drop Times: Drupal4Gov Earns Nonprofit Status: Empowering Government Through Open Source

Planet Drupal - Mon, 2024-12-23 09:29
Drupal4Gov also provides extensive resources, including tutorials and training materials, to empower government agencies and developers.
Categories: FLOSS Project Planets

Real Python: How to Remove Items From Lists in Python

Planet Python - Mon, 2024-12-23 09:00

Removing items from a Python list is a common task that you can accomplish with various techniques. Whether you need to remove an item by its position or value, Python has you covered. In this tutorial, you’ll explore different approaches to removing items from a list, including using .pop(), the del statement, and .remove().

The .remove() method allows you to delete the first occurrence of a specified value, while .pop() can remove an item by its index and return it. The del statement offers another way to remove items by index, and you can also use it to delete slices of a list. The approach you choose will depend on your specific needs.

By the end of this tutorial, you’ll understand that:

  • To remove an item from a list in Python, you can use various approaches like .pop(), del, .remove(), and .clear().
  • To remove items from a certain position in a list, you use the .pop() method.
  • To delete items and slices from a list in Python, you use the del statement.
  • You use the .remove() method to delete the first occurrence of a specified value from a list.
  • To remove all the items from a list, you use .clear().
  • You can also remove duplicate items using a loop, dictionary, or set.
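As a quick combined illustration of two of the approaches above, here is the del statement removing both a single index and a slice (the list is a variation on the books example this tutorial uses):

```python
books = ["Dragonsbane", "The Hobbit", "Wonder", "Jaws", "Emma"]

# Remove a single item by index
del books[0]

# Remove a slice: the second and third items of what's left
del books[1:3]

print(books)  # ['The Hobbit', 'Emma']
```

Unlike .pop(), del doesn't return the removed items; it simply deletes them.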

To get the most out of this tutorial, you should be familiar with basic Python list topics like creating lists, adding items to a list, and accessing items in a list.

Get Your Code: Click here to download the free sample code that you’ll use to remove items from lists in Python.

Take the Quiz: Test your knowledge with our interactive “How to Remove Items From Lists in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:

Interactive Quiz

How to Remove Items From Lists in Python

In this quiz, you'll test your understanding of removing items from lists in Python. This is a fundamental skill in Python programming, and mastering it will enable you to manipulate lists effectively.

How to Remove Specific Items From a List

One common operation you’ll perform on a Python list is to remove specific list items. You may need to remove items based on their position in the list, or their value.

To illustrate how you can accomplish this task, suppose you’re creating a website for a public library. Your web app will allow users to save a list of books they would like to read. It should also allow them to edit and remove books from the list, as well as sort the list.

You can use a Python list to store the user’s reading list as a collection of book titles. For example, the reading list might look something like this:

>>> books = ["Dragonsbane", "The Hobbit", "Wonder", "Jaws"]

Now that you have a list of books, you have several ways to remove a single, specific book from the list. One approach is to use the .pop() method.

Removing Items Using the .pop() Method

Sometimes, you may need to remove items at a certain position in a list. For example, in a public library app, users might select books to remove by ticking checkboxes in the user interface. Your app will delete each selected item based on its index, which is the item’s position in the list.

If you know the index of the item you want to remove, then you can use the .pop() method. This method takes the item’s index as an optional argument and then removes and returns the item at that index. If you don’t pass an index argument to the method call, then .pop() will remove and return the last item in the list.

Note that Python lists use zero-based indexing for positioning, which means that the first element in a list is at index 0, the second element is at index 1, and so on. With that in mind, here’s an example of how you can use .pop() to remove and display the first element in your books list:

>>> books.pop(0)
'Dragonsbane'

You invoke the .pop() method on the books list with an index of 0, indicating the first element in the list. This call removes the first title, Dragonsbane, from the list and then returns it.

If you check the content of your list after running this code, then you’ll notice that Dragonsbane isn’t there anymore:

>>> books
['The Hobbit', 'Wonder', 'Jaws']

Here, you display the book list again after the .pop() call. You can see that your list is now one element shorter because .pop() removed the first title.

As you learned earlier in the tutorial, .pop() removes an item and also returns its value, which you can then use for other operations. For example, suppose the library app also allows users to store a separate list of books they’ve read. Once the user has read a book, they can remove it from the initial book list and transfer the title to the read list:

>>> books = ["Dragonsbane", "The Hobbit", "Wonder", "Jaws"]
>>> read_books = []
>>> read = books.pop(0)
>>> read_books.append(read)
>>> read_books
['Dragonsbane']
>>> books
['The Hobbit', 'Wonder', 'Jaws']

On the second line in the example, you create a new, empty list called read_books to store the names of the books the user has read. Next, you use the .pop() method to remove the first title from the original book list and store it in a variable. Then, you use .append() to add the stored title to the read_books list.
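Removing by value works similarly: the .remove() method mentioned earlier deletes the first occurrence of a given value, no index needed. A quick sketch with the same list:

```python
books = ["The Hobbit", "Wonder", "Jaws"]

# Delete the first occurrence of this value (raises ValueError if absent)
books.remove("Wonder")

print(books)  # ['The Hobbit', 'Jaws']
```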

Read the full article at https://realpython.com/remove-item-from-list-python/ »


Categories: FLOSS Project Planets

Mike Driscoll: An Intro to pre-commit

Planet Python - Mon, 2024-12-23 08:09

You can use many great tools to help you in your software development journey. One such tool is pre-commit, a framework for managing and maintaining multi-language pre-commit hooks. You use pre-commit to run one or more tools before allowing you to commit your code locally. For example, you might run the Flake8 linter or the Ruff formatter on your Python code in GitHub Actions or some other CI. But rather than waiting for CI to run, you want to run those checks locally and automatically.

That is where pre-commit comes in. You tell pre-commit what to run, and it will run right before it allows you to commit your code. If any of those checks fail, you must fix your code before committing it.

Installing pre-commit

pre-commit is a Python package, so you can install it using pip. Here’s the command you’ll need to run in your terminal:

pip install pre-commit

Once pre-commit is installed, you can confirm that it works by running the following:

pre-commit --version Adding the git Hooks

The next step is to navigate to one of your local GitHub code bases in your terminal. Once inside one of your repos, you will need to run this command:

pre-commit install

This command installs pre-commit in your .git/hooks folder so that pre-commit runs whenever you commit. But how does pre-commit know what to run?

You have to define what pre-commit runs using a special YAML file. You’ll learn how in the next section!

Adding a pre-commit Configuration

You need to add a file named .pre-commit-config.yaml (note the leading period) into the root of your repo. If you want to generate a simple config file, you can run this command:

pre-commit sample-config

Here’s an example config for running Black on your code:

repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
    -   id: check-yaml
    -   id: end-of-file-fixer
    -   id: trailing-whitespace
-   repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
    -   id: black

Personally, I like to run the Ruff formatter and linter as well as a couple of defaults, so I use this config a lot:

repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.2.0
    hooks:
    -   id: trailing-whitespace
    -   id: end-of-file-fixer
    -   id: check-added-large-files
-   repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.1.7
    hooks:
    # Run the linter.
    -   id: ruff
    # Run the formatter.
    -   id: ruff-format

When you add a new rule to pre-commit, you should run that rule against all the files in your repo so you don’t have any surprises later on. To do that, you need to run this command:

pre-commit run --all-files

Once you have run all your new rules against all your code files, you can start working on your next feature or bug fix. Then, when you run git commit, the pre-commit hooks will run, and you’ll see if your code is good enough to pass.

Wrapping Up

There are TONs of hooks you can add to pre-commit. A lot of them are mentioned on the pre-commit website. You can add Mypy, pytest, and much, much more to your pre-commit hooks. Just don’t get too crazy, or they may take too long to run, and you’ll go nuts waiting for it.

Overall, running so many of your CI hooks locally is great because your machine is usually faster than waiting on a queue in CI. Give it a try and see what you think!

The post An Intro to pre-commit appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Droptica: How to Build a Job Application Form in Drupal? A Detailed Guide

Planet Drupal - Mon, 2024-12-23 07:27

On-page job application forms allow you to quickly and efficiently collect information from candidates interested in job opportunities, facilitating the process of selecting resumes of future employees. In this article, I’ll show you how to build a recruitment form with the Webform module and embed it on a Drupal landing page. All this without having to spend hours on tedious configuration. I invite you to read the article or watch an episode of the  “Nowoczesny Drupal” series.

Categories: FLOSS Project Planets

Thomas Lange: Happy Birthday FAI!

Planet Debian - Mon, 2024-12-23 06:45
A Brief History of FAI, Which Began 25 Years Ago

On Dec 21st, 1999 version 1.0 of FAI (Fully Automatic Installation) was announced. That was 25 years ago.

Some months before, the computer science department of the University of Cologne bought a small HPC cluster with 16 nodes (each with dual Pentium II 400 MHz CPUs and 256 MB RAM) and I was too lazy to install those nodes manually. That's why I started the FAI project. With FAI you can install computers in a few minutes from scratch to a machine with a custom configuration that is ready to go for their users.

At that time Debian 2.1 aka slink was using kernel 2.0.36 and it was the first release using apt. Many things have happened since then.

In the beginning, we wrote the first technical report about FAI, and a lot of documentation was added afterwards. I gave more than 45 talks about FAI all over the world. Over the past 25 years, there has been an average of more than one commit per day to the FAI software repository.

Several top500.org HPC clusters were built using FAI and many companies are using FAI for their IT infrastructure or deploying Linux on their products using FAI. An overview of users can be found here.

Some major milestones of FAI are listed in the blog post of the 20th anniversary.

What Happened in the Last 5 Years?
  • Live images can be created
  • Writeable data partition on USB sticks
  • FAIme web service creates custom live ISOs
  • Support for Alpine Linux and Arch Linux package managers
  • Automatic detection of a local config space
  • Live and installation images for Debian for new hardware using a backports kernel or using the Debian testing release
  • The FAIme web service has created more than 30,000 customized ISOs

Currently, I'm preparing for the next FAI release and I still have ideas for new features.

Thanks for all the feedback from you, which helped a lot in making FAI a successful project.

About FAI

FAI is a tool for unattended mass deployment of Linux. It's a system to install and configure Linux systems and software packages on computers as well as virtual machines, from small labs to large-scale infrastructures like clusters and cloud environments. You can take one or more virgin PCs, turn on the power, and after a few minutes the systems are installed and completely configured to your exact needs, without any interaction necessary.

Categories: FLOSS Project Planets

LostCarPark Drupal Blog: Drupal Advent Calendar day 23 - AI Track

Planet Drupal - Mon, 2024-12-23 04:00
Drupal Advent Calendar day 23 - AI Track james Mon, 12/23/2024 - 09:00

Welcome back for the penultimate door of this year’s Drupal Advent Calendar, and today we’ve recruited the legendary Mike Anello to bring us up to speed on a big topic, the AI track of Drupal CMS.

The stated goal of the AI track is to make it easier for non-technical users to build and extend their sites - it is really interesting to note that this is mainly geared towards admin-facing UI, not site user-facing AI. With that in mind, let’s take a look at what is included (so far!).

AI generated alternate text for images

With virtually no configuration (other than entering your LLM API key) the…

Categories: FLOSS Project Planets

Python Bytes: #415 Just put the fries in the bag bro

Planet Python - Mon, 2024-12-23 03:00
Topics covered in this episode:

  • dbos-transact-py
  • Typed Python in 2024: Well adopted, yet usability challenges persist
  • RightTyper
  • Lazy self-installing Python scripts with uv
  • Extras
  • Joke

Watch on YouTube: https://www.youtube.com/watch?v=xdR4JFcb01o

About the show

Sponsored by us! Support our work through:

  • Our courses at Talk Python Training
  • The Complete pytest Course
  • Patreon Supporters

Connect with the hosts:

  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it.

Michael #1: dbos-transact-py (https://github.com/dbos-inc/dbos-transact-py)

  • DBOS Transact is a Python library providing ultra-lightweight durable execution.
  • Durable execution means your program is resilient to any failure.
  • If it is ever interrupted or crashes, all your workflows will automatically resume from the last completed step.
  • Under the hood, DBOS Transact works by storing your program's execution state (which workflows are currently executing and which steps they've completed) in a Postgres database.
  • Incredibly fast, for example 25x faster than AWS Step Functions.

Brian #2: Typed Python in 2024: Well adopted, yet usability challenges persist (https://engineering.fb.com/2024/12/09/developer-tools/typed-python-2024-survey-meta/)

  • Aaron Pollack on the Engineering at Meta blog
  • “Overall findings:
    • 88% of respondents “Always” or “Often” use types in their Python code.
    • IDE tooling, documentation, and catching bugs are drivers for the high adoption of types in survey responses.
    • The usability of types and the ability to express complex patterns are still challenges that leave some code unchecked.
    • Latency in tooling and lack of types in popular libraries are limiting the effectiveness of type checkers.
    • Inconsistency in type check implementations and poor discoverability of the documentation create friction in onboarding types into a project and seeking help when using the tools.”
  • Notes:
    • Seems to be a different survey than the 2023 (current) dev survey. Different time frame and results: July 29 - Oct 8, 2024.

Michael #3: RightTyper (https://github.com/RightTyper/RightTyper)

  • A fast and efficient type assistant for Python, including tensor shape inference

Brian #4: Lazy self-installing Python scripts with uv (https://treyhunner.com/2024/12/lazy-self-installing-python-scripts-with-uv/)

  • Trey Hunner
  • Creating your own ~/bin full of single-file command line scripts is common for *nix folks, still powerful but underutilized on Mac, and trickier but still useful on Windows.
  • Python has been difficult in the past to use for standalone scripts if you need dependencies, but that’s no longer the case with uv.
  • Trey walks through user scripts (*nix and Mac):
    • Using #! for scripts that don’t have dependencies
    • Using #! with uv run --script and /// script for dependencies
    • Discussion about how uv handles that.

Extras

Brian:

  • Courses at pythontest.com - if you live in a place (or are in a place in your life) where these prices are too much, let me know. I had a recent request and I really appreciate it.

Michael:

  • Python 3.14 update released
  • Top episodes of 2024 at Talk Python
  • Universal check for updates on macOS: Settings > Keyboard > Keyboard shortcuts > App shortcuts > +, then add a shortcut for a single app, ^U and the menu title.

Joke: Python with rizz (https://github.com/shamith09/pygyat)
Categories: FLOSS Project Planets

Zato Blog: Using OAuth in API Integrations

Planet Python - Mon, 2024-12-23 03:00
Using OAuth in API Integrations 2024-12-23, by Dariusz Suchojad

OAuth is often employed in processes requiring permissions to be granted to frontend applications and end users. Yet, what we typically need in API systems integrations is a way to secure connections between the integration middleware and backend systems without a need for any ongoing human interactions.

OAuth can be a good choice for that scenario and this article shows how it can be achieved in Python, with backend systems using REST and HL7 FHIR.

What we would like to have

Let's say we have a typical integration scenario as in the diagram below:

  • External systems and applications invoke the interoperability layer (Zato) which is expected to further invoke a few backend systems, e.g. a REST and HL7 FHIR one so as to return a combined result of backend API invocations. It does not matter what technology the client systems use, i.e. whether they are REST ones or not.

  • The interoperability layer needs to identify itself with the backend systems before it is allowed to invoke them - they need to make sure that it really is Zato and that it accesses only the resources allowed.

  • An OAuth server issues time-based access tokens, which are simple strings, like web browser session cookies, confirming that such and such bearer of the said token is allowed to make such and such requests. Note that the tokens have an explicit expiration time, e.g. they will become invalid after one hour. Also observe that Zato stores the tokens as-is, they are genuinely opaque strings.

  • If a client system invokes the interoperability layer, the layer will obtain a token from the OAuth server and keep it in an internal cache. Next, Zato will invoke the backend systems, bearing the token among other HTTP headers. Each invoked backend system will extract the token from the incoming request and validate it.

What the validation looks like in practice is something that Zato will not be aware of because it treats the token as an opaque string. In practice, if the token is self-contained (e.g. JWT data), the system may validate it on its own; if it is not self-contained, the system may invoke an introspection endpoint on the OAuth server to validate the access token from Zato.
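The introspection path described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Zato's or Okta's actual code: fake_introspect stands in for the real HTTPS call to an RFC 7662 token introspection endpoint, and the token and scope values are made up.

```python
def validate_bearer_token(token, introspect):
    """Validate an opaque access token the way a backend system might.

    `introspect` stands in for the HTTPS call to the OAuth server; real
    code would POST the token along with the backend's own credentials.
    """
    claims = introspect(token)
    if not claims.get("active"):
        raise PermissionError("Token is expired, revoked or unknown")
    return claims

# Stub introspection response, shaped like RFC 7662 output (hypothetical values).
def fake_introspect(token):
    known = {"abc123": {"active": True, "scope": "billing:read", "client_id": "zato"}}
    return known.get(token, {"active": False})

claims = validate_bearer_token("abc123", fake_introspect)
print(claims["scope"])  # billing:read
```

A token that the OAuth server does not recognize comes back with active set to false, and the backend rejects the request before any business logic runs.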

Once the validation succeeds, the backend system will reply with the business data and the interoperability layer will combine the results for the calling application's benefit.

In subsequent requests, the same access token will be reused by Zato with the same flow of messages as previously. However, if the cached token expires, Zato will request a new one from the OAuth server - this will be transparent to the calling application - and the flow will resume.

In OAuth terminology, what is described above has specific names, the overall flow of messages between Zato and the OAuth server is called a "Client Credential Flow" and Zato is then considered a "client" from the OAuth server's perspective.
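The caching and refresh behaviour described above can be sketched as follows. This is a hypothetical illustration, not Zato's internal implementation; fake_fetch_token stands in for the real POST to the OAuth server's token endpoint with grant_type=client_credentials.

```python
import time

class OAuthTokenCache:
    """Caches a client-credentials access token and refreshes it on expiry."""

    def __init__(self, fetch_token):
        self._fetch_token = fetch_token  # callable returning (token, expires_in)
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Reuse the cached token until shortly before it expires ..
        if self._token is None or time.time() >= self._expires_at - 30:
            token, expires_in = self._fetch_token()
            self._token = token
            self._expires_at = time.time() + expires_in
        return self._token

# Stand-in for a real OAuth server issuing opaque, time-limited tokens.
counter = {"n": 0}

def fake_fetch_token():
    counter["n"] += 1
    return f"token-{counter['n']}", 3600  # opaque string, valid for one hour

cache = OAuthTokenCache(fake_fetch_token)
first = cache.get()
second = cache.get()  # served from the cache, no second request
print(first, second, counter["n"])  # token-1 token-1 1
```

The second call never reaches the token endpoint, which is exactly the transparency the calling application sees: tokens are fetched and renewed behind the scenes.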

Configuring OAuth

First, we need to create an OAuth security definition that contains the OAuth server's connection details. In this case, the server is Okta. Note the scopes field - it is a list of permissions ("scopes") that Zato will be able to make use of.

What exactly the list of scopes should look like is something to be coordinated with the people who are responsible for the configuration of the OAuth server. If it is you personally, simply ensure that what is in the OAuth server and in Zato is in sync.

Calling REST

To invoke REST services, fill out a form as below, pointing the "Security" field to the newly created OAuth definition. This suffices for Zato to understand when and how to obtain new tokens from the underlying OAuth server.

Here is sample code to invoke a backend REST system - note that we merely refer to a connection by its name, without having to think about security at all. It is Zato that knows how to get and use OAuth tokens as required.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class GetClientBillingPlan(Service):
    """ Returns a billing plan for the input client.
    """
    def handle(self):

        # In a real service, this would be read from input
        payload = {'client_id': 123}

        # Get a connection to the server ..
        conn = self.out.rest['REST Server'].conn

        # .. invoke it ..
        response = conn.get(self.cid, payload)

        # .. and handle the response here.
        ...

Calling HL7 FHIR

Similarly to REST endpoints, to invoke HL7 FHIR servers, fill out a form as below and let the "Security" field point to the OAuth definition just created. This will suffice for Zato to know when and how to use tokens received from the underlying OAuth server.

Here is sample code to invoke a FHIR server system - as with REST servers above, observe that we only refer to a connection by its name and Zato takes care of OAuth.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class GetPractitioner(Service):
    """ Returns a practitioner matching input data.
    """
    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'My EHR'

        # In a real service, this would be read from input
        practitioner_id = 456

        # Get a connection to the server ..
        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # Get a reference to a FHIR resource ..
            practitioners = client.resources('Practitioner')

            # .. look up the practitioner ..
            result = practitioners.search(active=True, _id=practitioner_id).get()

            # .. and handle the response here.
            ...

What about the API clients?

One aspect omitted above are the initial API clients - this is on purpose. How they invoke Zato, using what protocols, with what security mechanisms, and how to build responses based on their input data, this is completely independent of how Zato uses OAuth in its own communication with backend systems.

All of these aspects can and will be independent in practice, e.g. clients will use Basic Auth rather than OAuth. Or perhaps the clients will use AMQP, Odoo, SAP, or IBM MQ, without any HTTP, or maybe there will be no explicit API invocations and what we call "clients" will be actually CSV files in a shared directory that your services will be scheduled to periodically pick up. Yet, once more, regardless of what makes the input data available, the backend OAuth mechanism will work independently of it all.



Next steps

➤ Python API integration tutorial
➤ More API programming examples in Python
➤ Visit the support center for more articles and FAQ
Open-source iPaaS in Python

More blog posts
Categories: FLOSS Project Planets

The Drop Times: Hope and Progress Ahead

Planet Drupal - Mon, 2024-12-23 02:17

As 2024 comes to a close, it’s time to reflect on an inspiring year for the Drupal community. This year marked the beginning of the transformative Starshot Initiative, setting an ambitious vision for the future of Drupal. Among the highlights was the highly anticipated release of Drupal 11, a milestone that brought enhanced capabilities, improved user experience, and reinforced Drupal’s position as a leading open-source content management system.  

This year wasn't only about technical achievements—it was a year of hope and collaboration too. The community has come together, embracing challenges with resilience and charting a path forward with optimism. Much like the spirit of Christmas, this year’s developments remind us of the joy in beginnings and the promise of what lies ahead.  

As we step into this festive season, let’s celebrate the milestones we’ve achieved and the community that made it all possible. Let’s also look forward to an even brighter future, one filled with innovation, inclusivity, and growth for Drupal. Here’s to a new year brimming with possibilities and the collective hope that Drupal continues to shine even brighter in 2025. Happy holidays!

DrupalCon Singapore 2024, Discover Drupal, Events, Free Software, Organization News

To get timely updates, follow us on LinkedIn, Twitter and Facebook. You can also join us on Drupal Slack at #thedroptimes.

Categories: FLOSS Project Planets

LN Webworks: LN Webworks at DrupalCon Singapore 2024

Planet Drupal - Mon, 2024-12-23 01:09

It's the Second DrupalCon for LNWebWorks, filled with incredible memories and the opportunity to forge new connections. This time, the event is hosted at the prestigious ParkRoyal Collection Marina Bay Hall. Luckily, our hotel—Carlton City Hotel —is just a stone's throw away, making it a quick 5-minute cab ride to the venue. Here's a glimpse of my hotel room view, showcasing the breathtaking skyline of the tallest buildings!

Categories: FLOSS Project Planets

Russ Allbery: Review: The House That Walked Between Worlds

Planet Debian - Sun, 2024-12-22 22:33

Review: The House That Walked Between Worlds, by Jenny Schwartz

Series: Uncertain Sanctuary #1
Publisher: Jenny Schwartz
Copyright: 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 215

The House That Walked Between Worlds is the first book of a self-published trilogy of... hm. Space fantasy? Pure fantasy with a bit of science fiction thrown in for flavor? Something like that. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata.

Kira Aist is a doctor. She's also a witch and a direct descendant of Baba Yaga. Her Russian grandmother warned her to never use magic and never reveal who she was because people would hunt her and her family if she did. She broke the rule to try to save a child, her grandmother was right, and now multiple people are dead, including her parents. As the story opens, she's deep in the wilds of New Zealand in a valley with buried moa bones, summoning her House so that she can flee Earth.

Kira's first surprise is that her House is not the small hut that she was expecting from childhood visits to Baba Yaga. It's larger. A lot larger: an obsidian castle with nine towers and legs that resemble dragons rather than the moas whose magic she drew on. Her magic apparently had a much different idea of what she needs than she did.

Her second surprise is that her magical education is highly incomplete, and she is not the witch that she thought she was. Her ability to create a House means that she's a sorcerer, the top tier of magical power in a hierarchy about which she knows essentially nothing. Thankfully the House has a library, but Kira has a lot to learn about the universe and her place in it.

I picked this up because the premise sounded a little like the Innkeeper novels, and since another novel in that series does not appear to be immediately forthcoming, I went looking elsewhere for my cozy sentient building fix. The House That Walked Between Worlds is nowhere near as well-written (or, frankly, coherent) as the Innkeeper books, but it did deliver some of the same vibes.

You should know going in that there isn't much in the way of a plot. Schwartz invented an elaborate setting involving archetype worlds inhabited by classes of mythological creatures that in some mystical sense surround a central system called Qaysar. These archetype worlds spawn derived worlds, each of which seems to be its own dimension, although the details are a bit murky to me. The world Kira thinks of as Earth is just one of the universes branched off of an archetypal Earth, and is the only one of those branchings where the main population is human. The other Earth-derived worlds are populated by the Dinosaurians and the Neanderthals. Similarly, there is a Fae world that branches into Elves and Goblins, an Epic world that branches into Shifters, Trolls, and Kobolds, and so forth. Travel between these worlds is normally by slow World Walker Caravans, but Houses break the rules of interdimensional travel in ways that no one entirely understands.

If your eyes are already starting to glaze over, be warned there's a lot of this. The House That Walked Between Worlds is infodumping mixed with vibes, and I think you have to enjoy the setting, or at least the sheer enthusiasm of Schwartz's presentation of it, to get along with this book. The rest of the story is essentially Kira picking up strays: first a dangerous-looking elf cyborg, then a juvenile giant cat (because of course there's a pet fantasy space cat; it's that sort of book), and then a charming martial artist who I'm fairly sure is up to no good. Kira is entirely out of her depth and acting on instinct, which luckily plays into stereotypes of sorcerers as mysterious and unpredictable. It also helps that her magic is roughly "anything she wants to happen, happens."

This is, in other words, not a tightly-crafted story with coherent rules and a sense of risk and danger. It's a book that succeeds or fails almost entirely on how much you like the main characters and enjoy the world-building. Thankfully, I thought the characters were fun, if not (so far) all that deep. Kira deals with her trauma without being excessively angsty and leans into her new situation with a chaotic decisiveness that I found charming. The cyborg elf is taciturn and a bit inscrutable at first, but he grew on me, and thankfully this book does not go immediately to romance. Late in the book, Kira picks up a publicity expert, which was not at all the type of character that I was expecting and which I found delightful.

Most importantly, the House was exactly what I was looking for: impish, protective, mysterious, inhuman, and absurdly overpowered. I adore cozy sentient building stories, so I'm an easy audience for this sort of thing, but I'm already eager to read more about the House.

This is not great writing by any stretch, and you will be unsurprised that it's self-published. If you're expecting the polish and plot coherence of the Innkeeper stories, you'll be disappointed. But if you just want to spend some time with a giant sentient space-traveling mansion inhabited by unlikely misfits, and you don't mind large amounts of space fantasy infodumping, consider giving this a shot. I had fun with it and plan on reading the rest of the omnibus.

Followed by House in Hiding.

Rating: 6 out of 10

Categories: FLOSS Project Planets

Simon Josefsson: OpenSSH and Git on a Post-Quantum SPHINCS+

Planet Debian - Sun, 2024-12-22 19:44

Are you aware that Git commits and tags may be signed using OpenSSH? Git signatures may be used to improve integrity and authentication of our software supply-chain. Popular signature algorithms include Ed25519, ECDSA and RSA. Did you consider that these algorithms may not be safe if someone builds a post-quantum computer?

As you may recall, I have earlier blogged about the efficient post-quantum key agreement mechanism called Streamlined NTRU Prime and its use in SSH and I have attempted to promote the conservatively designed Classic McEliece in a similar way, although it remains to be adopted.

What post-quantum signature algorithms are available? There is an effort by NIST to standardize post-quantum algorithms, and they have a category for signature algorithms. According to wikipedia, after round three the selected algorithms are CRYSTALS-Dilithium, FALCON and SPHINCS+. Of these, SPHINCS+ appears to be a conservative choice suitable for long-term digital signatures. Can we get this to work?

Recall that Git uses the ssh-keygen tool from OpenSSH to perform signing and verification. To refresh your memory, let’s study the commands that Git uses under the hood for Ed25519. First generate a Ed25519 private key:

jas@kaka:~$ ssh-keygen -t ed25519 -f my_ed25519_key -P ""
Generating public/private ed25519 key pair.
Your identification has been saved in my_ed25519_key
Your public key has been saved in my_ed25519_key.pub
The key fingerprint is:
SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ jas@kaka
The key's randomart image is:
+--[ED25519 256]--+
| .+=.E .. |
| oo=.ooo |
| . =o=+o . |
| =oO+o . |
| .=+S.= |
| oo.o o |
| . o . |
| ...o.+.. |
| .o.o.=**. |
+----[SHA256]-----+
jas@kaka:~$ cat my_ed25519_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQAAAJCeDotOng6L
TgAAAAtzc2gtZWQyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQ
AAAEBFRvzgcD3YItl9AMmVK4xDKj8NTg4h2Sluj0/x7aSPlhY/9pnyHM3RY1ExKmPNuBbW
0lc13a/r92dsppC3uIgFAAAACGphc0BrYWthAQIDBAU=
-----END OPENSSH PRIVATE KEY-----
jas@kaka:~$ cat my_ed25519_key.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF jas@kaka
jas@kaka:~$

Then let’s sign something with this key:

jas@kaka:~$ echo "Hello world!" > msg
jas@kaka:~$ ssh-keygen -Y sign -f my_ed25519_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
jas@kaka:~$ cat msg.sig
-----BEGIN SSH SIGNATURE-----
U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAgFj/2mfIczdFjUTEqY824FtbSVz
Xdr+v3Z2ymkLe4iAUAAAAMbXktbmFtZXNwYWNlAAAAAAAAAAZzaGE1MTIAAABTAAAAC3Nz
aC1lZDI1NTE5AAAAQLmWsq05tqOOZIJqjxy5ZP/YRFoaX30lfIllmfyoeM5lpVnxJ3ZxU8
SF0KodDr8Rtukg2N3Xo80NGvZOzbG/9Aw=
-----END SSH SIGNATURE-----
jas@kaka:~$

Now let’s create a list of trusted public-keys and associated identities:

jas@kaka:~$ echo 'my.name@example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF' > allowed-signers
jas@kaka:~$

Then let’s verify the message we just signed:

jas@kaka:~$ cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with ED25519 key SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ
jas@kaka:~$

I have implemented support for SPHINCS+ in OpenSSH. This is early work, but I wanted to announce it to get discussion of some of the details going and to make people aware of it.

What would be a better way to demonstrate SPHINCS+ support in OpenSSH than by validating the Git commit that implements it, using its own implementation?

Here is how to proceed, first get a suitable development environment up and running. I’m using a Debian container launched in a protected environment using podman.

jas@kaka:~$ podman run -it --rm debian:stable

Then install the necessary build dependencies for OpenSSH.

# apt-get update
# apt-get install git build-essential autoconf libz-dev libssl-dev

Now clone my OpenSSH branch with the SPHINCS+ implementation and build it. You may browse the commit on GitHub first if you are curious.

# cd
# git clone https://github.com/jas4711/openssh-portable.git -b sphincsp
# cd openssh-portable
# autoreconf -fvi
# ./configure
# make

Configure a Git allowed signers list with my SPHINCS+ public key (make sure to keep the public key on one line with the whitespace being one ASCII SPC character):

# mkdir -pv ~/.ssh
# echo 'simon@josefsson.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAECI6eacTxjB36xcPtP0ZyxJNIGCN350GluLD5h0KjKDsZLNmNaPSFH2ynWyKZKOF5eRPIMMKSCIV75y+KP9d6w3' > ~/.ssh/allowed_signers
# git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers

Then verify the commit using the newly built ssh-keygen binary:

# PATH=$PWD:$PATH
# git log -1 --show-signature
commit ce0b590071e2dc845373734655192241a4ace94b (HEAD -> sphincsp, origin/sphincsp)
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
Author: Simon Josefsson <simon@josefsson.org>
Date:   Tue Dec 3 18:44:25 2024 +0100

    Add SPHINCS+.
# git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
#

Yay!

So what are some considerations?

SPHINCS+ comes in many different variants. First it comes with three security levels approximately matching 128/192/256 bit symmetric key strengths. Second choice is between the SHA2-256, SHAKE256 (SHA-3) and Haraka hash algorithms. Final choice is between a “robust” and a “simple” variant with different security and performance characteristics. To get going, I picked the “sphincss256sha256robust” SPHINCS+ implementation from SUPERCOP 20241022. There is a good size comparison table in the sphincsplus implementation, if you want to consider alternative variants.

SPHINCS+ public-keys are really small, as you can see in the allowed signers file. This is really good because they are handled by humans and often by cut’n’paste.

What about private keys? They are slightly longer than Ed25519 private keys but shorter than typical RSA private keys.

# ssh-keygen -t sphincsplus -f my_sphincsplus_key -P ""
Generating public/private sphincsplus key pair.
Your identification has been saved in my_sphincsplus_key
Your public key has been saved in my_sphincsplus_key.pub
The key fingerprint is:
SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg root@ad600ff56253
The key's randomart image is:
+[SPHINCSPLUS 256-+
| . .o |
|o . oo. |
| = .o.. o |
|o o o o . . o |
|.+ = S o o .|
|Eo= . + . . .. .|
|=*.+ o . . oo . |
|B+= o o.o. . |
|o*o ... .oo. |
+----[SHA256]-----+
# cat my_sphincsplus_key.pub
ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7 root@ad600ff56253
# cat my_sphincsplus_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAYwAAABtzc2gtc3
BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9slu
L/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAQidiIwanYiMGgAAAB
tzc2gtc3BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1
Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAIAbwBxEhA
NYzITN6VeCMqUyvw/59JM+WOLXBlRbu3R8qS7ljc4qFVWUtmhy8B3t9e4jrhdO6w0n5I4l
mnLnBi2hJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpS
vYgZvUkB2WVWGXXZBCfRdQ+wAAABFyb290QGFkNjAwZmY1NjI1MwECAwQ=
-----END OPENSSH PRIVATE KEY-----
#

Signature size? Now here is the challenge, for this variant the size is around 29kb or close to 600 lines of base64 data:

# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | head -10
tree ede42093e7d5acd37fde02065a4a19ac1f418703
parent 826483d51a9fee60703298bbf839d9ce37943474
author Simon Josefsson <simon@josefsson.org> 1733247865 +0100
committer Simon Josefsson <simon@josefsson.org> 1734907869 +0100
gpgsig -----BEGIN SSH SIGNATURE-----
 U1NIU0lHAAAAAQAAAGMAAAAbc3NoLXNwaGluY3NwbHVzQG9wZW5zc2guY29tAAAAQIjp5p
 xPGMHfrFw+0/RnLEk0gYI3fnQaW4sPmHQqMoOxks2Y1o9IUfbKdbIpko4Xl5E8gwwpIIhX
 vnL4o/13rDcAAAADZ2l0AAAAAAAAAAZzaGE1MTIAAHSDAAAAG3NzaC1zcGhpbmNzcGx1c0
 BvcGVuc3NoLmNvbQAAdGDHlobgfgkKKQBo3UHmnEnNXczCMNdzJmeYJau67QM6xZcAU+d+
 2mvhbksm5D34m75DWEngzBb3usJTqWJeeDdplHHRe3BKVCQ05LHqRYzcSdN6eoeZqoOBvR
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | tail -5
 ChvXUk4jfiNp85RDZ1kljVecfdB2/6CHFRtxrKHJRDiIavYjucgHF1bjz0fqaOSGa90UYL
 RZjZ0OhdHOQjNP5QErlIOcZeqcnwi0+RtCJ1D1wH2psuXIQEyr1mCA==
 -----END SSH SIGNATURE-----

Add SPHINCS+.
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | wc -l
579
#

What about performance? Verification is really fast:

# time git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ

real    0m0.010s
user    0m0.005s
sys     0m0.005s
#

On this machine, verifying an Ed25519 signature is a couple of times slower, and needs around 0.07 seconds.

Signing is slower, it takes a bit over 2 seconds on my laptop.

# echo "Hello world!" > msg
# time ssh-keygen -Y sign -f my_sphincsplus_key -n my-namespace msg
Signing file msg
Write signature to msg.sig

real    0m2.226s
user    0m2.226s
sys     0m0.000s
# echo 'my.name@example.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7' > allowed-signers
# cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with SPHINCSPLUS key SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg
#

Welcome to our new world of Post-Quantum safe digital signatures of Git commits, and Happy Hacking!

Categories: FLOSS Project Planets

#! code: Drupal 11: The Queues API

Planet Drupal - Sun, 2024-12-22 14:18

I've talked a lot about the Batch API in Drupal recently, and I've mentioned that it is built upon the Queue API, but I haven't gone any deeper than that. I wrote about the Queues API in Drupal 7, but thought I would bring my understanding up to date.

A queue is a data construct that uses a "first in, first out" (FIFO) flow where items are processed in the order that they were added to the queue. This system has a lot of different uses, but is most important when it comes to asynchronous data processing. Drupal and many modules make use of the queue system to process information behind the scenes.

The difference between a queue and a batch is that the batch is for time sensitive things where the user is expecting something to happen. A queue, on the other hand, is more for data processing that needs to happen behind the scenes or without any user triggering the process.

Batches also tend to be stateless, meaning that if the batch fails half way through it is sometimes difficult to restart the batch from the same point. It is possible if you create your batches in just the right way, but this is actually a little rare. A queue manages this much better by having all of the items in the queue and then giving you options about what you can do with each item as you process it. This means that you might put a queue item back into the queue for later processing if it failed.
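The requeue-on-failure pattern described above can be sketched in a language-agnostic way. The following is a plain Python illustration of the idea, not Drupal's Queue API; the comments name the roughly analogous Drupal QueueInterface methods for orientation.

```python
from collections import deque

# A minimal first-in, first-out queue with requeue-on-failure.
queue = deque(["a", "b", "c"])   # items added via something like createItem()
processed = []
failed_once = set()

while queue:
    item = queue.popleft()       # roughly analogous to claimItem()
    if item == "b" and "b" not in failed_once:
        failed_once.add(item)
        queue.append(item)       # processing failed: release it back for later
        continue
    processed.append(item)       # processing succeeded: deleteItem()

print(processed)  # ['a', 'c', 'b']
```

Order is preserved for items that succeed, while the failed item simply rejoins the back of the queue and is picked up on a later pass, with no state lost.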

In this article I will look at the Queue API in Drupal 11, how it is used and what sort of best practices are used when using the API.

Creating A Queue

To create a queue in Drupal you need to create an instance of the 'queue' service. This is a factory that can be used to create and manage your queues inside Drupal. By default, all queues in Drupal are database queues (handled via the queue.database default queue factory), although this can be changed with configuration settings.

Read more

Categories: FLOSS Project Planets

Freelock Blog: Automatically set fields on content

Planet Drupal - Sun, 2024-12-22 10:00

One of the easiest things to do with the Events, Conditions, and Actions (ECA) module is to set values on fields. You can populate forms with names and addresses from a user's profile. You can set date values to offsets from the current time. You can perform calculations and store the result in a summary field, which can make using them in views much more straightforward.

Categories: FLOSS Project Planets

Real Python: Strings and Character Data in Python

Planet Python - Sun, 2024-12-22 09:00

Python strings are a sequence of characters used for handling textual data. You can create strings in Python using quotation marks or the str() function, which converts objects into strings. Strings in Python are immutable, meaning once you define a string, you can’t change it.

To access specific elements of a string, you use indexing, where indices start at 0 for the first character. You specify an index in square brackets, such as "hello"[0], which gives you "h". For string interpolation, you can use curly braces {} inside an f-string.

By the end of this tutorial, you’ll understand that:

  • A Python string is a sequence of characters used for textual data.
  • The str() function converts objects to their string representation.
  • You can use curly braces {} to insert values in a Python string.
  • You access string elements in Python using indexing with square brackets.
  • You can join all elements in a list into a single string using .join().
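Each of the points above can be demonstrated in a few self-contained lines:

```python
# str() converts objects to their string representation.
s = str(42)
assert s == "42"

# Curly braces interpolate values inside an f-string.
name = "Python"
greeting = f"Hello, {name}!"
assert greeting == "Hello, Python!"

# Indexing with square brackets starts at 0; negative indices count from the end.
assert "hello"[0] == "h"
assert "hello"[-1] == "o"

# .join() glues all elements of a list into a single string.
words = ["join", "these", "words"]
assert " ".join(words) == "join these words"

print("All checks passed")
```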

You’ll explore creating strings with string literals and functions, using operators and built-in functions with strings, indexing and slicing techniques, and methods for string interpolation and formatting. These skills will help you manipulate and format textual data in your Python programs effectively.

To get the most out of this tutorial, you should have a good understanding of core Python concepts, including variables, functions, and operators and expressions.

Get Your Code: Click here to download the free sample code that shows you how to work with strings and character data in Python.

Take the Quiz: Test your knowledge with our interactive “Python Strings and Character Data” quiz. You’ll receive a score upon completion to help you track your learning progress:

Interactive Quiz

Python Strings and Character Data

This quiz will test your understanding of Python's string data type and your knowledge about manipulating textual data with string objects. You'll cover the basics of creating strings using literals and the str() function, applying string methods, using operators and built-in functions, and more!

Getting to Know Strings and Characters in Python

Python provides the built-in string (str) data type to handle textual data. Other programming languages, such as Java, have a character data type for single characters. Python doesn’t have that. Single characters are strings of length one.

In practice, strings are immutable sequences of characters. This means you can’t change a string once you define it. Any operation that modifies a string will create a new string instead of modifying the original one.
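For example, trying to assign to a character raises a TypeError, while modifying operations hand back new strings:

```python
s = "immutable"

# In-place assignment is not allowed on strings ..
try:
    s[0] = "I"
except TypeError:
    print("strings cannot be changed in place")

# .. so modifying operations return a *new* string instead.
t = s.upper()
print(t)  # IMMUTABLE
print(s)  # immutable
```

The original string is untouched by .upper(); every such method builds and returns a fresh string object.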

A string is also a sequence, which means that the characters in a string have a consecutive order. This feature allows you to access characters using integer indices that start with 0. You’ll learn more about these concepts in the section about indexing strings. For now, you’ll learn about how to create strings in Python.

Creating Strings in Python

There are different ways to create strings in Python. The most common practice is to use string literals. Because strings are everywhere and have many use cases, you’ll find a few different types of string literals. There are standard literals, raw literals, and formatted literals.

Additionally, you can use the built-in str() function to create new strings from other existing objects.

In the following sections, you’ll learn about the multiple ways to create strings in Python and when to use each of them.

Standard String Literals

A standard string literal is just a piece of text or a sequence of characters that you enclose in quotes. To create single-line strings, you can use single ('') and double ("") quotes:

>>> 'A single-line string in single quotes'
'A single-line string in single quotes'
>>> "A single-line string in double quotes"
'A single-line string in double quotes'

In the first example, you use single quotes to delimit the string literal. In the second example, you use double quotes.

Note: Python’s standard REPL displays string objects using single quotes even though you create them using double quotes.

You can define empty strings using quotes without placing characters between them:

>>> ""
''
>>> ''
''
>>> len("")
0

An empty string doesn’t contain any characters, so when you use the built-in len() function with an empty string as an argument, you get 0 as a result.

To create multiline strings, you can use triple-quoted strings. In this case, you can use either single or double quotes:
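A minimal illustration of triple-quoted strings (the variable names here are our own, not from the article):

```python
# Triple double quotes let a literal span several lines without escapes.
paragraph = """This string spans
multiple lines without needing
escape sequences."""

print(len(paragraph.splitlines()))  # 3

# Triple single quotes work the same way, and can freely contain
# both "double" and 'single' quotes inside.
quote = '''Triple single quotes work too,
and can hold "double" and 'single' quotes freely.'''
print(quote)
```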

Read the full article at https://realpython.com/python-strings/ »

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Categories: FLOSS Project Planets

Pages