Feeds

Thomas Lange: Happy Birthday FAI!(link is external)

Planet Debian - Mon, 2024-12-23 06:45
A Brief History of FAI, Which Began 25 Years Ago

On Dec 21st, 1999 version 1.0 of FAI (Fully Automatic Installation) was announced(link is external). That was 25 years ago.

Some months before, the computer science department of the University of Cologne had bought a small HPC cluster with 16 nodes (each with dual Pentium II 400 MHz CPUs and 256 MB RAM) and I was too lazy to install those nodes manually. That's why I started the FAI project. With FAI you can install computers in a few minutes, taking them from scratch to a machine with a custom configuration that is ready to go for its users.

At that time Debian 2.1 aka slink was using kernel 2.0.36 and it was the first release using apt. Many things have happened since then.

In the beginning we wrote the first technical report about FAI, and a lot of documentation(link is external) was added afterwards. I gave more than 45 talks(link is external) about FAI all over the world. Over the past 25 years, there has been an average of more than one commit per day to the FAI software repository.

Several top500.org HPC clusters(link is external) were built using FAI and many companies are using FAI for their IT infrastructure or deploying Linux on their products using FAI. An overview of users can be found here(link is external).

Some major milestones of FAI are listed in the blog post(link is external) of the 20th anniversary.

What Happened in the Last 5 Years?
  • Live images can be created
  • Writeable data partition on USB sticks
  • FAIme web service creates custom live ISOs
  • Support for Alpine Linux and Arch Linux package managers
  • Automatic detection of a local config space
  • Live and installation images for Debian for new hardware using a backports kernel or using the Debian testing release
  • The FAIme web service has created more than 30,000 customized ISOs

Currently, I'm preparing for the next FAI release and I still have ideas for new features.

Thanks for all your feedback, which helped a lot in making FAI a successful project.

About FAI

FAI is a tool for unattended mass deployment of Linux. It's a system to install and configure Linux systems and software packages on computers as well as virtual machines, from small labs to large-scale infrastructures like clusters and cloud environments. You can take one or more virgin PCs, turn on the power, and after a few minutes the systems are installed and completely configured to your exact needs, without any interaction necessary.

Categories: FLOSS Project Planets

LostCarPark Drupal Blog: Drupal Advent Calendar day 23 - AI Track(link is external)

Planet Drupal - Mon, 2024-12-23 04:00
By james, Mon 12/23/2024, 09:00

Welcome back for the penultimate door of this year’s Drupal Advent Calendar, and today we’ve recruited the legendary Mike Anello to bring us up to speed on a big topic, the AI track of Drupal CMS.

The stated goal of the AI track is to make it easier for non-technical users to build and extend their sites. It is really interesting to note that this is mainly geared towards admin-facing UI, not site user-facing AI. With that in mind, let's take a look at what is included (so far!).

AI generated alternate text for images

With virtually no configuration (other than entering your LLM API key) the…

Categories: FLOSS Project Planets

Python Bytes: #415 Just put the fries in the bag bro(link is external)

Planet Python - Mon, 2024-12-23 03:00
Topics covered in this episode:
  • dbos-transact-py
  • Typed Python in 2024: Well adopted, yet usability challenges persist
  • RightTyper
  • Lazy self-installing Python scripts with uv
  • Extras
  • Joke

Watch on YouTube

About the show

Sponsored by us! Support our work through:
  • Our courses at Talk Python Training
  • The Complete pytest Course
  • Patreon Supporters

Connect with the hosts
  • Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
  • Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
  • Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too.

Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

Michael #1: dbos-transact-py
  • DBOS Transact is a Python library providing ultra-lightweight durable execution.
  • Durable execution means your program is resilient to any failure.
  • If it is ever interrupted or crashes, all your workflows will automatically resume from the last completed step.
  • Under the hood, DBOS Transact works by storing your program's execution state (which workflows are currently executing and which steps they've completed) in a Postgres database.
  • Incredibly fast, for example 25x faster than AWS Step Functions.

Brian #2: Typed Python in 2024: Well adopted, yet usability challenges persist
  • Aaron Pollack on the Engineering at Meta blog
  • "Overall findings:
      • 88% of respondents "Always" or "Often" use types in their Python code.
      • IDE tooling, documentation, and catching bugs are drivers for the high adoption of types in survey responses.
      • The usability of types and ability to express complex patterns still are challenges that leave some code unchecked.
      • Latency in tooling and lack of types in popular libraries are limiting the effectiveness of type checkers.
      • Inconsistency in type check implementations and poor discoverability of the documentation create friction in onboarding types into a project and seeking help when using the tools."
  • Notes:
      • Seems to be a different survey than the 2023 (current) dev survey; different time frame and results (July 29 - Oct 8, 2024).

Michael #3: RightTyper
  • A fast and efficient type assistant for Python, including tensor shape inference

Brian #4: Lazy self-installing Python scripts with uv
  • Trey Hunner
  • Creating your own ~/bin full of single-file command line scripts is common for *nix folks, still powerful but underutilized on Mac, and trickier but still useful on Windows.
  • Python has been difficult in the past to use for standalone scripts if you need dependencies, but that's no longer the case with uv.
  • Trey walks through user scripts (*nix and Mac):
      • Using #! for scripts that don't have dependencies
      • Using #! with uv run --script and /// script for dependencies
      • Discussion about how uv handles that.

Extras

Brian:
  • Courses at pythontest.com
      • If you live in a place (or are in a place in your life) where these prices are too much, let me know. I had a recent request and I really appreciate it.

Michael:
  • Python 3.14 update released
  • Top episodes of 2024 at Talk Python
  • Universal check for updates on macOS:
      • Settings > Keyboard > Keyboard shortcuts > App shortcuts > +
      • Then add a shortcut for a single app: ^U and the menu title.

Joke: Python with rizz
Categories: FLOSS Project Planets

Zato Blog: Using OAuth in API Integrations(link is external)

Planet Python - Mon, 2024-12-23 03:00
2024-12-23, by Dariusz Suchojad

OAuth is often employed in processes requiring permissions to be granted to frontend applications and end users. Yet, what we typically need in API systems integrations is a way to secure connections between the integration middleware and backend systems without a need for any ongoing human interactions.

OAuth can be a good choice for that scenario and this article shows how it can be achieved in Python, with backend systems using REST and HL7 FHIR.

What we would like to have

Let's say we have a typical integration scenario as in the diagram below:

  • External systems and applications invoke the interoperability layer (Zato) which is expected to further invoke a few backend systems, e.g. a REST and HL7 FHIR one so as to return a combined result of backend API invocations. It does not matter what technology the client systems use, i.e. whether they are REST ones or not.

  • The interoperability layer needs to identify itself with the backend systems before it is allowed to invoke them - they need to make sure that it really is Zato and that it accesses only the resources allowed.

  • An OAuth server issues time-based access tokens, which are simple strings, like web browser session cookies, confirming that such and such bearer of the said token is allowed to make such and such requests. Note that the tokens have an explicit expiration time, e.g. they will become invalid after one hour. Also observe that Zato stores the tokens as-is, they are genuinely opaque strings.

  • If a client system invokes the interoperability layer, the layer will obtain a token from the OAuth server and keep it in an internal cache. Next, Zato will invoke the backend systems, bearing the token among other HTTP headers. Each invoked backend system will extract the token from the incoming request and validate it.

What the validation looks like in practice is something that Zato will not be aware of, because it treats the token as an opaque string. In practice, if the token is self-contained (e.g. JWT data), the system may validate it on its own; if it is not self-contained, the system may invoke an introspection endpoint on the OAuth server to validate the access token from Zato.
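To make the introspection path concrete, here is a minimal Python sketch of an RFC 7662-style token introspection exchange. The endpoint URL is a placeholder, the HTTP call itself is left out, and a real request would also need client authentication; only the shape of the request body and response is shown:

```python
import json
import urllib.parse

def build_introspection_request(token, endpoint='https://oauth.example.com/introspect'):
    """Return (url, urlencoded_body) for an RFC 7662 introspection call.

    The endpoint is a placeholder; real calls also authenticate the
    caller (e.g. HTTP Basic), which is omitted in this sketch.
    """
    body = urllib.parse.urlencode({'token': token})
    return endpoint, body

def is_token_active(response_json):
    # Per RFC 7662, only the "active" boolean is guaranteed to be present
    # in the introspection response.
    return json.loads(response_json).get('active', False)

url, body = build_introspection_request('opaque-token-123')
print(body)                                            # token=opaque-token-123
print(is_token_active('{"active": true, "scope": "read"}'))   # True
print(is_token_active('{"active": false}'))                   # False
```

The backend system POSTs the opaque token to the introspection endpoint and trusts the OAuth server's verdict, which is exactly why Zato itself never needs to understand the token's contents.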

Once the validation succeeds, the backend system will reply with the business data and the interoperability layer will combine the results for the calling application's benefit.

In subsequent requests, the same access token will be reused by Zato with the same flow of messages as previously. However, if the cached token expires, Zato will request a new one from the OAuth server - this will be transparent to the calling application - and the flow will resume.

In OAuth terminology, what is described above has specific names: the overall flow of messages between Zato and the OAuth server is called the "Client Credentials Flow", and Zato is considered a "client" from the OAuth server's perspective.
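The fetch-cache-refresh cycle described above can be sketched in a few lines of Python. This is an illustration only, not Zato's actual implementation: the fetch_token callable and the access_token/expires_in response fields are assumptions modelled on a typical client-credentials token endpoint reply.

```python
import time

class TokenCache:
    """Cache a client-credentials access token, refreshing it near expiry.

    The fetch_token callable is injected so the HTTP layer can be swapped
    out; a real one would POST grant_type=client_credentials (plus client
    id, secret and scopes) to the OAuth server's token endpoint and return
    the parsed JSON reply, e.g. {"access_token": ..., "expires_in": ...}.
    """

    def __init__(self, fetch_token, leeway=30):
        self.fetch_token = fetch_token
        self.leeway = leeway        # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or time.time() >= self._expires_at - self.leeway:
            response = self.fetch_token()
            self._token = response['access_token']  # opaque string, stored as-is
            self._expires_at = time.time() + response['expires_in']
        return self._token

# Stand-in fetcher so the sketch runs without a real OAuth server
calls = []
def fake_fetch():
    calls.append(1)
    return {'access_token': f'token-{len(calls)}', 'expires_in': 3600}

cache = TokenCache(fake_fetch)
print(cache.get())      # token-1
print(cache.get())      # token-1 (reused, no second fetch)
print(len(calls))       # 1
```

Because refresh happens inside get(), the calling code never notices an expired token, which mirrors how the renewal is transparent to applications invoking Zato.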

Configuring OAuth

First, we need to create an OAuth security definition that contains the OAuth server's connection details. In this case, the server is Okta(link is external). Note the scopes field - it is a list of permissions ("scopes") that Zato will be able to make use of.

What exactly the list of scopes should look like is something to be coordinated with the people who are responsible for the configuration of the OAuth server. If that is you personally, simply ensure that what is in the OAuth server and in Zato is in sync.

Calling REST

To invoke REST services, fill out a form as below, pointing the "Security" field to the newly created OAuth definition. This suffices for Zato to understand when and how to obtain new tokens from the underlying OAuth server.

Here is sample code to invoke a backend REST system - note that we merely refer to a connection by its name, without having to think about security at all. It is Zato that knows how to get and use OAuth tokens as required.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class GetClientBillingPlan(Service):
    """ Returns a billing plan for the input client.
    """
    def handle(self):

        # In a real service, this would be read from input
        payload = {'client_id': 123}

        # Get a connection to the server ..
        conn = self.out.rest['REST Server'].conn

        # .. invoke it ..
        response = conn.get(self.cid, payload)

        # .. and handle the response here.
        ...

Calling HL7 FHIR

Similarly to REST endpoints, to invoke HL7 FHIR servers, fill out a form as below and let the "Security" field point to the OAuth definition just created. This will suffice for Zato to know when and how to use tokens received from the underlying OAuth server.

Here is sample code to invoke a FHIR server system - as with REST servers above, observe that we only refer to a connection by its name and Zato takes care of OAuth.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class GetPractitioner(Service):
    """ Returns a practitioner matching input data.
    """
    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'My EHR'

        # In a real service, this would be read from input
        practitioner_id = 456

        # Get a connection to the server ..
        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # Get a reference to a FHIR resource ..
            practitioners = client.resources('Practitioner')

            # .. look up the practitioner ..
            result = practitioners.search(active=True, _id=practitioner_id).get()

            # .. and handle the response here.
            ...

What about the API clients?

One aspect omitted above is the initial API clients, and this is on purpose. How they invoke Zato, using what protocols and security mechanisms, and how responses are built from their input data: all of this is completely independent of how Zato uses OAuth in its own communication with backend systems.

All of these aspects can and will be independent in practice, e.g. clients will use Basic Auth rather than OAuth. Or perhaps the clients will use AMQP, Odoo, SAP, or IBM MQ, without any HTTP, or maybe there will be no explicit API invocations at all and what we call "clients" will actually be CSV files in a shared directory that your services will be scheduled to pick up periodically. Yet, once more, regardless of what makes the input data available, the backend OAuth mechanism will work independently of it all.



Next steps

API programming screenshots(link is external)
➤ Python API integration tutorial(link is external)
➤ More API programming examples in Python(link is external)
➤ Visit the support center(link is external) for more articles and FAQ
Open-source iPaaS(link is external) in Python

More blog posts(link is external)
Categories: FLOSS Project Planets

The Drop Times: Hope and Progress Ahead(link is external)

Planet Drupal - Mon, 2024-12-23 02:17

As 2024 comes to a close, it’s time to reflect on an inspiring year for the Drupal community. This year marked the beginning of the transformative Starshot Initiative, setting an ambitious vision for the future of Drupal. Among the highlights was the highly anticipated release of Drupal 11, a milestone that brought enhanced capabilities, improved user experience, and reinforced Drupal’s position as a leading open-source content management system.  

This year wasn't only about technical achievements—it was a year of hope and collaboration too. The community has come together, embracing challenges with resilience and charting a path forward with optimism. Much like the spirit of Christmas, this year’s developments remind us of the joy in beginnings and the promise of what lies ahead.  

As we step into this festive season, let’s celebrate the milestones we’ve achieved and the community that made it all possible. Let’s also look forward to an even brighter future, one filled with innovation, inclusivity, and growth for Drupal. Here’s to a new year brimming with possibilities and the collective hope that Drupal continues to shine even brighter in 2025. Happy holidays!

DrupalCon Singapore 2024Discover DrupalEventsFree SoftwareOrganization News

To get timely updates, follow us on LinkedIn, Twitter and Facebook. You can also join us on Drupal Slack at #thedroptimes.

Categories: FLOSS Project Planets

LN Webworks: LN Webworks at DrupalCon Singapore 2024(link is external)

Planet Drupal - Mon, 2024-12-23 01:09

It's the second DrupalCon for LN Webworks, filled with incredible memories and the opportunity to forge new connections. This time, the event is hosted at the prestigious ParkRoyal Collection Marina Bay Hall. Luckily, our hotel—Carlton City Hotel(link is external)—is just a stone's throw away, making it a quick 5-minute cab ride to the venue. Here's a glimpse of my hotel room view, showcasing the breathtaking skyline of the tallest buildings!

Categories: FLOSS Project Planets

Russ Allbery: Review: The House That Walked Between Worlds(link is external)

Planet Debian - Sun, 2024-12-22 22:33

Review: The House That Walked Between Worlds, by Jenny Schwartz

Series: Uncertain Sanctuary #1
Publisher: Jenny Schwartz
Copyright: 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 215

The House That Walked Between Worlds is the first book of a self-published trilogy of... hm. Space fantasy? Pure fantasy with a bit of science fiction thrown in for flavor? Something like that. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata.

Kira Aist is a doctor. She's also a witch and a direct descendant of Baba Yaga. Her Russian grandmother warned her to never use magic and never reveal who she was because people would hunt her and her family if she did. She broke the rule to try to save a child, her grandmother was right, and now multiple people are dead, including her parents. As the story opens, she's deep in the wilds of New Zealand in a valley with buried moa bones, summoning her House so that she can flee Earth.

Kira's first surprise is that her House is not the small hut that she was expecting from childhood visits to Baba Yaga. It's larger. A lot larger: an obsidian castle with nine towers and legs that resemble dragons rather than the moas whose magic she drew on. Her magic apparently had a much different idea of what she needs than she did.

Her second surprise is that her magical education is highly incomplete, and she is not the witch that she thought she was. Her ability to create a House means that she's a sorcerer, the top tier of magical power in a hierarchy about which she knows essentially nothing. Thankfully the House has a library, but Kira has a lot to learn about the universe and her place in it.

I picked this up because the premise sounded a little like the Innkeeper novels(link is external), and since another novel in that series does not appear to be immediately forthcoming, I went looking elsewhere for my cozy sentient building fix. The House That Walked Between Worlds is nowhere near as well-written (or, frankly, coherent) as the Innkeeper books, but it did deliver some of the same vibes.

You should know going in that there isn't much in the way of a plot. Schwartz invented an elaborate setting involving archetype worlds inhabited by classes of mythological creatures that in some mystical sense surround a central system called Qaysar. These archetype worlds spawn derived worlds, each of which seems to be its own dimension, although the details are a bit murky to me. The world Kira thinks of as Earth is just one of the universes branched off of an archetypal Earth, and is the only one of those branchings where the main population is human. The other Earth-derived worlds are populated by the Dinosaurians and the Neanderthals. Similarly, there is a Fae world that branches into Elves and Goblins, an Epic world that branches into Shifters, Trolls, and Kobolds, and so forth. Travel between these worlds is normally by slow World Walker Caravans, but Houses break the rules of interdimensional travel in ways that no one entirely understands.

If your eyes are already starting to glaze over, be warned there's a lot of this. The House That Walked Between Worlds is infodumping mixed with vibes, and I think you have to enjoy the setting, or at least the sheer enthusiasm of Schwartz's presentation of it, to get along with this book. The rest of the story is essentially Kira picking up strays: first a dangerous-looking elf cyborg, then a juvenile giant cat (because of course there's a pet fantasy space cat; it's that sort of book), and then a charming martial artist who I'm fairly sure is up to no good. Kira is entirely out of her depth and acting on instinct, which luckily plays into stereotypes of sorcerers as mysterious and unpredictable. It also helps that her magic is roughly "anything she wants to happen, happens."

This is, in other words, not a tightly-crafted story with coherent rules and a sense of risk and danger. It's a book that succeeds or fails almost entirely on how much you like the main characters and enjoy the world-building. Thankfully, I thought the characters were fun, if not (so far) all that deep. Kira deals with her trauma without being excessively angsty and leans into her new situation with a chaotic decisiveness that I found charming. The cyborg elf is taciturn and a bit inscrutable at first, but he grew on me, and thankfully this book does not go immediately to romance. Late in the book, Kira picks up a publicity expert, which was not at all the type of character that I was expecting and which I found delightful.

Most importantly, the House was exactly what I was looking for: impish, protective, mysterious, inhuman, and absurdly overpowered. I adore cozy sentient building stories, so I'm an easy audience for this sort of thing, but I'm already eager to read more about the House.

This is not great writing by any stretch, and you will be unsurprised that it's self-published. If you're expecting the polish and plot coherence of the Innkeeper stories, you'll be disappointed. But if you just want to spend some time with a giant sentient space-traveling mansion inhabited by unlikely misfits, and you don't mind large amounts of space fantasy infodumping, consider giving this a shot. I had fun with it and plan on reading the rest of the omnibus.

Followed by House in Hiding.

Rating: 6 out of 10

Categories: FLOSS Project Planets

Simon Josefsson: OpenSSH and Git on a Post-Quantum SPHINCS+(link is external)

Planet Debian - Sun, 2024-12-22 19:44

Are you aware that Git commits and tags(link is external) may be signed using OpenSSH(link is external)? Git signatures may be used to improve integrity and authentication of our software supply-chain. Popular signature algorithms include Ed25519, ECDSA and RSA. Did you consider that these algorithms may not be safe if someone builds a post-quantum computer?

As you may recall, I have earlier blogged about the efficient post-quantum key agreement mechanism called Streamlined NTRU Prime and its use in SSH(link is external) and I have attempted to promote the conservatively designed Classic McEliece in a similar way(link is external), although it remains to be adopted.

What post-quantum signature algorithms are available? There is an effort by NIST to standardize post-quantum algorithms, and they have a category for signature algorithms. According to wikipedia, after round three the selected algorithms(link is external) are CRYSTALS-Dilithium, FALCON and SPHINCS+. Of these, SPHINCS+(link is external) appears to be a conservative choice suitable for long-term digital signatures. Can we get this to work?

Recall that Git uses the ssh-keygen tool from OpenSSH to perform signing and verification. To refresh your memory, let’s study the commands that Git uses under the hood for Ed25519. First generate a Ed25519 private key:

jas@kaka:~$ ssh-keygen -t ed25519 -f my_ed25519_key -P ""
Generating public/private ed25519 key pair.
Your identification has been saved in my_ed25519_key
Your public key has been saved in my_ed25519_key.pub
The key fingerprint is:
SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ jas@kaka
The key's randomart image is:
+--[ED25519 256]--+
| .+=.E .. |
| oo=.ooo |
| . =o=+o . |
| =oO+o . |
| .=+S.= |
| oo.o o |
| . o . |
| ...o.+.. |
| .o.o.=**. |
+----[SHA256]-----+
jas@kaka:~$ cat my_ed25519_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQAAAJCeDotOng6L
TgAAAAtzc2gtZWQyNTUxOQAAACAWP/aZ8hzN0WNRMSpjzbgW1tJXNd2v6/dnbKaQt7iIBQ
AAAEBFRvzgcD3YItl9AMmVK4xDKj8NTg4h2Sluj0/x7aSPlhY/9pnyHM3RY1ExKmPNuBbW
0lc13a/r92dsppC3uIgFAAAACGphc0BrYWthAQIDBAU=
-----END OPENSSH PRIVATE KEY-----
jas@kaka:~$ cat my_ed25519_key.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF jas@kaka
jas@kaka:~$

Then let’s sign something with this key:

jas@kaka:~$ echo "Hello world!" > msg
jas@kaka:~$ ssh-keygen -Y sign -f my_ed25519_key -n my-namespace msg
Signing file msg
Write signature to msg.sig
jas@kaka:~$ cat msg.sig
-----BEGIN SSH SIGNATURE-----
U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAgFj/2mfIczdFjUTEqY824FtbSVz
Xdr+v3Z2ymkLe4iAUAAAAMbXktbmFtZXNwYWNlAAAAAAAAAAZzaGE1MTIAAABTAAAAC3Nz
aC1lZDI1NTE5AAAAQLmWsq05tqOOZIJqjxy5ZP/YRFoaX30lfIllmfyoeM5lpVnxJ3ZxU8
SF0KodDr8Rtukg2N3Xo80NGvZOzbG/9Aw=
-----END SSH SIGNATURE-----
jas@kaka:~$

Now let’s create a list of trusted public-keys and associated identities:

jas@kaka:~$ echo 'my.name@example.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBY/9pnyHM3RY1ExKmPNuBbW0lc13a/r92dsppC3uIgF' > allowed-signers
jas@kaka:~$

Then let’s verify the message we just signed:

jas@kaka:~$ cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with ED25519 key SHA256:fDa5+jmC2+/aiLhWeWA3IV8Wj6yMNTSuRzqUZlIGlXQ
jas@kaka:~$
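For context, Git drives these ssh-keygen calls through a few configuration settings. A minimal sketch of SSH-based commit signing, assuming the key and allowed-signers file generated above (paths are examples):

```shell
# Tell Git to sign with SSH keys instead of GnuPG
git config --global gpg.format ssh
git config --global user.signingkey ~/my_ed25519_key

# Point verification at the allowed-signers file created above
git config --global gpg.ssh.allowedSignersFile ~/allowed-signers

# Sign a commit (or set commit.gpgsign=true to sign by default)
git commit -S -m "A signed commit"
git log --show-signature -1
```

With that in place, git log and git verify-commit invoke ssh-keygen -Y verify behind the scenes, exactly as in the manual commands shown above.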

I have implemented support for SPHINCS+ in OpenSSH(link is external). This is early work, but I wanted to announce it to get discussion of some of the details going and to make people aware of it.

What better way to demonstrate SPHINCS+ support in OpenSSH than by validating the Git commit that implements it, using its own implementation?

Here is how to proceed, first get a suitable development environment up and running. I’m using a Debian(link is external) container launched in a protected environment using podman(link is external).

jas@kaka:~$ podman run -it --rm debian:stable

Then install the necessary build dependencies for OpenSSH.

# apt-get update
# apt-get install git build-essential autoconf libz-dev libssl-dev

Now clone my OpenSSH branch with the SPHINCS+ implementation and build it. You may browse the commit on GitHub(link is external) first if you are curious.

# cd
# git clone https://github.com/jas4711/openssh-portable.git -b sphincsp
# cd openssh-portable
# autoreconf -fvi
# ./configure
# make

Configure a Git allowed signers list with my SPHINCS+ public key (make sure to keep the public key on one line with the whitespace being one ASCII SPC character):

# mkdir -pv ~/.ssh
# echo 'simon@josefsson.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAECI6eacTxjB36xcPtP0ZyxJNIGCN350GluLD5h0KjKDsZLNmNaPSFH2ynWyKZKOF5eRPIMMKSCIV75y+KP9d6w3' > ~/.ssh/allowed_signers
# git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers

Then verify the commit using the newly built ssh-keygen(link is external) binary:

# PATH=$PWD:$PATH
# git log -1 --show-signature
commit ce0b590071e2dc845373734655192241a4ace94b (HEAD -> sphincsp, origin/sphincsp)
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
Author: Simon Josefsson <simon@josefsson.org>
Date:   Tue Dec 3 18:44:25 2024 +0100

    Add SPHINCS+.
# git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ
#

Yay!

So what are some considerations?

SPHINCS+ comes in many different variants. First it comes with three security levels approximately matching 128/192/256 bit symmetric key strengths. Second choice is between the SHA2-256, SHAKE256 (SHA-3) and Haraka hash algorithms. Final choice is between a “robust” and a “simple” variant with different security and performance characteristics. To get going, I picked the “sphincss256sha256robust” SPHINCS+ implementation from SUPERCOP 20241022(link is external). There is a good size comparison table in the sphincsplus(link is external) implementation, if you want to consider alternative variants.

SPHINCS+ public-keys are really small, as you can see in the allowed signers file. This is really good because they are handled by humans and often by cut’n’paste.

What about private keys? They are slightly longer than Ed25519 private keys but shorter than typical RSA private keys.

# ssh-keygen -t sphincsplus -f my_sphincsplus_key -P ""
Generating public/private sphincsplus key pair.
Your identification has been saved in my_sphincsplus_key
Your public key has been saved in my_sphincsplus_key.pub
The key fingerprint is:
SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg root@ad600ff56253
The key's randomart image is:
+[SPHINCSPLUS 256-+
| . .o            |
|o . oo.          |
| = .o.. o        |
|o o o o . . o    |
|.+ = S o o      .|
|Eo= . + . . .. . |
|=*.+ o . . oo .  |
|B+= o o.o. .     |
|o*o ... .oo.     |
+----[SHA256]-----+
# cat my_sphincsplus_key.pub
ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7 root@ad600ff56253
# cat my_sphincsplus_key
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAYwAAABtzc2gtc3
BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9slu
L/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAQidiIwanYiMGgAAAB
tzc2gtc3BoaW5jc3BsdXNAb3BlbnNzaC5jb20AAABAJbQF9VYWfKXVvRYAvjXTOkHy8VV1
Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpSvYgZvUkB2WVWGXXZBCfRdQ+wAAAIAbwBxEhA
NYzITN6VeCMqUyvw/59JM+WOLXBlRbu3R8qS7ljc4qFVWUtmhy8B3t9e4jrhdO6w0n5I4l
mnLnBi2hJbQF9VYWfKXVvRYAvjXTOkHy8VV1Wn9b9sluL/rZNjo+ZcZlYGX7X09+xjjDpS
vYgZvUkB2WVWGXXZBCfRdQ+wAAABFyb290QGFkNjAwZmY1NjI1MwECAwQ=
-----END OPENSSH PRIVATE KEY-----
#

Signature size? Here is the challenge: for this variant, the size is around 29 kB, or close to 600 lines of base64 data:

# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | head -10
tree ede42093e7d5acd37fde02065a4a19ac1f418703
parent 826483d51a9fee60703298bbf839d9ce37943474
author Simon Josefsson <simon@josefsson.org> 1733247865 +0100
committer Simon Josefsson <simon@josefsson.org> 1734907869 +0100
gpgsig -----BEGIN SSH SIGNATURE-----
 U1NIU0lHAAAAAQAAAGMAAAAbc3NoLXNwaGluY3NwbHVzQG9wZW5zc2guY29tAAAAQIjp5p
 xPGMHfrFw+0/RnLEk0gYI3fnQaW4sPmHQqMoOxks2Y1o9IUfbKdbIpko4Xl5E8gwwpIIhX
 vnL4o/13rDcAAAADZ2l0AAAAAAAAAAZzaGE1MTIAAHSDAAAAG3NzaC1zcGhpbmNzcGx1c0
 BvcGVuc3NoLmNvbQAAdGDHlobgfgkKKQBo3UHmnEnNXczCMNdzJmeYJau67QM6xZcAU+d+
 2mvhbksm5D34m75DWEngzBb3usJTqWJeeDdplHHRe3BKVCQ05LHqRYzcSdN6eoeZqoOBvR
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | tail -5
 ChvXUk4jfiNp85RDZ1kljVecfdB2/6CHFRtxrKHJRDiIavYjucgHF1bjz0fqaOSGa90UYL
 RZjZ0OhdHOQjNP5QErlIOcZeqcnwi0+RtCJ1D1wH2psuXIQEyr1mCA==
 -----END SSH SIGNATURE-----

Add SPHINCS+.
# git cat-file -p ce0b590071e2dc845373734655192241a4ace94b | wc -l
579
#

What about performance? Verification is really fast:

# time git verify-commit ce0b590071e2dc845373734655192241a4ace94b
Good "git" signature for simon@josefsson.org with SPHINCSPLUS key SHA256:rkAa0fX0lQf/7V7QmuJHSI44L/PAPPsdWpis4nML7EQ

real	0m0.010s
user	0m0.005s
sys	0m0.005s
#

On this machine, verifying an Ed25519 signature is a couple of times slower, and needs around 0.07 seconds.

Signing is slower; it takes a bit over 2 seconds on my laptop.

# echo "Hello world!" > msg
# time ssh-keygen -Y sign -f my_sphincsplus_key -n my-namespace msg
Signing file msg
Write signature to msg.sig

real	0m2.226s
user	0m2.226s
sys	0m0.000s
# echo 'my.name@example.org ssh-sphincsplus@openssh.com AAAAG3NzaC1zcGhpbmNzcGx1c0BvcGVuc3NoLmNvbQAAAEAltAX1VhZ8pdW9FgC+NdM6QfLxVXVaf1v2yW4v+tk2Oj5lxmVgZftfT37GOMOlK9iBm9SQHZZVYZddkEJ9F1D7' > allowed-signers
# cat msg | ssh-keygen -Y verify -f allowed-signers -I my.name@example.org -n my-namespace -s msg.sig
Good "my-namespace" signature for my.name@example.org with SPHINCSPLUS key SHA256:4rNfXdmLo/ySQiWYzsBhZIvgLu9sQQz7upG8clKziBg
#

Welcome to our new world of Post-Quantum safe digital signatures of Git commits, and Happy Hacking!

Categories: FLOSS Project Planets

#! code: Drupal 11: The Queues API(link is external)

Planet Drupal - Sun, 2024-12-22 14:18

I've talked a lot about the Batch API in Drupal recently, and I've mentioned that it is built upon the Queue API, but I haven't gone any deeper than that. I wrote about the Queues API in Drupal 7(link is external), but thought I would bring my understanding up to date.

A queue is a data construct that uses a "first in, first out" (or FIFO) flow, where items are processed in the order that they were added to the queue. This system has a lot of different uses, but it is most important when it comes to asynchronous data processing. Drupal and many modules make use of the queue system to process information behind the scenes.

The difference between a queue and a batch is that a batch is for time-sensitive operations where the user is expecting something to happen. A queue, on the other hand, is more for data processing that needs to happen behind the scenes, or without any user triggering the process.

Batches also tend to be stateless, meaning that if the batch fails halfway through, it is sometimes difficult to restart the batch from the same point. It is possible if you create your batches in just the right way, but this is actually a little rare. A queue manages this much better by keeping all of the items in the queue and then giving you options for what to do with each item as you process it. This means that you might put a queue item back into the queue for later processing if it failed.
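Drupal's Queue API is PHP, but the claim/re-queue flow described above is language-agnostic. Here is a minimal conceptual sketch in Python (an illustration of the pattern, not Drupal's actual API; a real queue would also cap retries):

```python
from collections import deque

def process_queue(items, worker):
    """Process items first in, first out; re-queue an item if its worker call fails."""
    queue = deque(items)  # FIFO: append on the right, pop from the left
    processed = []
    while queue:
        item = queue.popleft()  # claim the oldest item
        try:
            processed.append(worker(item))
        except Exception:
            queue.append(item)  # release the item back for a later attempt
    return processed

flaky = {"b"}  # make "b" fail on its first attempt

def worker(item):
    if item in flaky:
        flaky.discard(item)
        raise RuntimeError("transient failure")
    return item.upper()

print(process_queue(["a", "b", "c"], worker))  # ['A', 'C', 'B'] -- "b" is retried after the rest
```

The failed item keeps its place in line behind everything that was already queued, which is exactly the behaviour the Batch API cannot easily give you.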

In this article I will look at the Queue API in Drupal 11, how it is used and what sort of best practices are used when using the API.

Creating A Queue

To create a queue in Drupal, you need to get an instance of the 'queue' service. This is a factory that can be used to create and manage your queues inside Drupal. By default, all queues in Drupal are database queues (handled via the queue.database default queue factory), although this can be changed with configuration settings.

Read more(link is external)

Categories: FLOSS Project Planets

Freelock Blog: Automatically set fields on content(link is external)

Planet Drupal - Sun, 2024-12-22 10:00
Tags: Drupal(link is external), ECA(link is external), Drupal Planet(link is external)

One of the easiest things to do with the Events, Conditions, and Actions (ECA) module(link is external) is to set values on fields. You can populate forms with names and addresses from a user's profile. You can set date values to offsets from the current time. You can perform calculations and store the result in a summary field, which can make using them in views much more straightforward.

Categories: FLOSS Project Planets

Real Python: Strings and Character Data in Python(link is external)

Planet Python - Sun, 2024-12-22 09:00

Python strings are a sequence of characters used for handling textual data. You can create strings in Python using quotation marks or the str() function, which converts objects into strings. Strings in Python are immutable, meaning once you define a string, you can’t change it.

To access specific elements of a string, you use indexing, where indices start at 0 for the first character. You specify an index in square brackets, such as "hello"[0], which gives you "h". For string interpolation you can use curly braces {} in a string.

By the end of this tutorial, you’ll understand that:

  • A Python string is a sequence of characters used for textual data.
  • The str() function converts objects to their string representation.
  • You can use curly braces {} to insert values in a Python string.
  • You access string elements in Python using indexing with square brackets.
  • You can join all elements in a list into a single string using .join().
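Those points can be sketched in a few lines (the values here are illustrative):

```python
# str() converts objects to their string representation
print(str(42))  # '42' as text

# f-strings use curly braces {} to interpolate values
name = "Python"
print(f"Hello, {name}!")

# Indexing starts at 0 for the first character
print("hello"[0])  # h

# .join() combines a list of strings into a single string
print("-".join(["a", "b", "c"]))  # a-b-c
```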

You’ll explore creating strings with string literals and functions, using operators and built-in functions with strings, indexing and slicing techniques, and methods for string interpolation and formatting. These skills will help you manipulate and format textual data in your Python programs effectively.

To get the most out of this tutorial, you should have a good understanding of core Python concepts, including variables(link is external), functions(link is external), and operators and expressions(link is external).

Get Your Code: Click here to download the free sample code(link is external) that shows you how to work with strings and character data in Python.

Take the Quiz: Test your knowledge with our interactive “Python Strings and Character Data” quiz. You’ll receive a score upon completion to help you track your learning progress:

(link is external)

Interactive Quiz

Python Strings and Character Data(link is external)

This quiz will test your understanding of Python's string data type and your knowledge about manipulating textual data with string objects. You'll cover the basics of creating strings using literals and the str() function, applying string methods, using operators and built-in functions, and more!

Getting to Know Strings and Characters in Python(link is external)

Python provides the built-in string (str)(link is external) data type to handle textual data. Other programming languages, such as Java(link is external), have a character data type for single characters. Python doesn’t have that. Single characters are strings of length one.

In practice, strings are immutable(link is external) sequences of characters. This means you can’t change a string once you define it. Any operation that modifies a string will create a new string instead of modifying the original one.
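A quick sketch of what immutability means in practice: assigning to a character raises an error, while methods like .upper() return a new string and leave the original untouched:

```python
greeting = "hello"

# In-place modification is not allowed: str is immutable
try:
    greeting[0] = "H"
except TypeError as exc:
    print(exc)  # 'str' object does not support item assignment

# String "modifications" build new objects instead
shouted = greeting.upper()
print(shouted)   # HELLO
print(greeting)  # hello -- the original is unchanged
```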

A string is also a sequence(link is external), which means that the characters in a string have a consecutive order. This feature allows you to access characters using integer indices that start with 0. You’ll learn more about these concepts in the section about indexing strings(link is external). For now, you’ll learn about how to create strings in Python.

Creating Strings in Python(link is external)

There are different ways to create strings in Python. The most common practice is to use string literals(link is external). Because strings are everywhere and have many use cases, you’ll find a few different types of string literals. There are standard literals, raw literals, and formatted literals.

Additionally, you can use the built-in str() function to create new strings from other existing objects.

In the following sections, you’ll learn about the multiple ways to create strings in Python and when to use each of them.

Standard String Literals(link is external)

A standard string literal is just a piece of text or a sequence(link is external) of characters that you enclose in quotes. To create single-line strings, you can use single ('') and double ("") quotes:

Python
>>> 'A single-line string in single quotes'
'A single-line string in single quotes'
>>> "A single-line string in double quotes"
'A single-line string in double quotes'

In the first example, you use single quotes to delimit the string literal. In the second example, you use double quotes.

Note: Python’s standard REPL(link is external) displays string objects using single quotes even though you create them using double quotes.

You can define empty strings using quotes without placing characters between them:

Python
>>> ""
''
>>> ''
''
>>> len("")
0

An empty string doesn’t contain any characters, so when you use the built-in len()(link is external) function with an empty string as an argument, you get 0 as a result.

To create multiline strings, you can use triple-quoted strings. In this case, you can use either single or double quotes:

Read the full article at https://realpython.com/python-strings/ »(link is external)

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples(link is external) ]

Categories: FLOSS Project Planets

Real Python: Working With JSON Data in Python(link is external)

Planet Python - Sun, 2024-12-22 09:00

Python’s json module provides you with the tools you need to effectively handle JSON data. You can convert Python data types to a JSON-formatted string with json.dumps() or write them to files using json.dump(). Similarly, you can read JSON data from files with json.load() and parse JSON strings with json.loads().

JSON, or JavaScript Object Notation, is a widely-used text-based format for data interchange. Its syntax resembles Python dictionaries but with some differences, such as using only double quotes for strings and lowercase for Boolean values. With built-in tools for validating syntax and manipulating JSON files, Python makes it straightforward to work with JSON data.
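Those syntax differences are easy to see in a round trip with json.dumps() and json.loads() (the sample data is illustrative):

```python
import json

data = {"name": "example", "active": True, "tags": None}

# Python -> JSON string: double quotes, lowercase true, null for None
text = json.dumps(data)
print(text)  # {"name": "example", "active": true, "tags": null}

# JSON string -> Python: back to True and None
restored = json.loads(text)
print(restored == data)  # True
```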

By the end of this tutorial, you’ll understand that:

  • JSON in Python is handled using the standard-library json module, which allows for data interchange between JSON and Python data types.
  • JSON is a good data format to use with Python as it’s human-readable and straightforward to serialize and deserialize, which makes it ideal for use in APIs and data storage.
  • You write JSON with Python using json.dump() to serialize data to a file.
  • You can minify and prettify JSON using Python’s json.tool module.
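The minify/prettify point also works from Python code with json.dumps() arguments; python -m json.tool is the command-line equivalent. A small sketch:

```python
import json

data = {"greeting": "Hello, world!", "count": 3}

# Prettify: indent adds newlines and indentation
pretty = json.dumps(data, indent=2)
print(pretty)

# Minify: compact separators drop all optional whitespace
compact = json.dumps(data, separators=(",", ":"))
print(compact)  # {"greeting":"Hello, world!","count":3}
```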

Since its introduction, JSON(link is external) has rapidly emerged as the predominant standard for the exchange of information. Whether you want to transfer data with an API(link is external) or store information in a document database(link is external), it’s likely you’ll encounter JSON. Fortunately, Python provides robust tools to facilitate this process and help you manage JSON data efficiently.

While JSON is the most common format for data distribution, it’s not the only option for such tasks. Both XML(link is external) and YAML(link is external) serve similar purposes. If you’re interested in how the formats differ, then you can check out the tutorial on how to serialize your data with Python(link is external).

Free Bonus: Click here to download the free sample code(link is external) that shows you how to work with JSON data in Python.

Take the Quiz: Test your knowledge with our interactive “Working With JSON Data in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:

(link is external)

Interactive Quiz

Working With JSON Data in Python(link is external)

In this quiz, you'll test your understanding of working with JSON in Python. By working through this quiz, you'll revisit key concepts related to JSON data manipulation and handling in Python.

Introducing JSON(link is external)

The acronym JSON stands for JavaScript Object Notation(link is external). As the name suggests, JSON originated from JavaScript(link is external). However, JSON has transcended its origins to become language-agnostic and is now recognized as the standard(link is external) for data interchange.

The popularity of JSON can be attributed to native support by the JavaScript language, resulting in excellent parsing performance in web browsers. On top of that, JSON’s straightforward syntax allows both humans and computers to read and write JSON data effortlessly.

To get a first impression of JSON, have a look at this example code:

JSON hello_world.json

{
  "greeting": "Hello, world!"
}

You’ll learn more about the JSON syntax later in this tutorial. For now, recognize that the JSON format is text-based. In other words, you can create JSON files using the code editor of your choice. Once you set the file extension to .json, most code editors display your JSON data with syntax highlighting out of the box:

(link is external)

The screenshot above shows how VS Code(link is external) displays JSON data using the Bearded color theme(link is external). You’ll have a closer look at the syntax of the JSON format next!

Examining JSON Syntax(link is external)

In the previous section, you got a first impression of how JSON data looks. And as a Python developer, the JSON structure probably reminds you of common Python data structures(link is external), like a dictionary that contains a string as a key and a value. If you understand the syntax of a dictionary(link is external) in Python, you already know the general syntax of a JSON object.

Note: Later in this tutorial, you’ll learn that you’re free to use lists and other data types at the top level of a JSON document.

The similarity between Python dictionaries and JSON objects is no surprise. One idea behind establishing JSON as the go-to data interchange format was to make working with JSON as convenient as possible, independently of which programming language you use:

[A collection of key-value pairs and arrays] are universal data structures. Virtually all modern programming languages support them in one form or another. It makes sense that a data format that is interchangeable with programming languages is also based on these structures. (Source(link is external))

To explore the JSON syntax further, create a new file named hello_frieda.json and add a more complex JSON structure as the content of the file:

JSON hello_frieda.json

 1 {
 2   "name": "Frieda",
 3   "isDog": true,
 4   "hobbies": ["eating", "sleeping", "barking"],
 5   "age": 8,
 6   "address": {
 7     "work": null,
 8     "home": ["Berlin", "Germany"]
 9   },
10   "friends": [
11     {
12       "name": "Philipp",
13       "hobbies": ["eating", "sleeping", "reading"]
14     },
15     {
16       "name": "Mitch",
17       "hobbies": ["running", "snacking"]
18     }
19   ]
20 }

In the code above, you see data about a dog named Frieda, which is formatted as JSON. The top-level value is a JSON object. Just like Python dictionaries, you wrap JSON objects inside curly braces ({}).

In line 1, you start the JSON object with an opening curly brace ({), and then you close the object at the end of line 20 with a closing curly brace (}).

Read the full article at https://realpython.com/python-json/ »(link is external)

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples(link is external) ]

Categories: FLOSS Project Planets

Real Python: How to Flatten a List of Lists in Python(link is external)

Planet Python - Sun, 2024-12-22 09:00

Flattening a list in Python involves converting a nested list structure into a single, one-dimensional list. A common approach to flatten a list of lists is to use a for loop to iterate through each sublist. Then, add each item to a new list with the .extend() method or the augmented concatenation operator (+=). This will “unlist” the list, resulting in a flattened list.

Alternatively, Python’s standard library offers tools like itertools.chain() and functools.reduce() to achieve similar results. You can also use a list comprehension for a concise one-liner solution. Each method has its own performance characteristics, with for loops and list comprehensions generally being more efficient.
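Those alternatives can be sketched like this (the matrix here is a smaller stand-in for the one used later in the article):

```python
from functools import reduce
from itertools import chain
from operator import add

matrix = [[9, 3], [4, 5], [6, 4]]

# List comprehension: nested for clauses read left to right
flat = [item for row in matrix for item in row]
print(flat)  # [9, 3, 4, 5, 6, 4]

# itertools.chain() concatenates the sublists into one iterator
print(list(chain(*matrix)))  # [9, 3, 4, 5, 6, 4]

# functools.reduce() folds the rows together with list concatenation
print(reduce(add, matrix, []))  # [9, 3, 4, 5, 6, 4]
```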

By the end of this tutorial, you’ll understand that:

  • Flattening a list involves converting nested lists into a single list.
  • You can use a for loop and .extend() to flatten lists in Python.
  • List comprehensions provide a concise syntax for list transformations.
  • Standard-library functions like itertools.chain() and functools.reduce() can also flatten lists.
  • The .flatten() method in NumPy efficiently flattens arrays for data science tasks.
  • Unlisting a list means to flatten nested lists into one list.

To better illustrate what it means to flatten a list, say that you have the following matrix of numeric values:

Python
>>> matrix = [
...     [9, 3, 8, 3],
...     [4, 5, 2, 8],
...     [6, 4, 3, 1],
...     [1, 0, 4, 5],
... ]

The matrix variable holds a Python list(link is external) that contains four nested lists. Each nested list represents a row in the matrix. The rows store four items or numbers each. Now say that you want to turn this matrix into the following list:

Python
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]

How do you manage to flatten your matrix and get a one-dimensional list like the one above? In this tutorial, you’ll learn how to do that in Python.

Free Bonus: Click here to download the free sample code(link is external) that showcases and compares several ways to flatten a list of lists in Python.

Take the Quiz: Test your knowledge with our interactive “How to Flatten a List of Lists in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:

(link is external)

Interactive Quiz

How to Flatten a List of Lists in Python(link is external)

In this quiz, you'll test your understanding of how to flatten a list in Python. Flattening a list involves converting a multidimensional list, such as a matrix, into a one-dimensional list. This is a common operation when working with data stored as nested lists.

How to Flatten a List of Lists With a for Loop(link is external)

How can you flatten a list of lists in Python? In general, to flatten a list of lists, you can run the following steps either explicitly or implicitly:

  1. Create a new empty list to store the flattened data.
  2. Iterate over each nested list or sublist in the original list.
  3. Add every item from the current sublist to the list of flattened data.
  4. Return the resulting list with the flattened data.

You can follow several paths and use multiple tools to run these steps in Python. Arguably, the most natural and readable way to do this is to use a for loop(link is external), which allows you to explicitly iterate over the sublists.

Then you need a way to add items to the new flattened list. For that, you have a couple of valid options. First, you’ll turn to the .extend() method from the list class itself, and then you’ll give the augmented concatenation operator(link is external) (+=) a go.

To continue with the matrix example, here’s how you would translate these steps into Python code using a for loop and the .extend() method:

Python
>>> def flatten_extend(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list.extend(row)
...     return flat_list
...

Inside flatten_extend(), you first create a new empty list called flat_list. You’ll use this list to store the flattened data when you extract it from matrix. Then you start a loop to iterate over the inner, or nested, lists from matrix. In this example, you use the name row to represent the current nested list.

In every iteration, you use .extend() to add the content of the current sublist to flat_list. This method takes an iterable(link is external) as an argument and appends its items to the end of the target list.

Now go ahead and run the following code to check that your function does the job:

Python
>>> flatten_extend(matrix)
[9, 3, 8, 3, 4, 5, 2, 8, 6, 4, 3, 1, 1, 0, 4, 5]

That’s neat! You’ve flattened your first list of lists. As a result, you have a one-dimensional list containing all the numeric values from matrix.

With .extend(), you’ve come up with a Pythonic and readable way to flatten your lists. You can get the same result using the augmented concatenation operator(link is external) (+=) on your flat_list object. However, this alternative approach may not be as readable:

Python
>>> def flatten_concatenation(matrix):
...     flat_list = []
...     for row in matrix:
...         flat_list += row
...     return flat_list
...

Read the full article at https://realpython.com/python-flatten-list/ »(link is external)

[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples(link is external) ]

Categories: FLOSS Project Planets

This Week in KDE Apps: Search in Merkuro Mail, Tokodon For Android, LabPlot new documentation and more(link is external)

Planet KDE - Sun, 2024-12-22 07:50

Welcome to a new issue of "This Week in KDE Apps"! Every week we cover as much as possible of what's happening in the world of KDE apps(link is external).

AudioTube(link is external) YouTube Music app

AudioTube now shows synchronized lyrics provided by LRCLIB(link is external). This automatically falls back to normal lyrics if synced lyrics are not available. (Kavinu Nethsara, 25.04.0. Link(link is external))

Dolphin(link is external) Manage your files

Quickly renaming multiple files by switching between them with the keyboard arrow keys now correctly starts a renaming of the next file even if a sorting change moved it. (Ilia Kats, 25.04.0. Link(link is external))

Fixed a couple of regressions in the 24.12.0 release. (Akseli Lahtinen, 24.12.1. Link 1(link is external), link 2(link is external), link 3(link is external))

KDE Itinerary(link is external) Digital travel assistant

Improved the touch targets of the buttons in the bottom drawer which appears on mobile. (Carl Schwan, 24.05.0. Link(link is external))

Akonadi(link is external) Background service for KDE PIM apps

Improved the stability of changing tags. Now deleting a tag will properly remove it from all items. (Daniel Vrátil, 24.12.1. Link 1(link is external) and link 2(link is external))

KMail(link is external) A feature-rich email application

The tooltip of your folder in KMail will now show the absolute space quota in bytes. (Fabian Vogt, 25.04.0. Link(link is external))

KMyMoney(link is external) Personal finance manager based on double-entry bookkeeping

An initial port of KMyMoney for Qt6 was merged. (Ralf Habacker. Link(link is external))

Krita(link is external) Digital Painting, Creative Freedom

Krita has a new plugin for fast sketching. You can find more about this on their blog post(link is external).

KTorrent(link is external) BitTorrent Client

Added the support for getting IPv6 peers from peer exchange. (Jack Hill, 25.04.0. Link(link is external))

LabPlot(link is external) Interactive Data Visualization and Analysis

We now show more plot types in the "Add new plot" context menu. (Alexander Senke. Link(link is external))

LabPlot has announced(link is external) a new dedicated user manual page(link is external).

Okular(link is external) View and annotate documents

We improved how we are displaying the signature and certificate details in the mobile version of Okular. (Carl Schwan, 25.04.0. Link(link is external))

When selecting a certificate to use when digitally signing a PDF with the GPG backend, the fingerprints are rendered more nicely. (Sune Vuorela, 25.04.0. Link(link is external))

It's now possible to choose a custom default zoom level in Okular. (Wladimir Leuschner, 25.04.0. Link(link is external))

Merkuro Mail(link is external) Read your emails with speed and ease

Merkuro Mail now lets you search across your emails with a full text search. (Carl Schwan, 25.04.0. Link(link is external))

Additionally, the Merkuro Mail sidebar will now remember which folders were collapsed or expanded as well as the last selected folder across application restarts. (Carl Schwan, 25.04.0. Link(link is external))

PowerPlant(link is external) Keep your plants alive

We started the "KDE Review" process for PowerPlant(link is external), so expect a release in the coming weeks.

We added support for Windows and Android. (Laurent Montel, 1.0.0. Link 1(link is external), link 2(link is external) and link 3(link is external))

Ruqola(link is external) Rocket Chat Client

Ruqola 2.4.0 is out. You can now mute/unmute other users, cleanup the room history and more. Read the full announcement(link is external).

Tokodon(link is external) Browse the Fediverse

This week, Joshua spent some time improving Tokodon for mobile and in particular for Android. This includes performance optimization, adding missing icons and some mobile-specific user experience improvements. (Joshua Goins, 25.04.0. Link 1(link is external), link 2(link is external) and link 3(link is external)). A few more improvements for Android, like proper push notifications via unified push, are in the works.

Joshua also improved the draft and scheduled post features, allowing now to discard scheduled posts and drafts and showing when a draft was created. (Joshua Goins, 25.04.0. Link(link is external))

We also added a keyboard shortcut configuration page in Tokodon settings. (Joshua Goins and Carl Schwan, 25.04.0. Link 1(link is external) and link 2(link is external))

Finally, we created a new server information page with the server rules and made the existing announcements page a subpage of it. Speaking of announcements, we added support for the announcement's emoji reactions. (Joshua Goins, 25.04.0. Link(link is external))

WashiPad(link is external) Minimalist Sketchnoting Application

WashiPad was ported to Kirigami instead of using its own custom QtQuick components. (Carl Schwan. Link(link is external))

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out Nate's blog about Plasma(link is external) and be sure not to miss his This Week in Plasma(link is external) series, where every Saturday he covers all the work being put into KDE's Plasma desktop environment(link is external).

For a complete overview of what's going on, visit KDE's Planet(link is external), where you can find all KDE news unfiltered directly from our contributors.

Get Involved

The KDE organization has become important in the world, and your time and contributions have helped us get there. As we grow, we're going to need your support for KDE to become sustainable.

You can help KDE by becoming an active community member and getting involved(link is external). Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine! You don’t have to be a programmer either. There are many things you can do: you can help hunt and confirm bugs, even maybe solve them; contribute designs for wallpapers, web pages, icons and app interfaces; translate messages and menu items into your own language; promote KDE in your local community; and a ton more things.

You can also help us by donating(link is external). Any monetary contribution, however small, will help us cover operational costs, salaries, travel expenses for contributors and in general just keep KDE bringing Free Software to the world.

To get your application mentioned here, please ping us in invent(link is external) or in Matrix(link is external).

Categories: FLOSS Project Planets

LostCarPark Drupal Blog: Drupal Advent Calendar day 22 - Gin Admin Theme track(link is external)

Planet Drupal - Sun, 2024-12-22 04:00
james, Sun, 12/22/2024 - 09:00

Once more, we welcome you back to the Drupal Advent Calendar, to see what’s behind door number twenty-two. Today we are welcoming back an old friend, the Gin Admin Theme(link is external) which was covered all the way back in Door 1(link is external) of the 2023 Drupal Advent Calendar.

So why feature it again? Well, back then Gin was something of a rebel, for use on cutting-edge Drupal sites, but perhaps a bit too “punk” for respectable production sites.

But a year later Gin is becoming respectable, and as part of that, it has been selected as the default admin theme for Drupal CMS. 

Drupal CMS is focused on giving the easiest to…

Tags (link is external)
Categories: FLOSS Project Planets

Steinar H. Gunderson: Kernel adventures: When two rights make a wrong(link is external)

Planet Debian - Sun, 2024-12-22 03:50

My 3D printer took me on another adventure recently. Or, well, actually someone else's 3D printer did: It turns out that building a realtime system (with high-speed motors controlling a 300-degree metal rod) by cobbling together a bunch of Python and JavaScript on an anemic Arm SoC with zero resource isolation doesn't always meet those realtime guarantees. So in particular after installing a bunch of plugins, people would report the infamous “MCU timer too close” Klipper error, which essentially means that the microcontroller didn't get new commands in time from the Linux host and shut down as a failsafe. (Understandably, this sucks if it happens in the middle of an eight-hour print. Nobody has really invented a way to reliably resume from these things yet.)

I was wondering whether it was possible to provoke this and then look at what was actually going on in the scheduler; perf sched lets you look at scheduling history on the host, so if I could reproduce the error while collecting data, I could go in afterwards and see what was the biggest CPU hog, or at least that was the theory.

However, to my surprise, perf sched record died with an error essentially saying that the kernel was compiled without ftrace support (which is needed for the scheduler hooks; it's somewhat possible to do without by just doing a regular profile, but that's a different story and much more annoying). Not very surprising, these things tend to run stone-age vendor kernels from some long-forgotten branch with zero security support and seemingly no ftrace.

Now, I did not actually run said vendor kernel; at some point, I upgraded to the latest stable kernel (6.6) from Armbian(link is external), which is still far from mainline (for one, it needs to carry out-of-tree drivers to make wireless work at all) but which I trust infinitely more to actually provide updated kernels over time. It doesn't support ftrace either, so I thought the logical step would be to upgrade to the latest “edge” kernel (aka 6.11) and then compile with the right stuff on.

After a couple of hours of compiling (almost nostalgic to have such slow kernel compiles; cross-compiling didn't work for me!), I could boot into the new kernel, and:

[ 23.775976] platform 5070400.thermal-sensor: deferred probe pending: platform: wait for supplier

and then Klipper would refuse to start because it couldn't find the host thermal sensors. (I don't know exactly why it is a hard dependency, but seemingly, it is.) A bit of searching shows that this error message is doubly vexing; it should have said “wait for supplier /i2c@fdd40000/pmic@20/regulators/SWITCH_REG1” or something similar, but ends only in a space and then nothing.

So evidently this has to be something about the device tree (DT), and switching out the new DT for the old one didn't work. Bisecting was also pretty much out of the question (especially with 400+ patches that go on top of the git tree), but after a fair bit of printk debugging and some more reading, I figured out what had happened:

First, the sun8i-thermal driver, which had been carried out-of-tree in Armbian, had gone into mainline. But it was in a slightly different version; while the out-of-tree version used previously (in Armbian's 6.6 kernel) had relied on firmware (run as part of U-Boot, as I understand it) to set a special register bit, the mainline version would be stricter and take care to set it itself. I don't really know what the bit does, short of “if you don't set it, all the values you get back are really crazy”, so this is presumably a good change. So the driver would set a bit in a special memory address somewhere (sidenote: MMIO will always feel really weird to me; like, some part of the CPU has to check all memory accesses in case they're really not to RAM at all?), and for that, the thermal driver would need to take on a DT reference to the allwinner,sram (comma is evidently some sort of hierarchical separator) node so that it could get its address. Like, in case it was moved around in future SoCs or something.

Second, there was an Armbian patch that dealt with exactly these allwinner,sram nodes in another way; it would make sure that references to them would cause devlink references between the nodes. I don't know what those are either, but it seems the primary use case is for waiting: If you have a dependency from A to B, then A's initialization will wait until B is ready. The configuration bit in question is always ready, but I guess it's cleaner somehow, and you get a little symlink somewhere in /sys to explain the relationship, so perhaps it's good? But that's what the error message means; “A: deferred probe pending: wait for supplier B” means that we're not probing for A's existence yet, because it wants B to supply something and B isn't ready yet.

But why is the relationship broken? Well, for that, we need to look at how the code in the patch looks:

sram_node = of_parse_phandle(np, prop_name, 0);
sram_node = of_get_parent(sram_node);
sram_node = of_get_parent(sram_node);
return sram_node;

And how the device tree is set up in this case (lots of irrelevant stuff removed for clarity):

bus@1000000 { /* this works */
    reg = <0x1000000 0x400000>;
    allwinner,sram = <&de3_sram 1>;
};

ths: thermal-sensor@5070400 { /* this doesn't */
    allwinner,sram = <&syscon>;
};

syscon: syscon@3000000 {
    sram_c: sram@28000 {
        de3_sram: sram-section@0 {
            reg = <0x0000 0x1e000>;
        };
    };
};

So that explains it; the code expects that all DT references are to a child of a child of syscon to find the supplier, and just goes up two levels to find it. But for the thermal sensor, the reference is directly to the syscon itself, and it goes up past the root of the tree, which is, well, NULL. And then the error message doesn't have a node name to print out, and the dependency just fails forever.

So that's two presumably good changes that just interacted in a really bad way (in particular, due to too little flexibility in the second one). A small patch(link is external) later, and the kernel boots with thermals again!

Oh, and those scheduling issues I wanted to debug? I never managed to reliably reproduce them; I have seen them, but they're very rare for me. I guess that upstream for the plugins in question just made things a bit less RAM-hungry in the meantime, or that having a newer kernel improves things enough in itself. Shrug. :-)

Categories: FLOSS Project Planets

Junichi Uekawa: Looking at my private repositories for what language I wrote.(link is external)

Planet Debian - Sun, 2024-12-22 02:08
Looking at my private repositories for what language I wrote. I'm counting the number of days I wrote a certain file with a specific file extension. Markdown was at the top, because I usually have a doc; that's fine. C++ is my top language. Then Lilypond, and then JS. Rust came much lower. Lilypond was for my band scores, it seems.

Categories: FLOSS Project Planets

KDE @ 38C3(link is external)

Planet KDE - Sun, 2024-12-22 02:00

In less than a week from now KDE(link is external) will again be present at the 38th Chaos Communication Congress (38C3)(link is external) in Hamburg, Germany.

(link is external) Chaos Communication Congress

Congress is even bigger than FOSDEM and much wider in scope, and many impactful collaborations during the past couple of years can be traced back to contacts made there. Be it KDE Eco(link is external), joint projects with the Open Transport community, the weather and emergency alert aggregation server(link is external) or indoor routing(link is external), to name just a few.

KDE Assembly

At last year’s edition, 37C3(link is external), we had a KDE assembly (think “stand” or “booth” at other events) for the first time. That not only helps people find us, it's also a very useful anchor point for the growing KDE delegation.

This year we’ll further improve on that by being there with even more people and by having the KDE assembly(link is external) as part of the Bits & Bäume Habitat(link is external). That not only comes with some shared infrastructure, like a workshop space, but also puts us next to some of our friends, like OSM(link is external), FSFE(link is external) and Wikimedia(link is external).

We’ll be in the foyer on floor level 1 next to the escalators (map(link is external)).

More of our friends and partners have their own assemblies elsewhere as well, such as Matrix(link is external) and Linux on Mobile(link is external).

A special thanks goes again to the nice people at CCC-P(link is external) and WMDE(link is external) who helped us get tickets!

Talks & Workshops

We’ll also have three talks by KDE people, all of them featuring collaborations beyond the classical KDE scope.

There will also be two workshops, chaired by Joseph, on the latter subject:

Make sure to monitor the schedule(link is external) for last-minute changes though.

See you in Hamburg!

Looking forward to many interesting discussions! If you are at 38C3 as well, make sure to come by the KDE assembly!

Categories: FLOSS Project Planets

Russ Allbery: Review: Beyond the Fringe(link is external)

Planet Debian - Sat, 2024-12-21 22:17

Review: Beyond the Fringe, by Miles Cameron

Series: Arcana Imperii #1.5
Publisher: Gollancz
Copyright: 2023
ISBN: 1-3996-1537-8
Format: Kindle
Pages: 173

Beyond the Fringe is a military science fiction short story collection set in the same universe as Artifact Space(link is external). It is intended as a bridge between that novel and its sequel, Deep Black.

Originally I picked this up for exactly the reason it was published: I was eagerly awaiting Deep Black and thought I'd pass the time with some filler short fiction. Then, somewhat predictably, I didn't get around to reading it until after Deep Black was already out. I still read this collection first, partly because I'm stubborn about reading things in publication order but mostly to remind myself of what was going on in Artifact Space before jumping into the sequel.

My stubbornness was satisfied. My memory was not; there's little to no background information here, and I had to refresh my memory of the previous book anyway to figure out the connections between these stories and the novel.

My own poor decisions aside, these stories are... fine, I guess? They're competent military SF short fiction, mostly more explicitly military than Artifact Space. All of them were reasonably engaging. None of them were that memorable or would have gotten me to read the series on their own. They're series filler, in other words, offering a bit of setup for the next novel but not much in the way of memorable writing or plot.

If you really want more in this universe, this exists, but my guess (not having read Deep Black) is that it's entirely skippable.

"Getting Even": A DHC paratrooper lands on New Shenzen, a planet that New Texas is trying to absorb into the empire it is attempting to build. He gets captured by one group of irregulars and then runs into another force with an odd way of counting battle objectives.

I think this exists because Cameron wanted to tell a version of a World War II story he'd heard, but it's basically a vignette about a weird military unit with no real conclusion, and I am at a loss as to the point of the story. There isn't even much in the way of world-building. I'm probably missing something, but I thought it was a waste of time. (4)

"Partners": The DHC send a planetary exobiologist to New Texas as a negotiator. New Texas is aggressively, abusively capitalist and is breaking DHC regulations on fair treatment of labor. Why they send a planetary exobiologist is unclear (although it probably ties into the theme of this collection that the reader slowly pieces together); maybe it's because he's originally from New Texas, but more likely it's because of his partner. Regardless, the New Texas government are exploitative assholes with delusions of grandeur, so the negotiations don't go very smoothly.

This was my favorite story of the collection just because I enjoy people returning rudeness and arrogance to sender, but like a lot of stories in this collection it doesn't have much of an ending. I suspect it's mostly setup for Deep Black. (7)

"Dead Reckoning": This is the direct fallout of the previous story and probably has the least characterization of this collection. It covers a few hours of a merchant ship having to make some fast decisions in a changing political situation. The story is framed around a veteran spacer and his new apprentice, although even that frame is mostly dropped once the action starts. It was suspenseful and enjoyable enough while I was reading it, but it's the sort of story that you forget entirely after it's over. (6)

"Trade Craft": Back on a planet for this story, which follows an intelligence agent on a world near but not inside New Texas's area of influence. I thought this was one of the better stories of the collection even though it's mostly action. There are some good snippets of characterization, an interesting mix of characters, and some well-written tense scenes. Unfortunately, I did not enjoy the ending for reasons that would be spoilers. Otherwise, this was good but forgettable. (6)

"One Hour": This is the first story with a protagonist outside of the DHC and its associates. It instead follows a PTX officer (PTX is a competing civilization that features in Artifact Space) who has suspicions about what his captain is planning and recruits his superior officer to help him do something about it.

This is probably the best story in the collection, although I personally enjoyed "Partners" a smidgen more. Shunfu, the first astrogator who is recruited by the protagonist, is a thoroughly enjoyable character, and the story is tense and exciting all the way through. For series readers, it also adds some depth to events in Artifact Space (if the reader remembers them), and I suspect will lead directly into Deep Black. (7)

"The Gifts of the Magi": A kid and his mother, struggling asteroid miners with ancient and malfunctioning equipment, stumble across a DHC ship lurking in the New Texas system for a secret mission. This is a stroke of luck for the miners, since the DHC is happy to treat the serious medical problems of the mother without charging unaffordable fees the way that the hyper-capitalist New Texas doctors would. It also gives the reader a view into DHC's covert monitoring of the activities of New Texas that all the stories in this collection have traced.

As you can tell from the title, this is a Christmas story. The crew of the DHC ship is getting ready to celebrate Alliday, which they claim rolls all of the winter holidays into one. Just like every other effort to do this, no, it does not, it just subsumes them all into Christmas with some lip service to other related holidays. I am begging people to realize that other religions often do not have major holidays in December, and therefore you cannot include everyone by just declaring December to be religious holiday time and thinking that will cover it.

There are the bones of an interesting story here. The covert mission setup has potential, the kid and his mother are charming if cliched, there's a bit of world-building around xenoglas (the magical alien material at the center of the larger series plot), and there's a lot of foreshadowing for Deep Black. Unfortunately, this is too obviously a side story and a setup story: none of this goes anywhere satisfying, and along the way the reader has to endure endless rather gratuitous Christmas references, such as the captain working on a Nutcracker ballet performance for the ship talent show.

This isn't bad, exactly, but it rubbed me the wrong way. If you love Christmas stories, you may find it more agreeable. (5)

Rating: 6 out of 10

Categories: FLOSS Project Planets

Michael Foord: New Article: Essential Python Web Security Part 1(link is external)

Planet Python - Sat, 2024-12-21 19:00

The Open Source Initiative(link is external) have published part one of an article of mine. The article is called “Essential Python Web Security” and it’s part one of a series called “The Absolute Minimum Every Python Web Application Developer Must Know About Security”. The subject is Full Stack Security for Python web applications, based on the Defence in Depth approach.

This series explores the critical security principles every Python web developer should know. Whilst hard and fast rules, like avoiding plaintext passwords and custom security algorithms, are essential, a deeper understanding of broader security principles is equally important. This first post delves into fundamental security best practices, ranging from general principles to specific Python-related techniques.

Part 2, on Cryptographic Algorithms, will be published soon. When the series is complete it will probably also be available as an ebook. The full document, about fifty pages, can be read here:

Special thanks to Gigaclear Ltd who sponsored the creation of this article. Also thanks to Dr David Mertz and Daniel Roy Greenfeld for technical reviews of this article prior to publication.

Categories: FLOSS Project Planets

Pages