Feeds
KDE Plasma 6.2.2, Bugfix Release for October
Tuesday, 22 October 2024. Today KDE releases a bugfix update to KDE Plasma 6, versioned 6.2.2.
Plasma 6.2 was released in October 2024 with many feature refinements and new modules to complete the desktop experience.
This release adds a week's worth of new translations and fixes from KDE's contributors. The bugfixes are typically small but important and include:
- KWin Backends/drm: leave all outputs disabled by default, including VR headsets. Commit. Fixes bug #493148
- KWin Set WAYLAND_DISPLAY before starting wayland server. Commit.
- Plasma Audio Volume Control: Fix text display for auto_null device. Commit. Fixes bug #494324
FSF News: FSF associate members to assist in review of current board members
parallel @ Savannah: GNU Parallel 20241022 ('Sinwar Nasrallah') released [stable]
GNU Parallel 20241022 ('Sinwar Nasrallah') has been released. It is available for download at: lbry://@GnuParallel:4
Quote of the month:
GNU Parallel is one of the most helpful tools I've been using recently, and it's just something like: parallel -j4 'gzip {}' ::: folder/*.csv
-- Milton Pividori @miltondp@twitter
New in this release:
- No new features. This is a candidate for a stable release.
- Bug fixes and man page updates.
News about GNU Parallel:
- Separate arguments with a custom separator in GNU Parallel https://boxofcuriosities.co.uk/post/separate-arguments-with-a-custom-separator-in-gnu-parallel
- GNU parallel is underrated https://amontalenti.com/2021/11/10/parallel
- Unlocking the Power of Supercomputers: My HPC Adventure with 2800 Cores and GNU Parallel https://augalip.com/2024/03/10/unlocking-the-power-of-supercomputers-my-hpc-adventure-with-2800-cores-and-gnu-parallel/
- Converting WebP Images to PNG Using parallel and dwebp https://bytefreaks.net/gnulinux/bash/converting-webp-images-to-png-using-parallel-and-dwebp
GNU Parallel - For people who live life in the parallel lane.
If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.
GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
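This ordering guarantee has a familiar analogue in Python: concurrent.futures' map() also returns results in submission order even though the jobs execute concurrently. This is only an illustrative sketch of the concept, not GNU Parallel itself:

```python
from concurrent.futures import ThreadPoolExecutor

def job(x):
    # Stand-in for a shell command run once per input item
    return f"processed {x}"

# map() yields results in input order, even though the jobs run
# concurrently -- the same property GNU Parallel guarantees for
# its jobs' combined stdout.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(job, ["a", "b", "c"]))

print(results)
```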
For example you can run this to convert all jpeg files into png and gif files and have a progress bar:
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:
find . -name '*.jpg' |
parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/
You can install GNU Parallel in just 10 seconds with:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.
When using programs that use GNU Parallel to process data for publication please cite:
O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.
If you like GNU Parallel:
- Give a demo at your local user group/team/colleagues
- Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
- Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
- Request or write a review for your favourite blog or magazine
- Request or build a package for your favourite distribution (if it is not already there)
- Invite me for your next conference
If you use programs that use GNU Parallel for research:
- Please cite GNU Parallel in your publications (use --citation)
If GNU Parallel saves you money:
- (Have your company) donate to FSF https://my.fsf.org/donate/
GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.
The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
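Since a DBURL follows the familiar URL shape, its parts can be picked apart with a standard URL parser. A quick sketch with made-up credentials and host (this is not GNU sql code, just an illustration of the DBURL components):

```python
from urllib.parse import urlparse

# Hypothetical DBURL of the shape GNU sql accepts:
# protocol://user:password@host:port/database
url = urlparse("mysql://alice:s3cret@db.example.com:3306/sales")

print(url.scheme)            # protocol
print(url.username)          # login user
print(url.hostname)          # server
print(url.port)              # port number
print(url.path.lstrip("/"))  # database name
```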
When using GNU SQL for a publication please cite:
O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.
GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
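The core decision niceload makes can be sketched in a few lines of Python. This is a model of the idea only, not niceload's actual implementation:

```python
import os
import signal
import time

def throttle_signal(load, limit):
    # niceload suspends the program while the load average is above
    # the limit and lets it run again once the load drops below it.
    return signal.SIGSTOP if load > limit else signal.SIGCONT

def niceload_like(pid, limit, checks=10, interval=1.0):
    # Periodically sample the 1-minute load average and signal
    # the watched process accordingly.
    for _ in range(checks):
        os.kill(pid, throttle_signal(os.getloadavg()[0], limit))
        time.sleep(interval)
```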
Talking Drupal: Talking Drupal #472 - Access Policy API
Today we are talking about the Access Policy API, what it does, and how you can use it, with guest Kristiaan Van den Eynde. We’ll also cover Visitors as our module of the week.
For show notes visit: https://www.talkingDrupal.com/472
Topics
- What is the Access Policy API
- Why does Drupal need the Access Policy API
- How did Drupal handle access before
- How does the Access Policy API interact with roles
- Does a module exist that shows a UI
- What is the difference between Policy Based Access Control (PBAC), Attribute Based Access Control (ABAC) and Role Based Access Control (RBAC)
- How does Access Policy API work with PBAC, ABAC and RBAC
- Can you apply an access policy via a recipe
- Is there a roadmap
- What was it like going through Pitch-burgh
- How can people get involved
- Access Policy API
- Access Policy
- Talking Drupal #226 Group
- Flexible Permissions
- External roles
- Test Super access policy
- Access policy talk at DrupalCon Barcelona
- D.o Issue about exception on security issue
Kristiaan Van den Eynde - kristiaanvandeneynde
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Aubrey Sambor - star-shaped.org starshaped
MOTW Correspondent
Martin Anderson-Clutz - mandclu.com mandclu
- Brief description:
- Have you ever wanted a Drupal-native solution for tracking website visitors and their behavior? There’s a module for that
- Module name/project name: Visitors
- Brief history
- How old: created in Mar 2009 by gashev, though recent releases are by Steven Ayers (bluegeek9)
- Versions available: 8.x-2.19, which works with Drupal 10 and 11
- Maintainership
- Actively maintained
- Security coverage
- Test coverage
- Documentation guide is available
- Number of open issues: 20 open issues, none of which are bugs against the 8.x branch
- Usage stats:
- Over 6,000 sites
- Module features and usage
- A benefit of using a Drupal-native solution is that you retain full ownership over your visitor data. Not sharing that data with third parties can be important for data protection regulations, as well as data privacy concerns.
- You also have a variety of reports you can access directly within the Drupal UI, including top pages, referrers, and more
- There is a submodule for geoip lookups using Maxmind, if you also want reporting on what region, country, or city your visitors hail from
- It provides drush commands to download a geoip database, and then update your data based on geoip lookups using that database
- It should be mentioned that the downside of using Drupal as your analytics solution is the potential performance impact, as well as a likely uptick in usage for hosts that charge based on the number of dynamic requests served
Sahil Dhiman: Free Software Mirrors in India
List of public mirrors in India. Locations were discovered on the basis of personal knowledge, traces, or GeoIP. Mirrors which aren’t accessible outside their own ASN are excluded.
North India
- Bharat Datacenter - mirror.bharatdatacenter.com (AS151704)
- CSE Department, IIT Kanpur - mirror.cse.iitk.ac.in (AS55479)
- Cyfuture - cyfuture.dl.sourceforge.net (AS55470)
- Extreme IX - repos.del.extreme-ix.org | repo.extreme-ix.org (AS135814)
- Garuda Linux - in-mirror.garudalinux.org (AS133661)
- Hopbox - mirrors.hopbox.net (AS10029)
- IIT Delhi - mirrors.iitd.ac.in (AS132780)
- NKN - debianmirror.nkn.in (AS4758)
- Nxtgen - mirrors.nxtgen.com (AS132717)
- Saswata Sarkar - mirrors.saswata.cc (AS132453)
- Shiv Nadar Institution of Eminence - ubuntu-mirror.snu.edu.in (AS132785)
- NISER Bhubaneshwar - mirror.niser.ac.in (AS141288)
- Cogan Ng - in.mirror.coganng.com (AS31898)
- CUSAT - foss.cusat.ac.in/mirror (AS55824)
- Excell Media - centos-stream.excellmedia.net | excellmedia.dl.sourceforge.net (AS17754)
- IIT Madras - ftp.iitm.ac.in (AS141340)
- NIT Calicut - mirror.nitc.ac.in (AS55824)
- NKN - mirrors-1.nkn.in (AS148003)
- Planet Unix - mirror.planetunix.net | ariel.in.ext.planetunix.net (AS14061)
- Shrirang Kahale - mirror.maa.albony.in (AS24560)
- Abhinav Krishna C K - mirrors.abhy.me (AS31898)
- Arun Mathai - mirrors.arunmathaisk.in (AS141995)
- Balvinder Singh Rawat - mirror.ubuntu.bsr.one (AS31898)
- ICTS - cran.icts.res.in (AS134322)
- Nilesh Patra - mirrors.nileshpatra.info (AS31898)
- PicoNets-WebWerks - mirrors.piconets.webwerks.in (AS133296)
- Ravi Dwivedi - mirrors.ravidwivedi.in (AS141995)
- Sahil Dhiman - mirrors.in.sahilister.net (AS141995)
- Shrirang Kahale - mirror.bom.albony.in | mirror.nag.albony.in (AS24560)
- Starburst Services - almalinux.in.ssimn.org | elrepo.in.ssimn.org | epel.in.ssimn.org | mariadb.in.ssimn.org (AS141995)
- Unknown - mirror.4v1.in (AS24560)
- Utkarsh Gupta - mirrors.utkarsh2102.org (AS31898)
- Amazon Cloudfront - cdn-aws.deb.debian.org (AS16509)
- Cicku - in.mirrors.cicku.me (AS13335)
- CIQ - rocky-linux-asia-south1.production.gcp.mirrors.ctrliq.cloud | rocky-linux-asia-south2.production.gcp.mirrors.ctrliq.cloud (GeoIP doubtful. Could be behind a CDN or single node) (AS396982)
- Cloudflare - cloudflare.cdn.openbsd.org | kali.download/ (AS13335)
- Edgecast - mirror.edgecast.com (AS15133)
- Fastly - cdn.openbsd.org | deb.debian.org | dlcdn.apache.org | dl-cdn.alpinelinux.org (sponsored?) | images-cdn.endlessm.com (sponsored?) | repo-fastly.voidlinux.org (AS54113)
- Naman Garg - in-mirror.chaotic.cx (AS13335)
- Microsoft - debian-archive.trafficmanager.net (AS8075)
- Niranjan Fartare - arch.niranjan.co | termux.niranjan.co (AS13335)
- Sahil Kokamkar - mirror.sahil.world (AS13335)
Let me know if I’m missing someone or something is amiss.
FSF Blogs: FSD meeting recap 2024-10-18
FSD meeting recap 2024-10-18
Real Python: Python's property(): Add Managed Attributes to Your Classes
With Python’s property(), you can create managed attributes in your classes. You can use managed attributes when you need to modify an attribute’s internal implementation and don’t want to change the class’s public API. Providing stable APIs will prevent you from breaking your users’ code when they rely on your code.
Properties are arguably the most popular way to create managed attributes quickly and in the purest Pythonic style.
In this tutorial, you’ll learn how to:
- Create managed attributes or properties in your classes
- Perform lazy attribute evaluation and provide computed attributes
- Make your classes Pythonic using properties instead of setter and getter methods
- Create read-only and read-write properties
- Create consistent and backward-compatible APIs for your classes
You’ll also write practical examples that use property() for validating input data, computing attribute values dynamically, logging your code, and more. To get the most out of this tutorial, you should know the basics of object-oriented programming, classes, and decorators in Python.
Get Your Code: Click here to download the free sample code that shows you how to use Python’s property() to add managed attributes to your classes.
Take the Quiz: Test your knowledge with our interactive “Python's property(): Add Managed Attributes to Your Classes” quiz. You’ll receive a score upon completion to help you track your learning progress:
In this quiz, you'll test your understanding of Python's property(). With this knowledge, you'll be able to create managed attributes in your classes, perform lazy attribute evaluation, provide computed attributes, and more.
Managing Attributes in Your Classes
When you define a class in an object-oriented programming language, you’ll probably end up with some instance and class attributes. In other words, you’ll end up with variables that are accessible through the instance, class, or even both, depending on the language. Attributes represent and hold the internal state of a given object, which you’ll often need to access and mutate.
Typically, you have at least two ways to access and mutate an attribute. Either you can access and mutate the attribute directly or you can use methods. Methods are functions attached to a given class. They provide the behaviors and actions that an object can perform with its internal data and attributes.
If you expose attributes to the user, then they become part of the class’s public API. This means that your users will access and mutate them directly in their code. The problem comes when you need to change the internal implementation of a given attribute.
Say you’re working on a Circle class and add an attribute called .radius, making it public. You finish coding the class and ship it to your end users. They start using Circle in their code to create a lot of awesome projects and applications. Good job!
Now suppose that you have an important user that comes to you with a new requirement. They don’t want Circle to store the radius any longer. Instead, they want a public .diameter attribute.
At this point, removing .radius to start using .diameter could break the code of some of your other users. You need to manage this situation in a way other than removing .radius.
Programming languages such as Java and C++ encourage you to never expose your attributes to avoid this kind of problem. Instead, you should provide getter and setter methods, also known as accessors and mutators, respectively. These methods offer a way to change the internal implementation of your attributes without changing your public API.
Note: Getter and setter methods are often considered an anti-pattern and a signal of poor object-oriented design. The main argument behind this proposition is that these methods break encapsulation. They allow you to access and mutate the components of your objects from the outside.
These programming languages need getter and setter methods because they don’t have a suitable way to change an attribute’s internal implementation when a given requirement changes. Changing the internal implementation would require an API modification, which can break your end users’ code.
The Getter and Setter Approach in Python
Technically, there’s nothing that stops you from using getter and setter methods in Python. Here’s a quick example that shows how this approach would look:
point_v1.py

class Point:
    def __init__(self, x, y):
        self._x = x
        self._y = y

    def get_x(self):
        return self._x

    def set_x(self, value):
        self._x = value

    def get_y(self):
        return self._y

    def set_y(self, value):
        self._y = value

In this example, you create a Point class with two non-public attributes ._x and ._y to hold the Cartesian coordinates of the point at hand.
Note: Python doesn’t have the notion of access modifiers, such as private, protected, and public, to restrict access to attributes and methods. In Python, the distinction is between public and non-public class members.
If you want to signal that a given attribute or method is non-public, then you have to use the well-known Python convention of prefixing the name with an underscore (_). That’s the reason behind the naming of the attributes ._x and ._y.
Note that this is just a convention. It doesn’t stop you and other programmers from accessing the attributes using dot notation, as in obj._attr. However, it’s bad practice to violate this convention.
To access and mutate the value of either ._x or ._y, you can use the corresponding getter and setter methods. Go ahead and save the above definition of Point in a Python module and import the class into an interactive session. Then run the following code:
>>> from point_v1 import Point
>>> point = Point(12, 5)
>>> point.get_x()
12
>>> point.get_y()
5
>>> point.set_x(42)
>>> point.get_x()
42
>>> # Non-public attributes are still accessible
>>> point._x
42
>>> point._y
5

With .get_x() and .get_y(), you can access the current values of ._x and ._y. You can use the setter method to store a new value in the corresponding managed attribute. From the two final examples, you can confirm that Python doesn’t restrict access to non-public attributes. Whether or not you access them directly is up to you.
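For contrast, here is a minimal sketch of the property-based approach the tutorial builds toward, reusing the Circle scenario described earlier. The class below is an illustration written for this summary, not code taken from the tutorial itself:

```python
class Circle:
    def __init__(self, radius):
        self.radius = radius  # stored attribute, part of the public API

    @property
    def diameter(self):
        # Computed on access, so .radius stays the single source of truth
        return self.radius * 2

    @diameter.setter
    def diameter(self, value):
        # Assigning to .diameter transparently updates .radius, so
        # neither attribute's users are broken by the internal change
        self.radius = value / 2

circle = Circle(2)
print(circle.diameter)  # 4
circle.diameter = 10
print(circle.radius)    # 5.0
```

Both .radius and .diameter keep working with plain attribute syntax, which is exactly how properties let you change an attribute's internal implementation without touching the public API.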
Read the full article at https://realpython.com/python-property/ »
SystemSeed.com: Prestigious medical journal - The Lancet - features SystemSeed project
Elise West, Evgeniy Maslovskiy and Andrey Yurtaev receive co-author credits for their work on the World Health Organization's EQUIP project
Tamsin Fox-Davies Mon, 10/21/2024 - 13:50
Sven Hoexter: Terraform: Making Use of Precondition Checks
I'm in the unlucky position of having to deal with GitHub. Thus I have a terraform module in a project which deals with populating organization secrets in our GitHub organization, and assigning repositories access to those secrets.
Since the GitHub terraform provider internally works mostly with repository IDs, not slugs (the human-readable organization/repo format), we have to do some mapping in between. In my case it looks like this:
# tfvars Input for Module
org_secrets = {
  "SECRET_A" = {
    repos = [
      "infra-foo",
      "infra-baz",
      "deployment-foobar",
    ]
  }
  "SECRET_B" = {
    repos = [
      "job-abc",
      "job-xyz",
    ]
  }
}

# Module Code
/*
Limitation: The GH search API which is queried returns at most
1000 results. Thus whenever we reach that limit this approach
will no longer work. The query is also intentionally limited to
internal repositories right now.
*/
data "github_repositories" "repos" {
  query           = "org:myorg archived:false -is:public -is:private"
  include_repo_id = true
}

/*
The properties of the github_repositories.repos data source queried
above contain only lists. Thus we have to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository id we got in another list from the search query above.
*/
locals {
  # Assemble the set of repository names we need repo_ids for
  repos = toset(flatten([for v in var.org_secrets : v.repos]))

  # Walk through all names in the query result list and check
  # if they're also in our repo set. If yes add the repo name -> id
  # mapping to our resulting map
  repos_and_ids = {
    for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
    if contains(local.repos, v)
  }
}

resource "github_actions_organization_secret" "org_secrets" {
  for_each    = var.org_secrets
  secret_name = each.key
  visibility  = "selected"
  # the logic how the secret value is sourced is omitted here
  plaintext_value = data.xxx
  selected_repository_ids = [
    for r in each.value.repos : local.repos_and_ids[r]
    if can(local.repos_and_ids[r])
  ]
}

Now if we do something bad, e.g. delete a repository and forget to remove it from the configuration for the module, we receive an error message that a (numeric) repository ID could not be found. That is pretty much useless for the average user, because you have to figure out which repository is still in the configuration list but got deleted recently.
Luckily terraform has supported precondition checks since version 1.2, which we can use in an output block to report which repository is missing. What we need is the set of missing repositories and the validation condition:
locals {
  # Debug facility in combination with an output and precondition check
  # There we can report which repository we still have in our configuration
  # but no longer get as a result from the data provider query
  missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}

# Debug facility - If we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos" {
  value = local.missing_repos
  precondition {
    condition     = length(local.missing_repos) == 0
    error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
  }
}

Now you only have to be aware that GitHub is GitHub: the TF provider has open bugs and is not supported by GitHub, so you will encounter inconsistent results. But it works, even if your terraform apply failed that way.
Kdenlive 24.08.2 released
Kdenlive 24.08.2 is out with many fixes to a wide range of bugs and regressions.
- Fix title producer update on edit undo. Commit. Fixes bug #494142.
- Fix typo in dance.xml. Commit.
- Fix single item(s) move. Commit.
- Fix cycle effects playing timeline and sometimes broken after reopening project. Commit.
- Fix recent regression breaking all sort of things when opening projects. Commit.
- Fix crash when dragging clip and using mouse wheel. Commit.
- Don’t play when clicking monitor container if disabled in settings. Commit.
- Fix effect zones lost on project reopening. Commit.
- Various fixes for bin clip effects. Commit.
- Disable check for ghost effects that currently removes valid effects. Commit.
- Detect and fix track producers with incorrect effects. Commit.
- Fix bin effects sometimes not correctly removed from timeline instance. Commit.
- Don’t try to build clone effect if it does not apply to the target. Commit.
- Don’t unnecessarily check MLT tractors. Commit.
- Fix crash opening file with missing clips. Commit.
- Fix crash on project close. Commit.
- Fix compilation. Commit.
- Fix possible crash opening an interlaced project. Commit.
- Fix on monitor seek to next/previous keyframe buttons. Commit.
- Fix crash editing keyframes in a bin clip with grouped effects enabled. Commit.
- Don’t try to connect to dbus jobview on command line rendering. Commit.
- Fix Qt5 compilation. Commit.
- Fix looping through clips in project monitor effect scene. Commit.
- Fix loop selected clip. Commit.
The post Kdenlive 24.08.2 released appeared first on Kdenlive.
GNU Guix: Build User Takeover Vulnerability
A security issue has been identified in guix-daemon which allows a local user to gain the privileges of any of the build users and subsequently use this to manipulate the output of any build. You are strongly advised to upgrade your daemon now (see instructions below), especially on multi-user systems.
This exploit requires the ability to start a derivation build and the ability to run arbitrary code with access to the store in the root PID namespace on the machine the build occurs on. As such, this represents an increased risk primarily to multi-user systems and systems using dedicated privilege-separation users for various daemons: without special sandboxing measures, any process of theirs can take advantage of this vulnerability.
Vulnerability
For a very long time, guix-daemon has helpfully made the outputs of failed derivation builds available at the same location they were at in the build container. This has aided greatly especially in situations where test suites require the package to already be installed in order to run, as it allows one to re-run the test suite interactively outside of the container when built with --keep-failed. This transferral of store items from inside the chroot to the real store was implemented with a simple rename, and no modification of the store item or any files it may contain.
If an attacker starts a build of a derivation that creates a binary with the setuid and/or setgid bit in an output directory, and the build fails, then that binary will be accessible unaltered for anybody on the system. The attacker or a cooperating user can then execute the binary, gain the privileges, and from there use a combination of signals and procfs to freeze a builder, open any file it has open via /proc/$PID/fd, and overwrite it with whatever it wants. This manipulation of builds can happen regardless of which user started the build, so it can work not only for producing compromised outputs for commonly-used programs before anybody else uses them, but also for compromising any builds another user happens to start.
A related vulnerability was also discovered concerning the outputs of successful builds. These were moved - also via rename() - outside of the container prior to having their permissions, ownership, and timestamps canonicalized. This means that there also exists a window of time for a successful build's outputs during which a setuid/setgid binary can be executed.
In general, any time that a build user running a build for some submitter can get a setuid/setgid binary to a place the submitter can execute it, it is possible for the submitter to use it to take over the build user. This situation always occurs when --disable-chroot is passed to guix-daemon. This holds even in the case where there are no dedicated build users, and builds happen under the same user the daemon runs as, as happens during make check in the guix repository. Consequently, if a permissive umask that allows execute permission for untrusted users on directories all the way to a user's guix checkout is used, an attacker can use that user's test-environment daemon to gain control over their user while make check is running.
Mitigation
This security issue has been fixed by two commits. Users should make sure they have updated to the second commit to be protected from this vulnerability. Upgrade instructions are in the following section. If there is a possibility that a failed build has left a setuid/setgid binary lying around in the store by accident, run guix gc to remove all failed build outputs.
The fix was accomplished by sanitizing the permissions of all files in a failed build output prior to moving it to the store, and also by waiting to move successful build outputs to the store until after their permissions had been canonicalized. The sanitizing was done in such a way as to preserve as many non-security-critical properties of failed build outputs as possible to aid in debugging. After applying these two commits, the guix package in Guix was updated so that guix-daemon deployed using it would use the fixed version.
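What "sanitizing the permissions" amounts to can be modeled in a few lines. guix-daemon itself is not written in Python, so the sketch below only illustrates the idea of clearing setuid/setgid bits across a failed build output while leaving everything else intact:

```python
import os
import stat

def strip_setid_bits(root):
    # Walk a (failed) build output and clear any setuid/setgid bits,
    # leaving all other permissions untouched to aid debugging.
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            mode = os.lstat(path).st_mode
            if stat.S_ISLNK(mode):
                continue  # symlink modes carry no meaningful bits
            if mode & (stat.S_ISUID | stat.S_ISGID):
                os.chmod(path, stat.S_IMODE(mode) & ~(stat.S_ISUID | stat.S_ISGID))
```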
If you are using --disable-chroot, whether with dedicated build users or not, make sure that access to your daemon's socket is restricted to trusted users. This particularly affects anyone running make check and anyone running on GNU/Hurd. The former should either manually remove execute permission for untrusted users on their guix checkout or apply this patch, which restricts access to the test-environment daemon to the user running the tests. The latter should adjust the ownership and permissions of /var/guix/daemon-socket, which can be done for Guix System users using the new socket-directory-{perms,group,user} fields in this patch.
A proof of concept is available at the end of this post. One can run this code with:
guix repl -- setuid-exposure-vuln-check.scm

This will output whether the current guix-daemon being used is vulnerable or not. If it is not vulnerable, the last line will contain "your system is not vulnerable"; if it is vulnerable, the last line will contain "YOUR SYSTEM IS VULNERABLE".
Upgrading
Due to the severity of this security advisory, we strongly recommend all users to upgrade their guix-daemon immediately.
For Guix System, the procedure is to reconfigure the system after a guix pull, either restarting guix-daemon or rebooting. For example:
guix pull
sudo guix system reconfigure /run/current-system/configuration.scm
sudo herd restart guix-daemon

where /run/current-system/configuration.scm is the current system configuration but could, of course, be replaced by a system configuration file of a user's choice.
For Guix running as a package manager on other distributions, one needs to guix pull with sudo, as the guix-daemon runs as root, and restart the guix-daemon service, as documented. For example, on a system using systemd to manage services, run:
sudo --login guix pull
sudo systemctl restart guix-daemon.service

Note that users with their distro's package of Guix (as opposed to having used the install script) may need to take other steps or upgrade the Guix package as they would other packages on their distro. Please consult the relevant documentation from your distro or contact the package maintainer for additional information or questions.
Conclusion
Even with the sandboxing features of modern kernels, it can be quite challenging to synthesize a situation in which two users on the same system who are determined to cooperate nevertheless cannot. Guix has an especially difficult job because it needs to not only realize such a situation, but also maintain the ability to interact with both users itself, while not allowing them to cooperate through itself in unintended ways. Keeping failed build outputs around for debugging introduced a vulnerability, but finding that vulnerability because of it enabled the discovery of an additional vulnerability that would have existed anyway, and prompted the use of mechanisms for securing access to the guix daemon.
I would like to thank Ludovic Courtès for giving feedback on these vulnerabilities and their fixes — discussion of which led to discovering the vulnerable time window with successful build outputs — and also for helping me to discover that my email server was broken.
Proof of Concept
Below is code to check if your guix-daemon is vulnerable to this exploit. Save this file as setuid-exposure-vuln-check.scm and run following the instructions above, in "Mitigation."
(use-modules (guix)
             (srfi srfi-34))

(define maybe-setuid-file
  ;; Attempt to create a setuid file in the store, with one of the build
  ;; users as its owner.
  (computed-file "maybe-setuid-file"
                 #~(begin
                     (call-with-output-file #$output (const #t))
                     (chmod #$output #o6000)
                     ;; Failing causes guix-daemon to copy the output from
                     ;; its temporary location back to the store.
                     (exit 1))))

(with-store store
  (let* ((drv (run-with-store store (lower-object maybe-setuid-file)))
         (out (derivation->output-path drv)))
    (guard (c (#t (if (zero? (logand #o6000 (stat:perms (stat out))))
                      (format #t "~a is not setuid: your system is not \
vulnerable.~%" out)
                      (format #t "~a is setuid: YOUR SYSTEM IS VULNERABLE.
Run 'guix gc' to remove that file and upgrade.~%" out))))
      (build-things store (list (derivation-file-name drv))))))

Python Bytes: #406 What's on Django TV tonight?
Akademy 2025 Call for Hosts
If you want to contribute to KDE in a significant way (beyond coding), here is your opportunity — help us organize Akademy 2025!
We are seeking hosts for Akademy 2025, which will occur in June, July, August, or September. This is your chance to bring KDE’s biggest event to your city! Download the Call for Hosts guide and submit your proposal to akademy-proposals@kde.org by December 1, 2024.
Feel free to reach out with any questions or concerns! We are here to help you organise a successful event and to offer any advice, guidance, or help you need. Let’s work together to make Akademy 2025 an event to remember.
Zato Blog: HL7 FHIR Security with Basic Auth, OAuth and SSL/TLS
Preliminary reading: HL7 FHIR Integrations in Python
FHIR servers offer their APIs using REST, which in turn means that they are HTTP servers under the hood. As a result, a few common security mechanisms can be employed to secure access to the servers when you invoke them.
- Basic Auth
- OAuth
- SSL/TLS encryption
When you integrate with a FHIR server, consult its maintainers about which of these, if any, should be used.
Note that Basic Auth and OAuth, when used over HTTP, are mutually exclusive. The HTTP protocol reserves only one header, called Authorization, for the authorization credentials and there is no standard way in the protocol to use more than one authorization mechanism.
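As a plain illustration of why the two are mutually exclusive, here is how each mechanism fills that single Authorization header (the credentials and token below are made up):

```python
import base64

# The one Authorization header can carry either Basic Auth credentials
# or an OAuth bearer token, but not both at once.
user, password = "zato", "secret"
basic = "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()

token = "eyJhbGciOi..."  # a short-lived token from the authentication server
bearer = f"Bearer {token}"

print(basic)   # Basic emF0bzpzZWNyZXQ=
print(bearer)  # Bearer eyJhbGciOi...
```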
On the other hand, SSL/TLS encryption is independent of the credentials used and can be employed with Basic Auth, with OAuth, or in scenarios where no authorization header is sent at all.
This is why, when you create or update a FHIR connection, you can set Basic Auth or OAuth and SSL/TLS options independently, as in the screenshot below:
Basic Auth
Basic Auth is a combination of username and password credentials. They are allocated in advance by the maintainers of a FHIR server.
To use them in your integrations, create a new Basic Auth definition in the Dashboard, set its password, and assign it to an existing or new outgoing FHIR connection. No restarts nor reloads are required.
OAuth
Authentication with OAuth is built around a notion of short-lived tokens, which are simple strings. Your Zato server has credentials, much like a username and password, although they are called a client ID and secret. Based on these, an authentication server issues tokens that let Zato prove that it does in fact have the correct credentials and that it is allowed to invoke a particular API, as defined through scopes.
Scopes can be likened to permissions to access a subset of resources from a FHIR server. When Zato receives a token from an authentication server the token is inherently linked to specific scopes within which it can interact with the FHIR server.
That the tokens are short-lived means they need to be refreshed periodically: at least once per hour, Zato asks the authentication server for a new token based on its credentials. This process takes place under the hood and requires no configuration on your part.
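The refresh pattern can be sketched like this (a simplified illustration, not Zato's actual implementation; fetch_token stands in for the real call to the authentication server):

```python
import time

# Cache a short-lived token and fetch a new one only once it expires.
class TokenCache:
    def __init__(self, fetch_token):
        self.fetch_token = fetch_token
        self.token = None
        self.expires_at = 0.0

    def get(self):
        # Re-use the cached token while it is still valid ...
        if self.token and time.monotonic() < self.expires_at:
            return self.token
        # ... otherwise ask the auth server for a fresh one.
        self.token, lifetime = self.fetch_token()
        self.expires_at = time.monotonic() + lifetime
        return self.token

calls = []
def fake_fetch():
    calls.append(1)
    return f"token-{len(calls)}", 3600.0  # token valid for an hour

cache = TokenCache(fake_fetch)
first = cache.get()
second = cache.get()
print(first, second)  # token-1 token-1 — the token was fetched only once
```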
To use OAuth in your integrations, create a new OAuth definition in the Dashboard, set its secret, and assign it to an existing or new outgoing FHIR connection. No restarts nor reloads are required.
SSL/TLS encryption
If the FHIR server that a connection points to uses SSL/TLS, then the question arises of how to validate the certificate of the Certificate Authority (CA) that signed the certificate the FHIR server uses. There are several options:
- Skip validation - the certificate will not be validated at all and it will be always accepted.
- Default bundle - the certificate will have to belong to one of the publicly recognized CAs. The bundle contains the same certificates that common web browsers do so this option is good if the FHIR server is available in the public Internet, e.g. https://fhir.example.com
- Custom bundle - if the FHIR server's certificate was signed by an internal, non-public CA then the CA's certificate bundle, in the PEM format, can be uploaded through the Dashboard to make it possible to choose it when the FHIR connection is being created or updated.
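For illustration, the three options map onto Python's standard library ssl module roughly like this (the PEM path is hypothetical; in practice the Dashboard handles this configuration for you):

```python
import ssl

# 1. Skip validation: accept any certificate (not for production).
skip = ssl.create_default_context()
skip.check_hostname = False          # must be disabled before verify_mode
skip.verify_mode = ssl.CERT_NONE

# 2. Default bundle: the system's publicly recognized CAs.
default = ssl.create_default_context()

# 3. Custom bundle: trust an internal, non-public CA instead
#    (the path below is a made-up example).
# custom = ssl.create_default_context(cafile="/path/to/internal-ca.pem")

print(default.verify_mode == ssl.CERT_REQUIRED)  # True
```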
➤ Read about how to use Python to build and integrate enterprise APIs that your tests will cover
➤ Python API integration tutorial
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
SystemSeed.com: Understanding the fundamentals of Single Sign-On systems (SSOs)
Single Sign-On (SSO) is a useful tool for organisations to maintain security whilst improving the user experience for people who need to log in to multiple tools. In this article, SystemSeed CTO Evgeniy Maslovskiy explains how SSOs work and how to get the most out of yours.
Evgeniy Maslovskiy Mon, 10/21/2024 - 07:08
Julien Tayon: Hello world part II : actually recoding print
Here, we are gonna dive deep into one of the most overlooked object-oriented abstractions: a file, and actually print what we can of hello world in 100 lines of code.
The file handler and the file descriptor
These two abstractions are the low-level and high-level faces of the same thing: a view on something more complex whose access has been encapsulated in generic methods. Actually, when you code a framebuffer driver, you provide function pointers specialized to your device and you may omit those common to the class. This is done with a double lookup on the major node, minor node number. Among those « generic » methods you have: seek, write, tell, truncate, open, read, close...
The file handler in python also handles extra bytes (pun intended) of facilities: character encoding, stats, and buffering.
Here, we work with the low-level abstraction: the file, which we access through its file descriptor obtained with fileno. Thanks to this abstraction, you don't care whether the underlying implementation fragments the file itself (e.g. on a hard drive); you can magically ask to read any arbitrary block of characters at any given position without caring about the gory details.
Two of the most used methods on files here are seek and write.
The file descriptor method write is sensitive to the positioning set by seek. Hence, we can write one line of pixels, then position ourselves one line below to write the next line of a character.
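A minimal sketch of this seek-then-write pattern, aimed at a temporary file instead of /dev/fb0:

```python
import os
import tempfile

# Positioned writes with the low-level file API: seek to a byte offset,
# then overwrite bytes in place, exactly as done on the framebuffer.
fd, path = tempfile.mkstemp()
os.write(fd, b"........")          # 8 placeholder bytes
os.lseek(fd, 2, os.SEEK_SET)       # jump to offset 2
os.write(fd, b"XY")                # overwrite exactly 2 bytes at that position
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 8)
os.close(fd)
os.unlink(path)
print(data)  # b'..XY....'
```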
Matrices as a view of a row-based array
When I speak of rows and columns, I evoke abstractions that are not related to the framebuffer.
The coordinates are an abstraction we build for convenience to say I want to write from this line at this column.
And since human beings bug after 2 dimensions, we split the last dimension into a vector of dimension 4 called a pixel.
The go_at function illustrates our use of this trick to position the (invisible) cursor at any given position on the screen, expressed for clarity in glyph sizes.
We could actually code this whole exercise through a 3D view of the framebuffer. I just wouldn't be able to pack the code into fewer than 100 lines, and it would introduce useless abstractions.
But if you have doubts about the numerous seeks I do, and why I multiply the line and column values the way I do, check the preceding link for an understanding of n-dimensional matricial views over row-based arrays.
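A small sketch of the offset arithmetic behind those seeks (the width here is illustrative, not read from the real framebuffer):

```python
# Row-major offset arithmetic, as used by go_at.
w = 640                 # pixels per row (illustrative value)
so_pixel = 4            # bytes per pixel (b, g, r, a)
stride = w * so_pixel   # bytes per row

def byte_offset(x, y):
    """Byte offset of pixel (x, y) in a flat, row-major framebuffer."""
    return y * stride + x * so_pixel

print(byte_offset(1, 0))  # 4: one pixel right = one pixel-size step
print(byte_offset(0, 1))  # 2560: one row down = one stride step
```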
Fonts, chars, glyphs...
Here we are gonna take matrices defining the glyphs (what you actually see on screen) as 8x8 = 64-element 1D arrays and map them onto the screen with put_char. put_char does a little bit of magic by relying on python to do the character-to-glyph conversion through a dict lookup: since the dict expects strings, this is a de facto codepoint-to-glyph conversion without having to do the codepoint conversion ourselves.
The set of character-to-glyph conversions, with their common properties, is a font.
The hidden console
The console is an abstraction that keeps track of global state such as: what is the current line we print at. Thus, here, being lazy, I use global variables instead of a singleton named « console » or « term » to keep track of them. But first and foremost, these « abstractions » are just expectations we share in a common mental model. Like we expect « print » to add a newline at the end of the string and to begin printing at the next line.
The to-be-finished example
I limited the code to 100 lines so that it's fairly readable. I leave the following points as an exercise:
- encoding the missing glyphs in the font to actually be able to write "hello world!",
- handling the edge case of reaching the bottom of the screen,
This example is a part of a project to write « hello world » on the framebuffer in numerous languages, bash included.
Annex: the code
#!/usr/bin/env python3
from struct import pack
from os import SEEK_CUR, lseek as seek, write

w, h = map(int, open("/sys/class/graphics/fb0/virtual_size").read().split(","))
so_pixel = 4
stride = w * so_pixel
encode = lambda b, g, r, a: pack("4B", b, g, r, a)

font = {
    "height": 8,
    "width": 8,
    "void": [
        0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 1, 1, 1, 1, 0, 0,
        0, 1, 0, 0, 0, 0, 1, 0,
        0, 0, 1, 0, 0, 1, 1, 0,
        0, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 0, 0,
    ],
    "l": [
        0, 0, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 0, 1, 1, 1, 1, 0, 0,
    ],
    "o": [
        0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 1, 1, 1, 1, 0, 0,
        0, 1, 0, 0, 0, 0, 1, 0,
        0, 1, 0, 0, 0, 0, 1, 0,
        0, 1, 0, 0, 0, 0, 1, 0,
        0, 1, 0, 0, 0, 0, 1, 0,
        0, 0, 1, 0, 0, 1, 0, 0,
        0, 0, 0, 1, 1, 0, 0, 0,
    ],
    "h": [
        0, 0, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 1, 1, 1, 0, 0, 0,
        0, 1, 0, 0, 1, 0, 0, 0,
        0, 1, 1, 0, 1, 0, 0, 0,
        0, 1, 0, 0, 1, 0, 0, 0,
    ],
}

def go_at(fh, x, y):
    global stride
    seek(fh.fileno(), x * so_pixel + y * stride, 0)

def next_line(fh, reminder):
    seek(fh.fileno(), stride - reminder, SEEK_CUR)

def put_char(fh, x, y, letter):
    go_at(fh, x, y)
    black = encode(0, 0, 0, 255)
    white = encode(255, 255, 255, 255)
    char = font.get(letter, None) or font["void"]
    for col, pixel in enumerate(char):
        write(fh.fileno(), white if pixel else black)
        if col % font["width"] == font["width"] - 1:
            next_line(fh, so_pixel * font["width"])

COL = 0
LIN = 0
OUT = open("/dev/fb0", "bw")
FD = OUT.fileno()

def newline():
    global OUT, LIN, COL
    LIN += 1
    go_at(OUT, 0, LIN * font["height"])

def print_(line):
    global OUT, COL, LIN
    COL = 0
    for c in line:
        if c == "\n":
            newline()
        put_char(OUT, COL * font["width"], LIN * font["height"], c)
        COL += 1
    newline()

for i in range(30):
    print_("hello lol")
Russ Allbery: California general election
As usual with these every-two-year posts, probably of direct interest only to California residents. Maybe the more obscure things we're voting on will be a minor curiosity to people elsewhere. I'm a bit late this year, although not as late as last year, so a lot of people may have already voted, but I've been doing this for a while and wanted to keep it up.
This post will only be about the ballot propositions. I don't have anything useful to say about the candidates that isn't hyper-local. I doubt anyone who has read my posts will be surprised by which candidates I'm voting for.
As always with California ballot propositions, it's worth paying close attention to which propositions were put on the ballot by the legislature, usually because there's some state law requirement (often that I disagree with) that they be voted on by the public, and propositions that were put on the ballot by voter petition. The latter are often poorly written and have hidden problems. As a general rule of thumb, I tend to default to voting against propositions added by petition. This year, one can conveniently distinguish by number: the single-digit propositions were added by the legislature, and the two-digit ones were added by petition.
Proposition 2: YES. Issue $10 billion in bonds for public school infrastructure improvements. I generally vote in favor of spending measures like this unless they have some obvious problem. The opposition argument is a deranged rant against immigrants and government debt and fails to point out actual problems. The opposition argument also claims this will result in higher property taxes and, seriously, if only that were true. That would make me even more strongly in favor of it.
Proposition 3: YES. Enshrines the right to marriage without regard to sex or race into the California state constitution. This is already the law given US Supreme Court decisions, but fixing California state law is a long-overdue and obvious cleanup step. One of the quixotic things I would do if I were ever in government, which I will never be, would be to try to clean up the laws to make them match reality, repealing all of the dead clauses that were overturned by court decisions or are never enforced. I am in favor of all measures in this direction even when I don't agree with the direction of the change; here, as a bonus, I also strongly agree with the change.
Proposition 4: YES. Issue $10 billion in bonds for infrastructure improvements to mitigate climate risk. This is basically the same argument as Proposition 2. The one drawback of this measure is that it's kind of a mixed grab bag of stuff and probably some of it should be supported out of the general budget rather than bonds, but I consider this a minor problem. We definitely need to ramp up climate risk mitigation efforts.
Proposition 5: YES. Reduces the required super-majority to pass local bond measures for affordable housing from 67% to 55%. The fact that this requires a supermajority at all is absurd, California desperately needs to build more housing of any kind however we can, and publicly funded housing is an excellent idea.
Proposition 6: YES. Eliminates "involuntary servitude" (in other words, "temporary" slavery) as a legally permissible punishment for crimes in the state of California. I'm one of the people who think the 13th Amendment to the US Constitution shouldn't have an exception for punishment for crimes, so obviously I'm in favor of this. This is one very, very tiny step towards improving the absolutely atrocious prison conditions in the state.
Proposition 32: YES. Raises the minimum wage to $18 per hour from the current $16 per hour, over two years, and ties it to inflation. This is one of the rare petition-based propositions that I will vote in favor of because it's very straightforward, we clearly should be raising the minimum wage, and living in California is absurdly expensive because we refuse to build more housing (see Propositions 5 and 33). The opposition argument is the standard lie that a higher minimum wage will increase unemployment, which we know from numerous other natural experiments is simply not true.
Proposition 33: NO. Repeals Costa-Hawkins, which prohibits local municipalities from enacting rent control on properties built after 1995. This one is going to split the progressive vote rather badly, I suspect.
California has a housing crisis caused by not enough housing supply. It is not due to vacant housing, as much as some people would like you to believe that; the numbers just don't add up. There are way more people living here and wanting to live here than there is housing, so we need to build more housing.
Rent control serves a valuable social function of providing stability to people who already have housing, but it doesn't help, and can hurt, the project of meeting actual housing demand. Rent control alone creates a two-tier system where people who have housing are protected but people who don't have housing have an even harder time getting housing than they do today. It's therefore quite consistent with the general NIMBY playbook of trying to protect the people who already have housing by making life harder for the people who do not, while keeping the housing supply essentially static.
I am in favor of rent control in conjunction with real measures to increase the housing supply. I am therefore opposed to this proposition, which allows rent control without any effort to increase housing supply. I am quite certain that, if this passes, some municipalities will use it to make constructing new high-density housing incredibly difficult by requiring it all be rent-controlled low-income housing, thus cutting off the supply of multi-tenant market-rate housing entirely. This is already a common political goal in the part of California where I live. Local neighborhood groups advocate for exactly this routinely in local political fights.
Give me a mandate for new construction that breaks local zoning obstructionism, including new market-rate housing to maintain a healthy lifecycle of housing aging into affordable housing as wealthy people move into new market-rate housing, and I will gladly support rent control measures as part of that package. But rent control on its own just allocates winners and losers without addressing the underlying problem.
Proposition 34: NO. This is an excellent example of why I vote against petition propositions by default. This is a law designed to affect exactly one organization in the state of California: the AIDS Healthcare Foundation. The reason for this targeting is disputed; one side claims it's because of the AHF support for Proposition 33, and another side claims it's because AHF is a slumlord abusing California state funding. I have no idea which side of this is true. I also don't care, because I am fundamentally opposed to writing laws this way. Laws should establish general, fair principles that are broadly applicable, not be written with bizarrely specific conditions (health care providers that operate multifamily housing) that will only be met by a single organization. This kind of nonsense creates bad legal codes and the legal equivalent of technical debt. Just don't do this.
Proposition 35: YES. I am, reluctantly, voting in favor of this even though it is a petition proposition because it looks like a useful simplification and cleanup of state health care funding, makes an expiring tax permanent, and is supported by a very wide range of organizations that I generally trust to know what they're talking about. No opposition argument was filed, which I think is telling.
Proposition 36: NO. I am resigned to voting down attempts to start new "war on drugs" nonsense for the rest of my life because the people who believe in this crap will never, ever, ever stop. This one has bonus shoplifting fear-mongering attached, something that touches on nasty local politics that have included large retail chains manipulating crime report statistics to give the impression that shoplifting is up dramatically. It's yet another round of the truly horrific California "three strikes" criminal penalty obsession, which completely misunderstands both the causes of crime and the (almost nonexistent) effectiveness of harsh punishment as deterrence.
Oxygen Icons 6.1.0
Oxygen Icons 6.1.0 is a feature release of the Oxygen Icon set.
It features new SVG symbolic icons for use in Plasma Applets.
It also features improved icon theme compliance, fixed visibility, and added mimetype links.
URL: https://download.kde.org/stable/oxygen-icons/
Source: oxygen-icons-6.1.0.tar.xz
SHA256: 16ca971079c5067c4507cabf1b619dc87dd6b326fd5c2dd9f5d43810f2174d68
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Riddell jr@jriddell.org
https://jriddell.org/jriddell.pgp
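A downloaded tarball can be checked against the published SHA256 with a few lines of Python (a generic sketch; the filename is simply the one listed above):

```python
import hashlib

# Hash a file in chunks so even large tarballs fit in constant memory.
def sha256_of(path, chunk_size=1 << 16):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "16ca971079c5067c4507cabf1b619dc87dd6b326fd5c2dd9f5d43810f2174d68"
# sha256_of("oxygen-icons-6.1.0.tar.xz") == expected
```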
Seth Michael Larson: Python and Sigstore
Published 2024-10-21 by Seth Larson
I was a guest on the Open Source Security podcast this week talking about Sigstore and Python among other things I'm working on at the Python Software Foundation.
Sigstore is a digital signing method that has been used by CPython since 3.11.0 which focuses on ergonomics and uses short-lived keys with strongly-bound human-readable identities via OpenID Connect.
CPython also provides digital signatures using PGP and has been doing so for much longer. Below is a diagram showing the current state of affairs:
[Diagram: CPython release infra publishes the source code (tar.gz) to python.org with both a PGP signature and a Sigstore bundle bound to a ...@python.org identity. Release manager PGP keys and accounts (GitHub, Gmail) are the identity roots; distros, possibly offline, rebuild from source via their own build infra into distro packages (deb, rpm).]
CPython offers two verification methods: PGP and Sigstore. Verifiers choose which method to use.
From this diagram you can see the two "sources" of identity provided by PGP and Sigstore both link back to release managers. PGP relies on private keys which are maintained and protected by individual release managers to create signatures of artifacts. Sigstore uses third-parties which support OpenID Connect such as GitHub and Google to bind a human-readable identity like "thomas@python.org" to a signing certificate and signature that can be later verified.
Why Sigstore?
The problem is that securely maintaining PGP private keys is not an easy task and the burden rests on volunteers for a minimum of 7 years (1 year of pre-releases then 5 years of bug and security fixes across at least two consecutive releases). Contrast that experience to Sigstore where release managers only need to click a button to OAuth sign-in during the release process.
Sigstore externalizes the operational burden of maintaining long-lived signing keys from individual volunteers to teams of people who run an identity platform professionally and the public-good code-signing certificate authority Fulcio. This seems like a pretty good trade-off, considering we already trust these platforms' identity management teams for things like GitHub accounts.
For this reason, I've authored PEP 761 providing a deprecation and discontinuance plan of PGP signatures for CPython artifacts. You can join the discussion of this PEP on discuss.python.org. Big thank-you to Hugo van Kemenade, the release manager for Python 3.14 for sponsoring my first PEP and helping me with the process and thanks to William Woodruff for reviewing the PEP draft and explaining the nitty-gritty details of Sigstore.
This PEP deprecates the expectation that future CPython releases will provide PGP signatures and sets a timeline for discontinuance (Python 3.14) and a mechanism for that timeline to be extended by a vote of the Steering Council, if necessary.
What do verifiers need?
Signatures are only useful if they are verified, so we must weigh the needs of verifiers!
CPython's expected downstream verifiers are primarily "distributions" of CPython, such as through a Linux distro (Debian, Fedora, Gentoo, etc) or in a container image. There are other distributions of Python such as pyenv and python-build-standalone.
These users of CPython's source code are great places to verify signatures, as they're likely to be high value targets themselves and can provide a consistent stream of verifications. If any one signature verification were to fail, it would signal that something is wrong with upstream CPython artifacts or python.org and would likely be investigated and remediated quickly.
This further constrains attackers looking to affect CPython downstream users, as compromising python.org would no longer be enough. Instead, attackers would need to compromise the build infrastructure or CPython source code.
From discussions, the requirements that I've gathered from verifiers are:
- Need a tool for verification that can be packaged by distros. The recommended tool for verifying with a CLI is either Cosign or sigstore-python, both of which have challenges for Linux distro packagers.
- OS packages would require Cosign to be packaged in those OS package managers. This isn't trivial as it requires the Go toolchain to build.
- Docker and other container images want verification tools to be available at the OS level. Needing to pull these externally (from Cosign's GitHub releases) would require multi-stage builds which they want to avoid if possible. Today Cosign is available in Alpine, but not yet in Debian (though there are the beginnings of support), Gentoo, or Fedora.
- Offline verification is important. Many package ecosystems ship the signatures inside the packages to enable "build from source" (such as Gentoo), and there's no guarantee that the package is being built online. It's okay that revocation or root of trust updates can't happen when offline. Cosign and other Sigstore verification tools support offline verification if the root of trust is "bundled" as a file locally.
- Online periodic updates to revocations and the root of trust. Cosign and other Sigstore verifiers support this use-case as the default behavior.
Overall, the availability of Cosign in OS package managers appears to be the biggest blocker to see adoption for verifying CPython's Sigstore signatures.
Why adopt a new signature method?
So why have CPython's Sigstore signatures not seen adoption despite being available for multiple years? Here's a short list of self-reinforcing reasons:
- Verifiers don't need to adopt the new signature method (Sigstore), because the existing one (PGP) works and there's no expectation for discontinuance.
- Signers can't migrate away from the old signature method because there's apparent demand from verifiers for the old signature method.
- Verifiers don't try or test the new signature method, so the maintainers of signature tooling can't learn about or improve verifier use-cases.
- Concurrently supporting multiple signature methods is more work for both signers and verifiers.
- There are fewer available signatures using the new signing method, so the value of adopting the method as a verifier is less (but maybe this will change soon?).
Keep in mind that almost everyone involved in the above scenarios are volunteers. Doing work to adopt a new process when existing processes are working can feel like "busy-work", but I don't think that's the case for Sigstore.
Sigstore's ergonomic benefits, paired with its ability to use workload identity, are two stand-out features compared to PGP. Workload identity is extra important now that many projects are moving to hosted build infrastructure for releases.
Workload Identity
Sigstore supporting workload identity means that release manager accounts can no longer be hijacked to produce bad signatures. Artifacts get signed by the build infrastructure provider directly:
[Diagram: with workload identity, the Sigstore bundle on python.org is bound to python/release rather than to an individual release manager's account; distros rebuilding from source verify that artifacts came from the CPython release infra.]
Switching to workload identity also means downstream verifiers no longer need to make changes when new release managers join the project, the expected identity would always be gh/python/release-tools/....
We still have a ways to go to adopt workload identity for CPython because our macOS and Windows release processes don't use hosted build platforms that support OpenID Connect and Sigstore. That means that for now we'll keep using release manager identities.
But this future may not be far off for Python packages hosted on PyPI...
Many more signatures are coming!
William Woodruff and the team at Trail of Bits have authored PEP 740, which is provisionally accepted. The PEP specifies how attestations (such as Sigstore bundles) can be generated using the workload identities from the secure publishing feature "Trusted Publishers", verified by PyPI, and then served alongside artifacts on PyPI.
There's a lot more to this story (but it's not for me to tell). Given Trusted Publishers' success, there are clearly exciting times ahead. Subscribe to the PyPI blog to learn more once the project is complete.
That's all for this post! 👋 If you're interested in more you can read the last report.
Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0