Planet Debian
Dimitri John Ledkov: Ubuntu 23.10 significantly reduces the installed kernel footprint
Ubuntu systems typically have up to 3 kernels installed before they are auto-removed by apt on classic installs. Historically the installation was optimized for metered download size only. However, kernel size growth and usage no longer warrant such optimizations. During the 23.10 Mantic Minotaur cycle, I led a coordinated effort across multiple teams to implement lots of optimizations that together achieved unprecedented install footprint improvements.
Given a typical install of 3 generic kernel ABIs in the default configuration on a regular-sized VM (2 CPU cores, 8 GB of RAM), the following metrics are achieved in Ubuntu 23.10 versus Ubuntu 22.04 LTS:
2x less disk space used (1,417MB vs 2,940MB, including initrd)
3x less peak RAM usage for the initrd boot (68MB vs 204MB)
0.5x increase in download size (949MB vs 600MB)
2.5x faster initrd generation (4.5s vs 11.3s)
approximately the same total time (103s vs 98s, hardware dependent)
For minimal cloud images that do not install either linux-firmware or linux-modules-extra, the numbers are:
1.3x less disk space used (548MB vs 742MB)
2.2x less peak RAM usage for initrd boot (27MB vs 62MB)
0.4x increase in download size (207MB vs 146MB)
Hopefully, the compromise of a larger download size, relative to the disk space and initrd savings, is a win for the majority of platforms and use cases. For users on extremely expensive and metered connections, the best saving is likely to receive air-gapped updates or to skip updates.
This was achieved by precompressing kernel modules and firmware files with the maximum level of Zstd compression at package build time; making the actual .deb files uncompressed; assembling the initrd from split cpio archives, keeping the pre-compressed files uncompressed whilst compressing only the userspace portions of the initrd; enabling in-kernel module decompression support with a matching kmod; fixing bugs in all of the above; and landing all of these things in time for the feature freeze. This leveraged the experience, and some of the design choices and implementations, that we have already been shipping on Ubuntu Core. Some of these changes are backported to Jammy, but only enough to support smooth upgrades to Mantic and later; the complete gains can only be experienced on Mantic and later.
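To make the mechanics a bit more concrete, here is a rough sketch of the split-initrd idea using only generic tools. It is an illustration under assumptions of my own (the paths, kernel version, directory names and zstd level are made up), not the actual Ubuntu packaging or initramfs-tools code:

# Pre-compress a kernel module at build time; with in-kernel decompression
# enabled and a matching kmod, the kernel loads .ko.zst files directly.
zstd -19 --rm ./staging/lib/modules/6.5.0-9-generic/kernel/fs/btrfs/btrfs.ko

# Confirm the running kernel supports in-kernel module decompression.
grep CONFIG_MODULE_DECOMPRESS /boot/config-$(uname -r)

# Assemble a split initrd: an uncompressed cpio segment carrying the already
# compressed modules and firmware, followed by a zstd-compressed cpio segment
# holding the userspace portion. The kernel accepts concatenated segments.
(cd segment-precompressed && find . | cpio -o -H newc) > initrd.img
(cd segment-userspace && find . | cpio -o -H newc | zstd -19) >> initrd.img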
The discovered bugs in the kernel module loading code likely affect systems that use the LoadPin LSM with in-kernel module decompression, as used on ChromeOS systems. Hopefully, Kees Cook or other ChromeOS developers will pick up the kernel fixes from the stable trees. Or, you know, just use Ubuntu kernels, as they get fixes and features like these first.
The team that designed and delivered these changes is large: Benjamin Drung, Andrea Righi, Juerg Haefliger, Julian Andres Klode, Steve Langasek, Michael Hudson-Doyle, Robert Kratky, Adrien Nader, Tim Gardner, Roxana Nicolescu - and myself, Dimitri John Ledkov, ensuring the most optimal solution was implemented, that everything landed on time, and even implementing portions of the final solution.
Hi, it's me. I am a Staff Engineer at Canonical, and we are hiring: https://canonical.com/careers.
Lots of additional technical details, benchmarks on a huge range of diverse hardware and architectures, and bikeshedding of all the things can be found below:
- https://lists.ubuntu.com/archives/ubuntu-devel/2023-July/thread.html#42652
- https://lists.ubuntu.com/archives/kernel-team/2023-July/thread.html#141412
- https://lists.ubuntu.com/archives/ubuntu-devel/2023-July/thread.html#42707
- https://discourse.ubuntu.com/t/reduce-initramfs-size-and-speed-up-the-generation-in-ubuntu-23-10/38972
- https://lore.kernel.org/all/20230830155820.138178-1-andrea.righi@canonical.com/
- https://lore.kernel.org/all/20230829123808.325202-1-andrea.righi@canonical.com/
- https://facebook.github.io/zstd/
Martin-Éric Racine: dhcpcd almost ready to replace ISC dhclient in Debian
A lot of time has passed since my previous post on my work to make dhcpcd the drop-in replacement for the deprecated ISC dhclient a.k.a. isc-dhcp-client. Current status:
- Upstream now regularly produces releases and with a smaller delta than before. This makes it easier to track possible breakage.
- Debian packaging has essentially remained unchanged. A few Recommends were shuffled, but that's about it.
- The only remaining bug is fixing the build for Hurd. Patches are welcome. Once that is fixed, bumping dhcpcd-base's priority to important is all that's left.
Scarlett Gately Moore: Farewell for now, again.
I write this with a heavy heart. Exactly one year ago I lost my beloved job. That was all me: I had a terrible run of bad luck with COVID and I never caught up. In the last year, I have taken on several new projects to re-create a new image for myself and to make up for the previous year, and I believe I worked very hard in doing so. Unfortunately, my seemingly good interviews have not ended in a job. One potential job I did not put as much effort into as I should have, because I put all my cards into a project that didn’t quite turn out as expected. I do hope it still goes through for the KDE community as a whole, because, well, it is really cool, but it isn’t the job I thought. I have been relying purely on donations for survival and it simply isn’t enough. I am faced once again with no internet to even do my open source work (Snaps, KDE neon, Debian and everything that links to those). I simply can’t put the burden of my stubbornness on my family any longer. Bills are long overdue, we have learned to live without many things, but the stress of essential bills and living expenses going unpaid is simply too much. I do thank each and every one of you that has contributed to my fundraisers. It means the world to me that people do care. It just isn’t enough. So with the sunset of Witch Wells, I am sunsetting my software career for now and will be looking for something, anything local, just to pay some bills, calm our nerves and hopefully find some happiness again. I am tired, broke, stressed out and burned out. I will be back when I can breathe again with my finances.
If you can spare some change to help with gas, propane, or internet, I would be ever so grateful.
So long for now.
~ Scarlett
John Goerzen: It’s More Important To Recognize What Direction People Are Moving Than Where They Are
I recently read a post on social media that went something like this (paraphrased):
“If you buy an EV, you’re part of the problem. You’re advancing car culture and are actively hurting the planet. The only ethical thing to do is ditch your cars and put all your effort into supporting transit. Anything else is worthless.”
There is some truth there; supporting transit in areas where it makes sense is better than having more cars, even EVs. But of course the key here is in areas where it makes sense.
My road isn’t even paved. I live miles from the nearest town. And get into the remote regions of the western USA and you’ll find people that live 40 miles from the nearest neighbor. There’s no realistic way that mass transit is ever going to be a thing in these areas. And even if it were somehow usable, sending buses over miles where nobody lives just to reach the few that are there will be worse than private EVs. And because I can hear this argument coming a mile away, no, it doesn’t make sense to tell these people to just not live in the country because the planet won’t support that anymore, because those people are literally the ones that feed the ones that live in the cities.
The funny thing is: the person that wrote that shares my concerns and my goals. We both care deeply about climate change. We both want positive change. And I, ahem, recently bought an EV.
I have seen this play out in so many ways over the last few years. Drive a car? Get yelled at. Support the wrong politician? Get a shunning. Not speak up loudly enough about the right politician? That’s a yellin’ too.
The problem is, this doesn’t make friends. In fact, it hurts the cause. It doesn’t recognize this truth:
It is more important to recognize what direction people are moving than where they are.
I support trains and transit. I’ve donated money and written letters to politicians. But, realistically, there will never be transit here. People in my county are unable to move all the way to transit. But what can we do? Plenty. We bought an EV. I’ve been writing letters to the board of our local electrical co-op advocating for relaxation of rules around residential solar installations, and am planning one myself. It may well be that our solar-powered transportation winds up having a lower carbon footprint than the poster’s transit use.
Pick your favorite cause. Whatever it is, consider your strategy: What do you do with someone that is very far away from you, but has taken the first step to move an inch in your direction? Do you yell at them for not being there instantly? Or do you celebrate that they have changed and are moving?
Freexian Collaborators: Monthly report about Debian Long Term Support, October 2023 (by Roberto C. Sánchez)
Like each month, have a look at the work funded by Freexian’s Debian LTS offering.
Debian LTS contributors
In October, 18 contributors have been paid to work on Debian LTS. Their reports are available:
- Adrian Bunk did 8.0h (out of 7.75h assigned and 10.0h from previous period), thus carrying over 9.75h to the next month.
- Anton Gladky did 9.5h (out of 9.5h assigned and 5.5h from previous period), thus carrying over 5.5h to the next month.
- Bastien Roucariès did 16.0h (out of 16.75h assigned and 1.0h from previous period), thus carrying over 1.75h to the next month.
- Ben Hutchings did 8.0h (out of 17.75h assigned), thus carrying over 9.75h to the next month.
- Chris Lamb did 17.0h (out of 17.75h assigned), thus carrying over 0.75h to the next month.
- Emilio Pozuelo Monfort did 17.5h (out of 17.75h assigned), thus carrying over 0.25h to the next month.
- Guilhem Moulin did 9.75h (out of 17.75h assigned), thus carrying over 8.0h to the next month.
- Helmut Grohne did 1.5h (out of 10.0h assigned), thus carrying over 8.5h to the next month.
- Lee Garrett did 10.75h (out of 17.75h assigned), thus carrying over 7.0h to the next month.
- Markus Koschany did 30.0h (out of 30.0h assigned).
- Ola Lundqvist did 4.0h (out of 0h assigned and 19.5h from previous period), thus carrying over 15.5h to the next month.
- Roberto C. Sánchez did 12.0h (out of 5.0h assigned and 7.0h from previous period).
- Santiago Ruano Rincón did 13.625h (out of 7.75h assigned and 8.25h from previous period), thus carrying over 2.375h to the next month.
- Sean Whitton did 13.0h (out of 6.0h assigned and 7.0h from previous period).
- Sylvain Beucler did 7.5h (out of 11.25h assigned and 6.5h from previous period), thus carrying over 10.25h to the next month.
- Thorsten Alteholz did 14.0h (out of 14.0h assigned).
- Tobias Frost did 16.0h (out of 9.25h assigned and 6.75h from previous period).
- Utkarsh Gupta did 0.0h (out of 0.75h assigned and 17.0h from previous period), thus carrying over 17.75h to the next month.
In October, we have released 49 DLAs.
Of particular note in the month of October, LTS contributor Chris Lamb issued DLA 3627-1 pertaining to Redis, the popular key-value database similar to Memcached, which was affected by an authentication bypass vulnerability. Fixing this vulnerability involved dealing with a race condition that could allow another process an opportunity to establish an otherwise unauthorized connection. LTS contributor Markus Koschany was involved in the mitigation of CVE-2023-44487, a protocol-level vulnerability in HTTP/2. The impacts within Debian involved multiple packages, across multiple releases, with multiple advisories being released (both DSA for stable and oldstable, and DLA for LTS). Markus reviewed patches and security updates prepared by other Debian developers, investigated reported regressions, provided patches for the aforementioned regressions, and issued several security updates as part of this.
Additionally, as MariaDB 10.3 (the version originally included with Debian buster) reached end-of-life earlier this year, LTS contributor Emilio Pozuelo Monfort has begun investigating the feasibility of backporting MariaDB 10.11. The work is in its early stages, with much testing and analysis remaining before a final decision can be made, as this is only one of several potential courses of action concerning MariaDB.
Finally, LTS contributor Lee Garrett has invested considerable effort into the development of the Functional Test Framework (FTF). While so far only an initial version has been published, it already has several features which we intend to begin leveraging for testing of LTS packages. In particular, the FTF supports provisioning multiple VMs for the purpose of performing functional tests of network-facing services (e.g., file services, authentication, etc.). These tests are in addition to the various unit-level tests which are executed at package build time. Development work on the FTF will continue, and as it matures and begins to see wider use within LTS, we expect it to improve the quality of the updates we publish.
Thanks to our sponsors
Sponsors that joined recently are in bold.
- Platinum sponsors:
- TOSHIBA (for 98 months)
- Civil Infrastructure Platform (CIP) (for 66 months)
- Gold sponsors:
- Roche Diagnostics International AG (for 109 months)
- Linode (for 103 months)
- Babiel GmbH (for 92 months)
- Plat’Home (for 92 months)
- University of Oxford (for 48 months)
- Deveryware (for 35 months)
- VyOS Inc (for 30 months)
- EDF SA (for 19 months)
- Silver sponsors:
- Domeneshop AS (for 113 months)
- Nantes Métropole (for 107 months)
- Univention GmbH (for 99 months)
- Université Jean Monnet de St Etienne (for 99 months)
- Ribbon Communications, Inc. (for 93 months)
- Exonet B.V. (for 83 months)
- Leibniz Rechenzentrum (for 77 months)
- CINECA (for 66 months)
- Ministère de l’Europe et des Affaires Étrangères (for 60 months)
- Cloudways Ltd (for 50 months)
- Dinahosting SL (for 48 months)
- Bauer Xcel Media Deutschland KG (for 42 months)
- Platform.sh (for 42 months)
- Moxa Inc. (for 36 months)
- sipgate GmbH (for 33 months)
- OVH US LLC (for 31 months)
- Tilburg University (for 31 months)
- GSI Helmholtzzentrum für Schwerionenforschung GmbH (for 23 months)
- Soliton Systems K.K. (for 20 months)
- Bronze sponsors:
- Evolix (for 114 months)
- Seznam.cz, a.s. (for 114 months)
- Intevation GmbH (for 111 months)
- Linuxhotel GmbH (for 111 months)
- Daevel SARL (for 109 months)
- Bitfolk LTD (for 108 months)
- Megaspace Internet Services GmbH (for 108 months)
- Greenbone AG (for 107 months)
- NUMLOG (for 107 months)
- WinGo AG (for 107 months)
- Ecole Centrale de Nantes - LHEEA (for 103 months)
- Entr’ouvert (for 98 months)
- Adfinis AG (for 95 months)
- GNI MEDIA (for 90 months)
- Laboratoire LEGI - UMR 5519 / CNRS (for 90 months)
- Tesorion (for 90 months)
- Bearstech (for 81 months)
- LiHAS (for 81 months)
- Catalyst IT Ltd (for 76 months)
- Supagro (for 71 months)
- Demarcq SAS (for 70 months)
- Université Grenoble Alpes (for 56 months)
- TouchWeb SAS (for 48 months)
- SPiN AG (for 45 months)
- CoreFiling (for 40 months)
- Institut des sciences cognitives Marc Jeannerod (for 35 months)
- Observatoire des Sciences de l’Univers de Grenoble (for 32 months)
- Tem Innovations GmbH (for 27 months)
- WordFinder.pro (for 26 months)
- CNRS DT INSU Résif (for 25 months)
- Alter Way (for 18 months)
- Institut Camille Jordan (for 7 months)
Lukas Märdian: Netplan brings consistent network configuration across Desktop, Server, Cloud and IoT
Ubuntu 23.10 “Mantic Minotaur” Desktop, showing network settings
We released Ubuntu 23.10 ‘Mantic Minotaur’ on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan has been the default tool to configure Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. In Ubuntu 23.10 this disparity in how the network stack is controlled on different Ubuntu platforms was closed by integrating NetworkManager with the underlying Netplan stack.
Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows for any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the “single source of truth” for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/, using Netplan’s common and declarative YAML format.
Netplan Desktop integration
On workstations, the most common scenario is for users to configure networking through NetworkManager’s graphical interface, instead of driving it through Netplan’s declarative YAML files. Netplan ships a “libnetplan” library that provides an API to access Netplan’s parser and validation internals, which is now used by NetworkManager to store any network interface configuration changes in Netplan. For instance, network configuration defined through NetworkManager’s graphical UI or D-Bus API will be exported to Netplan’s native YAML format in the common location at /etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily accessible to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core, and is now enabled by default on Ubuntu 23.10 Desktop.
Migration of existing connection profiles
On installation of the NetworkManager package (network-manager >= 1.44.2-1ubuntu1) in Ubuntu 23.10, all your existing connection profiles from /etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan’s declarative YAML format and stored in its common configuration directory /etc/netplan/.
The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as “sudo netplan get” or “sudo netplan status” without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:
Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan

In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan’s continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.
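On an upgraded system you can verify the outcome of this migration yourself; the commands below are the same CLI tools mentioned above, and the file name shown in the comment is merely how migrated profiles are typically named (treat it as illustrative):

# Migrated profiles land as individual YAML files in the common location.
ls /etc/netplan/        # e.g. 90-NM-9d087126-ae71-4992-9e0a-18c5ea92a4ed.yaml

# Read the merged configuration back through Netplan's CLI.
sudo netplan get
sudo netplan status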
The future of Netplan
Netplan has established itself as the proven network stack across all variants of Ubuntu – Desktop, Server, Cloud, or Embedded. It has been the default stack across many Ubuntu LTS releases, serving millions of users over the years. With the bidirectional integration between NetworkManager and Netplan the final piece of the puzzle is implemented to consider Netplan the “single source of truth” for network configuration on Ubuntu. With Debian choosing Netplan to be the default network stack for their cloud images, it is also gaining traction outside the Ubuntu ecosystem and growing into the wider open source community.
Within the development cycle for Ubuntu 24.04 LTS, we will polish the Netplan codebase to be ready for a 1.0 release, coming with certain guarantees on API and ABI stability, so that other distributions and 3rd party integrations can rely on Netplan’s interfaces. First steps in that direction have already been taken, as the Netplan team reached out to the Debian community at DebConf 2023 in Kochi/India to evaluate possible synergies.
Conclusion
Netplan can be used transparently to control a workstation’s network configuration and plays hand-in-hand with many desktop environments through its tight integration with NetworkManager. It allows for easy network monitoring, using common graphical interfaces and provides a “single source of truth” to network administrators, allowing for configuration of Ubuntu Desktop fleets in a streamlined and declarative way. You can try this new functionality hands-on by following the “Access Desktop NetworkManager settings through Netplan” tutorial.
If you want to learn more, feel free to follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.
Lisandro Damián Nicanor Pérez Meyer: Mini DebConf 2023 in Montevideo, Uruguay
15 years, "la niña bonita" if you ask many of my fellow Argentinians, is the amount of time I hadn't been present at any Debian-related face-to-face activity. It was already time to fix that. Thanks to Santiago Ruano Rincón and Gunnar Wolf, who prodded me to come, I finally attended the Mini DebConf Uruguay in Montevideo.
I took the opportunity to make my first trip by ferry, which is currently one of the best options to get from Buenos Aires to Montevideo, in my case through Colonia. Living ~700 km south-west of Buenos Aires city, the trip was long: it included a 10-hour bus ride, a ferry and yet another bus... but of course, it was worth it.
In Buenos Aires' port I met Emmanuel eamanu Arias, a fellow Argentinian Debian Developer from La Rioja, so I had the pleasure to travel with him.
To be honest, Gunnar already wrote a wonderful blog post with many pictures; I should have taken more myself.
I had the opportunity to talk about device trees, and even to look at one of Gunnar's machines in order to find out why a DisplayPort port was not working with one kernel but did with another. At the same time I also had time to start packaging qt6-grpc. Sadly I was there just one entire day, as I arrived on Thursday afternoon and had to leave on Saturday after lunch, but we did have a lot of quality Debian time.
I'll repeat here what Gunnar already wrote:
We had a long, important conversation about an important discussion that we are about to present on debian-vote@lists.debian.org.
Stay tuned on that; I think this is something we should all get involved in.
All in all, I already miss hacking with people in the same room. Meetings for us mean a lot of distance to be traveled (well, I live far away from almost everything), but I really should try to do this more often. Certainly more than just once every 15 years :-)
Petter Reinholdtsen: New and improved sqlcipher in Debian for accessing Signal database
For a while now I have wanted direct access to the Signal database of messages and channels of my Desktop edition of Signal. I prefer the enforced end-to-end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy, as well as to raise the cost of the mass surveillance that government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week and set out to update the package. In the process I orphaned it, to make it more obvious to the next person looking at it that the package needs proper maintenance.
The version in Debian was around five years old, and quite a lot of upstream changes needed to be imported into the Debian maintenance git repository. After spending a few days importing the new upstream versions, and realising that upstream did not care much for SONAME versioning, as I saw library symbols being both added and removed with minor version number changes of the project, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Having dug deep into the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition.
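For the curious, here is a hedged sketch of the kind of smoke test such an autopkgtest could run; it assumes the development package ships its headers under sqlcipher/ and a libsqlcipher library to link against, which may differ from the actual packaging:

# Build and run a trivial program against the encrypted-database API.
cat > test.c <<'EOF'
#include <sqlcipher/sqlite3.h>   /* assumed header location */
#include <stdio.h>
int main(void) {
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;
    /* set a key and do a round-trip through the SQL engine */
    if (sqlite3_exec(db, "PRAGMA key = 'test';", 0, 0, 0) != SQLITE_OK) return 1;
    if (sqlite3_exec(db, "CREATE TABLE t(x); INSERT INTO t VALUES(42);", 0, 0, 0) != SQLITE_OK) return 1;
    puts("sqlcipher smoke test passed");
    return sqlite3_close(db) == SQLITE_OK ? 0 : 1;
}
EOF
gcc test.c -lsqlcipher -o sqlcipher-smoke && ./sqlcipher-smoke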
Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:
/usr/bin/jq -r '."key"' ~/.config/Signal/config.json

Assume the result from that command is 'secretkey', a hexadecimal number representing the key used to encrypt the database. One can now connect to the database and inject the encryption key to fetch information from it via SQL. Here is an example dumping the database structure:
% sqlcipher ~/.config/Signal/sql/db.sqlite sqlite> PRAGMA key = "x'secretkey'"; sqlite> .schema CREATE TABLE sqlite_stat1(tbl,idx,stat); CREATE TABLE conversations( id STRING PRIMARY KEY ASC, json TEXT, active_at INTEGER, type STRING, members TEXT, name TEXT, profileName TEXT , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER); CREATE TABLE identityKeys( id STRING PRIMARY KEY ASC, json TEXT ); CREATE TABLE items( id STRING PRIMARY KEY ASC, json TEXT ); CREATE TABLE sessions( id TEXT PRIMARY KEY, conversationId TEXT, json TEXT , ourServiceId STRING, serviceId STRING); CREATE TABLE attachment_downloads( id STRING primary key, timestamp INTEGER, pending INTEGER, json TEXT ); CREATE TABLE sticker_packs( id TEXT PRIMARY KEY, key TEXT NOT NULL, author STRING, coverStickerId INTEGER, createdAt INTEGER, downloadAttempts INTEGER, installedAt INTEGER, lastUsed INTEGER, status STRING, stickerCount INTEGER, title STRING , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER DEFAULT 0 NOT NULL); CREATE TABLE stickers( id INTEGER NOT NULL, packId TEXT NOT NULL, emoji STRING, height INTEGER, isCoverOnly INTEGER, lastUsed INTEGER, path STRING, width INTEGER, PRIMARY KEY (id, packId), CONSTRAINT stickers_fk FOREIGN KEY (packId) REFERENCES sticker_packs(id) ON DELETE CASCADE ); CREATE TABLE sticker_references( messageId STRING, packId TEXT, CONSTRAINT sticker_references_fk FOREIGN KEY(packId) REFERENCES sticker_packs(id) ON DELETE CASCADE ); CREATE TABLE emojis( shortName TEXT PRIMARY KEY, lastUsage INTEGER ); CREATE TABLE messages( rowid INTEGER PRIMARY KEY ASC, id STRING UNIQUE, json TEXT, readStatus INTEGER, expires_at INTEGER, sent_at INTEGER, schemaVersion INTEGER, conversationId STRING, received_at INTEGER, source STRING, hasAttachments INTEGER, hasFileAttachments INTEGER, hasVisualMediaAttachments INTEGER, expireTimer INTEGER, expirationStartTimestamp INTEGER, type STRING, body TEXT, messageTimer INTEGER, messageTimerStart INTEGER, messageTimerExpiresAt INTEGER, isErased INTEGER, isViewOnce INTEGER, sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER GENERATED ALWAYS AS ( json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1 ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT GENERATED ALWAYS AS (ifnull( expirationStartTimestamp + (expireTimer * 1000), 9007199254740991 )), shouldAffectActivity INTEGER GENERATED ALWAYS AS ( type IS NULL OR type NOT IN ( 'change-number-notification', 'contact-removed-notification', 'conversation-merge', 'group-v1-migration', 'keychange', 'message-history-unsynced', 'profile-change', 'story', 'universal-timer-notification', 'verified-change' ) ), shouldAffectPreview INTEGER GENERATED ALWAYS AS ( type IS NULL OR type NOT IN ( 'change-number-notification', 'contact-removed-notification', 'conversation-merge', 'group-v1-migration', 'keychange', 'message-history-unsynced', 'profile-change', 'story', 'universal-timer-notification', 'verified-change' ) ), isUserInitiatedMessage INTEGER GENERATED ALWAYS AS ( type IS NULL OR type NOT IN ( 'change-number-notification', 'contact-removed-notification', 'conversation-merge', 'group-v1-migration', 'group-v2-change', 'keychange', 'message-history-unsynced', 'profile-change', 
'story', 'universal-timer-notification', 'verified-change' ) ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER GENERATED ALWAYS AS ( type IS 'group-v2-change' AND json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND json_extract(json, '$.groupV2Change.from') IS NOT NULL AND json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci') ), isGroupLeaveEventFromOther INTEGER GENERATED ALWAYS AS ( isGroupLeaveEvent IS 1 AND isChangeCreatedByUs IS 0 ), callId TEXT GENERATED ALWAYS AS ( json_extract(json, '$.callId') )); CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample); CREATE TABLE jobs( id TEXT PRIMARY KEY, queueType TEXT STRING NOT NULL, timestamp INTEGER NOT NULL, data STRING TEXT ); CREATE TABLE reactions( conversationId STRING, emoji STRING, fromId STRING, messageReceivedAt INTEGER, targetAuthorAci STRING, targetTimestamp INTEGER, unread INTEGER , messageId STRING); CREATE TABLE senderKeys( id TEXT PRIMARY KEY NOT NULL, senderId TEXT NOT NULL, distributionId TEXT NOT NULL, data BLOB NOT NULL, lastUpdatedDate NUMBER NOT NULL ); CREATE TABLE unprocessed( id STRING PRIMARY KEY ASC, timestamp INTEGER, version INTEGER, attempts INTEGER, envelope TEXT, decrypted TEXT, source TEXT, serverTimestamp INTEGER, sourceServiceId STRING , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER); CREATE TABLE sendLogPayloads( id INTEGER PRIMARY KEY ASC, timestamp INTEGER NOT NULL, contentHint INTEGER NOT NULL, proto BLOB NOT NULL , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL); CREATE TABLE sendLogRecipients( payloadId INTEGER NOT NULL, recipientServiceId STRING NOT NULL, deviceId INTEGER NOT NULL, PRIMARY KEY (payloadId, recipientServiceId, deviceId), CONSTRAINT sendLogRecipientsForeignKey FOREIGN KEY (payloadId) REFERENCES sendLogPayloads(id) ON DELETE CASCADE ); CREATE TABLE sendLogMessageIds( payloadId INTEGER NOT NULL, messageId STRING NOT NULL, PRIMARY KEY (payloadId, messageId), CONSTRAINT sendLogMessageIdsForeignKey FOREIGN KEY (payloadId) REFERENCES sendLogPayloads(id) ON DELETE CASCADE ); CREATE TABLE preKeys( id STRING PRIMARY KEY ASC, json TEXT , ourServiceId NUMBER GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId'))); CREATE TABLE signedPreKeys( id STRING PRIMARY KEY ASC, json TEXT , ourServiceId NUMBER GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId'))); CREATE TABLE badges( id TEXT PRIMARY KEY, category TEXT NOT NULL, name TEXT NOT NULL, descriptionTemplate TEXT NOT NULL ); CREATE TABLE badgeImageFiles( badgeId TEXT REFERENCES badges(id) ON DELETE CASCADE ON UPDATE CASCADE, 'order' INTEGER NOT NULL, url TEXT NOT NULL, localPath TEXT, theme TEXT NOT NULL ); CREATE TABLE storyReads ( authorId STRING NOT NULL, conversationId STRING NOT NULL, storyId STRING NOT NULL, storyReadDate NUMBER NOT NULL, PRIMARY KEY (authorId, storyId) ); CREATE TABLE storyDistributions( id STRING PRIMARY KEY NOT NULL, name TEXT, senderKeyInfoJson STRING , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER); CREATE TABLE storyDistributionMembers( listId STRING NOT NULL REFERENCES storyDistributions(id) ON DELETE CASCADE ON UPDATE CASCADE, serviceId STRING NOT NULL, PRIMARY KEY (listId, serviceId) ); CREATE TABLE uninstalled_sticker_packs ( id STRING NOT NULL PRIMARY 
KEY, uninstalledAt NUMBER NOT NULL, storageID STRING, storageVersion NUMBER, storageUnknownFields BLOB, storageNeedsSync INTEGER NOT NULL ); CREATE TABLE groupCallRingCancellations( ringId INTEGER PRIMARY KEY, createdAt INTEGER NOT NULL ); CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB); CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID; CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0); CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB); CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID; CREATE TABLE edited_messages( messageId STRING REFERENCES messages(id) ON DELETE CASCADE, sentAt INTEGER, readStatus INTEGER , conversationId STRING); CREATE TABLE mentions ( messageId REFERENCES messages(id) ON DELETE CASCADE, mentionAci STRING, start INTEGER, length INTEGER ); CREATE TABLE kyberPreKeys( id STRING PRIMARY KEY NOT NULL, json TEXT NOT NULL, ourServiceId NUMBER GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId'))); CREATE TABLE callsHistory ( callId TEXT PRIMARY KEY, peerId TEXT NOT NULL, -- conversation id (legacy) | uuid | groupId | roomId ringerId TEXT DEFAULT NULL, -- ringer uuid mode TEXT NOT NULL, -- enum "Direct" | "Group" type TEXT NOT NULL, -- enum "Audio" | "Video" | "Group" direction TEXT NOT NULL, -- enum "Incoming" | "Outgoing -- Direct: enum "Pending" | "Missed" | "Accepted" | "Deleted" -- Group: enum "GenericGroupCall" | "OutgoingRing" | "Ringing" | "Joined" | "Missed" | "Declined" | "Accepted" | "Deleted" status TEXT NOT NULL, timestamp INTEGER NOT NULL, UNIQUE (callId, peerId) ON CONFLICT FAIL ); [ dropped all indexes to save space in this blog post ] CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages WHEN new.body IS NOT NULL AND new.isViewOnce = 1 BEGIN DELETE FROM messages_fts WHERE rowid = old.rowid; END; CREATE TRIGGER messages_on_insert AFTER INSERT ON messages WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL BEGIN INSERT INTO messages_fts (rowid, body) VALUES (new.rowid, new.body); END; CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN DELETE FROM messages_fts WHERE rowid = old.rowid; DELETE FROM sendLogPayloads WHERE id IN ( SELECT payloadId FROM sendLogMessageIds WHERE messageId = old.id ); DELETE FROM reactions WHERE rowid IN ( SELECT rowid FROM reactions WHERE messageId = old.id ); DELETE FROM storyReads WHERE storyId = old.storyId; END; CREATE VIRTUAL TABLE messages_fts USING fts5( body, tokenize = 'signal_tokenizer' ); CREATE TRIGGER messages_on_update AFTER UPDATE ON messages WHEN (new.body IS NULL OR old.body IS NOT new.body) AND new.isViewOnce IS NOT 1 AND new.storyId IS NULL BEGIN DELETE FROM messages_fts WHERE rowid = old.rowid; INSERT INTO messages_fts (rowid, body) VALUES (new.rowid, new.body); END; CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages BEGIN INSERT INTO mentions (messageId, mentionAci, start, length) SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci, bodyRanges.value ->> 'start' as start, bodyRanges.value ->> 'length' as length FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL AND messages.id = new.id; END; CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages BEGIN DELETE FROM mentions WHERE messageId = new.id; INSERT INTO mentions (messageId, mentionAci, start, length) SELECT 
messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci, bodyRanges.value ->> 'start' as start, bodyRanges.value ->> 'length' as length FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL AND messages.id = new.id; END; sqlite>

Finally I have the tool I need to inspect and process Signal messages, without using the vendor-provided client. Now on to transforming it to a more useful format.
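As a small example of that "more useful format", the key extraction and an ad-hoc query can be combined into one shell snippet; the query itself is just an illustration, using the messages table and conversationId column from the schema above:

key="$(jq -r '."key"' ~/.config/Signal/config.json)"
sqlcipher ~/.config/Signal/sql/db.sqlite <<EOF
PRAGMA key = "x'$key'";
-- the ten most active conversations by message count
SELECT conversationId, count(*) AS messages
  FROM messages
  GROUP BY conversationId
  ORDER BY messages DESC
  LIMIT 10;
EOF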
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Gunnar Wolf: There once was a miniDebConf in Uruguay...
Meeting Debian people to have a good time together, for some good hacking, for learning, for teaching… is always fun and welcome. It brings energy, life and joy. And this year, due to the six-month-long relocation to Argentina that my family and I decided on, I was unable to attend the real deal, DebConf23 in India.
And while I know DebConf is an experience like no other, this year I took part in two miniDebConfs. One I have already shared in this same blog: I was in MiniDebConf Tamil Nadu in India, followed by some days of pre-DebConf preparation and scouting in Kochi proper, where I got to interact with the absolutely great and loving team that prepared DebConf.
The other one is still ongoing (but close to finishing). Some months ago, I talked with Santiago Ruano, joking as we were two Spanish-speaking DDs announcing on the debian-private mailing list that we’d be relocating to somewhere around the Río de la Plata. And things… worked out normally: he has already been in Uruguay for several months, so he decided to rent a house for some days and invite Debian people to do what we do best.
I left Paraná Tuesday night (and missed my online class at UNAM! Well, you cannot have everything, right?). I arrived early on Wednesday, and around noon came to the house of the keysigning (well, the place is properly called “Casa Key”, it’s a publicity agency that is also rented as a guesthouse in a very nice area of Montevideo, close to Nuevo Pocitos beach).
In case you don’t know it, Montevideo is on the Northern (or Eastern) shore of the Río de la Plata, the widest river in the world (up to 300 km wide, with current and non-salty water). But most important for some Debian contributors: you can even come here by boat!
That first evening, we received Ilu, who was in Uruguay by chance for other issues (and we were very happy about it!) and a young and enthusiastic Uruguayan, Felipe, interested in getting involved in Debian. We spent the evening talking about life, the universe and everything… Which was a bit tiring, as I had to interface between Spanish and English, talking with two friends that didn’t share a common language 😉
On Thursday morning, I went out for an early walk at the beach. And lets say, if only just for the narrative, that I found a lost penguin emerging from Río de la Plata!
For those that don’t know (which would be most of you, as he has not been seen at Debian events for 15 years), that’s Lisandro Damián Nicanor Pérez Meyer (or just lisandro), long-time maintainer of the Qt ecosystem, and one of our embedded-world extraordinaires. So, after we got him dry and fed him fresh river fish, he gave us a great impromptu talk about understanding and finding our way around the Device Tree Source files for development boards and similar machines, mostly in the ARM world.
From Argentina, we also had Emanuel (eamanu) crossing all the way from La Rioja.
I spent most of our first workday getting ① my laptop in shape to be useful as the driver for my online class on Thursday (which is no small feat — people that know the particularities of my much loved ARM-based laptop will understand), and ② running a set of tests again on my Raspberry Pi laboratory, which I had not updated in several months.
I am happy to say we are finally also building Raspberry Pi images for Trixie (Debian 13, Testing)! Sadly, I managed to burn my USB-to-serial-console (UART) adaptor, and could neither test those, nor the oldstable ones we are still building (which will probably soon be dropped, if for nothing else, to save disk space).
We enjoyed a lot of socialization time. An important highlight of the conference for me was that we reconnected with a long-lost DD, Eduardo Trápani, and got him interested in getting involved in the project again! This second day, another local Uruguayan, Mauricio, joined us together with his girlfriend, Alicia, and Felipe came again to hang out with us. Sadly, we didn’t get photographic evidence of them (nor the permission to post it).
The nice house Santiago got for us was very well equipped for a miniDebConf. There were a couple of rounds of pool played by those that enjoyed it (I was very happy just to stand around, take some photos and enjoy the atmosphere and the conversation).
Today (Saturday) is the last full-house day of miniDebConf; tomorrow we will be leaving the house by noon. It was also a very productive day! We had a long, important conversation about an important discussion that we are about to present on debian-vote@lists.debian.org.
It has been a great couple of days! Sadly, it’s coming to an end… But this at least gives me the opportunity (and moral obligation!) to write a long blog post. And to thank Santiago for organizing this, and Debian for sponsoring our trip, stay, food and healthy enjoyment!
Matthias Klumpp: AppStream 1.0 released!
Today, 12 years after the meeting where AppStream was first discussed, and 11 years after I released a prototype implementation, I am excited to announce AppStream 1.0!
Check it out on GitHub, or get the release tarball or read the documentation or release notes!
Some nostalgic memories
I was not at the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students’ lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise; I watched parts of the blurry recording later).
I was extremely passionate about getting software deployment to work better on Linux and to improve the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others.
At the time I was writing a software deployment tool called Listaller – this was before Linux containers were a thing, and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but for real and better this time as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream.
Back then I saw AppStream as a necessary side project for my actual project, and didn’t even consider myself its maintainer for quite a while (I hadn’t been at the meeting, after all). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing that would show up later with an even more modern design, called Flatpak. I also had no idea how incredibly complex AppStream would become, how many features it would have, how much more maintenance work it would be – and also not how ubiquitous it would become.
The modern Linux desktop uses AppStream everywhere now: it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard’s fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many more places I do not know of yet.
What is new in 1.0?
API breaks
The most important thing that’s new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed and a bunch of other changes have been made to improve the overall API and especially make it more binding-friendly. That doesn’t mean that the API is completely new and nothing looks like before, though; when possible, the previous API design was kept and some changes that would have been too disruptive have not been made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+.
For the XML specification, some older compatibility markup that had no or very few users has been removed as well. This affects for example release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: if your XML validated with no warnings with the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release.
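A cheap way to check where you stand before upgrading is to run the validator over your existing metadata (the file name below is just an example):

# Validate a MetaInfo file against the AppStream specification; issues are
# reported with an explanation and a severity.
appstreamcli validate org.example.App.metainfo.xml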
Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant; you cannot make it generate data for versions below that (this greatly reduces the maintenance cost of the project).
Developer element
For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this is changed a bit. There is now a developer tag with a name child (that can be translated unless the translate="no" attribute is set on it). This allows future extensibility, and also allows setting a machine-readable id attribute in the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the per-app developer information in future, this is also now possible. Do not worry though: the developer_name tag is still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.
Scale factor for screenshots
Screenshot images can now have a scale attribute, to indicate an (integer) scaling factor to apply. This feature was a breaking change and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become deployed more widespread though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.
Screenshot environments
It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.
References tag
This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital Object Identifier) or provide a link to a CFF file to provide citation information. It also allows linking to other scientific registries. Check out the references tag specification for more details.
Release tags
Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.
Multi-platform support
Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this!
Better compatibility checks
For a long time I thought that the AppStream library should just be a thin layer above the XML and that software centers should implement a lot of the actual logic. This has not been the case for a while, but there were still a lot of complex AppStream features that were hard for software centers to implement and where it makes sense to have one implementation that projects can just use.
The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just run as_component_check_relations and retrieve a detailed list of whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out!
With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.
So much more!
The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.
Outlook
I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints.
So, what’s in it for the future? Contrary to what I thought, AppStream does not really seem to be “done” and feature complete at some point; there is always something to improve, and people come up with new use cases all the time. So, expect more of the same in future: bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature.
Onwards to 1.0.1!
Reproducible Builds: Reproducible Builds in October 2023
Welcome to the October 2023 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.
Reproducible Builds Summit 2023
Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany!
Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort, and this instance was no different.
During this enriching event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. A number of concrete outcomes from the summit will be documented in the report for November 2023 and elsewhere.
Amazingly the agenda and all notes from all sessions are already online.
The Reproducible Builds team would like to thank our event sponsors who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.
Russ Cox posted a fascinating article on his blog prompted by the fortieth anniversary of Ken Thompson’s award-winning paper, Reflections on Trusting Trust:
[…] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically “can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?”
Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code… which Russ requests and subsequently dissects in great but accessible detail.
Rahul Bajaj, Eduardo Fernandes, Bram Adams and Ahmed E. Hassan from the Maintenance, Construction and Intelligence of Software (MCIS) laboratory within the School of Computing, Queen’s University in Ontario, Canada have published a paper on the “Time to fix, causes and correlation with external ecosystem factors” of unreproducible builds.
The authors compare various response times within the Debian and Arch Linux distributions including, for example:
Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.
A full PDF of their paper is available online, as are many other interesting papers on MCIS’ publication page.
On the NixOS Discourse instance, Arnout Engelen (raboof) announced that NixOS has created an independent, bit-for-bit identical rebuild of the nixos-minimal image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:
You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn’t until this week that we were back to having everything reproducible and being able to validate the complete chain.
Congratulations to the NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.
Seth Larson published a blog post investigating the reproducibility of the CPython source tarballs. Using diffoscope, reprotest and other tools, Seth documents his work that led to a pull request to make these files reproducible which was merged by Łukasz Langa.
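For anyone wanting to reproduce this kind of analysis, the usual workflow is to build the artifact twice and feed both results to diffoscope; the file names below are purely illustrative:

# Recursively compare two independently built tarballs and write a
# human-readable report of every difference found.
diffoscope --text differences.txt first-build/Python-3.12.0.tgz second-build/Python-3.12.0.tgz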
Long-time sponsor of the project, Codethink, has generously replaced our old “Moonshot-Slides” machines, which they have hosted since 2016, with new KVM-based arm64 hardware. Holger Levsen integrated these new nodes into the Reproducible Builds’ continuous integration framework.
On our mailing list during October 2023 there were a number of threads, including:
- Vagrant Cascadian continued a thread about the implementation details of a “snapshot” archive server required for reproducing previous builds. […]
- Akihiro Suda shared an update on BuildKit, a toolkit for building Docker container images. Akihiro links to an interesting talk they recently gave at DockerCon titled Reproducible builds with BuildKit for software supply-chain security.
- Alex Zakharov started a thread discussing and proposing fixes for various tools that create ext4 filesystem images. […]
Elsewhere, Pol Dellaiera made a number of improvements to our website, including fixing typos and links […][…], adding a NixOS “Flake” file […] and sorting our publications page by date […].
Vagrant Cascadian presented Reproducible Builds All The Way Down at the Open Source Firmware Conference.
distro-info is a Debian-oriented tool that can provide information about Debian (and Ubuntu) distributions, such as their codenames (e.g. bookworm) and so on. This month, Benjamin Drung uploaded a new version of distro-info that added support for the SOURCE_DATE_EPOCH environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues.
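As a hedged illustration of how SOURCE_DATE_EPOCH is typically wired up in Debian packaging (the variable is commonly derived from the latest debian/changelog entry so that repeated builds see the same “current” time; how a particular tool consumes it is for that tool’s documentation to say):

# Derive a deterministic timestamp from the changelog and export it so that
# tools honouring SOURCE_DATE_EPOCH use it instead of the wall clock.
export SOURCE_DATE_EPOCH=$(dpkg-parsechangelog -STimestamp)
debian-distro-info --stable   # an illustrative consumer of the variable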
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
- Bernhard M. Wiedemann:
- edje_cc (race condition)
- elasticsearch (build failure)
- erlang-retest (embedded .zip timestamp)
- fdo-client (embeds private keys)
- fftw3 (random ordering)
- gsoap (date issue)
- gutenprint (date)
- hub/golang (embeds random build path)
- Hyprland (filesystem issue)
- kitty (sort-related issue, .tar file embeds modification time)
- libpinyin (ASLR)
- maildir-utils (date embedded in copyright)
- mame (order-related issue)
- mingw32-binutils & mingw64-binutils (date)
- MooseX (date from perl-MooseX-App)
- occt (sorting issue)
- openblas (embeds CPU count)
- OpenRGB (corruption-related issue)
- python-numpy (random file names)
- python-pandas (FTBFS)
- python-quantities (date)
- python3-pyside2 (order)
- qemu (date and Sphinx issue)
- qpid (sorting problem)
- rakudo (filesystem ordering issue)
- SLOF (date-related issue)
- spack (CPU counting issue)
- xemacs-packages (date-related issue)
- Chris Lamb:

In addition, Chris Lamb fixed an issue in diffoscope where, if the equivalent of file -i returns text/plain, it falls back to comparing as a text file. This was originally filed as Debian bug #1053668 by Niels Thykier. […] This was then uploaded to Debian (and elsewhere) as version 251.
The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:
- Debian-related changes:
- Refine the handling of package blacklisting, such as sending blacklisting notifications to the #debian-reproducible-changes IRC channel. […][…][…]
- Install systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). […]
- Detect more cases of failures to delete schroots. […]
- Document various bugs in bookworm which are (currently) being manually worked around. […]
- Node-related changes:
- Monitoring-related changes:
- Misc:
In addition, Vagrant Cascadian added some packages and configuration for snapshot experiments. […]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
- IRC: #reproducible-builds on irc.oftc.net.
- Mailing list: rb-general@lists.reproducible-builds.org
- Mastodon: @reproducible_builds
- Twitter: @ReproBuilds
Jonathan Dowland: Plato document reader
[Images: Kobo Libra 2; text-handling in Plato]
Until now, I haven't hacked my Kobo Libra 2 ereader, despite knowing it is a relatively open device. The default document reader (Nickel) does everything I need it to. Syncing the books via USB is tedious, but I don't do it that often.
Via Videah's blog post My E-Reader Setup, I learned of Plato, an alternative document reader.
Plato doesn't really offer any headline features that I need, but it cost me nothing to try it out, so I installed it (fairly painlessly) and launched it just once. The library view seems good, although I've not used it much: I picked a book and read it through[1], and I'm 60% through another[2]. I tend to read one ebook at a time.
The main reader interface is great: Just the text[3]. Page transitions are really, really fast. Tweaking the backlight intensity is a little slower than Nickel: menu-driven rather than an active scroll region (which is convenient in Nickel but easy to accidentally turn to 0% and hard to recover from in pitch black).
Now that I've started down the road of hacking the Kobo, I think I will explore wifi-syncing the library, perhaps using a variation on the hook scripts shared in Videah's blog post.
[1] Venomous Lumpsucker by Ned Beauman. It's fantastic. Guardian review
[2] There Is No Antimemetics Division by qntm
[3] I do miss Nickel's tiny progress bar somewhat: the only non-text bit of UX I left turned on.
Scarlett Gately Moore: KDE: Krita 5.2.1 Snap! KDE Gear 23.08.3 Snaps and KDE neon release
KDE Gear 23.08.3 was released today: https://kde.org/announcements/gear/23.08.3/ !
I have finished all the snaps and have released them to the stable channel; if the snap you are looking for hasn’t arrived yet, there is an MR open and it will arrive soon!
I have finished all the applications in KDE neon and they are available in Unstable; I am snapshotting the User edition and they will be available shortly.
Krita 5.2.1 Snap is complete and released to stable channel!
Enjoy!
I fixed some issues with a few of our --classic snaps, namely in Wayland sessions, by bundling some missing Wayland Qt libs. They should no longer go BOOM upon launch.
KF6 SDK snap is complete. Next freetime I will work on runtime and launcher.
Down to the last part on the akonadi snap build so PIM snaps are coming soon.
Personal:
As many of you know, I have been out of proper employment for a year now. I had a hopeful project in the works, but it is out of my hands now and the new project holder was only allowed to give me part time, and it is still not set in stone, with further delays. I understand that these things take time and refinement to go through. I have put myself and my family in dire straits with my stubbornness and need to re-evaluate my priorities. I enjoy doing this work very much, but I also need to pay some very overdue bills and, well, life costs money. With that said, I hope to have an interview next week with a local hospital that needs a Linux Administrator. Who knew someone in nowhere Arizona would have a Linux shop! Anyway, I will be going back to my grass roots; network administration is where I started way back in 1996. I will still be around! Just not at the level I am now, obviously. I will still be in the project if they allow; I need 2 jobs to clean up this mess I have made for myself. In my spare time I will of course keep up with Debian, KDE neon and Snaps!
If you can spare any change to help with my gas for the interview and the 45-minute commute until I get a paycheck, I would be super grateful. Hopefully I won’t have to ask for much longer. Thank you so much to everyone that has helped over the last year, it means the world to me.
Petter Reinholdtsen: New chrpath release 0.17
The chrpath package provides a simple command line tool to remove or modify the rpath or runpath of compiled ELF programs; a short usage sketch follows the release notes below. It is almost 10 years since I updated the code base, but I stumbled over the tool today and decided it was time to move the code base from Subversion to git and find a new home for it, as the previous one (Debian Alioth) has been shut down. I decided to go with Codeberg this time, as it is my git service of choice these days, did a quick and dirty migration to git, and updated the code with a few patches I found in the Debian bug tracker. These are the release notes:
New in 0.17 released 2023-11-10:
- Moved project to Codeberg, as Alioth is shut down.
- Add Solaris support (use <sys/byteorder.h> instead of <byteswap.h>). Patch from Rainer Orth.
- Added missing newline from printf() line. Patch from Frank Dana.
- Corrected handling of multiple ELF sections. Patch from Frank Dana.
- Updated build rules for .deb. Partly based on patch from djcj.
The latest edition is tagged and available from https://codeberg.org/pere/chrpath.
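If you have never used the tool, a quick illustrative session looks roughly like this (the binary name and paths are made up; note that chrpath can only change or remove an existing rpath/runpath entry, and the replacement cannot be longer than the original):

chrpath -l ./myprog                  # list the current rpath/runpath, if any
chrpath -r /opt/myapp/lib ./myprog   # replace it with a new (not longer) path
chrpath -d ./myprog                  # delete the entry entirely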
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Steinar H. Gunderson: systemd shower thought
Something I thought of recently: It would be pretty nice if you could somehow connect screen (or tmux) to systemd units, so that you could just connect to running daemons and interact with them as if you started them in the foreground. (Nothing for Apache, of course, but there are certainly programs you'd like systemd to manage but that have more fancy TUIs.)
Jonathan Dowland: gitsigns (useful neovim plugins)
gitsigns is a Neovim plugin which adds a wonderfully subtle colour annotation in the left-hand gutter to reflect changes in the buffer since the last git commit[1].
My long-term habit with Vim and Git is to frequently background Vim (^Z) to invoke the git command directly and then foreground Vim again (fg). Over the last few years I've been trying more and more to call vim-fugitive from within Vim instead. (I still do rebases and most merges the old-fashioned way). For the most part Gitsigns is a nice passive addition to that, but it can also do a lot of useful things that Fugitive also does. Previewing changed hunks in a little floating window, in particular when resolving an awkward merge conflict, is very handy.
The above picture shows it in action. I've changed two lines and added another three in the top-left buffer. Gitsigns tries to choose colours from the currently active colour scheme (in this case, tender).
[1] By default Gitsigns shows changes since the last commit, as git diff does, but you can easily switch out the base from which it compares.
Melissa Wen: AMD Driver-specific Properties for Color Management on Linux (Part 2)
This blog post explores the color capabilities of AMD hardware and how they are exposed to userspace through driver-specific properties. It discusses the different color blocks in the AMD Display Core Next (DCN) pipeline and their capabilities, such as predefined transfer functions, 1D and 3D lookup tables (LUTs), and color transformation matrices (CTMs). It also highlights the differences in AMD HW blocks for pre and post-blending adjustments, and how these differences are reflected in the available driver-specific properties.
Overall, this blog post provides a comprehensive overview of the color capabilities of AMD hardware and how they can be controlled by userspace applications through driver-specific properties. This information is valuable for anyone who wants to develop applications that can take advantage of the AMD color management pipeline.
Get a closer look at each hardware block’s capabilities, unlock a wealth of knowledge about AMD display hardware, and enhance your understanding of graphics and visual computing. Stay tuned for future developments as we embark on a quest for GPU color capabilities in the ever-evolving realm of rainbow treasures.
Operating Systems can use the power of GPUs to ensure consistent color reproduction across graphics devices. We can use GPU-accelerated color management to manage the diversity of color profiles, do color transformations to convert between High-Dynamic-Range (HDR) and Standard-Dynamic-Range (SDR) content, and apply color enhancements for wide color gamut (WCG). However, to make use of GPU display capabilities, we need an interface between userspace and the kernel display drivers that is currently absent in the Linux/DRM KMS API.
In the previous blog post I presented how we are expanding the Linux/DRM color management API to expose specific properties of AMD hardware. Now, I’ll guide you to the color features for the Linux/AMD display driver. We embark on a journey through DRM/KMS, AMD Display Manager, and AMD Display Core and delve into the color blocks to uncover the secrets of color manipulation within AMD hardware. Here we’ll talk less about the color tools and more about where to find them in the hardware.
We resort to driver-specific properties to reach AMD hardware blocks with color capabilities. These blocks display features like predefined transfer functions, color transformation matrices, and 1-dimensional (1D LUT) and 3-dimensional lookup tables (3D LUT). Here, we will understand how these color features are strategically placed into color blocks both before and after blending in Display Pipe and Plane (DPP) and Multiple Pipe/Plane Combined (MPC) blocks.
That said, welcome back to the second part of our thrilling journey through AMD’s color management realm!
AMD Display Driver in the Linux/DRM Subsystem: The Journey

In my 2022 XDC talk “I’m not an AMD expert, but…”, I briefly explained the organizational structure of the Linux/AMD display driver where the driver code is bifurcated into a Linux-specific section and a shared-code portion. To reveal AMD’s color secrets through the Linux kernel DRM API, our journey led us through these layers of the Linux/AMD display driver’s software stack. It includes traversing the DRM/KMS framework, the AMD Display Manager (DM), and the AMD Display Core (DC) [1].
The DRM/KMS framework provides the atomic API for color management through KMS properties represented by struct drm_property. We extended the color management interface exposed to userspace by leveraging existing resources and connecting them with driver-specific functions for managing modeset properties.
On the AMD DC layer, the interface with hardware color blocks is established. The AMD DC layer contains OS-agnostic components that are shared across different platforms, making it an invaluable resource. This layer already implements hardware programming and resource management, simplifying the external developer’s task. While examining the DC code, we gain insights into the color pipeline and capabilities, even without direct access to specifications. Additionally, AMD developers provide essential support by answering queries and reviewing our work upstream.
The primary challenge involved identifying and understanding relevant AMD DC code to configure each color block in the color pipeline. However, the ultimate goal was to bridge the DC color capabilities with the DRM API. For this, we changed the AMD DM, the OS-dependent layer connecting the DC interface to the DRM/KMS framework. We defined and managed driver-specific color properties, facilitated the transport of user space data to the DC, and translated DRM features and settings to the DC interface. Considerations were also made for differences in the color pipeline based on hardware capabilities.
Exploring Color Capabilities of the AMD display hardware

Now, let’s dive into the exciting realm of AMD color capabilities, where an abundance of techniques and tools await to make your colors look extraordinary across diverse devices.
First, we need to know a little about the color transformation and calibration tools and techniques that you can find in different blocks of the AMD hardware. I borrowed some images from [2] [3] [4] to help you understand the information.
Predefined Transfer Functions (Named Fixed Curves):
Transfer functions serve as the bridge between the digital and visual worlds, defining the mathematical relationship between digital color values and linear scene/display values and ensuring consistent color reproduction across different devices and media. You can learn more about curves in the chapter GPU Gems 3 - The Importance of Being Linear by Larry Gritz and Eugene d’Eon.
ITU-R 2100 introduces three main types of transfer functions:
- OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
- EOTF: electro-optical transfer function, which converts the video signal into the linear light output of the display.
- OOTF: opto-optical transfer function, which has the role of applying the “rendering intent”.
AMD’s display driver supports the following pre-defined transfer functions (aka named fixed curves); two of them are sketched in their standard form just after this list:
- Linear/Unity: linear/identity relationship between pixel value and luminance value;
- Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
- sRGB: the piece-wise transfer function from IEC 61966-2-1:1999 (approximately gamma 2.4);
- BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
- PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
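For concreteness, here is what two of these curves look like in their standard form (these are the textbook formulas from the cited standards, not anything AMD-specific): a pure power-function EOTF such as Gamma 2.2, and the BT.709 OETF with its linear toe.

L = V^{2.2} \qquad \text{(pure Gamma 2.2 EOTF: encoded value } V \in [0,1] \text{ to linear light } L\text{)}

V = \begin{cases} 4.5\,L & 0 \le L < 0.018 \\ 1.099\,L^{0.45} - 0.099 & 0.018 \le L \le 1 \end{cases} \qquad \text{(BT.709 OETF)}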
These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD’s color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.
1D LUTs (1-dimensional Lookup Table):
A 1D LUT is a versatile tool, defining a one-dimensional color transformation based on a single parameter. It’s very well explained by Jeremy Selan at GPU Gems 2 - Chapter 24 Using Lookup Tables to Accelerate Color Transformations.
It enables adjustments to color, brightness, and contrast, making it ideal for fine-tuning. In the Linux AMD display driver, the atomic API offers a 1D LUT with 4096 entries and 8-bit depth, while legacy gamma uses a size of 256.
3D LUTs (3-dimensional Lookup Table):
These tables work in three dimensions – red, green, and blue. They’re perfect for complex color transformations and adjustments between color channels. They’re also more complex to manage and require more computational resources. Jeremy also explains 3D LUTs in GPU Gems 2 - Chapter 24 Using Lookup Tables to Accelerate Color Transformations.
CTM (Color Transformation Matrices):
Color transformation matrices facilitate the transition between different color spaces, playing a crucial role in color space conversion.
HDR Multiplier:
The HDR multiplier is a factor applied to the color values of an image to increase their overall brightness.
AMD Color Capabilities in the Hardware Pipeline

First, let’s take a closer look at the AMD Display Core Next hardware pipeline in the Linux kernel documentation for the AMDGPU driver - Display Core Next.
In the AMD Display Core Next hardware pipeline, we encounter two hardware blocks with color capabilities: the Display Pipe and Plane (DPP) and the Multiple Pipe/Plane Combined (MPC). The DPP handles color adjustments per plane before blending, while the MPC engages in post-blending color adjustments. In short, we expect DPP color capabilities to match up with DRM plane properties, and MPC color capabilities to play nice with DRM CRTC properties.
Note: here’s the catch – there are some DRM CRTC color transformations that don’t have a corresponding AMD MPC color block, and vice versa. It’s like a puzzle, and we’re here to solve it!
AMD Color Blocks and Capabilities

We can finally talk about the color capabilities of each AMD color block. As they vary based on the generation of hardware, let’s take the DCN3+ family as reference. What’s possible to do before and after blending depends on hardware capabilities described in the kernel driver by struct dpp_color_caps and struct mpc_color_caps.
The AMD Steam Deck hardware provides a tangible example of these capabilities. Therefore, we take the SteamDeck/DCN301 driver as an example and look at the “Color pipeline capabilities” described in the file drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c:
/* Color pipeline capabilities */

dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for degamma curve (hardcoded and LUT). `Gamma correction` is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New `Gamma Correction` block for degamma user LUT
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve.
dc->caps.color.dpp.ogam_ram = 1; // `Blend Gamma` block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;

dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.

I included some inline comments in each element of the color caps to quickly describe them, but you can find the same information in the Linux kernel documentation. See more in struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps.
Now, using this guideline, we go through color capabilities of DPP and MPC blocks and talk more about mapping driver-specific properties to corresponding color blocks.
DPP Color Pipeline: Before Blending (Per Plane)

Let’s explore the capabilities of DPP blocks and what you can achieve with each color block. The very first thing to pay attention to is the display architecture of the hardware: previously AMD used a display architecture called DCE - Display and Compositing Engine - but newer hardware follows DCN - Display Core Next.
The architecture is described by: dc->caps.color.dpp.dcn_arch
AMD Plane Degamma: TF and 1D LUT

Described by: dc->caps.color.dpp.dgam_ram, dc->caps.color.dpp.dgam_rom_caps, dc->caps.color.dpp.gamma_corr
AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It is utilized to transition from scanout/encoded values to linear values for arithmetic operations. Plane Degamma supports both pre-defined transfer functions and 1D LUTs, depending on the hardware generation. DCN2 and older families handle both types of curve in the Degamma RAM block (dc->caps.color.dpp.dgam_ram); DCN3+ separates hardcoded curves and the 1D LUT into two blocks: the Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma correction block (dc->caps.color.dpp.gamma_corr), respectively.
Pre-defined transfer functions:
- they are hardcoded curves (read-only memory - ROM);
- supported curves: sRGB EOTF, BT.709 inverse OETF, PQ EOTF and HLG OETF, Gamma 2.2, Gamma 2.4 and Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. Setting TF = Identity/Default and LUT as NULL means bypass.
References:
- [PATCH v3 04/32] drm/amd/display: add driver-specific property for plane degamma LUT
- [PATCH v3 05/32] drm/amd/display: add plane degamma TF driver-specific property
- [PATCH v3 19/32] drm/amd/display: add plane degamma TF and LUT support
AMD Plane CTM data goes to the DPP Gamut Remap block, supporting a 3x4 fixed point (s31.32) matrix for color space conversions. The data is interpreted as a struct drm_color_ctm_3x4. Setting NULL means bypass.
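A short aside on the fixed-point encoding, as I understand the DRM convention for CTM blobs (I am assuming the 3x4 variant uses the same per-coefficient layout as struct drm_color_ctm): each coefficient is a 64-bit sign-magnitude value, with the sign in bit 63 and a 31.32 magnitude below it, i.e.

c = (-1)^{b_{63}} \cdot \frac{m}{2^{32}}, \qquad m = \text{bits } 62 \ldots 0

so, for example, 1.0 would be encoded as 0x0000000100000000 and -0.5 as 0x8000000080000000.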
References:
- [PATCH v3 30/32] drm/amd/display: add plane CTM driver-specific property
- [PATCH v3 31/32] drm/amd/display: add plane CTM support
- [PATCH v3 32/32] drm/amd/display: Add 3x4 CTM support for plane CTM
Described by: dc->caps.color.dpp.hw_3d_lut
The Shaper block fine-tunes color adjustments before applying the 3D LUT, optimizing the use of the limited entries in each dimension of the 3D LUT. On AMD hardware, a 3D LUT always means a preceding shaper 1D LUT used for delinearizing and/or normalizing the color space before applying a 3D LUT, so this entry on DPP color caps dc->caps.color.dpp.hw_3d_lut means support for both shaper 1D LUT and 3D LUT.
Pre-defined transfer functions enable delinearizing content with or without a shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper curves go from linear values to encoded values. If we are already in a non-linear space and/or don’t need to normalize values, we can set an Identity TF for the shaper, which works similarly to bypass and is also the default TF value.
Pre-defined transfer functions:
- there is no DPP Shaper ROM. Curves are calculated by AMD color modules. Check calculate_curve() function in the file amd/display/modules/color/color_gamma.c.
- supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting Plane Shaper TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that’s actually programmed. Setting TF = Identity/Default and LUT as NULL works as bypass.
References:
- [PATCH v3 10/32] drm/amd/display: add plane shaper LUT and TF
- [PATCH v3 23/32] drm/amd/display: add plane shaper LUT support
- [PATCH v3 24/32] drm/amd/display: add plane shaper TF support
Described by: dc->caps.color.dpp.hw_3d_lut
The 3D LUT in the DPP block facilitates complex color transformations and adjustments. A 3D LUT is a three-dimensional array where each element is an RGB triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the DPP 3D LUT is supported.
The AMD driver-specific property advertises the size of a single dimension via the LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted as an array of struct drm_color_lut elements and the number of entries is LUT3D_SIZE cubed. The array contains samples from the approximated function; values between samples are estimated by tetrahedral interpolation. The array is accessed with three indices, one for each input dimension (color channel), blue being the outermost dimension and red the innermost. This distribution is better visualized when examining the code in [RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+ for (nib = 0; nib < 17; nib++) {
+     for (nig = 0; nig < 17; nig++) {
+         for (nir = 0; nir < 17; nir++) {
+             ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+             rgb_area[ind].red   = rgb_lib[ind_lut + 0];
+             rgb_area[ind].green = rgb_lib[ind_lut + 1];
+             rgb_area[ind].blue  = rgb_lib[ind_lut + 2];
+             ind++;
+         }
+     }
+ }

In our driver-specific approach we opted to advertise its behavior to userspace instead of implicitly dealing with it in the kernel driver. AMD’s hardware supports 3D LUTs with 17-size or 9-size (4913 and 729 entries respectively), and you can choose between 10-bit or 12-bit. In the current driver-specific work we focus on enabling only the 17-size 12-bit 3D LUT, as in [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support:
+ /* Stride and bit depth are not programmable by API yet.
+  * Therefore, only supports 17x17x17 3D LUT (12-bit).
+  */
+ lut->lut_3d.use_tetrahedral_9 = false;
+ lut->lut_3d.use_12bits = true;
+ lut->state.bits.initialized = 1;
+ __drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+                         lut->lut_3d.use_tetrahedral_9,
+                         MAX_COLOR_3DLUT_BITDEPTH);

A refined control of 3D LUT parameters should go through a follow-up version or a generic API.
Setting 3D LUT to NULL means bypass.
References:
- [PATCH v3 09/32] drm/amd/display: add plane 3D LUT driver-specific properties
- [PATCH v3 25/32] drm/amd/display: add plane 3D LUT support
Described by: dc->caps.color.dpp.ogam_ram
The Blend/Out Gamma block applies the final touch-up before blending, allowing users to linearize content after the 3D LUT and just before the blending. It supports both 1D LUT and pre-defined TF. We can see Shaper and Blend LUTs as 1D LUTs that sandwich the 3D LUT. So, if we don’t need 3D LUT transformations, we may want to only use the Degamma block to linearize and skip Shaper, 3D LUT and Blend.
Pre-defined transfer function:
- there is no DPP Blend ROM. Curves are calculated by AMD color modules;
- supported curves: Identity, sRGB EOTF, BT.709 inverse OETF, PQ EOTF, HLG inverse OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, AMD color module will combine the user LUT values with pre-defined TF into the LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL means bypass.
References:
- [PATCH v3 11/32] drm/amd/display: add plane blend
- [PATCH v3 27/32] drm/amd/display: add plane blend LUT and TF support
The degamma lookup table (LUT) converts framebuffer pixel data before applying the color conversion matrix. The data is interpreted as an array of struct drm_color_lut elements. Setting NULL means bypass.
Not really supported. The driver is currently reusing the DPP degamma LUT block (dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) for supporting the DRM CRTC Degamma LUT, as explained in [PATCH v3 20/32] drm/amd/display: reject atomic commit if setting both plane and CRTC degamma.
DRM CRTC 3x3 CTM

Described by: dc->caps.color.mpc.gamut_remap
It sets the current transformation matrix (CTM) applied to pixel data after the lookup through the degamma LUT and before the lookup through the gamma LUT. The data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.
DRM CRTC Gamma 1D LUT + AMD CRTC Gamma TF

Described by: dc->caps.color.mpc.ogam_ram
After all that, you might still want to convert the content to wire encoding. No worries: in addition to the DRM CRTC 1D LUT, we’ve got an AMD CRTC gamma transfer function (TF) to make it happen. Possible TF values are defined by enum amdgpu_transfer_function.
Pre-defined transfer functions:
- there is no MPC Gamma ROM. Curves are calculated by AMD color modules.
- supported curves: Identity, sRGB inverse EOTF, BT.709 OETF, PQ inverse EOTF, HLG OETF, and Gamma 2.2, Gamma 2.4, Gamma 2.6 inverse EOTF.
The 1D LUT currently accepts 4096 entries of 8-bit. The data is interpreted as an array of struct drm_color_lut elements. When setting CRTC Gamma TF (!= Identity) and LUT at the same time, the color module will combine the pre-defined TF and the custom LUT values into the LUT that’s actually programmed. Setting TF = Identity/Default and LUT to NULL means bypass.
References:
- [PATCH v3 12/32] drm/amd/display: add CRTC gamma TF driver-specific property
- [PATCH v3 15/32] drm/amd/display: add CRTC gamma TF support
We have previously worked on exposing a CRTC shaper and a CRTC 3D LUT, but they were removed from the AMD driver-specific color series because they lack a userspace use case. The CRTC shaper and 3D LUT work similarly to the plane shaper and 3D LUT, but after blending (MPC block). The difference here is that setting (rather than bypassing) the Shaper and Gamma blocks together is not expected, since both blocks are used to delinearize the input space. In summary, we either set Shaper + 3D LUT or Gamma.
Input and Output Color Space Conversion

There are two other color capabilities of AMD display hardware that were integrated into DRM by previous work and are worth a brief explanation here. The DC Input CSC sets pre-defined coefficients from the values of the DRM plane color_range and color_encoding properties. It is used for color space conversion of the input content. On the other hand, the DC Output CSC (OCSC) sets pre-defined coefficients from DRM connector colorspace properties. It is used for color space conversion of the composed image to the one supported by the sink.
References:
- [PATCH] amd/display/dc: Fix COLOR_ENCODING and COLOR_RANGE doing nothing for DCN20+
- [PATCH v6 00/13] Enable Colorspace connector property in amdgpu
If you want to understand a little more about this work, be sure to watch the two talks Joshua and I presented at XDC 2023 about AMD/Steam Deck colors on Gamescope:
- The rainbow treasure map: advanced color management on Linux with AMD/Steam Deck
- Rainbow Frogs: HDR + Color Management in Gamescope/SteamOS
In the time between the first and second part of this blog post, Uma Shankar and Chaitanya Kumar Borah published the plane color pipeline for Intel and Harry Wentland implemented a generic API for DRM based on VKMS support. We discussed these two proposals and the next steps for Color on Linux during the Color Management workshop at XDC 2023, and I briefly shared the workshop results in the 2023 XDC lightning talk session.
The search for rainbow treasures is not over yet! We plan to meet again next year in the 2024 Display Hackfest in Coruña-Spain (Igalia’s HQ) to keep up the pace and continue advancing today’s display needs on Linux.
Finally, a HUGE thank you to everyone who worked with me on exploring AMD’s color capabilities and making them available in userspace.
Matthew Palmer: PostgreSQL Encryption: The Available Options
On an episode of Postgres FM, the hosts had a (very brief) discussion of data encryption in PostgreSQL. While Postgres FM is a podcast well worth a subscribe, the hosts aren’t data security experts, and so as someone who builds a queryable database encryption system, I found the coverage to be somewhat… lacking. I figured I’d provide a more complete survey of the available options for PostgreSQL-related data encryption.
The Status Quo

By default, when you install PostgreSQL, there is no data encryption at all. That means that anyone who gets access to any part of the system can read all the data they have access to.
This is, of course, not peculiar to PostgreSQL: basically everything works much the same way.
What’s stopping an attacker from nicking off with all your data is the fact that they can’t access the database at all. The things that are acting as protection are “perimeter” defences, like putting the physical equipment running the server in a secure datacenter, firewalls to prevent internet randos connecting to the database, and strong passwords.
This is referred to as “tortoise” security – it’s tough on the outside, but soft on the inside. Once that outer shell is cracked, the delicious, delicious data is ripe for the picking, and there’s absolutely nothing to stop a miscreant from going to town and making off with everything.
It’s a good idea to plan your defenses on the assumption you’re going to get breached sooner or later. Having good defence-in-depth includes denying the attacker access to your data even if they compromise the database. This is where encryption comes in.
Storage-Layer Defences: Disk / Volume Encryption

To protect against the compromise of the storage that your database uses (physical disks, EBS volumes, and the like), it’s common to employ encryption-at-rest, such as full-disk encryption, or volume encryption. These mechanisms protect against “offline” attacks, but provide no protection while the system is actually running. And therein lies the rub: your database is always running, so encryption at rest typically doesn’t provide much value.
If you’re running physical systems, disk encryption is essential, but more to prevent accidental data loss, due to things like failing to wipe drives before disposing of them, rather than physical theft. In systems where volume encryption is only a tickbox away, it’s also worth enabling, if only to prevent inane questions from your security auditors. Relying solely on storage-layer defences, though, is very unlikely to provide any appreciable value in preventing data loss.
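For reference, the “tickbox” version of this on a Linux host is usually just a LUKS volume underneath the database’s data directory, something along these lines (the device, mapping name, and mount point are illustrative, not a recommendation for any particular layout):

cryptsetup luksFormat /dev/sdb1            # one-time: encrypt the block device
cryptsetup open /dev/sdb1 pgdata_crypt     # unlock it before starting PostgreSQL
mkfs.ext4 /dev/mapper/pgdata_crypt
mount /dev/mapper/pgdata_crypt /var/lib/postgresql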
Database-Layer Defences: Transparent Database Encryption

If you’ve used proprietary database systems in high-security environments, you might have come across Transparent Database Encryption (TDE). There are also a couple of proprietary extensions for PostgreSQL that provide this functionality.
TDE is essentially encryption-at-rest implemented inside the database server. As such, it has much the same drawbacks as disk encryption: few real-world attacks are thwarted by it. There is a very small amount of additional protection, in that “physical” level backups (as produced by pg_basebackup) are protected, but the vast majority of attacks aren’t stopped by TDE. Any attacker who can access the database while it’s running can just ask for an SQL-level dump of the stored data, and they’ll get the unencrypted data quick as you like.
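The distinction is easy to see with the standard tooling (the database name is made up): a physical backup copies the on-disk files, which TDE or disk encryption does protect, while a logical dump is served by the running server and comes out as plaintext SQL regardless.

pg_basebackup -D /backups/physical     # file-level copy: covered by encryption at rest
pg_dump mydb > /backups/mydb.sql       # SQL-level dump: plaintext, TDE does not help here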
Application-Layer Defences: Field Encryption

If you want to take the database out of the threat landscape, you really need to encrypt sensitive data before it even gets near the database. This is the realm of field encryption, more commonly known as application-level encryption.
This technique involves encrypting each field of data before it is sent to be stored in the database, and then decrypting it again after it’s retrieved from the database. Anyone who gets the data from the database directly, whether via a backup or a direct connection, is out of luck: they can’t decrypt the data, and therefore it’s worthless.
There are, of course, some limitations of this technique.
For starters, every ORM and data mapper out there has rolled their own encryption format, meaning that there’s basically zero interoperability. This isn’t a problem if you build everything that accesses the database using a single framework, but if you ever feel the need to migrate, or use the database from multiple codebases, you’re likely in for a rough time.
The other big problem of traditional application-level encryption is that, when the database can’t understand what data it’s storing, it can’t run queries against that data. So if you want to encrypt, say, your users’ dates of birth, but you also need to be able to query on that field, you need to choose between one or the other: you can’t have both at the same time.
You may think to yourself, “but this isn’t any good, an attacker that breaks into my application can still steal all my data!”. That is true, but security is never binary. The name of the game is reducing the attack surface, making it harder for an attacker to succeed. If you leave all the data unencrypted in the database, an attacker can steal all your data by breaking into the database or by breaking into the application. Encrypting the data reduces the attacker’s options, and allows you to focus your resources on hardening the application against attack, safe in the knowledge that an attacker who gets into the database directly isn’t going to get anything valuable.
Sidenote: The Curious Case of pg_crypto

PostgreSQL ships a “contrib” module called pg_crypto, which provides encryption and decryption functions. This sounds ideal to use for encrypting data within our applications, as it’s available no matter what we’re using to write our application. It avoids the problem of framework-specific cryptography, because you call the same PostgreSQL functions no matter what language you’re using, which produces the same output.
However, I don’t recommend ever using pg_crypto’s data encryption functions, and I doubt you will find many other cryptographic engineers who will, either.
First up, and most horrifyingly, it requires you to pass the long-term keys to the database server. If there’s an attacker actively in the database server, they can capture the keys as they come in, which means all the data encrypted using that key is exposed. Sending the keys can also result in the keys ending up in query logs, both on the client and server, which is obviously a terrible result.
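To make that concrete, here is roughly what such a call looks like from the client side (table, column, and key are made up; pgp_sym_encrypt is one of the module’s symmetric-encryption functions): the long-term key travels inside the SQL statement itself, so the server sees it and it can end up in statement logs on either end.

psql -c "INSERT INTO users (email, dob_enc)
         VALUES ('alice@example.com',
                 pgp_sym_encrypt('1990-01-01', 'super-secret-long-term-key'))"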
Less scary, but still very concerning, is that pg_crypto’s available cryptography is, to put it mildly, antiquated. We have a lot of newer, safer, and faster techniques for data encryption, that aren’t available in pg_crypto. This means that if you do use it, you’re leaving a lot on the table, and need to have skilled cryptographic engineers on hand to avoid the potential pitfalls.
In short: friends don’t let friends use pg_crypto.
The Future: Enquo

All this brings us to the project I run: Enquo. It takes application-layer encryption to a new level, by providing a language- and framework-agnostic cryptosystem that also enables encrypted data to be efficiently queried by the database.
So, you can encrypt your users’ dates of birth, in such a way that anyone with the appropriate keys can query the database to return, say, all users over the age of 18, but an attacker just sees unintelligible gibberish. This should greatly increase the amount of data that can be encrypted, and as the Enquo project expands its available data types and supported languages, the coverage of encrypted data will grow and grow. My eventual goal is to encrypt all data, all the time.
If this appeals to you, visit enquo.org to use or contribute to the open source project, or EnquoDB.com for commercial support and hosted database options.
Vincent Bernat: Non-interactive SSH password authentication
SSH offers several forms of authentication, such as passwords and public keys. The latter are considered more secure. However, password authentication remains prevalent, particularly with network equipment.[1]
A classic solution to avoid typing a password for each connection is sshpass, or its more correct variant passh. Here is a wrapper for Zsh, getting the password from pass, a simple password manager[2]:
pssh() {
  passh -p <(pass show network/ssh/password | head -1) ssh "$@"
}
compdef pssh=ssh

This approach is a bit brittle as it requires parsing the output of the ssh command to look for a password prompt. Moreover, if no password is required, the password manager is still invoked. Since OpenSSH 8.4, we can use SSH_ASKPASS and SSH_ASKPASS_REQUIRE instead:
ssh() {
  set -o localoptions -o localtraps
  local passname=network/ssh/password
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
pass show $passname | head -1
EOF
  chmod u+x $helper
  SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
}

If the password is incorrect, we can display a prompt on the second attempt:
ssh() {
  set -o localoptions -o localtraps
  local passname=network/ssh/password
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
if [ -k $helper ]; then
  {
    oldtty=\$(stty -g)
    trap 'stty \$oldtty < /dev/tty 2> /dev/null' EXIT INT TERM HUP
    stty -echo
    print "\rpassword: "
    read password
    printf "\n"
  } > /dev/tty < /dev/tty
  printf "%s" "\$password"
else
  pass show $passname | head -1
  chmod +t $helper
fi
EOF
  chmod u+x $helper
  SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
}

A possible improvement is to use a different password entry depending on the remote host[3]:
ssh() {
  # Grab login information
  local -A details
  details=(${=${(M)${:-"${(@f)$(command ssh -G "$@" 2>/dev/null)}"}:#(host|hostname|user) *}})
  local remote=${details[host]:-${details[hostname]}}
  local login=${details[user]}@${remote}

  # Get password name
  local passname
  case "$login" in
    admin@*.example.net)  passname=company1/ssh/admin ;;
    bernat@*.example.net) passname=company1/ssh/bernat ;;
    backup@*.example.net) passname=company1/ssh/backup ;;
  esac

  # No password name? Just use regular SSH
  [[ -z $passname ]] && {
    command ssh "$@"
    return $?
  }

  # Invoke SSH with the helper for SSH_ASKPASS
  # […]
}

It is also possible to make scp invoke our custom ssh function:
scp() {
  set -o localoptions -o localtraps
  local helper=$(mktemp)
  trap "command rm -f $helper" EXIT INT
  > $helper <<EOF
#!$SHELL
source ${(%):-%x}
ssh "\$@"
EOF
  command scp -S $helper "$@"
}

For the complete code, have a look at my zshrc.
[1] First, some vendors make it difficult to associate an SSH key with a user. Then, many vendors do not support certificate-based authentication, making it difficult to scale. Finally, interactions between public-key authentication and finer-grained authorization methods like TACACS+ and Radius are still uncharted territory.
[2] The clear-text password never appears on the command line, in the environment, or on the disk, making it difficult for a third party without elevated privileges to capture it. On Linux, Zsh provides the password through a file descriptor.
[3] To decipher the fourth line, you may get help from print -l and the zshexpn(1) manual page. details is an associative array defined from an array alternating keys and values.
Thorsten Alteholz: My Debian Activities in October 2023
This month I accepted 361 and rejected 34 packages. The overall number of packages that got accepted was 362.
Debian LTS

This was my hundred-twelfth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
During my allocated time I uploaded:
- [DLA 3615-1] libcue security update for one CVE to fix an out-of-bounds array access
- [DLA 3631-1] xorg-server security update for two CVEs. These were embargoed issues related to privilege escalation
- [DLA 3633-1] gst-plugins-bad1.0 security update for three CVEs to fix possible DoS or arbitrary code execution when processing crafted media files.
- [1052361] bookworm-pu: the upload has been done and processed for the point release
- [1052363] bullseye-pu: the upload has been done and processed for the point release
Unfortunately upstream still could not resolve whether the patch for CVE-2023-42118 of libspf2 is valid, so no progress happened here.
I also continued to work on bind9 and try to understand why some tests fail.
Last but not least I did some days of frontdesk duties and took part in the LTS meeting.
Debian ELTS

This month was the sixty-third ELTS month. During my allocated time I uploaded:
- [ELA-978-1] cups update in Jessie and Stretch for two CVEs. One issue is related to missing boundary checks which might lead to code execution when using crafted postscript documents. The other issue is related to unauthorized access to recently printed documents.
- [ELA-990-1] xorg-server update in Jessie and Stretch for two CVEs. These were embargoed issues related to privilege escalation.
- [ELA-993-1] gst-plugins-bad1.0 update in Jessie and Stretch for three CVEs to fix possible DoS or arbitrary code execution when processing crafted media files.
I also continued to work on bind9 and as with the version in LTS, I try to understand why some tests fail.
Last but not least I did some days of frontdesk duties.
Debian Printing

This month I uploaded a new upstream version of:
- … rlpr
Within the context of preserving old printing packages, I adopted:
- … lprng
If you know of any other package that is also needed and still maintained by the QA team, please tell me.
I also uploaded new upstream versions of packages or uploaded a package to fix one or the other issue:
- … cups
- … hannah-foo2zjs
This work is generously funded by Freexian!
Debian Mobcom

This month I uploaded a package to fix one or the other issue:
- … osmo-pcu (the bug was filed by Helmut and was related to /usr-merge)
This month I uploaded new upstream versions of packages, did a source upload for the transition, or uploaded packages to fix one or the other issue: