FLOSS Project Planets

Dropsolid Experience Agency: Why Drupal is a Shark

Planet Drupal - Sat, 2024-04-13 14:24
Drupal has been able to adapt and stay the top choice for open enterprise CMS, especially as the key component in the modern digital experience platform (DXP).
Categories: FLOSS Project Planets

Dropsolid Experience Agency: How to create a DXP (Digital Experience Platform) with Drupal

Planet Drupal - Sat, 2024-04-13 14:24
Are your customers asking for a DXP instead of a CMS? Are your developers struggling to build personalized digital experiences wit
Categories: FLOSS Project Planets

Dropsolid Experience Agency: Contributing to Open Source, what's your math?

Planet Drupal - Sat, 2024-04-13 14:24
1xINTERNET made a great post about calculating how much yo
Categories: FLOSS Project Planets

Dropsolid Experience Agency: ngrok: testing payment gateways in Drupal commerce

Planet Drupal - Sat, 2024-04-13 14:24
Building commerce websites always means building integrations.
Categories: FLOSS Project Planets

Dropsolid Experience Agency: Online payments in Drupal

Planet Drupal - Sat, 2024-04-13 14:24
In this day and age, it’s very hard to imagine a world without online payments.
Categories: FLOSS Project Planets

Simon Josefsson: Reproducible and minimal source-only tarballs

Planet Debian - Sat, 2024-04-13 12:44

With the release of Libntlm version 1.8 the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn’t enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball. The resulting binaries are hopefully reproducible on several architectures.

What does that even mean? Why should you care? How can you do the same for your project? What are the open issues? Read on, dear reader…

This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that led to a discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible, which inspired me to do some practical work.

Let’s look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:

git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make distcheck
gpg -b libntlm-1.8.tar.gz

The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980’s. That is a testament to how successful this pattern has been! These tarballs contain source code and some generated files: typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that happened.

The XZUtils incident illustrates that tarballs with files that are not included in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs.

The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not re-built through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as it may be that patching the relevant source code and rebuilding the package is not sufficient: the vulnerable generated object from the tarball would be shipped into the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs — although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages, and especially for generated content such as HTML, PDF and shell scripts, this happens more than you would like.

Publishing minimal source-only tarballs enables easier auditing of a project’s code, avoiding the need to read through all generated files looking for malicious content. I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub etc offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if/when hosting sites like GitLab or GitHub have a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer PGP signature using GnuPG, this can lead to backdoor scenarios similar to the one we had for XZUtils, but originating with the hosting provider instead of the release manager. This is even more concerning, since such an attack can be mounted against selected IP addresses that you want to target rather than against everyone, thereby making it harder to discover.
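
As a rough sketch of the kind of check this enables, here is what a downstream user could run, assuming they have the maintainer’s public key and the published libntlm-1.8-src.tar.gz plus its detached signature; the GitLab archive URL pattern is an assumption based on the tag-download links discussed later in this post:

# Verify the maintainer's signature on the published minimal source-only tarball.
gpg --verify libntlm-1.8-src.tar.gz.sig libntlm-1.8-src.tar.gz

# Fetch the forge's automatically generated tag tarball (URL pattern is an assumption).
wget https://gitlab.com/gsasl/libntlm/-/archive/v1.8/libntlm-v1.8.tar.gz

# If both archives were produced the same way, their checksums should match;
# otherwise a tool like diffoscope can show exactly where they differ.
sha256sum libntlm-1.8-src.tar.gz libntlm-v1.8.tar.gz
diffoscope libntlm-1.8-src.tar.gz libntlm-v1.8.tar.gz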

With all that discussion and rationale out of the way, let’s return to the release process. I have added another step here:

make srcdist
gpg -b libntlm-1.8-src.tar.gz

Now the release is ready. I publish these four files in Libntlm’s Savannah Download area, but they can be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:

91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz

So how can you reproduce my artifacts? Here is how to reproduce them in an Ubuntu 22.04 container:

podman run -it --rm ubuntu:22.04
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make dist srcdist
sha256sum libntlm-*.tar.gz

You should see the exact same SHA256 checksum values. Hooray!

This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content across versions, similar to how GNU GCC does not generate the same binary output across versions. So there is still some delicate version pairing needed.

Ideally, it should be possible to reproduce the artifacts from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container – replace almalinux:8 with rockylinux:8 if you prefer RockyLinux:

podman run -it --rm almalinux:8
dnf update -y
dnf install -y make wget gcc
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
tar xfa libntlm-1.8.tar.gz
cd libntlm-1.8
./configure
make dist
sha256sum libntlm-1.8.tar.gz

The source-only minimal tarball can be regenerated on Debian 11:

podman run -it --rm debian:11
apt-get update
apt-get install -y --no-install-recommends make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
make -f cfg.mk srcdist
sha256sum libntlm-1.8-src.tar.gz

As the magnum opus or chef-d’œuvre, let’s recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 – replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer.

podman run -it --rm docker.io/kpengboy/trisquel:11.0
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
tar xfa libntlm-1.8-src.tar.gz
cd libntlm-v1.8
./bootstrap
./configure
make dist
sha256sum libntlm-1.8.tar.gz

Yay! You should now have great confidence that the release artifacts correspond to what’s in version control and to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts.

I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot.

Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e., PROJECT-TAG/) and I’ve adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab’s exports, and you can verify this with tools like diffoscope. GitLab names the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE), which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately they have logic that removes the ‘v’ in the pathname, so you will get a tarball with pathname libntlm-1.8/ instead of the libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname and archive differ. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab, but the pathname inside the archive is libntlm/; otherwise the produced archive is bit-by-bit identical, including timestamps. Savannah’s CGIT interface uses the archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise the file content is identical. Savannah’s GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to getting the SHA256 checksums to match, but fail on the pathname within the archive. I’ve chosen to be compatible with GitLab regarding the content of tarballs but not regarding archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion.
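
To make the naming differences concrete, here is a small sketch of how the same tag can be exported with the different pathname prefixes described above; only the --prefix argument changes, and the output filenames here are hypothetical labels just to keep the results apart:

# GitLab-style (and what Libntlm publishes): PROJECT-TAG/ pathname prefix.
git archive --prefix=libntlm-v1.8/ -o libntlm-v1.8-gitlab-style.tar.gz v1.8

# GitHub-style: the 'v' is stripped from the pathname prefix.
git archive --prefix=libntlm-1.8/ -o libntlm-1.8-github-style.tar.gz v1.8

# Codeberg/Forgejo-style: the plain project name as the pathname prefix.
git archive --prefix=libntlm/ -o libntlm-v1.8-codeberg-style.tar.gz v1.8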

Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The version of git in Debian 11, Trisquel 11 and Ubuntu 22.04 behaves the same. The version of git in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, ArchLinux, macOS homebrew, and the upcoming Ubuntu 24.04 behaves in another way. Hopefully this will not change that often, but it would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appear to be using modern git, so the download tarballs from them would not match my tarballs – even though the content would.

Side note on ChangeLog: ChangeLog files were traditionally manually curated files with the version history of a package. In recent years, several projects moved to generating them dynamically from git history (using tools like git2cl or gitlog-to-changelog). This has consequences for the reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which arguably is a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm’s ChangeLog file died on the surgery table here.
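
For readers who have not seen this pattern, the dynamic generation typically looks something like the sketch below — a generic example, not Libntlm’s actual rule, with the script path being the conventional gnulib auxdir location; pinning TZ is one assumed way to sidestep the time-zone-dependent output mentioned above:

# Regenerate ChangeLog from the git log before rolling the tarball; this is
# exactly the step that needs the full git history to be available.
TZ=UTC0 build-aux/gitlog-to-changelog > ChangeLog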

So how would a distribution build these minimal source-only tarballs? I happen to help with the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from. This means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depends on Debian’s gnulib package. But there was one problem: similar to most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit. There is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects that rely on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm’s ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa and there is continuous integration testing of it as well, thanks to the Salsa CI team.

Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is not reproducible any more. I have tried to get Git bundles to be reproducible but I never got it to work — see my notes in gnulib’s debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately.

One open question is how to deal with the increased build dependencies that are triggered by this approach. Some people are surprised by this, but I don’t see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We’ve done it for a long time through vendored code in non-minimal tarballs. Libntlm isn’t the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern were used for other packages that use gnulib, such as coreutils, gzip, tar, bison etc (all are using gnulib); then they would all Build-Depends on git and gnulib. Cross-building those packages for a new architecture will therefore require git on that architecture first, which gets circular quickly. The dependency on gnulib is real so I don’t see that going away, and gnulib is an Architecture: all package. However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don’t require the git tool to extract the necessary files, but none that I found practical — ideas welcome!

Finally, some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib’s ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and have complicated interactions with CI/CD. The reason why I gave up on git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used. So the particular gnulib commit has to be mentioned explicitly in some source code that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and now it no longer made sense to continue to use a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with pointing GNULIB_URL at a git bundle together with the GNULIB_REVISION in bootstrap.conf.
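
For orientation, the pinning might look something like the following sketch of a bootstrap.conf fragment; the commit hash and bundle path are placeholders, not Libntlm’s actual values:

# Pin the exact gnulib commit so it is recorded in source that ends up in the
# git archive output.
GNULIB_REVISION=0123456789abcdef0123456789abcdef01234567
# Optionally point at a local git bundle (such as the one shipped by Debian's
# gnulib package; the path here is a placeholder) to allow a no-network
# ./bootstrap run.
GNULIB_URL=/usr/share/gnulib/gnulib.bundle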

The srcdist make rule is simple:

git archive --prefix=libntlm-v1.8/ -o libntlm-v1.8.tar.gz HEAD

Making the tarball generated by make dist reproducible can be more complicated; however, for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to a timestamp found in the git repository. Interestingly, there seem to be a couple of different ways to accomplish this. Guix doesn’t support minimal source-only tarballs but relies on a .tarball-timestamp file inside the tarball. Paul Eggert explained what TZDB uses some time ago. The approach I’m using now is fairly similar to the one I suggested over a year ago.
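
The core idea is small enough to sketch in a couple of lines of shell; this illustrates the technique rather than Libntlm’s exact make rules, and it assumes GNU coreutils and that the tree being distributed is a git checkout:

# Use the committer date of the last commit as the single deterministic timestamp.
TIMESTAMP=$(git log -1 --format=%cI)

# Reset the modification time of every file in the to-be-distributed tree, so
# the tarball contents no longer depend on when the build happened to run.
find . -type f -exec touch --date="$TIMESTAMP" {} +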

Doing continuous testing of all this is critical to make sure things don’t regress. Libntlm’s pipeline definition now produces the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job, which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insist that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and that AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in the pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa.
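
The comparison itself can be as simple as the following sketch, which is not the actual job script and uses hypothetical artifact paths: each build job writes a checksum file as an artifact, and a later job diffs the pairs that are expected to match, failing on any difference:

# Fail the job if the paired distributions did not produce bit-identical tarballs.
diff trisquel11/checksums.txt ubuntu2204/checksums.txt
diff almalinux8/checksums.txt rockylinux8/checksums.txt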

If this way of working plays out well, I hope to implement it in other projects too.

What do you think? Happy Hacking!

Categories: FLOSS Project Planets

Paul Tagliamonte: Domo Arigato, Mr. debugfs

Planet Debian - Sat, 2024-04-13 09:27

Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I’ve always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service, and the find-dbgsym-packages helper, which uses these same headers, I don’t think I’m the only one.

At work I’ve been using a lot of rust, specifically, async rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I’ve decided to hack up a project that I’ve wanted to do ever since 2015 – write a debug filesystem. Let’s get to it.

Back to the Future

It shouldn't shock anyone to learn I'm a huge fan of Go, right?

Time to admit something. I really love Plan 9. It’s just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good – like, correct. The bit that I’ve always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server – you mount the ftp server and access files like any other files. It’s kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there’s a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic.

I even triggered a weird bug in vim when writing a 9p filesystem that wound up impacting WSL -- although it seems like maybe not due to 9p (rather, SMB)

The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption – 9p is in all sorts of places folks don’t usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-p9) to share files between guest and host. If you’re noticing a pattern here, you’d be right; for some reason 9p is the go-to protocol to exchange files between hypervisor and guest. Why? I have no idea, except that maybe it’s well designed, simple to implement, and makes it a lot easier to validate the data being shared and the security boundaries. Simplicity has its value.

As a result, there’s a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine – unlike fuse, where you’d need client-specific software to run in order to mount the directory. For instance, let’s mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name “mountpointname” to /mnt.

Unfortunately, this requires root to mount and feels very un-plan9, but it does work and the protocol is good.

$ mount -t 9p \
    -o trans=tcp,port=564,version=9p2000.u,aname=mountpointname \
    127.0.0.1 \
    /mnt

Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.

WHEREIN I STYX WITH IT

"Simple" here is intended as my highest form of praise. Writing complex things is easy. Taking your work, and simplifying it down to the core, is the most difficult part of our work.

Since I wanted to push myself a bit more with rust and tokio specifically, I opted to implement the whole stack myself, without third party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It’s a series of client-to-server requests, each of which receives a server-to-client response. These are, respectively, “T” messages, which transmit a request to the server, and which trigger an “R” message in response (Reply messages). These messages are TLV payloads with a very straightforward structure – so straightforward, in fact, that I was able to implement a working server off nothing more than a handful of man pages.

There's also a 9P2000.L 9p variant which has more Linux specific extensions. There's a good chance I port this forward when I get the chance.

Later on after the basics worked, I found a more complete spec page that contains more information about the unix specific variant that I opted to use (9P2000.u rather than 9P2000) due to the level of Linux specific support for the 9P2000.u variant over the 9P2000 protocol.

MR ROBOTO

It really bothers me that rust libraries that deal with I/O need to support std::io, but to add support for async runtimes, you need to implement support for tokio::io and every other runtime; but them's the breaks, I guess. I really miss Go's built-in async support and io module.

The backend stack over at zoo is rust and tokio running i/o for an HTTP and WebRTC server. I figured I’d pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server, which has an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to cause issues with hiding implementation possibilities. At each abstraction level, there’s an escape hatch – allowing someone to implement any of the layers if required. I called this framework arigato which can be found over on docs.rs and crates.io.

/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;

    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;

    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;

    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;

    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;

    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;

    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;

    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the `arigato` server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}

/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;

    /// Read some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;

    /// Write some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes written
    /// is returned.
    fn write_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;
}

Thanks, decade ago paultag! If this isn't my record for longest idea-to-wip-project time, it's close.

Let’s do it!

Let’s use arigato to implement a 9p filesystem we’ll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We’ll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don’t know much about how an apt repo works, here’s the 2-second crash course on what we’re doing. The first step is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let’s take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.

$ curl \
    https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
    | unxz

This will return the Debian-style rfc2822-like headers, which is an export of the metadata contained inside each .deb file which apt (or other tools that can use the apt repo format) use to fetch information about debs. Let’s take a look at the debug headers for the netlabel-tools package in unstable – which is a package named netlabel-tools-dbgsym in unstable-debug.

Package: netlabel-tools-dbgsym
Source: netlabel-tools (0.30.0-1)
Version: 0.30.0-1+b1
Installed-Size: 79
Maintainer: Paul Tagliamonte <paultag@debian.org>
Architecture: amd64
Depends: netlabel-tools (= 0.30.0-1+b1)
Description: debug symbols for netlabel-tools
Auto-Built-Package: debug-symbols
Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
Description-md5: a0e587a0cf730c88a4010f78562e6db7
Section: debug
Priority: optional
Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
Size: 62776
SHA256: 0e9bdb087617f0350995a84fb9aa84541bc4df45c6cd717f2157aa83711d0c60

So here, we can parse the package headers in the Packages.xz file, and store, for each Build-Id, the Filename we can fetch the .deb from. Each .deb contains a number of files – but we’re only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/, which you can find in debugfs under rfc822.rs. It’s crude, and very single-purpose, but I’m feeling a bit lazy.
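
As a rough sketch of what that parsing amounts to (outside of Rust, and not the actual rfc822.rs code), the Build-Id to Filename mapping can be pulled out of the decompressed Packages file with a few lines of awk:

# Print "build-ids filename" pairs for every stanza that carries a Build-Ids field.
curl -s https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
  | unxz \
  | awk '/^Build-Ids:/ { sub(/^Build-Ids: /, ""); ids = $0 }
         /^Filename:/ { if (ids != "") print ids, $2; ids = "" }'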

Who needs dpkg?!

Hilariously, the fourth? fifth? non-serious time (second serious time) I've had to do this for a new language.

For folks who haven’t seen it yet, a .deb file is a special type of .ar file that contains (usually) three files inside – debian-binary, control.tar.xz and data.tar.xz. The core of an .ar file is a fixed-size (60 byte) entry header, followed by the number of bytes of data specified in that header.

[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...

I can't believe it's already been over a decade since my NM process, and nearly 16 years since I became an Ubuntu member.
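
Before writing any parser, you can poke at that layout straight from a shell; a minimal sketch against the same dbgsym .deb we break apart below, assuming GNU coreutils:

# The global .ar magic: the first 8 bytes should read "!<arch>" followed by a newline.
head -c 8 netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb

# The first 60-byte entry header: fixed-width ASCII fields (name, mtime, uid, gid,
# mode, size, terminator) describing the debian-binary member that follows it.
dd if=netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb bs=1 skip=8 count=60 2>/dev/null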

First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let’s break apart a .deb file by hand – something that is a bit of a rite of passage (or at least it used to be? I’m getting old) during the Debian nm (new member) process, to take a look at where exactly the .debug file lives inside the .deb file.

$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz  debian-binary  data.tar.xz  netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug

Since we know quite a bit about the structure of a .deb file, and I had to implement support from scratch anyway, I opted to implement a (very!) basic debfile parser using HTTP Range requests. HTTP Range requests, if supported by the server (denoted by an accept-ranges: bytes HTTP header in response to an HTTP HEAD request for that file), mean that we can add a header such as range: bytes=8-68 to specifically request that the returned GET body be the byte range provided (in the above case, the bytes starting from byte offset 8 until byte offset 68). This means we can fetch just the ar file entry from the .deb file until we get to the file inside the .deb we are interested in (in our case, the data.tar.xz file) – at which point we can request the body of that file with a final range request. I wound up writing a struct to handle a read_at-style API surface in hrange.rs, which we can pair with ar.rs above and start to find our data in the .deb remotely without downloading and unpacking the .deb at all.
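
The same mechanism is easy to see from the command line; here is a sketch using curl against the pool path from the Packages stanza above, assuming the pool lives under the same debian-debug root as the Packages file (byte ranges for later members would come from parsing each 60-byte header first):

# Check whether the mirror advertises byte-range support.
curl -sI https://deb.debian.org/debian-debug/pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb \
  | grep -i '^accept-ranges'

# Fetch only the first 60-byte ar entry header (bytes 8-67, right after the 8-byte magic).
curl -s -r 8-67 https://deb.debian.org/debian-debug/pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb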

I really like HTTP Range requests a lot. I did some stats to figure out what compression dbgsym packages use these days; my LAN debug mirror contains 113459 xz compressed tarfiles, and 9 gzip compressed tarfiles at the time of writing.

After we have the body of the data.tar.xz coming back through the HTTP response, we get to pipe it through an xz decompressor (this kinda sucked in Rust, since a tokio AsyncRead is not the same as an http Body response is not the same as std::io::Read, is not the same as an async (or sync) Iterator is not the same as what the xz2 crate expects; leading me to read blocks of data to a buffer and stuff them through the decoder by looping over the buffer for each lzma2 packet in a loop), and tarfile parser (similarly troublesome). From there we get to iterate over all entries in the tarfile, stopping when we reach our file of interest. Since we can’t seek, but gdb needs to, we’ll pull it out of the stream into a Cursor<Vec<u8>> in-memory and pass a handle to it back to the user.

From here on out it’s a matter of gluing together a File traited struct in debugfs, and serving the filesystem over TCP using arigato. Done deal!

A quick diversion about compression

I was originally hoping to avoid transferring the whole tar file over the network (and therefore also reading the whole debug file into ram, which objectively sucks), but quickly hit issues figuring out a way to seek around an xz file. What’s interesting is that xz has a great primitive to solve this specific problem (specifically, use a block size that lets you seek to the block just before your desired seek position, discarding at most block size - 1 bytes), but data.tar.xz files generated by dpkg appear to have a single mega-huge block for the whole file. I don’t know why I would have expected any different, in retrospect. That means that this now devolves into the base case of “How do I seek around an lzma2 compressed data stream”, which is a much more complex question.

After going through a lot of this, I realized just how complex the xz format is -- it's a lot more than just lzma2!

Thankfully, notoriously brilliant tianon was nice enough to introduce me to Jon Johnson who did something super similar – adapted a technique to seek inside a compressed gzip file, which lets his service oci.dag.dev seek through Docker container images super fast based on some prior work such as soci-snapshotter, gztool, and zran.c. He also pulled this party trick off for apk based distros over at apk.dag.dev, which seems apropos. Jon was nice enough to publish a lot of his work on this specifically in a central place under the name “targz” on his GitHub, which has been a ton of fun to read through.

The gist is that, by dumping the decompressor’s state (window of previous bytes, in-memory data derived from the last N-1 bytes) at specific “checkpoints” along with the compressed data stream offset in bytes and decompressed offset in bytes, one can seek to that checkpoint in the compressed stream and pick up where you left off – creating a similar “block” mechanism against the wishes of gzip. It means you’d need to do an O(n) run over the file, but every request after that will be sped up according to the number of checkpoints you’ve taken.

Given the complexity of xz and lzma2, I don’t think this is possible for me at the moment – especially given most of the files I’ll be requesting will not be loaded from again – especially when I can “just” cache the debug header by Build-Id. I want to implement this (because I’m generally curious and Jon has a way of getting someone excited about compression schemes, which is not a sentence I thought I’d ever say out loud), but for now I’m going to move on without this optimization. Such a shame, since it kills a lot of the work that went into seeking around the .deb file in the first place, given the debian-binary and control.tar.gz members are so small.

The Good

First, the good news right? It works! That’s pretty cool. I’m positive my younger self would be amused and happy to see this working; as is current day paultag. Let’s take debugfs out for a spin! First, we need to mount the filesystem. It even works on an entirely unmodified, stock Debian box on my LAN, which is huge. Let’s take it for a spin:

$ mount \
    -t 9p \
    -o trans=tcp,version=9p2000.u,aname=unstable-debug \
    192.168.0.2 \
    /usr/lib/debug/.build-id/

And, let’s prove to ourselves that this actually mounted before we go trying to use it:

$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)

Slick. We’ve got an open connection to the server, where our host will keep a connection alive as root, attached to the filesystem provided in aname. Let’s take a look at it.

$ ls /usr/lib/debug/.build-id/
00 0d 1a 27 34 41 4e 5b 68 75 82 8E 9b a8 b5 c2 CE db e7 f3
01 0e 1b 28 35 42 4f 5c 69 76 83 8f 9c a9 b6 c3 cf dc E7 f4
02 0f 1c 29 36 43 50 5d 6a 77 84 90 9d aa b7 c4 d0 dd e8 f5
03 10 1d 2a 37 44 51 5e 6b 78 85 91 9e ab b8 c5 d1 de e9 f6
04 11 1e 2b 38 45 52 5f 6c 79 86 92 9f ac b9 c6 d2 df ea f7
05 12 1f 2c 39 46 53 60 6d 7a 87 93 a0 ad ba c7 d3 e0 eb f8
06 13 20 2d 3a 47 54 61 6e 7b 88 94 a1 ae bb c8 d4 e1 ec f9
07 14 21 2e 3b 48 55 62 6f 7c 89 95 a2 af bc c9 d5 e2 ed fa
08 15 22 2f 3c 49 56 63 70 7d 8a 96 a3 b0 bd ca d6 e3 ee fb
09 16 23 30 3d 4a 57 64 71 7e 8b 97 a4 b1 be cb d7 e4 ef fc
0a 17 24 31 3e 4b 58 65 72 7f 8c 98 a5 b2 bf cc d8 E4 f0 fd
0b 18 25 32 3f 4c 59 66 73 80 8d 99 a6 b3 c0 cd d9 e5 f1 fe
0c 19 26 33 40 4d 5a 67 74 81 8e 9a a7 b4 c1 ce da e6 f2 ff

Outstanding. Let’s try using gdb to debug a binary that was provided by the Debian archive, and see if it’ll load the ELF by build-id from the right .deb in the unstable-debug suite:

$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Yes! Yes it will!

$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped

The Bad

Linux’s support for 9p is mainline, which is great, but it’s not robust. Network issues or server restarts will wedge the mountpoint (Linux can’t reconnect when the tcp connection breaks), and things that work fine on local filesystems get translated in a way that causes a lot of network chatter – for instance, just due to the way the syscalls are translated, doing an ls will result in a stat call for each file in the directory, even though linux had just got a stat entry for every file while it was resolving directory names. On top of that, Linux will serialize all I/O with the server, so there are no concurrent requests for file information, writes, or reads pending at the same time to the server; and read and write throughput will degrade as latency increases due to increasing round-trip time, even though there are offsets included in the read and write calls. It works well enough, but is frustrating to run up against, since there’s not a lot you can do server-side to help with this beyond implementing the 9P2000.L variant (which, maybe, is worth it).

The Ugly

Unfortunately, we don’t know the file size(s) until we’ve actually opened the underlying tar file and found the correct member, so for most files, we don’t know the real size to report when getting a stat. We can’t parse the tarfiles for every stat call, since that’d make ls even slower (bummer). The only hiccup is that when I report a filesize of zero, gdb throws a bit of a fit; let’s try with a size of 0 to start:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]

This obviously won’t work, since gdb throws away all our hard work because of stat’s output, and reporting the real size of the underlying file won’t work either. That only leaves us with hardcoding a file size and hoping nothing else breaks significantly as a result. Let’s try it again:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Much better. I mean, terrible but better. Better for now, anyway.
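For illustration, here is a minimal sketch in Rust of the hard-coded-size trick. This is not arigato's actual API or types (the struct and function names below are made up for the example); it just shows a stat handler that always reports a fixed, generously large size, since the real size isn't known without unpacking the tar member and a size of zero makes gdb discard the sections:

// Hypothetical sketch only: not arigato's real types or API.
// A 9P stat reply for a .build-id debug file with a hard-coded size.
struct FakeStat {
    name: String,
    mode: u32,   // permission bits reported to the client
    length: u64, // file size reported to the client
}

fn stat_debug_file(name: &str) -> FakeStat {
    FakeStat {
        name: name.to_string(),
        mode: 0o444,
        // Reporting 0 makes gdb discard sections ("section size larger
        // than the file size"), and the real size would require reading
        // the underlying tar member, so lie upward: ~1 GB (the 954M seen
        // in the ls output above) comfortably covers any debug file.
        length: 1_000_000_000,
    }
}

fn main() {
    let st = stat_debug_file("e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug");
    println!("{} -> mode {:o}, reported size {} bytes", st.name, st.mode, st.length);
}

Anything comfortably larger than the biggest debug file in the archive would work just as well; the point is simply that gdb sanity-checks section sizes against the stat size before reading.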

Kilroy was here

Do I think this is a particularly good idea? I mean, kinda. I’m probably going to make some fun 9p arigato-based filesystems for use around my LAN, but I don’t think I’ll be moving to use debugfs until I can figure out how to make the connection more resilient to changing networks and server restarts, and until the I/O performance issues are fixed. I think it was a useful exercise and is a pretty great hack, but I don’t think this’ll be shipping anywhere anytime soon.

Along with publishing this post, I’ve pushed up all my repos, so you should be able to play along at home! There’s a lot more work to be done on arigato, but it does handshake and successfully export a working 9P2000.u filesystem. Check it out on my GitHub at arigato and debugfs, and also on crates.io and docs.rs.

At least I can say I was here and I got it working after all these years.

Categories: FLOSS Project Planets

Russell Coker: Software Needed for Work

Planet Debian - Sat, 2024-04-13 03:08

When I first started studying computer science, setting up a programming project was easy: write source code files and a Makefile, and that was it. IRC was the only IM system, and email was the only other communications system that was used much. Writing Makefiles is difficult, but products like the Borland Turbo series of IDEs did all that for you, so you could just start typing code and press a function key to compile and run (F5 from memory).

Over the years the requirements and expectations of computer use have grown significantly. The typical office worker is now doing many more things with computers than serious programmers used to do. Running an IM system, an online document editing system, and a series of web apps is standard for companies nowadays. Developers have to do all that in addition to tools for version control, continuous integration, bug reporting, and feature tracking. The development process is also more complex with extra steps for reproducible builds, automated tests, and code coverage metrics for the tests. I wonder how many programmers who started in the 90s would have done something else if faced with Github as their introduction.

How much of this is good? Having the ability to send instant messages all around the world is great. Having dozens of different ways of doing so is awful. When a company uses multiple IM systems such as MS-Teams and Slack and forces some of its employees to use them both, it’s getting ridiculous. Having different friend groups on different IM systems is anti-social networking. In the EU the Digital Markets Act [1] forces some degree of interoperability between different IM systems, and as it’s impossible to know who’s actually in the EU, that will end up being world-wide.

In corporations document management often involves multiple ways of storing things, you have Google Docs, MS Office online, hosted Wikis like Confluence, and more. Large companies tend to use several such systems which means that people need to learn multiple systems to be able to work and they also need to know which systems are used by the various groups that they communicate with. Microsoft deserves some sort of award for the range of ways they have for managing documents, Sharepoint, OneDrive, Office Online, attachments to Teams rooms, and probably lots more.

During WW2 the predecessor to the CIA produced an excellent manual for simple sabotage [2]. If something like that was written today the section General Interference with Organisations and Production would surely have something about using as many incompatible programs and web sites as possible in the work flow. The proliferation of software required for work is a form of denial of service attack against corporations.

The efficiency of companies doesn’t really bother me. It sucks that companies are creating a demoralising workplace that is unpleasant for workers. But the upside is that the biggest companies are the ones doing the worst things and are also the most afflicted by these problems. It’s almost like the Bureau of Sabotage in some of Frank Herbert’s fiction [3].

The thing that concerns me is the effect of multiple standards on free software development. We have IRC, the most traditional IM support system, which is getting replaced by Matrix, but we also have some projects using Telegram, and Jabber hasn’t gone away. I’m sure there are others too. There are also multiple options for version control (although GitHub seems to dominate the market), forums, bug trackers, etc. Reporting bugs or getting support in free software often requires interacting with several of them. Developing free software usually involves dealing with the bug tracking and documentation systems of the distribution you use as well as the upstream developers of the software. If the problem you have is related to compatibility between two different pieces of free software then you can end up dealing with even more bug tracking systems.

There are real benefits to some of the newer programs for tracking bugs, writing documentation, etc. There is also going to be a cost in changing, which gives older projects an incentive to keep using what has worked well enough for them in the past.

How can we improve things? Use only the latest tools? Prioritise ease of use? Aim more for the entry level contributors?

Related posts:

  1. Source Escrow for Proprietary Software British taxpayers are paying for extra support for Windows XP...
  2. Car Drivers vs Mechanics and Free Software In a comment on my post about Designing Unsafe Cars...
  3. Advertising Free Software Projects Today I just noticed the following advert on one of...
Categories: FLOSS Project Planets

This week in KDE: Explicit Sync

Planet KDE - Sat, 2024-04-13 01:46

This week something big got merged: support for Explicit Sync on Wayland!

What does this do? In a nutshell it allows apps to tell the compositor when to display frames on the screen, reducing latency and graphical glitches. The effect should be particularly noticeable with NVIDIA GPUs, which only support this rendering style, and not having support for it on Wayland was the most common source of random graphical glitches and slowdowns.

This work was done by Xaver Hugl, and lands in Plasma 6.1. You can read more about it in a recent blog post he wrote on the topic!

In addition to that impactful but technical change, this was a week of many UI improvements and bug fixes as well:

UI Improvements

Improved the visual quality of cross-screen screenshots that Spectacle takes when using multi-screen setups where the screens have different scale factors (Noah Davis, Spectacle 24.05. Link)

Improved quality of blurs, pixelations, and shadows in Spectacle’s annotations (Noah Davis, Spectacle 24.05. Link)

KWrite now shows a hamburger menu by default instead of a traditional menu bar, and the hamburger menu has gained an assortment of curated actions on the top to speed up access to them (Nathan Garside and Christoph Cullmann, KWrite 24.05. Link 1, link 2, and link 3):

Just KWrite! Nothing has changed for Kate.

Filelight now throws fewer annoying modal dialog windows in your face: it now shows directory access errors using inline placeholder messages, and failures to add duplicate exclusion paths using small toasts/passive notifications (Han Young, Filelight 24.05. Link 1 and link 2)

In the “Get New [Thing]” dialogs, the warning shown at the top is now different and more warningy if the available content has the potential to run executable code on your system (David Edmundson and me: Nate Graham, Plasma 6.1 and Gear apps 24.05. Link 1 and link 2):

In Plasma’s traditional Task Manager widget, the threshold for showing any text is now smaller, so you’ll see text more often even at relatively narrow task widths. This is based on the idea that if you’re using this widget, it’s because you want to see text! (me: Nate Graham, Plasma 6.0.4. Link)

On Wayland, the Dialog Parent effect that dims parent windows when dialogs are active now temporarily disables the dimming when the dialog window is a color picker that’s currently picking a screen color (Ivan Tkachenko, Plasma 6.1. Link)

System Settings’ Virtual Desktops page has now adopted the common “controls on the header row” paradigm to increase the space available for content (Jakob Petsovits, Plasma 6.1. Link):

In Plasma’s Power and Battery widget, the controls and UI for blocking sleep and screen locking have moved from the header into the view, matching the location of other interactive controls and preventing the header from becoming super chunky when multiple apps are inhibiting sleep and screen locking (Natalie Clarius, Plasma 6.1. Link):

System Monitor now shows a tooltip with the full text when you hover over an elided piece of text in one of the table views (Joshua Goins, Plasma 6.1. Link)

Made the header message on System Settings’ File Search page frameless and border-touching, making use of new API created to enable this purpose. This API was also used for the message in the “Get new [thing]” dialog shown earlier. So expect to see more of this kind of thing in the coming weeks! (me: Nate Graham, Plasma 6.1. Link 1 and link 2):

Folder popups invoked from a Plasma panel can now be resized if you’d like more space to see everything in them (Ivan Tkachenko, Plasma 6.1. Link)

If you preferred the old style of app launching in Plasma’s traditional Task Manager widget whereby an app’s pinned launcher would disappear when launched, you can now get that back (Niccolò Venerandi, Plasma 6.1. Link)

Middle-clicking a desktop in KWin’s Overview effect no longer instantly removes it, because this made it too easy to accidentally destroy your layout (me: Nate Graham, Plasma 6.1. Link)

The radius of rounded corners throughout Breeze-themed UI elements has now been standardized at 5px, much better than the previous random assortment of corner radii that ranged generally from 2-5px, with some being as high as 8 or 16px! This is an easily bikeshedded topic, so please try to restrain yourselves; check out all the other screenshots in this post, which feature the change—I bet you thought they looked pretty good before I explicitly mentioned that the corner radius had increased, right!? (Akseli Lahtinen, Plasma 6.1 and Frameworks 6.2. Link)

Bug Fixes

In Dolphin’s Details view, navigating to a subdirectory no longer resets the current sort mode (Felix Ernst, Dolphin 24.02.2. Link)

Gwenview no longer inappropriately blocks sleep and screen locking when simply viewing an image (Daniel Strnad, Gwenview 24.02.2. Link)

Fixed a nasty bug in Spectacle that could cause the rectangular region overlay to get stuck on screen when taking a rectangular region screenshot right after a screen recording without quitting and re-launching Spectacle first (Noah Davis, Spectacle 24.05. Link)

Plasma’s Night Light feature no longer unintentionally connects to Mozilla Location Services to geolocate you for the purpose of figuring out the Night Light transition time when you don’t have that setting turned on (Natalie Clarius, Plasma 5.27.12. Link)

Plasma no longer crashes when you close the notification that gives you an opportunity to undo removing an Icon widget (Nicolas Fella, Plasma 6.0.4. Link)

Modal font dialogs in Plasma 6 are now ugly, but at least they work again; our previous styling approach caused them to end up non-interactive and has been reverted for now pending a better solution (Nicolas Fella, Plasma 6.0.4. Link)

Discover once again shows you everything you have installed on its “Installed” page, not just a subset of packages that varied depending on which distro you were using (Harald Sitter, Plasma 6.0.4. Link)

When scrolling down in one app category on Kickoff and then switching categories, random items from the former category no longer sometimes appear inappropriately on the new page (David Redondo, Plasma 6.0.4. Link)

On Wayland, the color picker dialog no longer returns incorrect colors when an ICC profile is applied to the display (Xaver Hugl, Plasma 6.0.4. Link)

Fixed a strange issue whereby items in Plasma’s list view mode Folder View popups on a panel could only be highlighted when moving the cursor downwards, rather than upwards (Akseli Lahtinen, Plasma 6.0.4. Link)

Fixed a bug that made it possible, under certain circumstances, to drag a window completely off the screen (Yifan Zhu, Plasma 6.0.4. Link)

On Wayland, the clipboard menu that appears when you hit Meta+V no longer goes underneath windows set to always be on top (Tino Lorenz, Plasma 6.0.4. Link)

In KWin’s Overview effect, the thumbnails for inactive virtual desktops are now live, not static (David Redondo, Plasma 6.0.4. Link)

Fixed a problem creating new Wireguard VPNs (Stephen Robinson, Plasma 6.1. Link)

Keyboards are no longer sometimes shown as mice on System Settings’ Mouse page (me: Nate Graham, Plasma 6.1. Link)

Rapidly toggling the “Floating” setting for a Plasma panel on and off no longer sometimes causes the panel to get stuck in a position where it’s not floating, but also detached from the screen edge (Vlad Zahorodnii, Plasma 6.1. Link)

Discover no longer mislabels apps that get removed from a Flatpak repo as being not installed when they are in fact still installed (Aleix Pol and Ivan Tkachenko, Plasma 6.1. Link)

Fixed a case where KDE apps could run out of memory when told to open certain types of malformed image files (Mirco Miranda, Frameworks 6.1. Link)

Fixed the most common crash in the Baloo file indexing service, with 113 duplicates as of the time of writing (Christoph Cullmann, Frameworks 6.2. Link)

Fixed a rare case where KIO could exhaust all memory while trying and failing to process HTTP requests under certain unusual circumstances (Harald Sitter, Frameworks 6.2. Link)

WebDAV files accessed through Dolphin or other KDE apps once again show the correct modification times (Fabian Vogt, Frameworks 6.2. Link)

Other bug information of note:

Performance & Technical

After Qt changes regressed this, changing color schemes is once again nearly instant, giving the “blend changes” KWin effect enough time to provide a smooth transition between the colors (Kai Uwe Broulik, Frameworks 6.2. Link)

Files and folders on the desktop that are copied or dragged are now made available via the Portals system, so dropping them into sandboxed apps now works as expected (Karol Kosek, Plasma 6.0.4. Link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

KDE has become important in the world, and your time and labor have helped to bring it there! But as we grow, it’s going to be equally important that this stream of labor be made sustainable, which primarily means paying for it. Right now the vast majority of KDE runs on labor not paid for by KDE e.V. (the nonprofit foundation behind KDE, which I am a board member for), and that’s a problem. We’ve taken steps to change this with paid technical contractors—but those steps are small due to limited financial resources. If you’d like to help change that, consider donating today!

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

Hindi Translation of Cantor and KDE Connect - SoK 2024

Planet KDE - Fri, 2024-04-12 20:00
Hindi Translation of Cantor and KDE Connect - Season of KDE 2024

This is the second and last blog post for SoK 2024. During the second half of Season of KDE, I translated Cantor and KDE Connect into Hindi. Cantor and KDE Connect had about 1000 and 500 lines respectively. To translate these applications, I used Google Translate and AI models as references to improve my translations. Translation memory did a great job of finding duplicates and helped me avoid translating the same words again.
Categories: FLOSS Project Planets

Kubuntu: Noble Numbat Beta available! Qt6 snaps coming soon.

Planet KDE - Fri, 2024-04-12 15:29

It has been a very busy couple of weeks as we worked against some major transitions and a security fix that required a rebuild of the $world. I am happy to report that against all odds we have a beta release! You can read all about it here: https://kubuntu.org/news/kubuntu-24-04-beta-released/ Post beta freeze I have already begun pushing our fixes for known issues today. A big one being our new branding! Very exciting times in the Kubuntu world.

In the snap world I will be using my free time to start knocking out KDE applications ( not covered by the project ). I have also recruited some help, so you should start seeing these pop up in the edge channel very soon!

Now that we are nearing the release of Noble Numbat, my contract is coming to an end with Kubuntu. If you would like to see Plasma 6 in the next release and in a PPA for Noble, please consider donating to extend my contract at https://kubuntu.org/donate !

On a personal level, I am still looking to help with my grandson and you can find that here: https://www.gofundme.com/f/in-loving-memory-of-william-billy-dean-scalf

Thanks for stopping by,

Scarlett

Categories: FLOSS Project Planets


Kubuntu 24.04 Beta Released

Planet KDE - Fri, 2024-04-12 14:33
Join the Excitement: Test Kubuntu 24.04 Beta and Experience Innovation with KubuQA!

We’re thrilled to announce the availability of the Kubuntu 24.04 Beta! This release is packed with new features and enhancements, and we’re inviting you, our valued community, to join us in fine-tuning this exciting new version. Whether you’re a seasoned tester or new to software testing, your feedback is crucial to making Kubuntu 24.04 the best it can be.

To make your testing journey as easy as pie, we’re introducing a fantastic new tool: KubuQA. Designed with both new and experienced users in mind, KubuQA simplifies the testing process by automating the download, VirtualBox setup, and configuration steps. Now, everyone can participate in testing Kubuntu with ease!

This beta release also debuts our fresh new branding, artwork, and wallpapers—created and chosen by our own community through recent branding and wallpaper contests. These additions reflect the spirit and creativity of the Kubuntu family, and we can’t wait for you to see them.

Get Testing

By participating in the beta testing of Kubuntu 24.04, you’re not just helping improve the software; you’re becoming an integral part of a global community that values open collaboration and innovation. Your contributions help us identify and fix issues, ensuring Kubuntu remains a high-quality, stable, and user-friendly Linux distribution.

The benefits of joining our testing team extend beyond improving the software. You’ll gain valuable experience, meet like-minded individuals, and perhaps discover a new passion in the world of open-source software.

So why wait? Download the Kubuntu 24.04 Beta today, try out KubuQA, or follow our wiki to upgrade and help us make Kubuntu better than ever! Remember, your feedback is the key to our success.

Ready to make an impact?

Join us in this exciting phase of development and see your ideas come to life in Kubuntu. Plus, enjoy the satisfaction of knowing that you’ve contributed to a project used by millions around the world. Become a tester today and be part of something big!

Interested in more than testing?

By the way, have you thought about becoming a member of the Kubuntu Community? It’s a fantastic way to contribute more actively and help shape the future of Kubuntu. Learn more about joining the community.

Categories: FLOSS Project Planets

Web Review, Week 2024-15

Planet KDE - Fri, 2024-04-12 10:19

Turns out I managed to squeeze in some reading here and there and have enough content for a regular review… So let’s go for my web review for the week 2024-15.

Fairbuds are Fairphone’s proof that we really could make better tiny gadgets | Ars Technica

Tags: tech, repair, sound, hardware

Another type of device that clearly could be repairable, with swappable batteries, if manufacturers put care into the design. At least Fairphone is showing it’s doable.

https://arstechnica.com/gadgets/2024/04/fairbuds-take-the-fairphones-repairability-down-to-seemingly-impossible-size/?comments=1


The Rise and Fall of Silicon Graphics

Tags: tech, gpu, 3d, history

Interesting history behind the company which was instrumental in pushing computer graphics forward during its time.

https://www.abortretry.fail/p/the-rise-and-fall-of-silicon-graphics


Software eco-design: investigating and reducing the energy consumption of software

Tags: tech, performance, energy, ecology, java, research

More work about eco-design of software. This is definitely welcome. I found this work a bit weak on the state of the art and the interview parts (10 people in the same company). But the field is so nascent that it’s to be expected, I guess; PhD students have to make do with what they have access to. Unsurprisingly, this shows a great lack of proper tools to tackle the measurement problem. The thesis shows interesting prospects for reducing variation in measurements, though; some of the proposed guidelines might help but cannot offset the hardware heterogeneity completely… The parts focusing on practical advice around Java use and deployment are interestingly easy to apply, though. You need to take the context of your application into account to make the right choices, of course.

https://theses.hal.science/tel-03429300/document


The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

Tags: tech, ai, machine-learning, gpt, cognition, criticism, scam

Interesting take on why people see more in LLM-based systems than there really is. The parallels with psychics’ and mentalists’ tricks are well thought out.

https://softwarecrisis.dev/letters/llmentalist/


The Assist @ Things Of Interest

Tags: tech, ai, machine-learning, copilot, gpt, programming

All the good reasons why the productivity gains from code assistants are massively overestimated. Worth using, perhaps, but with a light touch.

https://qntm.org/assist


Hello OLMo: A truly open LLM

Tags: tech, ai, machine-learning, gpt, open-access, research

This is how it should be done. This one comes with everything needed to reproduce the results. This is necessary to gain insights into how such models work internally.

https://blog.allenai.org/hello-olmo-a-truly-open-llm-43f7e7359222


The lifecycle of a code AI completion

Tags: tech, ai, machine-learning, copilot, programming, architecture

Wondering how one can design a coding assistant? Here is an in-depth explanation of the choices made by one of the solutions out there. There’s quite a lot of processing before and after actually running the inference with the LLM.

https://sourcegraph.com/blog/the-lifecycle-of-a-code-ai-completion


Results summary: 2024 Annual C++ Developer Survey “Lite” : Standard C++

Tags: tech, c++

Some interesting insights in this survey. It helps identify common concerns.

https://isocpp.org/blog/2024/04/results-summary-2024-annual-cpp-developer-survey-lite


Glory is only 11MB/sec away

Tags: tech, cloud, infrastructure, cost

When you do the math, the cloud offerings look very expensive for most workloads indeed.

https://thmsmlr.com/cheap-infra


Building My First Homelab Server Rack · mtlynch.io

Tags: tech, hardware, homelab, self-hosting

Considering using a server rack for a homelab? This is a nice tutorial with plenty of advice.

https://mtlynch.io/building-first-homelab-rack/


Intro to TLS Certificates

Tags: tech, cryptography, tls, certificates, security

The title says it all. This article is a nice introduction to certificates: how they work, how the trust model is set up, etc.

https://carrickbartle.com/certificates.html


ratarmount: Access large archives as a filesystem efficiently

Tags: tech, tools, archive

Looks like a nice tool to manipulate large archives.

https://github.com/mxmlnkn/ratarmount


Hermit: A reproducible container

Tags: tech, debugging, tools, multithreading

Looks like an interesting tool to analyze hard to reproduce bugs, especially when concurrency is involved. This could be useful to find the source of flaky tests as well.

https://github.com/facebookexperimental/hermit


my deployment platform is a shell script

Tags: tech, self-hosting, deployment, complexity, shell, scripting

Keep things as simple as possible; they might turn out to be robust, too.

https://j3s.sh/thought/my-deployment-platform-is-a-shell-script.html


Shell History Is Your Best Productivity Tool

Tags: tech, shell, zsh

A few interesting tips to improve history management with ZSH.

https://martinheinz.dev/blog/110


The Blessing of the Strings

Tags: tech, web, browser, javascript, reliability

Looks like an interesting mechanism to improve the reliability of web applications. Let’s see what people make with those trusted types.

https://bkardell.com/blog/blessing-strings.html


Don’t require people to change ‘source code’ to configure your programs

Tags: tech, programming, portability, craftsmanship

Hopefully nobody is handling configuration by assuming the user will modify the source code or build scripts by hand. Unfortunately I still encounter it from time to time…

https://utcc.utoronto.ca/~cks/space/blog/programming/ConfigureNoSourceCodeChanges


If Inheritance is so bad, why does everyone use it? • Buttondown

Tags: tech, object-oriented, history

Interesting look at the history of inheritance in programming languages. There’s clearly still room for improvements on this concept.

https://buttondown.email/hillelwayne/archive/if-inheritance-is-so-bad-why-does-everyone-use-it/


Thoughts on Pair Programming - DEV Community

Tags: tech, programming, pairing

Good criteria for deciding whether to pair or not. This is still not practiced enough. Maybe knowing when it’s best to reach out and pair will help more people get into it.

https://dev.to/shaharke/thoughts-on-pair-programming-1i8g


What I think about when I edit — Eva Parish

Tags: documentation, writing

Good advice on improving writing. I should apply such rules to my own writing more often.

https://evaparish.com/blog/how-i-edit


Simple Ways to Show Appreciation at Work

Tags: management, empathy

Plenty of good tricks in there. It has to be genuine, of course, but these tricks reduce the chances of unwittingly dropping the ball on the topic.

https://hbr.org/2023/10/simple-ways-to-show-appreciation-at-work


On Generating Ideas - Leadership & Work

Tags: meetings, leadership

This is indeed the best approach I’ve seen for brainstorming. It gives everyone a chance to bring something forward, even the introverts.

https://read.perspectiveship.com/p/on-generating-ideas


Bye for now!

Categories: FLOSS Project Planets

NOKUBI Takatsugu: mailman3-web error when upgrading to bookworm

Planet Debian - Fri, 2024-04-12 09:34

I tried to upgrade a bullseye machine to bookworm and got the following error:

  File "/usr/lib/python3/dist-packages/django/contrib/auth/mixins.py", line 5, in <module>
    from django.contrib.auth.views import redirect_to_login
  File "/usr/lib/python3/dist-packages/django/contrib/auth/views.py", line 20, in <module>
    from django.utils.http import (
ImportError: cannot import name 'url_has_allowed_host_and_scheme' from 'django.utils.http' (/usr/lib/python3/dist-packages/django/utils/http.py)

During handling of the above exception, another exception occurred:

It is similar to #1000810, but it is already closed.

My solution is:

  • apt remove mailman3-web
    • keep db and config files (do not purge)
  • apt autoremove
    • remove django related packages
  • apt install mailman3-web mailman3-full

I tried to send this to the bug report, but it returned '550 Unknown or archived bug'…

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #200: Avoiding Error Culture and Getting Help Inside Python

Planet Python - Fri, 2024-04-12 08:00

What is error culture, and how do you avoid it within your organization? How do you navigate alert and notification fatigue? Hey, it's episode #200! Real Python's editor-in-chief, Dan Bader, joins us this week to celebrate. Christopher Trudeau also returns to bring another batch of PyCoder's Weekly articles and projects.


Categories: FLOSS Project Planets

Gábor Hojtsy: Supporting Drupal transitions at DrupalCon Portland 2024

Planet Drupal - Fri, 2024-04-12 06:07
Supporting Drupal transitions at DrupalCon Portland 2024

DrupalCon Portland 2024 is coming up next month! The event provides good opportunities to get help with three major transitions of Drupal in 2024. Drupal 7's end of life is near, while Drupal 11 is being released this year. Finally, DrupalCI testing will be superseded by much-improved GitLab CI pipelines shortly after DrupalCon. Here are some highlights of related events not to miss at DrupalCon!

Gábor Hojtsy Fri, 04/12/2024 - 13:07
Categories: FLOSS Project Planets

Tellico Hindi Translation - SoK 2024

Planet KDE - Fri, 2024-04-12 05:13
This is my final blog post about my experience participating in Season of KDE 2024. As part of my final term, I translated Tellico into Hindi. Tellico contains 2070 statements spread across messages and docmessages. I used the Lokalize application for the translation. I hardly faced any issues during the translation work, apart from some ambiguity in the translated words. Thanks to Raghavendra Kamath for helping me resolve those few issues. Overall, it was an awesome experience throughout the program.
Categories: FLOSS Project Planets

LN Webworks: How To Create Hooks Vs Event Subscribers in Drupal 9

Planet Drupal - Fri, 2024-04-12 02:45

In Drupal development, understanding the differences between hooks and event subscribers is essential for building robust and flexible modules. Hooks are a fundamental part of Drupal's architecture, allowing modules to interact with and modify various aspects of the system's behavior. Event subscribers, by contrast, are a more recent addition to Drupal, introduced in Drupal 8 as part of its transition to a more modern, object-oriented architecture.

Hooks in Drupal

Hooks are specially named functions that a module defines and that Drupal invokes at specific points, allowing the module to alter, add to, or modify data and behavior.

Categories: FLOSS Project Planets
