Planet Debian
Iustin Pop: Corydalis: new release and switching to date-versioning
After 4 years, I finally managed to tag a new release of Corydalis. There’s nothing special about this specific point in time, but there was nothing special about any point in the last four years either, so I gave up on trying to do any kind of usual versioned release and simply switched to CalVer.
So, Corydalis 2023.44.0 is up and running on https://demo.corydalis.io. I am 100% sure I’m the only one using it, but doing it open-source is nicer, and I still can’t imagine another way of managing/browsing my photo library (that keeps it under my own control), so I keep doing 10-20 commits per year to it.
There are a lot of bugs to fix and functionality to improve (the main thing being a real video player), but until I can find a chunk of free time, it is what it is 😣.
Petter Reinholdtsen: Test framework for DocBook processors / formatters
All the books I have published so far have used DocBook somewhere in the process. For the first book, the source format was DocBook, while for every later book it was an intermediate format used as the stepping stone to be able to present the same manuscript in several formats: on paper, as an ebook in ePub format, as an HTML page and as a PDF file either for paper production or for Internet consumption. This is made possible with a wide variety of free software tools with DocBook support in Debian. The source formats of later books have been docx via rst, Markdown, Filemaker and Asciidoc, and for all of these I was able to generate a suitable DocBook file for further processing using pandoc, a2x and asciidoctor, as well as rendering using xmlto, dbtoepub, dblatex, docbook-xsl and fop.
Most of the books I have published are translated books, with English as the source language. The use of po4a to handle translations using the gettext PO format has been a blessing, but publishing translated books has triggered the need to ensure the DocBook tools handle the relevant languages correctly. For every new language I have published in, I had to submit patches to dblatex, dbtoepub and docbook-xsl fixing incorrect language- and country-specific behaviour in the frameworks themselves. Typically this has been missing keywords like 'figure' or sort ordering of index entries. After a while it became tiresome to only discover issues like this by accident, and I decided to write a DocBook "test framework" exercising various features of DocBook and allowing me to see all features exercised for a given language. It consists of a set of DocBook files: a version 4 book, a version 5 book, a v4 book set, a v4 selection of problematic tables, one v4 testing sidefloat and finally one v4 testing a book of articles. The DocBook files are accompanied by a set of build rules for building PDF using dblatex and docbook-xsl/fop, HTML using xmlto or docbook-xsl, and ePub using dbtoepub. The result is a set of files visualizing footnotes, indexes, the table of contents, figures, formulas and other DocBook features, allowing for a quick review of the completeness of the given locale settings. To build with a different language setting, all one needs to do is edit the lang= value in the .xml file to pick a different ISO 639 code and run 'make'.
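In practice that edit-and-rebuild loop is tiny; a rough sketch only, where the file name and the switch from Norwegian Bokmål ("nb") to Northern Sami ("se") are just illustrative placeholders for what actually lives in the repository:

# pick a different language for one of the test books, e.g. Northern Sami
sed -i 's/lang="nb"/lang="se"/' book-v4.xml   # file name is a placeholder
make   # rebuilds PDF (dblatex, docbook-xsl/fop), HTML (xmlto or docbook-xsl) and ePub (dbtoepub)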
The test framework source code is available from Codeberg, and a generated set of presentations of the various examples is available as Codeberg static web pages at https://pere.codeberg.page/docbook-example/. Using this test framework I have been able to discover and report several bugs and missing features in various tools, and got a lot of them fixed. For example I got Northern Sami keywords added to both docbook-xsl and dblatex, fixed several typos in Norwegian Bokmål and Norwegian Nynorsk, got support for non-ASCII title IDs added to pandoc, got Norwegian index sorting support fixed in xindy and initial Norwegian Bokmål support added to dblatex. Some issues still remain, though. Default index sorting rules are still broken in several tools, so the Norwegian letters æ, ø and å are more often than not sorted incorrectly in the book index.
The test framework recently received some more polish, as part of publishing my latest book. This book contained a lot of fairly complex tables, which exposed bugs in some of the tools. This made me add a new test file with various tables, as well as spend some time brushing up the build rules. My goal is for the test framework to exercise all DocBook features, to make it easier to see which features work with different processors, and hopefully to get them all to support the full set of DocBook features. Feel free to send patches to extend the test set, and to test it with your favorite DocBook processor. Please visit the two URLs mentioned above (the Codeberg source repository and the generated examples) to learn more.
If you want to learn more about DocBook and translations, I recommend having a look at the DocBook web site, the DoCookBook site and my earlier blog post on how the Skolelinux project processes and translates documentation, as well as a talk I gave earlier this year on how to translate and publish books using free software (Norwegian only).
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
Dirk Eddelbuettel: RcppEigen 0.3.3.9.4 on CRAN: Maintenance, Matrix Changes
A new release 0.3.3.9.4 of RcppEigen arrived on CRAN yesterday, and went to Debian today. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
This update contains a small amount of the usual maintenance (see below), along with a very nice pull request by Mikael Jagan which simplifies the interface to the Matrix package and in particular the CHOLMOD library that is part of SuiteSparse. This release is coordinated with lme4 and OpenMx, which are also being updated.
The complete NEWS file entry follows.
Changes in RcppEigen version 0.3.3.9.4 (2023-11-01)
The CITATION file has been updated for the new bibentry style.
The package skeleton generator has been updated and no longer sets an Imports:.
Some README.md URLs and badges have been updated.
The use of -fopenmp has been documented in Makevars, and a simple thread-count reporting function has been added.
The old manual src/init.c has been replaced by an autogenerated version, and the RcppExports file has been regenerated.
The interface to package Matrix has been updated and simplified thanks to an excellent patch by Mikael Jagan.
The new upload is coordinated with packages lme4 and OpenMx.
Courtesy of CRANberries, there is also a diffstat report for the most recent release.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Valhalla's Things: Piecepack and postcard boxes
Thanks to All Saints’ Day, I’ve just had a 5 days weekend. One of those days I woke up and decided I absolutely needed a cartonnage box for the cardboard and linocut piecepack I’ve been working on for quite some time.
I started drawing a plan with measures before breakfast, then decided to change some important details, restarted from scratch, did a quick dig through the bookbinding materials and settled on 2 mm cardboard for the structure, black fabric-like paper for the outside and a scrap of paper with a manuscript print for the inside.
Then we had the only day with no rain among the five, so some time was spent doing things outside, but on the next day I quickly finished two boxes, at two different heights.
The weather situation also meant that while I managed to take passable pictures of the first stages of the box making in natural light, the last few stages required some creative artificial lighting, even if it wasn’t that late in the evening. I need to build¹ myself a light box.
And then I decided that since they are C6 sized, they also work well for postcards or for other A6 pieces of paper, so I will probably need to make another one when the piecepack set is finally finished.
The original plan was to use a linocut of the piecepack suits as the front cover; I don’t currently have one ready, but will make it while printing the rest of the piecepack set. One day :D
One of the boxes was temporarily used for the plastic piecepack I got with the book, and that one works well, but since it’s a set with standard suits I think I will want to make another box, using some of the paper with fleur-de-lis that I saw in the stash.
I’ve also started to write detailed instructions: I will publish them as soon as they are ready, and then either update this post, or mention them in an additional post if I have already made more boxes in the meantime.
you don’t really expect me to buy one, right? :D↩︎
Scarlett Gately Moore: KDE: Big fixes for Snaps! Debian and KDE neon updates.
A big thank you goes to my parents this week for contributing to my survival fund. With that I was able to make a big push on fixing some outstanding issues on some of our snaps.
- Marble! Now shows all maps and finds its plugins properly.
- Neochat: A significant fix regarding libsecret, which left users with an endless loading screen because the app could not authenticate. Bug https://bugs.kde.org/show_bug.cgi?id=473003 This actually affected any app in KDE that uses libsecret… KDE desktops do not ship with gnome-keyring, which is why sometimes installing it would fix the issue (if the portals were installed and working correctly AND the XDG variables were set correctly). In most cases it works out of the box. In some cases, you must install gnome-keyring via apt and reinstall Neochat to set up a new account, and it will then prompt to save to the keyring. If you are a KDE desktop user and wish to use KWallet, you can sudo snap connect neochat:password-manager-service :password-manager-service. My next order of business is to set up KWallet as a service inside the snaps, should funding allow.
- KDevelop: New release. MR to fix Wayland no-start issues: https://invent.kde.org/kdevelop/kdevelop/-/merge_requests/510
- Kid3: Fixed qml scripts issue https://bugs.kde.org/show_bug.cgi?id=473556
- Kate: New release. MR to fix Wayland no-start issues: https://invent.kde.org/utilities/kate/-/merge_requests/1333
- Krita! 5.2.1 coming as soon as MR https://invent.kde.org/graphics/krita/-/merge_requests/1987 is approved and merged. I have successfully built and installed locally, great stuff!
- Store icons have been fixed.
- Another pile of 23.08.2 releases; almost at 100% now.
KDE neon:
No major blowups this week. Worked out issues with kmail-account-wizard thanks to David R! This is now in the hands of upstream (porting not complete).
Worked on more orange -> green packaging fixes.
Debian:
Fixed bug https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1054713 in squashfuse
Thanks for stopping by. Please help fund my efforts!
Donate using Liberapay: https://liberapay.com/sgmoore/donate
François Marier: Upgrading from Debian 11 bullseye to 12 bookworm
Over the last few months, I upgraded my Debian machines from bullseye to bookworm. The process was uneventful, but I ended up reconfiguring several things afterwards in order to modernize my upgraded machines.
Logcheck
I noticed in this release that the transition to journald is essentially complete. This means that rsyslog is no longer needed on most of my systems:
apt purge rsyslog
Once that was done, I was able to comment out the following lines in /etc/logcheck/logcheck.logfiles.d/syslog.logfiles:
#/var/log/syslog
#/var/log/auth.log
I did have to adjust some of my custom logcheck rules, particularly the ones that deal with kernel messages:
--- a/logcheck/ignore.d.server/local-kernel
+++ b/logcheck/ignore.d.server/local-kernel
@@ -1,1 +1,1 @@
-^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: \[[0-9. ]+]\ IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
+^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ kernel: (\[[0-9. ]+]\ )?IN=eno1 OUT= MAC=[0-9a-f:]+ SRC=[0-9a-f.:]+
Then I moved local entries from /etc/logcheck/logcheck.logfiles to /etc/logcheck/logcheck.logfiles.d/local.logfiles (/var/log/syslog and /var/log/auth.log are enabled by default when needed) and removed some files that are no longer used:
rm /var/log/mail.err*
rm /var/log/mail.warn*
rm /var/log/mail.info*
Finally, I had to fix any unescaped | characters in my local rules. For example error == NULL || \*error == NULL must now be written as error == NULL \|\| \*error == NULL.
Networking
After the upgrade, I got a notice that the isc-dhcp-client is now deprecated and so I removed it from my system:
apt purge isc-dhcp-client
This however meant that I needed to ensure that my network configuration software does not depend on the now-deprecated DHCP client.
On my laptop, I was already using NetworkManager for my main network interfaces and that has built-in DHCP support.
Migration to systemd-networkd
On my backup server, I took this opportunity to switch from ifupdown to systemd-networkd by removing ifupdown:
apt purge ifupdown
rm /etc/network/interfaces
putting the following in /etc/systemd/network/20-wired.network:
[Match]
Name=eno1
[Network]
DHCP=yes
and then enabling/starting systemd-networkd:
systemctl enable systemd-networkd
systemctl start systemd-networkd
I also needed to install polkit:
apt install --no-install-recommends policykit-1
in order to allow systemd-networkd to set the hostname.
In order to start my firewall automatically as interfaces are brought up, I wrote a dispatcher script to apply my existing iptables rules.
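Such a dispatcher script can be very small; a minimal sketch only, assuming the networkd-dispatcher package and the rules files at the paths referenced further below:

#!/bin/sh
# /etc/networkd-dispatcher/routable.d/50-firewall  (path assumed)
set -e
iptables-restore < /etc/network/iptables.up.rules
ip6tables-restore < /etc/network/ip6tables.up.rules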
Migration to predictable network interface names
On my Linode server, I did the same as on the backup server, but I put the following in /etc/systemd/network/20-wired.network since it has a static IPv6 allocation:
[Match]
Name=enp0s4
[Network]
DHCP=yes
Address=2600:3c01::xxxx:xxxx:xxxx:939f/64
Gateway=fe80::1
and switched to predictable network interface names by deleting these two files:
- /etc/systemd/network/50-virtio-kernel-names.link
- /etc/systemd/network/99-default.link
and then changing eth0 to enp0s4 in:
- /etc/network/iptables.up.rules
- /etc/network/ip6tables.up.rules
- /etc/rc.local (for OpenVPN)
- /etc/logcheck/ignored.d.*/*
Then I regenerated all initramfs:
update-initramfs -u -k all
and rebooted the virtual machine.
Dynamic DNS
I replaced ddclient with inadyn since it doesn't work with no-ip.com anymore, using the configuration I described in an old blog post.
chkrootkit
I moved my customizations in /etc/chkrootkit.conf to /etc/chkrootkit/chkrootkit.conf after seeing this message in my logs:
WARNING: /etc/chkrootkit.conf is deprecated. Please put your settings in /etc/chkrootkit/chkrootkit.conf instead: /etc/chkrootkit.conf will be ignored in a future release and should be deleted.
ssh
As mentioned in Debian bug #1018106, to silence the following warnings:
sshd[6283]: pam_env(sshd:session): deprecated reading of user environment enabled
I changed the following in /etc/pam.d/sshd:
--- a/pam.d/sshd
+++ b/pam.d/sshd
@@ -44,7 +44,7 @@
 session required pam_limits.so
 session required pam_env.so # [1]
 # In Debian 4.0 (etch), locale-related environment variables were moved to
 # /etc/default/locale, so read that as well.
-session required pam_env.so user_readenv=1 envfile=/etc/default/locale
+session required pam_env.so envfile=/etc/default/locale
 # SELinux needs to intervene at login time to ensure that the process starts
 # in the proper default security context. Only sessions which are intended
I also made the following changes to /etc/ssh/sshd_config.d/local.conf based on the advice of ssh-audit 2.9.0:
-KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256
+KexAlgorithms curve25519-sha256@libssh.org,curve25519-sha256,sntrup761x25519-sha512@openssh.com,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
Reproducible Builds: Farewell from the Reproducible Builds Summit 2023!
Farewell from the Reproducible Builds summit, which just took place in Hamburg, Germany:
This year, we were thrilled to host the seventh edition of this exciting event. Topics covered this year included:
- Project updates from OpenSUSE, Fedora, Debian, ElectroBSD, Reproducible Central and NixOS
- Mapping the “big picture”
- Towards a snapshot service
- Understanding user-facing needs and personas
- Language-specific package managers
- Defining our definitions
- Creating a “Ten Commandments” of reproducibility
- Embedded systems
- Next steps in GNU Guix’ reproducibility
- Signature storage and sharing
- Public verification services
- Verification use cases
- Web site audiences
- Enabling new projects to be “born reproducible”
- Collecting reproducibility success stories
- Reproducibility’s relationship to SBOMs
- SBOMs for RPM-based distributions
- Filtering diffoscope output
- Reproducibility of filesystem images, filesystems and containers
- Using verification data
- A deep-dive on Fedora and Arch Linux package reproducibility
- Debian rebuild archive service discussion
… as well as countless informal discussions and hacking sessions into the night. Projects represented at the venue included:
Debian, OpenSuSE, QubesOS, GNU Guix, Arch Linux, phosh, Mobian, PureOS, JustBuild, LibreOffice, Warpforge, OpenWrt, F-Droid, NixOS, ElectroBSD, Apache Security, Buildroot, Systemd, Apache Maven, Fedora, Privoxy, CHAINS, coreboot, GitHub, Tor Project, Ubuntu, rebuilderd, repro-env, spytrap-adb, arch-repo-status, etc.
A huge thanks to our sponsors and partners for making the event possible:
Event facilitation
Platinum sponsor
If you weren’t able to make it this year, don’t worry; just look out for an announcement in 2024 for the next event.
Joachim Breitner: Joining the Lean FRO
Tomorrow is going to be a new first day in a new job for me: I am joining the Lean FRO, and I’m excited.
What is Lean?
Lean is the new kid on the block of theorem provers.
It’s a pure functional programming language (like Haskell, with and on which I have worked a lot), but it’s dependently typed (which Haskell may be evolving to be as well, but rather slowly and carefully). It has a refreshing syntax, built on top of a rather good (I have been told, not an expert here) macro system.
As a dependently typed programming language, it is also a theorem prover, or proof assistant, and there exists already a lively community of mathematicians who started to formalize mathematics in a coherent library, creatively called mathlib.
What is a FRO?
A Focused Research Organization has the organizational form of a small start up (small team, little overhead, a few years of runway), but its goals and measure for success are not commercial, as funding is provided by donors (in the case of the Lean FRO, the Simons Foundation International, the Alfred P. Sloan Foundation, and Richard Merkin). This allows us to build something that we believe is a contribution for the greater good, even though it’s not (or not yet) commercially interesting enough and does not fit other forms of funding (such as research grants) well. This is a very comfortable situation to be in.
Why am I excited?
To me, working on Lean seems to be the perfect mix: I have been working on language implementation for about a decade now, and always with a preference for functional languages. Add to that my interest in theorem proving, where I have used Isabelle and Coq so far, and played with Agda and others. So technically, clearly up my alley.
Furthermore, the language isn’t too old, and plenty of interesting things are simply still to do, rather than tried before. The ecosystem is still evolving, so there is a good chance to have some impact.
On the other hand, the language isn’t too young either. It is no longer an open question whether we will have users: we have them already, they hang out on zulip, so if I improve something, there is likely someone going to be happy about it, which is great. And the community seems to be welcoming and full of nice people.
Finally, this library of mathematics that these users are building is itself an amazing artifact: Lots of math in a consistent, machine-readable, maintained, documented, checked form! With a little bit of optimism I can imagine this changing how math research and education will be done in the future. It could be for math what Wikipedia is for encyclopedic knowledge and OpenStreetMap for maps – and the thought of facilitating that excites me.
With this new job I find that when I am telling friends and colleagues about it, I do not hesitate or hedge when asked why I am doing this. This is a good sign.
What will I be doing?
We’ll see what main tasks I’ll get to tackle initially, but knowing myself, I expect I’ll get broadly involved.
To get up to speed I started playing around with a few things already, and for example created Loogle, a Mathlib search engine inspired by Haskell’s Hoogle, including a Zulip bot integration. This seems to be useful and quite well received, so I’ll continue maintaining that.
Expect more about this and other contributions here in the future.
Matthew Garrett: Why ACPI?
Why does ACPI exist? In the beforetimes power management on x86 was done by jumping to an opaque BIOS entry point and hoping it would do the right thing. It frequently didn't. We called this Advanced Power Management (Advanced because before this power management involved custom drivers for every machine and everyone agreed that this was a bad idea), and it involved the firmware having to save and restore the state of every piece of hardware in the system. This meant that assumptions about hardware configuration were baked into the firmware - failed to program your graphics card exactly the way the BIOS expected? Hurrah! It's only saved and restored a subset of the state that you configured and now potential data corruption for you. The developers of ACPI made the reasonable decision that, well, maybe since the OS was the one setting state in the first place, the OS should restore it.
So far so good. But some state is fundamentally device specific, at a level that the OS generally ignores. How should this state be managed? One way to do that would be to have the OS know about the device specific details. Unfortunately that means you can't ship the computer without having OS support for it, which means having OS support for every device (exactly what we'd got away from with APM). This, uh, was not an option the PC industry seriously considered. The alternative is that you ship something that abstracts the details of the specific hardware and makes that abstraction available to the OS. This is what ACPI does, and it's also what things like Device Tree do. Both provide static information about how the platform is configured, which can then be consumed by the OS and avoid needing device-specific drivers or configuration to be built-in.
The main distinction between Device Tree and ACPI is that Device Tree is purely a description of the hardware that exists, and so still requires the OS to know what's possible - if you add a new type of power controller, for instance, you need to add a driver for that to the OS before you can express that via Device Tree. ACPI decided to include an interpreted language to allow vendors to expose functionality to the OS without the OS needing to know about the underlying hardware. So, for instance, ACPI allows you to associate a device with a function to power down that device. That function may, when executed, trigger a bunch of register accesses to a piece of hardware otherwise not exposed to the OS, and that hardware may then cut the power rail to the device to power it down entirely. And that can be done without the OS having to know anything about the control hardware.
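To see what this looks like on a real machine, the AML the firmware ships can be dumped and decompiled from Linux; a hedged sketch using the ACPICA tools (Debian package name as I recall it, _PS0/_PS3/_OFF being standard power-management method names):

apt install acpica-tools
acpidump -b                          # dump the raw ACPI tables (dsdt.dat, ssdt*.dat) into the current directory
iasl -d dsdt.dat                     # decompile the DSDT AML into readable ASL (dsdt.dsl)
grep -n '_PS0\|_PS3\|_OFF' dsdt.dsl  # look for per-device power-up/power-down methods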
How is this better than just calling into the firmware to do it? Because the fact that ACPI declares that it's going to access these registers means the OS can figure out that it shouldn't, because it might otherwise collide with what the firmware is doing. With APM we had no visibility into that - if the OS tried to touch the hardware at the same time APM did, boom, almost impossible to debug failures (This is why various hardware monitoring drivers refuse to load by default on Linux - the firmware declares that it's going to touch those registers itself, so Linux decides not to in order to avoid race conditions and potential hardware damage. In many cases the firmware offers a collaborative interface to obtain the same data, and a driver can be written to get that. This bug comment discusses this for a specific board.)
Unfortunately ACPI doesn't entirely remove opaque firmware from the equation - ACPI methods can still trigger System Management Mode, which is basically a fancy way to say "Your computer stops running your OS, does something else for a while, and you have no idea what". This has all the same issues that APM did, in that if the hardware isn't in exactly the state the firmware expects, bad things can happen. While historically there were a bunch of ACPI-related issues because the spec didn't define every single possible scenario and also there was no conformance suite (eg, should the interpreter be multi-threaded? Not defined by spec, but influences whether a specific implementation will work or not!), these days overall compatibility is pretty solid and the vast majority of systems work just fine - but we do still have some issues that are largely associated with System Management Mode.
One example is a recent Lenovo one, where the firmware appears to try to poke the NVME drive on resume. There's some indication that this is intended to deal with transparently unlocking self-encrypting drives on resume, but it seems to do so without taking IOMMU configuration into account and so things explode. It's kind of understandable why a vendor would implement something like this, but it's also kind of understandable that doing so without OS cooperation may end badly.
This isn't something that ACPI enabled - in the absence of ACPI firmware vendors would just be doing this unilaterally with even less OS involvement and we'd probably have even more of these issues. Ideally we'd "simply" have hardware that didn't support transitioning back to opaque code, but we don't (ARM has basically the same issue with TrustZone). In the absence of the ideal world, by and large ACPI has been a net improvement in Linux compatibility on x86 systems. It certainly didn't remove the "Everything is Windows" mentality that many vendors have, but it meant we largely only needed to ensure that Linux behaved the same way as Windows in a finite number of ways (ie, the behaviour of the ACPI interpreter) rather than in every single hardware driver, and so the chances that a new machine will work out of the box are much greater than they were in the pre-ACPI period.
There's an alternative universe where we decided to teach the kernel about every piece of hardware it should run on. Fortunately (or, well, unfortunately) we've seen that in the ARM world. Most device-specific simply never reaches mainline, and most users are stuck running ancient kernels as a result. Imagine every x86 device vendor shipping their own kernel optimised for their hardware, and now imagine how well that works out given the quality of their firmware. Does that really seem better to you?
It's understandable why ACPI has a poor reputation. But it's also hard to figure out what would work better in the real world. We could have built something similar on top of Open Firmware instead but the distinction wouldn't be terribly meaningful - we'd just have Forth instead of the ACPI bytecode language. Longing for a non-ACPI world without presenting something that's better and actually stands a reasonable chance of adoption doesn't make the world a better place.
Paul Wise: FLOSS Activities October 2023
This month I didn't have any particular focus. I just worked on issues in my info bubble.
Changes
- sqliteodbc: fix crash (also sent upstream privately)
- swh-docs: add FAQs for Add Forge Now (1 2 3)
- swh-web: improve forge contact mail (1 2)
- devscripts: improve chdist, build-rdeps
- duck: add indicator
- userdir-ldap: TLS
- Debian BTS usertags: fixed usertags for Java, archive admins, ports
- Debian QA services: link cruft report
- Debian wiki pages: AutoGeneratedFiles, DebianKernel/UserspaceTools, Derivatives/Census/Zevenet, Firmware/Open, GPS, IRC, JelmerVernooij, Ports/loong64 (1 2), SSH, Teams (Debbugs/ArchitectureTags, Publicity/ReleasePointAnnouncements (1 2)), thermald
- FOSSjobs wiki pages: Resources
- notmuch wiki pages: frontends (1 2)
- Features in lintian, util-linux
- Enable feature in xwayland
- Missing transitional package in golang-ginkgo
- Broken symlinks in dateutils, libimage-magick-perl
- Conffile removal needed in samba-common, ddcontrol
- Python 3 porting needed for Debian carnivore
- Missing git tags in DBD-ODBC
- CPAN release needed for DBD-ODBC
- SWH listers needed for hgweb, OSDN, Gitee, GitHub Enterprise, klaus, Phorge, Phabricator, Assembla
- SWH listing problems for SourceForge (1 2)
- Debian wiki: RecentChanges for the month
- Debian BTS usertags: changes for the month
- Debian screenshots:
- approved budgie-desktop cadabra2 education-desktop-cinnamon education-desktop-kde extundelete lfm yt-dlp
- rejected freecell-solver-bin (Windows), weboob-qt (chess), task-marathi (random website), stockfish/edict (mobile), libnss3 (random logo), x11vnc (selfie), x11vnc (movie poster), sudoku-solver (Android), fonts-noto-color-emoji (too small)
- Debian IRC: rescue obsolete/unused #debian-wiki channel
- Debian servers: rescue data from an old DebConf server
- Debian wiki: approve accounts
- Respond to queries from Debian users and contributors on the mailing lists and IRC
The SWH, golang-ginkgo, DBD-ODBC, sqliteodbc work was sponsored. All other work was done on a volunteer basis.
Junichi Uekawa: Already November.
Dirk Eddelbuettel: RcppArmadillo 0.12.6.6.0 on CRAN: Bugfix, Thread Throttling
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1110 other packages on CRAN, downloaded 31.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 563 times according to Google Scholar.
This release brings upstream bugfix releases 12.6.5 (sparse matrix corner case) and 12.6.6 with an ARPACK correction. Conrad released it this morning; I had been running reverse dependency checks anyway and knew we were in good shape, so for once I did not await a full run against the now over 1100 (!!) packages using RcppArmadillo.
This release also contains a change I prepared on Sunday which helps with the much-criticized (and rightly so, I may add) insistence by CRAN concerning ‘throttling’. The motivation is understandable: CRAN tests many packages at once on beefy servers and can ill afford tests going off and requesting numerous cores. But rather than providing a global setting at their end, CRAN insists that each package (!!) deals with this. The recent traffic on the helpful-as-ever r-pkg-devel mailing list clearly shows that this confuses quite a few package developers. Some have admitted to simply turning examples and tests off: a net loss for all of us. Now, Armadillo defaults to using up to eight cores (which is enough to upset CRAN) when running with OpenMP (which is generally only on Linux for “reasons” I’d rather not get into…). With this release I expose helper functions (from OpenMP) to limit this. I also set up an example package and repo RcppArmadilloOpenMPEx detailing this, and added a demonstration of how to use the new throttlers to the fastLm example. I hope this proves useful to users of the package.
The set of changes since the last CRAN release follows.
Changes in RcppArmadillo version 0.12.6.6.0 (2023-10-31)
Upgraded to Armadillo release 12.6.6 (Cortisol Retox)
- Fix eigs_sym(), eigs_gen() and svds() to generate deterministic results in ARPACK mode
Add helper functions to set and get the number of OpenMP threads
Store initial thread count at package load and use in thread-throttling helper (and resetter) suitable for CRAN constraints
Upgraded to Armadillo release 12.6.5 (Cortisol Retox)
- Fix for corner-case bug in handling sparse matrices with no non-zero elements
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Iustin Pop: Raspberry PI OS: upgrading and cross-grading
One of the downsides of running Raspberry PI OS is the fact that - not having the resources of pure Debian - upgrades are not recommended, and cross-grades (migrating between armhf and arm64) are not even mentioned. Is this really true? It is, after all, a Debian-based system, so it should in theory be doable. Let’s try!
Upgrading
The recently announced release based on Debian Bookworm here says:
We have always said that for a major version upgrade, you should re-image your SD card and start again with a clean image. In the past, we have suggested procedures for updating an existing image to the new version, but always with the caveat that we do not recommend it, and you do this at your own risk.
This time, because the changes to the underlying architecture are so significant, we are not suggesting any procedure for upgrading a Bullseye image to Bookworm; any attempt to do this will almost certainly end up with a non-booting desktop and data loss. The only way to get Bookworm is either to create an SD card using Raspberry Pi Imager, or to download and flash a Bookworm image from here with your tool of choice.
Which means, it’s time to actually try it 😊 … turns out it’s actually trivial, if you use RPIs as headless servers. I had only three issues:
- if using an initrd, the new initrd-building scripts/hooks are looking for some binaries in /usr/bin, and not in /bin; solution: install manually the usrmerge package, and then re-run dpkg --configure -a;
- also if using an initrd, the scripts are looking for the kernel config file in /boot/config-$(uname -r), and the raspberry pi kernel package doesn’t provide this; workaround: modprobe configs && zcat /proc/config.gz > /boot/config-$(uname -r);
- and finally, on normal RPI systems that don’t use manual configuration of interfaces in /etc/network/interfaces, migrating from the previous dhcpcd to NetworkManager will break network connectivity, and require you to log in locally and fix things.
I expect most people to hit only the 3rd, and almost no-one to use initrd on raspberry pi. But, overall, aside from these issues and a couple of cosmetic ones (login.defs being rewritten from scratch and showing a baffling diff, for example), it was easy.
Is it worth doing? Definitely. Had no data loss, and no non-booting system.
Cross-grading (32 bit to 64 bit userland)
This one is actually painful. Internet searches go from “it’s possible, I think” to “it’s definitely not worth trying”. Examples:
- rsync files over after reinstall
- just install on a new sdcard
- confusion about kernel vs userland
- don’t do it, not recommended
- stackexchange post without answers
Aside from these, there are a gazillion other posts about switching the kernel to 64 bit. And that’s worth doing on its own, but it’s only half the way.
So, armed with two different systems - a RPI4 4GB and a RPI Zero W2 - I tried to do this. And while it can be done, it takes many hours - the first system took about 6 hours, the second about the same, and a third RPI4 probably took only ~3 hours since I knew the problematic issues.
So, what are the steps? Basically:
- install devscripts, since you will need dget‼
- enable new architecture in dpkg: dpkg --add-architecture arm64
- switch over apt sources to include the 64 bit repos, which are different than the 32 bit ones (Raspberry PI OS did a migration here; normally a single repository has all architectures, of course)
- downgrade all custom rpi packages/libraries to the standard bookworm/bullseye version, since dpkg won’t usually allow a single library package to have different versions (I think it’s possible to override, but I didn’t bother)
- install libc for the arm64 arch (this takes some effort, it’s actually a set of 3-4 packages)
- once the above is done, install whiptail:arm64 and rejoice at running a 64-bit binary!
- then painfully go through sets of packages and migrate the “set” to arm64:
- sometimes this work via apt, sometimes you’ll need to use dget and dpkg -i
- make sure you download both the armhf and arm64 versions before doing dpkg -i, since you’ll need to rollback some installs
- at one point, you’ll be able to switch over dpkg and apt to arm64, at which point the default architecture flips over; from here, if you’ve done it at the right moment, it becomes very easy; you’ll probably need an apt install --fix-broken, though, at first
- and then, finish by replacing all packages with arm64 versions
- and then, dpkg --remove-architecture armhf, reboot, and profit!
But it’s tears and blood to get to that point 🙁
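For orientation, the bootstrap phase of the list above condenses to roughly the following; a hedged sketch, with the repository edits and the exact arm64 libc package set left as placeholders:

apt install devscripts                     # for dget
dpkg --add-architecture arm64
# edit /etc/apt/sources.list* to add the 64-bit repositories, then:
apt update
apt install libc6:arm64 libgcc-s1:arm64    # the "3-4 package" libc set; exact names may differ
apt install whiptail:arm64                 # the first 64-bit binary
# ... migrate package sets via apt or dget + dpkg -i, flip dpkg/apt to arm64,
# apt install --fix-broken, replace the rest, and finally:
dpkg --remove-architecture armhf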
Pain point 1: RPI custom versions of packages
Since the 32bit armhf architecture is a bit weird - having many variations - it turns out that raspberry pi OS has many packages that are very slightly tweaked to disable a compilation flag or work around build/test failures, or whatnot. Since we talk here about 64-bit capable processors, almost none of these are needed, but they do make life harder since the 64 bit version doesn’t have those overrides.
So what is needed is a way to say “downgrade all armhf packages to the version in the Debian upstream repo”, but I couldn’t find the right apt pinning incantation to do that. So what I did was to remove the 32bit repos, then use apt-show-versions to see which packages have versions that are no longer in any repo, then downgrade them.
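A hedged sketch of that hunt (the grep pattern is from memory and the package and suite names are placeholders):

# after removing the 32-bit Raspberry Pi repos and running apt update:
apt-show-versions | grep 'No available version'       # installed versions that exist in no configured repo
apt install --allow-downgrades somepackage/bookworm   # then downgrade each to the plain Debian version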
There’s a further, minor, complication that there were about 3-4 packages with the same version but different hash (!), which simply needed apt install --reinstall, I think.
Pain point 2: architecture independent packages
There is one very big issue with dpkg in all this story, and the one that makes things very problematic: while you can have a library package installed multiple times for different architectures, as the files live in different paths, a non-library package can only be installed once (usually). For binary packages (arch:any), that is fine. But architecture-independent packages (arch:all) are problematic since usually they depend on a binary package, but they always depend on the default architecture version!
Hrmm, and I just realise I don’t have logs from this, so I’m only ~80% confident. But basically:
- vim-solarized (arch:all) depends on vim (arch:any)
- if you replace vim armhf with vim arm64, this will break vim-solarized, until the default architecture becomes arm64
So you need to keep track of which packages apt will de-install, for later re-installation.
It is possible that Multi-Arch: foreign solves this, per the debian wiki which says:
Note that even though Architecture: all and Multi-Arch: foreign may look like similar concepts, they are not. The former means that the same binary package can be installed on different architectures. Yet, after installation such packages are treated as if they were “native” architecture (by definition the architecture of the dpkg package) packages. Thus Architecture: all packages cannot satisfy dependencies from other architectures without being marked Multi-Arch foreign.
It also has warnings about how to properly use this. But, in general, not many packages have it, so it is a problem.
Pain point 3: remove + install vs overwrite
It seems that depending on how the solver computes a solution, when migrating a package from 32 to 64 bit, it can choose either to:
- “overwrite in place” the package (akin to dpkg -i)
- remove + install later
The former is OK, the latter is not. Or, actually, it might be that apt never can do this, for example (edited for brevity):
# apt install systemd:arm64 --no-install-recommends
The following packages will be REMOVED:
  systemd
The following NEW packages will be installed:
  systemd:arm64
0 upgraded, 1 newly installed, 1 to remove and 35 not upgraded.
Do you want to continue? [Y/n] y
dpkg: systemd: dependency problems, but removing anyway as you requested:
 systemd-sysv depends on systemd.
Removing systemd (247.3-7+deb11u2) ...
systemd is the active init system, please switch to another before removing systemd.
dpkg: error processing package systemd (--remove):
 installed systemd package pre-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 systemd
Processing was halted because there were too many errors.
But at the same time, overwrite in place is all good - via dpkg -i from /var/cache/apt/archives.
In this case it manifested via a prerm script; in other cases it manifests via dependencies that are no longer satisfied for packages that can’t be removed, etc. etc. So you will have to resort to dpkg -i a lot.
Pain point 4: lib- packages that are not lib
During the whole process, it is very tempting to just go ahead and install the corresponding arm64 package for all armhf lib… packages, in one go, since these can coexist.
Well, this simple plan is complicated by the fact that some packages are named libfoo-bar, but are actually holding (e.g.) the bar binary for the libfoo package. Examples:
- libmagic-mgc contains /usr/lib/file/magic.mgc, which conflicts between the 32 and 64 bit versions; of course, it’s the exact same file, so this should be an arch:all package, but…
- libpam-modules-bin and liblockfile-bin actually contain binaries (per the -bin suffix)
It’s possible to work around all this, but it changes a 1-minute
# apt install $(dpkg -l | grep ^ii | awk '{print $2}' | grep :armhf | sed -e 's/:armhf/:arm64/')
into a 10-20 minute fight with packages (like most other steps).
Is it worth doing?
Compared to the simple bullseye → bookworm upgrade, I’m not sure about this. The result? Yes, definitely, the system feels - weirdly - much more responsive, logged in over SSH. I guess the arm64 base architecture has some more efficient ops than the “lowest denominator armhf”, so to say (e.g. there was in the 32 bit version some rpi-custom package with string ops), and thus migrating to 64 bit makes more things “faster”, but this is subjective so it might actually not be true.
But from the point of view of the effort? Unless you like to play with dpkg and apt, and understand how these work and break, I’d rather say, migrate to ansible and automate the deployment. It’s doable, sure, and by the third system, I got this nailed down pretty well, but it was a lot of time spent.
The good aspect is that I did 3 migrations:
- rpi zero w2: bullseye 32 bit to 64 bit, then bullseye to bookworm
- rpi 4: bullseye to bookworm, then bookworm 32bit to 64 bit
- same, again, for a more important system
And all three worked well and no data loss. But I’m really glad I have this behind me, I probably wouldn’t do a fourth system, even if forced 😅
And now, waiting for the RPI 5 to be available… See you!
Russell Coker: Links October 2023
The Daily Kos has an interesting article about a new more effective method of desalination [1].
Here is a video of a crazy guy zapping things with 100 car batteries [2]. This is something you should avoid if you want to die of natural causes. Does dying while making a science video count for a Darwin Award?
A Hacker News comment has an interesting explanation of Unix signals [3].
Interesting documentary on the rise of mega corporations [4]. We need to split up Google, Facebook, and Amazon ASAP. Also every phone platform should have competing app stores.
Dave Taht gave an interesting LCA lecture about Internet congestion control [5]. He also referenced a web site about projects to alleviate the buffer bloat problem [6].
This tiny event based sensor is an interesting product [7]. It could lead to some interesting (but possibly invasive) technological developments in phones.
Tara Barnett’s Everything Open lecture Swiss Army GLAM had some interesting ideas for community software development [8]. Having lots of small programs communicating with APIs is an interesting way to get people into development.
Interesting YouTube video from someone who helped the Kurds defend against Turkey about how war tunnels work [10]. He makes a strong case that the Israeli invasion of the Gaza Strip won’t be easy or pleasant.
- [1] https://tinyurl.com/ywoxq29u
- [2] https://www.youtube.com/watch?v=ywaTX-nLm6Y
- [3] https://news.ycombinator.com/item?id=37902515
- [4] https://www.youtube.com/watch?v=Dy8ogOaKk4Y
- [5] https://www.youtube.com/watch?v=ZeCIbCzGY6k
- [6] https://www.bufferbloat.net/projects/
- [7] https://www.prophesee.ai/event-based-sensor-genx320/
- [8] https://tinyurl.com/yoo435ra
- [9] https://www.youtube.com/watch?app=desktop&v=w2bFzQTQ9aI
- [10] https://www.youtube.com/watch?v=mMaQn6eBroY
Related posts:
- Links August 2023 This is an interesting idea from Bruce Schneier, an “AI...
- Links July 2023 Phys.org has an interesting article about finding evidence for nanohertz...
- Links May 2023 Petter Reinholdtsen wrote an interesting blog post about their work...
Bits from Debian: Call for bids for DebConf24
Due to the current state of affairs in Israel, which was to host DebConf24, the DebConf committee has decided to renew calls for bids to host DebConf24 at another venue and location.
The DebConf committee would like to express our sincere appreciation for the DebConf Israeli team, and the work they've done over several years. However, given the uncertainty about the situation, we regret that it will most likely not be possible to hold DebConf in Israel.
As we ask for submissions for new host locations we ask that you please review and understand the details and requirements for a bid submission to host the Debian Developer Conference.
Please review the template for a DebConf bid for guidelines on how to submit a proper bid.
To submit a bid, please create the appropriate page(s) under DebConf Wiki Bids, and add it to the "Bids" section in the main DebConf 24 page.
There isn't very much time to make a decision. We need bids by the end of November in order to make a decision by the end of the year.
After your submission is completed please send us a notification at debconf-team@lists.debian.org to let us know that your bid submission is ready for review.
We also suggest hanging out in our IRC chat room #debconf-team.
Given this short deadline, we understand that bids won't be as complete as they would usually be. Do the best you can in the time available.
Bids will be evaluated according to The Priority List.
You can get in contact with the DebConf team by email to debconf-team@lists.debian.org, or via the #debconf-team IRC channel on OFTC or via our Matrix Channel.
Thank you,
The Debian Debconf Committee
Joachim Breitner: Squash your Github PRs with one click
TL;DR: Squash your PRs with one click at https://squasher.nomeata.de/.
Very recently I got this response from the project maintainer at a pull request I contributed: “Thanks, approved, please squash so that I can merge.”
It’s nice that my contribution can go in, but why did the maintainer not just press the “Squash and merge” button, and instead add this unnecessary roundtrip to the process? Anyway, maintainers make the rules, so I play by them. But unlike the maintainer, who can squash-and-merge with just one click, squashing the PR’s branch is surprisingly laborious: GitHub does not allow you to do that via the Web UI (and hence on mobile), and it seems you are expected to go to your computer and juggle with git rebase --interactive.
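For reference, the manual dance is roughly this; a sketch only, with the base branch name assumed:

git fetch origin
git rebase --interactive origin/main   # mark every commit after the first as "squash" (or "fixup")
git push --force-with-lease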
I found this rather annoying, so I created Squasher, a simple service that will squash your branch for you. There is no configuration, just paste the PR url. It will use the PR title and body as the commit message (which is obviously the right way™), and create the commit in your name:
Squasher in action
If you find this useful, or found it to be buggy, let me know. The code is at https://github.com/nomeata/squasher if you are curious about it.
Aigars Mahinovs: Figuring out finances part 4
At the end of the last part of this, we got a Home Assistant OS installation that contains in itself a Firefly III instance and that contains all the current financial information. Now I will try to connect the two.
While it could be nice to create a fully-featured integration for Firefly III with Home Assistant to communicate all interesting values and events, I have an interest in programming more advanced data point calculations for my budget needs, so a less generic, but more flexible approach is a better one for me. So I was quite interested when, among the addons in the Home Assistant Addon Store, I saw AppDaemon - a way to simply integrate arbitrary Python processing with Home Assistant. Let's see if that can do what I want.
For a start, after reading the tutorial, I wanted to create a simple script that would use the Firefly III REST API to read the current balance of my main account and then send that to Home Assistant as a sensor value, which can then be displayed on a dashboard.
As a quick try I modified the provided hello_world.py that is included in the default AppDaemon installation:
import requests
from datetime import datetime
import appdaemon.plugins.hass.hassapi as hass

app_token = "<FIREFLY_PERSONAL_ACCESS_TOKEN>"
firefly_url = "<FIREFLY_URL>"

class HelloWorld(hass.Hass):
    def initialize(self):
        self.run_every(self.set_asset, "now", 60 * 60)

    def set_asset(self, kwargs):
        ent = self.get_entity("sensor.firefly3_asset_sparkasse_main")
        if not ent.exists():
            ent.add(
                state=0.0,
                attributes={
                    "native_value": 0.0,
                    "native_unit_of_measurement": "EUR",
                    "state_class": "measurement",
                    "device_class": "monetary",
                    "current_balance_date": datetime.now(),
                })
        r = requests.get(
            firefly_url + "/api/v1/accounts?type=asset",
            headers={
                "Authorization": "Bearer " + app_token,
                "Accept": "application/vnd.api+json",
                "Content-Type": "application/json",
            })
        data = r.json()
        for account in data["data"]:
            if not "attributes" in account or "name" not in account["attributes"]:
                continue
            if account["attributes"]["name"] != "Sparkasse giro":
                continue
            self.log("Account :" + str(account["attributes"]))
            ent.set_state(
                state=account["attributes"]["current_balance"],
                attributes={
                    "native_value": account["attributes"]["current_balance"],
                    "current_balance_date": datetime.fromisoformat(account["attributes"]["current_balance_date"]),
                })
            self.log("Entity updated")
It uses a URL and personal access token to access Firefly III API, gets the asset accounts information, then extracts info about current balance and balance date of my main account and then creates and/or updates a "sensor" value into Home Assistant. This sensor is with metadata marked as a monetary value and as a measurement. This makes Home Assistant track this value in the database as a graphable changing value.
I modified the file using the File Editor addon to edit the /config/appdaemon/apps/hello.py file. Each time the file is saved it is reloaded, and logs can be seen in the AppDaemon Logs section - main_log for logging messages or error_log if there is a crash. It is useful to know that the requests library is included, but it is hard to see in the docs what else is included or whether there is an easy way to install extra Python packages.
This is already a very nice basis for custom value insertion into Home Assistant - whatever you can extract or calculate with a Python script, you can also inject into Home Assistant. Even with this simple approach you can monitor balances, budgets, piggy banks, bill payment status and even sums of transactions in particular categories in a particular time window. Especially interesting data can be found in the insight section of the Firefly III API.
The script above uses a trigger like self.run_every(self.set_asset, "now", 60 * 60) to simply run once per hour. The data in Firefly will not be updated too often anyway, at least not until we figure out how to make the bank connection run automatically without user interaction and without screwing up already existing transactions along the way. In theory the webhook API of Firefly III could be used to trigger the data update instantly when any transaction is created or updated. Possibly even using the Home Assistant webhook integration. Hmmm. Maybe.
Who am I kidding? I am going to make that work, for sure! :D But first - how about figuring out the future?
So what do I want to do? In short, I want to predict what the balance of my main account will be just before the next month's salary comes in. To do this I would:
- take the current balance of the main account
- if this month's salary is not paid out yet, then add that into the balance
- deduct all still unpaid bills that are due between now and the target date
- if the credit card account has not yet been reset to the main account, deduct current amount on the cards
- if credit card account has been reset, but not from main account deducted yet, deduct the reset amount
To do that I need to use the Firefly API to read: current account info, the status of all bills including next due date and amount, transfer transactions between credit cards and the main account, and something that would store the expected salary date and amount. Ideally I'd use a recurring transaction or an income bill for this, but Firefly is not really cooperating with that. The easiest would be just to hardcode that in the script itself.
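Most of those reads are simple authenticated REST calls like the one in the script above; a minimal curl sketch (the accounts endpoint mirrors the script, the bills endpoint is assumed from the Firefly III API documentation):

curl -s -H "Authorization: Bearer $FIREFLY_TOKEN" \
     -H "Accept: application/vnd.api+json" \
     "$FIREFLY_URL/api/v1/accounts?type=asset"   # current balances, as in the script above
curl -s -H "Authorization: Bearer $FIREFLY_TOKEN" \
     -H "Accept: application/vnd.api+json" \
     "$FIREFLY_URL/api/v1/bills"                 # bills with next due dates and amounts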
And this is what I have come up with so far.
To make the development process easier, I separated out the params for the API key and salary info and the app params for the month to predict for, and predict both this and next month's balances at the same time. I edited the script locally with Neovim and also ran it locally with a few mocks, uploading to Home Assistant via the SSH addon when the local executions looked good.
So what's next? Well, need to somewhat automate the sync with the bank (if at all possible). And for sure take a regular database and config backup :D
Russell Coker: Hello Kitty
I’ve just discovered a new xterm replacement named Kitty [1]. It boasts about being faster due to threading and using the GPU and it does appear faster on some of my systems but that’s not why I like it.
A trend in terminal programs in recent years has been tabbed operation so you can have multiple sessions in one OS window, this is something I’ve never liked just as I’ve never liked using Screen to switch between sessions when I had the option of just having multiple sessions on screen. The feature that I like most about Kitty is the ability to have a grid based layout of sessions in one OS window. Instead of having 16 OS windows on my workstation or 4 OS windows on a laptop with different entries in the window list and the possibility of them getting messed up if the OS momentarily gets confused about the screen size (a common issue with laptop use) I can just have 1 Kitty window that has all the sessions running.
Kitty has “Kitten” processes that can do various things, one is icat which displays an image file to the terminal and leaves it in the scroll-back buffer. I put the following shell code in one of the scripts called from .bashrc to setup an alias for icat.
if [ "$TERM" == "xterm-kitty" ]; then alias icat='kitty +kitten icat' fiThe kitten interface can be supported by other programs. The version of the mpv video player in Debian/Unstable has a --vo=kitty option which is an interesting feature. However playing a video in a Kitty window that takes up 1/4 of the screen on my laptop takes a bit over 100% of a CPU core for mpv and about 10% to 20% for Kitty which gives a total of about 120% CPU use on my i5-6300U compared to about 20% for mpv using wayland directly. The option to make it talk to Kitty via shared memory doesn’t improve things.
Using this effectively requires installing the kitty-terminfo package on every system you might ssh to. But you can set the term type to xterm-256color when logged in to a system without the kitty terminfo installed. The fact that icat and presumably other advanced terminal functions work over ssh by default is a security concern, but this also works with Konsole and will presumably be added to other terminal emulators so it’s a widespread problem that needs attention.
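For the terminfo fallback mentioned above, a tiny hypothetical helper (not something Kitty itself provides):

# fall back to a widely available terminfo entry on hosts without kitty-terminfo
alias sshplain='TERM=xterm-256color ssh'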
There is support for desktop notifications in the Kitty terminal encoding [2]. One of the things I’m interested in at the moment is how to best manage notifications on converged systems (phone and desktop) so this is something I’ll have to investigate.
Overall Kitty has some great features and definitely has the potential to improve productivity for some work patterns. There are some security concerns that it raises through closer integration between systems and between programs, but many of them aren’t exclusive to Kitty.
Related posts:
- Wayland in Bookworm We are getting towards the freeze for Debian/Bookworm so the...
- cheap big TFT monitor I just received the latest Dell advert, they are offering...
- Android Multitasking My new Samsung Galaxy S3 has support for “Multi Window...
Valhalla's Things: Forgotten Yeast Bread or Pan Sbagliato
I’ve made it again. And again. And a few more times, and now it has an official household name, “Pan Sbagliato”, or “Wrong Bread”.
And this is the procedure I’ve mostly settled on; starting on the day before (here called Saturday) and baking it so that it’s ready for lunch time (on what here is called Sunday).
Saturday: around 13:00
In a bowl, mix together and work well:
- 250 g water;
- 400 g flour;
- 8 g salt;
cover to rise.
Saturday: around 18:00
In a small bowl, mix together:
- 2-3 g yeast;
- 10 g water;
- 10 g flour.
Saturday: around 21:00
In the bowl with the original dough, add the contents of the small bowl plus:
- 100 g flour;
- 100 g water;
and work well; cover to rise overnight.
Sunday: around 8:00
Pour the dough on a lined oven tray, leave in the cold oven to rise.
Sunday: around 11:00
Remove the tray from the oven, preheat the oven to 240°C, bake for 10 minutes, then lower the temperature to 160°C and bake for 20 more minutes.
Waiting until it has cooled down a bit will make it easier to cut, but is not strictly necessary.
I’ve had up to a couple of hours variations in the times listed, with no ill effects.
Scarlett Gately Moore: KDE: KDEneon Plasma Release, Unstable BOOM, Snaps, and Debian
While Yang our cat tries to lure in unsuspecting birds on the bird feeder, I have been busy working on many things. First things first though, a big thank you to all that donated to my Internet bill. I was able to continue my work without interruption.
KDE neon:
A busy week in KDE neon as https://kde.org/announcements/plasma/5/5.27.9/ was released! We have it ready to update in the User edition, or if you would like to download the new ISO you can find it here: https://neon.kde.org/download I highly advise the User Edition, as Unstable is volatile right now with the Qt6 transition and ABI breakage. Which leads me to the next piece of busy work for the week: Plasma 6 exploded, breaking unstable desktops all over, including mine! A library changed and it was not backward compatible, so we had to rebuild the Qt6 $world to get Plasma and PIM functional again. I am happy to report it is all fixed now, but I cannot stress enough: if you don’t want to chance broken things, please use the User Edition! I also continued the orange -> green build effort, making sure all our runtime dependencies are up to date. This fixes odd UI bugs and ensures developers have all the build dependencies needed to build their applications.
KDE Snaps:
Several more 23.08.2 snaps have arrived in the snap store including the new to snaps Kamoso!
KDE snap Kamoso
I have an auto-connect request in with the snap-store policy folks, but until it is approved please snap connect kamoso:camera :camera. I have a pile of new MRs in for non-release-service applications and some fixes for issues found while testing. While this new workflow does take a bit longer waiting for approvals, I like it much better as I am developing closer relationships with the application developers.
I have made significant progress on the KF6 (Qt6-based) content snap. I am about 90% complete. While this doesn’t mean much for users yet, it will when KDE applications release their Qt6 ports starting with the next major release cycle. I will be ready!
The last bit of snap work is that I have almost completed my Akonadi service snap. This will connect to all KDE PIM snaps so they share data. Akonadi is the background database that ties all the PIM applications together.
Debian:
This week I have worked on updates for several golang packages including charmbracelet/lipgloss, charmbracelet/bubbles, and muesli-termenv. Unfortunately I am stuck on golang-github-aymanbagabas-go-osc52. The work is done in salsa but the maintainer has not uploaded; I have shot an email to the maintainer. I have also begun mentoring my first potential future DD! I reviewed his python-scienceplots and python-art, which should land in Debian soon.
Thanks for stopping by! As usual, if you can please spare some change, consider a donation. All proceeds go to surviving another day to work on cool things to land on your desktop!
Donate using Liberapay: https://liberapay.com/sgmoore/donate