Planet Debian


Thomas Koch: Chromium gtk-filechooser preview size

Tue, 2024-01-09 12:59
Posted on January 9, 2024 Tags: debian, free software, life

I wanted to report this issue in Chromium's issue tracker, but it gave me:

“Something went wrong, please try again later.”

OK, then at least let me reply to this askubuntu question. But my attempt to sign up with my Launchpad account gave me:

“Launchpad Login Failed. Please try logging in again.”

I refrain from commenting on this to not violate some code of conduct.

So this is what I wanted to write:

GTK file chooser image preview size should be configurable

The file chooser that appears when uploading a file (e.g. an image to Google Fotos) learned to show a preview in issue 15500.

The preview image size is hard coded to 256x512 in kPreviewWidth and kPreviewHeight in ui/gtk/select_file_dialog_linux_gtk.cc.

Please make the size configurable.

On high DPI screens the images are too small to be of much use.

Yes, I should not use chromium anymore.

Categories: FLOSS Project Planets

Louis-Philippe Véronneau: 2023 — A Musical Retrospective

Tue, 2024-01-09 00:00

I ended 2022 with a musical retrospective and very much enjoyed writing that blog post. As such, I have decided to do the same for 2023! From now on, this will probably be an annual thing :)

Albums

In 2023, I added 73 new albums to my collection — about three albums every two weeks! I listed them below in the order in which I acquired them.

I purchased most of these albums when I could and borrowed the rest at libraries. If you want to browse through them, I added links to the album covers pointing either to websites where you can buy them or to Discogs when digital copies weren't available.

Once again this year, it seems that Punk (mostly Oï!) and Metal dominate my list, mostly fueled by Angry Metal Guy and the amazing Montréal Skinhead/Punk concert scene.

Concerts

A trend I started in 2022 was to go to as many concerts of artists I like as possible. I'm happy to report I went to around 80% more concerts in 2023 than in 2022! Looking back at my list, April was quite a busy month...

Here are the concerts I went to in 2023:

  • March 8th: Godspeed You! Black Emperor
  • April 11th: Alexandra Stréliski
  • April 12th: Bikini Kill
  • April 21st: Brigada Flores Magon, Union Thugs
  • April 28th: Komintern Sect, The Outcasts, Violent Way, Ultra Razzia, Over the Hill
  • May 3rd: First Fragment
  • May 12th: Rhapsody of Fire, Wind Rose
  • May 13th: Aeternam
  • June 2nd: Mortier, La Gachette
  • June 17th: Ultra Razzia, Total Nada, BLEMISH
  • June 30th: Avishai Cohen Trio
  • July 9th: Richard Galliano
  • August 18th: Gojira, Mastodon, Lorna Shore
  • September 14th: Jinjer
  • September 22nd: CUIR, Salvaje Punk, Hysteric Polemix, Perestroika, Ultra Razzia, Ilusion, Over the Hill, Asbestos
  • October 6th: Rancoeur, Street Code, Tenaz, Mortimer, Guernica, High Anxiety

Although metalfinder continues to work as intended, I'm very glad to have discovered that the Montréal underground scene has departed from Facebook/Instagram and adopted Gancio en masse, a FOSS community agenda that supports ActivityPub. Our local instance, askapunk.net, is pretty much all I could ask for :)

That's it for 2023!

Categories: FLOSS Project Planets

Antoine Beaupré: Last year on this blog

Mon, 2024-01-08 15:58

So this blog is now celebrating its 21st birthday (or 20 if you count from zero, or 18 if you want to be pedantic), and I figured I would do this yearly thing of reviewing how that went.

Number of posts

2022 was the official 20th anniversary in any case, and that was one of my best years on record, with 46 posts, surpassed only by the noisy 2005 (62) and matching 2006 (46). 2023, in comparison, was underwhelming: a feeble 11 posts! What happened!

Well, I was busy with other things, mostly away from keyboard, that I will not bore you with here...

The other thing that happened is that the one-liner I used to collect stats was broken (it counted folders and other unrelated files) and wildly overestimated 2022! Turns out I didn't write that much then:

anarc.at$ ls blog | grep '^[0-9][0-9][0-9][0-9].*.md' | sed s/-.*// | sort | uniq -c | sort -n -k2
     57 2005
     43 2006
     20 2007
     20 2008
      7 2009
     13 2010
     16 2011
     11 2012
     13 2013
      5 2014
     13 2015
     18 2016
     29 2017
     27 2018
     17 2019
     18 2020
     14 2021
     28 2022
     10 2023
      1 2024

But even that is inaccurate because, in ikiwiki, I can tag any page as being featured on the blog. So we actually need to process the HTML itself because we don't have much better on hand without going through ikiwiki's internals:

anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c
     56 2005
     42 2006
     19 2007
     18 2008
      6 2009
     12 2010
     15 2011
     10 2012
     11 2013
      3 2014
     15 2015
     32 2016
     50 2017
     37 2018
     19 2019
     19 2020
     15 2021
     28 2022
     13 2023

Which puts the top 10 years at:

$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | head -10
     56 2005
     50 2017
     42 2006
     37 2018
     32 2016
     28 2022
     19 2020
     19 2019
     19 2007
     18 2008

Anyway, 2023 is certainly not a glorious year in that regard.

Visitors

In terms of visits, however, we had quite a few hits. According to Goatcounter, I had 122 300 visits in 2023! 2022, in comparison, had 89 363, so that's quite a rise.

What you read

I seem to have hit the Hacker News front page at least twice. I say "seem" because it's actually pretty hard to tell what the HN front page actually is on any given day. I had 22k visits on 2023-03-13, in any case, and you can't see me on the front page that day. We do see a post of mine on 2023-09-02, all the way down there, which seems to have generated another 10k visits.

In any case, here were the most popular stories for you fine visitors:

  • Framework 12th gen laptop review: 24k visits, which is surprising for a 13k-word article "without images", as some critics have complained. 15k of those were referred by Hacker News. Good reference and time-consuming benchmarks, slowly bit-rotting.

    That is, by far, my most popular article ever. A popular article in 2021 or 2022 was around 6k to 9k, so that's a big one. I suspect it will keep getting traffic for a long while.

  • Calibre replacement considerations: 15k visits, most of which without a referrer. It was actually an old article, but I suspect HN brought it back to light. I keep updating that wiki page regularly when I find new things, but I'm still using Calibre to import ebooks.

  • Hacking my Kobo Clara HD: not new, but always gathering more and more hits. It had 1800 hits in the first year, 4600 hits last year, and brought 6400 visitors to the blog this year! Not directly related, but this iFixit battery replacement guide I wrote also seems to be quite popular.

Everything else was published before 2023. Replacing Smokeping with Prometheus is still around and Looking at Wayland terminal emulators makes an entry in the top five.

Where you've been

People send less and less private information when they browse the web. The share of visitors without referrers was 41% in 2021 and rose to 44% in 2023. Most of the remaining traffic comes from Google, but Hacker News is now a significant chunk, almost as big as Google.

In 2021, Google represented 23% of my traffic; in 2022 it was down to 15%, so 18% is actually a rise from last year, even if it seems much smaller than what I usually think of.

Ratio  Referrer               Visits
 18%   Google                 22 098
 13%   Hacker News            16 003
  2%   duckduckgo.com          2 640
  1%   community.frame.work    1 090
  1%   missing.csail.mit.edu     918

Note that Facebook and Twitter do not appear at all in my referrers.

Where you are

Unsurprisingly, most visits still come from the US:

Ratio  Country          Visits
 26%   United States    32 010
 14%   France           17 046
 10%   Germany          11 650
  6%   Canada            7 425
  5%   United Kingdom    6 473
  3%   Netherlands       3 436

Those ratios are nearly identical to last year, but quite different from 2021, where Germany and France were more or less reversed.

Back in 2021, I mentioned there was a long tail of countries with at least one visit, with 160 countries listed. I expanded that, and there are now 182 countries in that list, almost all of the 193 member states of the UN.

What you were

Chrome's dominance continues to expand, even on readers of this blog, gaining two percentage points from Firefox compared to 2021.

Ratio  Browser  Visits
 49%   Firefox  60 126
 36%   Chrome   44 052
 14%   Safari   17 463
  1%   Others      N/A

Unfortunately, my Lynx and Haiku users seem not to have visited in the past year. Trying to read those metrics is like reading tea leaves...

In terms of operating systems:

Ratio  OS       Visits
 28%   Linux    34 010
 23%   macOS    28 728
 21%   Windows  26 303
 17%   Android  20 614
 10%   iOS      11 741

Again, Linux and Mac are over-represented, and Android and iOS are under-represented.

What is next

I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world...

So anyway, thanks for coming, faithful reader, and see you in 2024...

Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in December 2023

Mon, 2024-01-08 13:40
FTP master

This month I accepted 235 and rejected 13 packages. The overall number of packages that got accepted was 249. I also handled lots of RM bugs and almost stopped the increase in packages this month :-). Please be aware: if you don’t want your package to be removed, take care of it and keep it in good shape!

Debian LTS

This was my one hundred and fourteenth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3686-1] xorg-server security update for two CVEs to fix privilege escalation
  • [DLA 3686-2] xorg-server security update for one CVE to really fix privilege escalation. Unfortunately the first patches provided by upstream did not really solve the problem, so here we are in round 2
  • [DLA 3699-1] libde265 security update for three CVEs to fix heap buffer or global buffer overflows
  • [DLA 3700-1] cjson security update for one CVE to fix a segmentation violation
  • [#1056934] Bookworm PU-bug for libde265; I could finally upload the package
  • [#1056737] Bookworm PU-bug for minizip; I could finally upload the package
  • [libde265] For the next round of CVEs of libde265 I prepared debdiffs for Bullseye and Bookworm and sent them to the maintainer.
  • [cjson] I prepared debdiffs for Bullseye and Bookworm and sent them to the maintainer.

This month was rather calm and no unexpected things happened. The web team now automatically creates all webpages from data found in the security tracker. So I could deactivate my web-dla script again which created the webpages from the contents of the announcement mailing list.

Last but not least I also did two weeks of frontdesk duties.

Debian ELTS

This month was the sixty-fifth ELTS month. During my allocated time I uploaded:

  • [ELA-1019-1] xorg-server security update for two CVEs to fix privilege escalation
  • [ELA-1019-2] xorg-server security update to really fix privilege escalation. As with the DLAs above, the first patches provided by upstream did not really solve the problem, so here we are in round 2
  • [ELA 1027-1] libde265 security update for three CVEs in Stretch to fix heap buffer or global buffer overflows

Last but not least I also did two weeks of frontdesk duties.

Debian Printing

This month I uploaded a package to fix bugs:

  • cups/Bookworm to fix a bug related to color printing
  • hplip to fix a bug related to /usr-merge

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a package to fix bugs:

Other stuff

This month I uploaded new upstream versions of packages, did source uploads for transitions, or uploaded packages to fix one issue or another:

Categories: FLOSS Project Planets

Uwe Kleine-König: PGP Keysigning on FOSDEM'24

Mon, 2024-01-08 02:34

I'm going to FOSDEM'24. Since I expect to meet Debian and kernel folks there, this should be a good opportunity for PGP keysigning.

If you also go there and you're interested in keysigning: Send me your key via email to fosdem24-keysigning@kleine-koenig.org. I'll collect the keys, create a paper list for a keysigning party and send it back to you in the week before FOSDEM. The list will only be made available to other participants.
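
If you need a one-liner to produce that attachment, gpg can export an ASCII-armored copy of your public key (KEYID here is a placeholder for your key's fingerprint or ID):

gpg --armor --export KEYID > mykey.asc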

Then maybe wear a "keysigning" badge (or a crepe tape with that caption) that allows us to identify others on the list. If there are not too many who are interested, I guess that should work fine. (If this idea becomes too successful, I'd close the list after the first 100 participants.)

Categories: FLOSS Project Planets

Russ Allbery: Review: The Faithless

Sun, 2024-01-07 22:47

Review: The Faithless, by C.L. Clark

Series: Magic of the Lost #2
Publisher: Orbit
Copyright: March 2023
ISBN: 0-316-54283-0
Format: Kindle
Pages: 527

The Faithless is the second book in a political fantasy series that seems likely to be a trilogy. It is a direct sequel to The Unbroken, which you should read first. As usual, Orbit made it unnecessarily hard to get re-immersed in the world by refusing to provide memory aids for readers who read books as they come out instead of only when the series is complete, but this is not the fault of Clark or the book and you've heard me rant about this before.

The Unbroken was set in Qazāl (not-Algeria). The Faithless, as readers of the first book might guess from the title, is set in Balladaire (not-France). This is the palace intrigue book. Princess Luca is fighting for her throne against her uncle, the regent. Touraine is trying to represent her people. Whether and to what extent those interests are aligned is much of the meat of this book.

Normally I enjoy palace intrigue novels for the competence porn: watching someone navigate a complex political situation with skill and cunning, or upend the entire system by building unlikely coalitions or using unexpected routes to power. If you are similar, be warned that this is not what you're going to get. Touraine is a fish out of water with no idea how to navigate the Balladairan court, and does not magically become an expert in the course of this novel. Luca has the knowledge, but she's unsure, conflicted, and largely out-maneuvered. That means you will have to brace for some painful scenes of some of the worst people apparently getting what they want.

Despite that, I could not put this down. It was infuriating, frustrating, and a much slower burn than I prefer, but the layers of complex motivations that Clark builds up provided a different sort of payoff.

Two books in, the shape of this series is becoming clearer. This series is about empire and colonialism, but with considerably more complexity than fantasy normally brings to that topic. Power does not loosen its grasp easily, and it has numerous tools for subtle punishment after apparent upstart victories. Righteous causes rarely call banners to your side; instead, they create opportunities for other people to maneuver to their own advantage. Touraine has some amount of power now, but it's far from obvious how to use it. Her life's training tells her that exercising power will only cause trouble, and her enemies are more than happy to reinforce that message at every opportunity.

Most notable to me is Clark's bitingly honest portrayal of the supposed allies within the colonial power. It is clear that Luca is attempting to take the most ethical actions as she defines them, but it's remarkable how those efforts inevitably imply that Touraine should help Luca now in exchange for Luca's tenuous and less-defined possible future aid. This is not even a lie; it may be an accurate summary of Balladairan politics. And yet, somehow what Balladaire needs always matters more than the needs of their abused colony.

Underscoring this, Clark introduces another faction in the form of a populist movement against the Balladairan monarchy. The details of that setup in another fantasy novel would make them allies of the Qazāl. Here, as is so often the case in real life, a substantial portion of the populists are even more xenophobic and racist than the nobility. There are no easy alliances.

The trump card that Qazāl holds is magic. They have it, and (for reasons explored in The Unbroken) Balladaire needs it, although that is a position held by Luca's faction and not by her uncle. But even Luca wants to reduce that magic to a manageable technology, like any other element of the Balladairan state. She wants to understand it, harness it, and bring it under local control. Touraine, trained by Balladaire and facing Balladairan political problems, has the same tendency. The magic, at least in this book, refuses — not in the flashy, rebellious way that it would in most fantasy, but in a frustrating and incomprehensible lack of predictable or convenient rules. I think this will feel like a plot device to some readers, and that is to some extent true, but I think I see glimmers of Clark setting up a conflict of world views that will play out in the third book.

I think some people are going to bounce off this book. It's frustrating, enraging, at times melodramatic, and does not offer the cathartic payoff typically offered in fantasy novels of this type. Usually these are things I would be complaining about as well. And yet, I found it satisfyingly challenging, engrossing, and memorable. I spent a lot of the book yelling "just kill him already" at the characters, but I think one of Clark's points is that overcoming colonial relationships requires a lot more than just killing one evil man. The characters profoundly fail to execute some clever and victorious strategy. Instead, as in the first book, they muddle through, making the best choice that they can see in each moment, making lots of mistakes, and paying heavy prices. It's realistic in a way that has nothing to do with blood or violence or grittiness. (Although I did appreciate having the thin thread of Pruett's story and its highly satisfying conclusion.)

This is also a slow-burn romance, and there too I think opinions will differ. Touraine and Luca keep circling back to the same arguments and the same frustrations, and there were times that this felt repetitive. It also adds a lot of personal drama to the politics in a way that occasionally made me dubious. But here too, I think Clark is partly using the romance to illustrate the deeper political points.

Luca is often insufferable, cruel and ambitious in ways she doesn't realize, and only vaguely able to understand the Qazāl perspective; in short, she's the pragmatic centrist reformer. I am dubious that her ethics would lead her to anything other than endless compromise without Touraine to push her. To Luca's credit, she also realizes that and wants to be a better person, but struggles to have the courage to act on it. Touraine both does and does not want to manipulate her; she wants Luca's help (and more), but it's not clear Luca will give it under acceptable terms, or even understand how much she's demanding. It's that foundational conflict that turns the romance into a slow burn by pushing them apart. Apparently I have more patience for this type of on-again, off-again relationship than one based on artificial miscommunication.

The more I noticed the political subtext, the more engaging I found the romance on the surface.

I picked this up because I'd read several books about black characters written by white authors, and while there was nothing that wrong with those books, the politics felt a little too reductionist and simplified. I wanted a book that was going to force me out of comfortable political assumptions. The Faithless did exactly what I was looking for, and I am definitely here for the rest of the series. In that sense, recommended, although do not go into this book hoping for adroit court maneuvering and competence porn.

Followed by The Sovereign, which does not yet have a release date.

Content warnings: Child death, attempted cultural genocide.

Rating: 7 out of 10

Categories: FLOSS Project Planets

Jonathan McDowell: Free Software Activities for 2023

Sun, 2024-01-07 13:34

This year was hard from a personal and work point of view, which impacted the amount of Free Software bits I ended up doing - even when I had the time, I often wasn’t in the right head space to make progress on things. However, writing this annual recap has been a useful exercise, as I achieved more than I realised. For previous years see 2019, 2020, 2021 + 2022.

Conferences

The only Free Software related conference I made it to this year was DebConf23 in Kochi, India. Changes with projects at work meant I couldn’t justify anything work related. This year I’m planning to make it to FOSDEM, and haven’t made a decision on DebConf24 yet.

Debian

Most of my contributions to Free software continue to happen within Debian.

I started the year working on retrogaming with Kodi on Debian. I got this to a much better state for bookworm, with it being possible to run the bsnes-mercury emulator under Kodi using RetroArch. There are a few other libretro backends available for RetroArch, but Kodi needs some extra controller mappings packaged up first.

Plenty of uploads were involved, though some of this was aligning all the dependencies and generally cleaning things up in iterations.

I continued to work on a few packages within the Debian Electronics Packaging Team. OpenOCD produced a new release in time for the bookworm release, so I uploaded 0.12.0-1. There were a few minor sigrok cleanups - sigrok 0.3, libsigrokdecode 0.5.3-4 + libsigrok 0.5.2-4 / 0.5.2-5.

While I didn’t manage to get the work completed I did some renaming of the ESP8266 related packages - gcc-xtensa-lx106 (which saw a 13 upload pre-bookworm) has become gcc-xtensa (with 14) and binutils-xtensa-lx106 has become binutils-xtensa (with 6). Binary packages remain the same, but this is intended to allow for the generation of ESP32 compiler toolchains from the same source.

onak saw 0.6.3-1 uploaded to match the upstream release. I also uploaded libgpg-error 1.47-1 (though I can claim no credit for any of the work in preparing the package) to help move things forward on updating gnupg2 in Debian.

I NMUed tpm2-pkcs11 1.9.0-0.1 to fix some minor issues pre-bookworm release; I use this package myself to store my SSH key within my laptop TPM, so I care about it being in a decent state.

sg3-utils also saw a bit of love with 1.46-2 + 1.46-3 - I don’t work in the storage space these days, but I’m still listed as an uploader and there was an RC bug around the library package naming that I was qualified to fix and test pre-bookworm.

Related to my retroarch work I sponsored uploads of mgba for Ryan Tandy: 0.10.0+dfsg-1, 0.10.0+dfsg-2, 0.10.1+dfsg-1, 0.10.2+dfsg-1, mgba 0.10.1+dfsg-1+deb12u1.

As part of the Data Protection Team I responded to various inbound queries to that team, both from project members and those external to the project.

I continue to keep an eye on Debian New Members, even though I’m mostly inactive as an application manager - we generally seem to have enough available recently. Mostly my involvement is via Front Desk activities, helping out with queries to the team alias, and contributing to internal discussions as well as our panel at DebConf23.

Finally the 3 month rotation for Debian Keyring continues to operate smoothly. I dealt with 2023.03.24, 2023.06.26, 2023.06.29, 2023.09.10, 2023.09.24 + 2023.12.24.

Linux

I had a few minor patches accepted to the kernel this year. A pair of safexcel cleanups (improved error logging for firmware load fail and cleanup on load failure) came out of upgrading the kernel running on my RB5009.

The rest were related to my work on repurposing my C.H.I.P.. The AXP209 driver needed to be extended to support GPIO3 (with associated DT schema update). That allowed Bluetooth to be enabled. Adding the AXP209 internal temperature ADC as an iio-hwmon node means it can be tracked using the normal sensor monitoring framework. And finally I added the pinmux settings for mmc2, which I use to support an external microSD slot on my C.H.I.P.

Personal projects

2023 saw another minor release of onak, 0.6.3, which resulted in a corresponding Debian upload (0.6.3-1). It has a couple of bug fixes (including a particularly annoying, if minor, one around systemd socket activation that felt very satisfying to get to the bottom of), but I still lack the time to do any of the major changes I would like to.

I wrote listadmin3 to allow easy manipulation of moderation queues for Mailman3. It’s basic, but it’s drastically improved my timeliness on dealing with held messages.

Work related

This year only involved a single upstream related submission; a fix for tpm_tis interrupts with the Lenovo P620 that then got dropped when the change that caused the issue was reverted.

That wraps up 2023. I’ve got no particular goals for this year; looking around my desk I’ve a few ARM based devices I’d like to get running a mainline kernel. I need to play about a bit more with the retroarch bits (if I really had time I’d do the migration for Kodi to PCRE2, as that’s currently causing testing migration issues), perhaps getting some more controller mappings packaged. But no promises.

Categories: FLOSS Project Planets

Valhalla's Things: A Corset or Two

Sat, 2024-01-06 19:00
Posted on January 7, 2024
Tags: madeof:atoms, craft:sewing, period:victorian, FreeSoftWear

CW for body size change mentions

I needed a corset, badly.

Years ago I had a chance to have my measurements taken by a former professional corset maker and then a lesson in how to draft an underbust corset, and that led to me learning how nice wearing a well-fitted corset feels.

Later I tried to extend that pattern up for a midbust corset, with success.

And then my body changed suddenly, and I was no longer able to wear either of those, and after a while I started missing them.

Since my body was still changing (if no longer drastically so), and I didn’t want to use expensive materials for something that risked not fitting after too little time, I decided to start by making myself a lightweight summer corset in aida cloth and plastic boning (for which I had already bought materials). It fitted, but not as well as the first two, and I’ve worn it quite a bit.

I still wanted back the feeling of wearing a comfy, heavy contraption of coutil and steel, however.

After a lot of procrastination I redrafted a new pattern, scrapped everything, tried again, had my measurements taken by a dressmaker, put them in the draft, cut a first mock-up in cheap cotton, fixed the position of a seam, did a second mock-up in denim from an old pair of jeans, and then cut into the cheap herringbone coutil I was planning to use.

And that’s when I went to see which one of the busks in my stash would work, and realized that I had used a wrong vertical measurement and the front of the corset was way too long for a midbust corset.

Luckily I also had a few longer busks, I basted one to the denim mock up and tried to wear it for a few hours, to see if it was too long to be comfortable. It was just a bit, on the bottom, which could be easily fixed with the Power Tools¹.

Except, the more I looked at it the more doing this felt wrong: what I needed most was a midbust corset, not an overbust one, which is what this was starting to be.

I could have trimmed it down, but I knew that I also wanted this corset to be a wearable mockup for the pattern, to refine it and have it available for more corsets. And I still had more than half of the cheap coutil I was using, so I decided to redo the pattern and cut new panels.

And this is where the “or two” comes in: I’m not going to waste the overbust panels: I had been wanting to learn some techniques to make corsets with a fashion fabric layer, rather than just a single layer of coutil, and this looks like an excellent opportunity for that, together with a piece of purple silk that I know I have in the stash. This will happen later, however, first I’m giving priority to the underbust.

Anyway, a second set of panels was cut, all the seam lines marked with tailor tacks, and I started sewing by inserting the busk.

And then realized that the pre-made boning channel tape I had was too narrow for the 10 mm spiral steel I had plenty of. And that the 25 mm twill tape was also too narrow for a double boning channel. On the other hand, the 18 mm twill tape I had used for the waist tape was good for a single channel, so I decided to put a single bone on each seam, and then add another piece of boning in the middle of each panel.

Since I’m making external channels, making them in self fabric would have probably looked better, but I no longer had enough fabric, because of the cutting mishap, and anyway this is going to be a strictly underwear only corset, so it’s not a big deal.

Once the boning channel situation was taken care of, everything else proceeded quite smoothly and I was able to finish the corset during the Christmas break, enlisting again my SO to take care of the flat steel boning while I cut the spiral steels myself with wire cutters.

I could have been a bit more precise with the binding, as it doesn’t align precisely at the front edge, but then again, it’s underwear, nobody other than me and everybody who reads this post is going to see it and I was in a hurry to see it finished. I will be more careful with the next one.

I also think that I haven’t been careful enough when pressing the seams and applying the tape, and I’ve lost about a cm of width per part, so I’m using a lacing gap that is a bit wider than I planned for, but that may change as the corset gets worn, and is still within tolerance.

Also, on the morning after I had finished the corset I woke up and realized that I had forgotten to add garter tabs at the bottom edge. I don’t know whether I will ever use them, but I wanted the option, so maybe I’ll try to add them later on, especially if I can do it without undoing the binding.

The next step would have been flossing, which I proceeded to postpone until I’ve worn the corset for a while: not because there is any reason for it, but because I still don’t know how I want to do it :)

What was left was finishing and uploading the pattern and instructions, that are now on my sewing pattern website as #FreeSoftWear, and finally I could post this on the blog.

  1. i.e. by asking my SO to cut and sand it, because I’m lazy and I hate doing that part :D

Categories: FLOSS Project Planets

Valhalla's Things: Blog updates

Fri, 2024-01-05 19:00
Posted on January 6, 2024
Tags: madeof:bits, meta

After just a tiny¹ delay I’ve finally added support for tags to this blog.

In the next few days I may go back and add / change tags to the older posts, or I may not, I’ll decide.

Also, I still need to render a tag cloud somewhere; maybe it will happen soon, maybe it will take another year. :D

I hope I’ve also successfully worked around the bug in Friendica where the src for images got deleted instead of properly converted.

  1. less than one year!

Categories: FLOSS Project Planets

Valhalla's Things: Random Sashiko + Crazy Quilt Pocket

Thu, 2024-01-04 19:00
Posted on January 5, 2024

Lately I’ve seen people on the internet talking about victorian crazy quilting. Years ago I had watched a Numberphile video about Hitomezashi Stitch Patterns based on numbers, words or randomness. A few weeks ago I cut some fabric pieces out of an old pair of jeans and was left with a lot of scraps that were too small to do anything useful on their own. It’s easy to see where this is going, right?

I cut a pocket shape out of old garment mockups (this required some piecing), drew a square grid, arranged scraps of jeans to cover the other side, kept everything together with a lot of pins, carefully avoided basting anything, and started covering everything in sashiko / hitomezashi stitches, starting each line with a stitch on the front or the back of the work based on the result of:

import random
random.choice(["front", "back"])

For the second piece I tried to use a piece of paper with the square grid instead of drawing it on the fabric: it worked, mostly, but I would not do it again, as removing the paper was more of a hassle than drawing the lines in the first place. I suspected as much, but had to try it anyway.

Then I added a lining from some plain black cotton from the stash; for the slit I put the lining on the front right sides together, sewn at 2 mm from the marked slit, cut it, turned the lining to the back side, pressed and then topstitched as close as possible to the slit from the front.

I bound everything with bias tape, adding herringbone tape loops at the top to hang it from a belt (such as one made from the waistband of one of the donor pair of jeans) and that was it.

I like the way the result feels; maybe it’s a bit too stiff for a pocket, but I can see it work very well for a bigger bag, and maybe even a jacket or some other outer garment.

Categories: FLOSS Project Planets

Aigars Mahinovs: Figuring out finances part 5

Thu, 2024-01-04 15:46

At the end of the last part of this, we had a Home Assistant OS installation that contains a Firefly III instance holding all the current financial information. They are communicating and calculating predictions for me.

The only part that I was not 100% happy with was the accounting of cash transactions. You see, payments in cash are mostly made away from the computer, sometimes even in areas without a mobile Internet connection. And all Firefly III apps that I tried failed at the task of creating a new transaction when offline. Even the recommended Telegram bot from the Firefly III page used a dialog-based approach for creating even the simplest transaction. The issue asking for a one-shot transaction creation option remains unresolved.

Theoretically it would have been best if I could simply contribute that feature to that particular Telegram bot ... but it's written in JavaScript, by mapping the API onto tasks somehow. After about 4 hours I still could not figure out where in the code anything actually happens. It all looked like just sugar or spaghetti. Connectors on connectors on mappers.

So I did the real open-source thing and just wrote my own tool. firefly3_telegram_oneshot is a maximally simple Telegram bot based on the python-telegram-bot library.

So what does it do? The primary usage for me is to simply send a message to the bot at any time with text like "23.2 coffee and cake"; when the message eventually reaches the bot, it should create a new transaction from my "cash" account to the "Unknown" account in the amount of 23.20€ with the description "coffee and cake".

That is the key. Everything else in the bot is comfort.
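
Under the hood, that boils down to a single call against Firefly III's transaction endpoint. A minimal sketch with curl (the instance URL and FIREFLY_TOKEN are placeholders; the account names match the ones above):

curl -s -X POST "https://firefly.example.org/api/v1/transactions" \
    -H "Authorization: Bearer $FIREFLY_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"transactions": [{
        "type": "withdrawal",
        "date": "2024-01-04",
        "amount": "23.20",
        "description": "coffee and cake",
        "source_name": "cash",
        "destination_name": "Unknown"
    }]}'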

For example, the "/undo" command deletes the last transaction in the cash account (presumably added by error) and "/last" shows which transaction "/undo" would delete.

And to help with expense categorisation one can also send a message like "6.1 beer, dest=Edeka, cat=alcohol" that would search for a destination account fuzzy-matching "Edeka" (a supermarket in Germany) and add the transaction to the category fuzzy-matching "alcohol", like "Shopping - alcoholic drinks".

And to make that fuzzy matching more reliable I also added "/cat something" and "/dest something" commands that would show which category or destination account would be matched with a given string.

All of that in around 250 lines of Python code, executed via a 17-line Dockerfile (through Portainer on my Home Assistant OS). One remaining function that could be nice is creating a category or destination account on request (for example when the first character of the supplied string is "+").

I am really pleasantly surprised by how much can be done with so little code using the above Python library. And you never need to have any open incoming ports anywhere to run such bots, so the attack surface of such a bot-based service is much smaller.

All in all the system works and works well. The only exception is that for my particular bank there is still no automatic way of extracting data about credit card transactions. For those I still have to manually log into the Internet bank, export a CSV file and then feed that into the Firefly III importer. Annoying. And I am not really motivated to try to hack my bank :D

Has this been useful to any of you? Any ideas to expand or improve what I have? Just find me as "aigarius" on any social media and speak up :)

Categories: FLOSS Project Planets

Michael Ablassmeier: Migrating a system to Hetzner cloud using REAR and kexec

Wed, 2024-01-03 19:00

I needed to migrate an existing system to a Hetzner cloud VPS. While it is possible to attach KVM consoles and custom ISO images to dedicated servers, I didn’t find any way to do so with regular cloud instances.

For system migrations I usually use REAR, which has never failed me (and has also saved my ass during recovery multiple times). It’s an awesome utility!

It’s possible to do this using the Hetzner recovery console too, but using REAR is very convenient here, because it handles things like re-creating the partition layout and network settings automatically!

The steps are:

  • Create bootable REAR rescue image on the source system.
  • Register a target system in Hetzner Cloud with at least the same disk size as the source system.
  • Boot the REAR image’s initrd and kernel from the running VPS system using kexec
  • Make the REAR recovery console accessible via SSH (or use Hetzner’s console).
  • Let REAR do its magic and re-partition the system.
  • Restore the system data to the freshly partitioned disks
  • Let REAR handle the bootloader and network re-configuration.
Example

To create a rescue image on the source system:

apt install rear
echo OUTPUT=ISO > /etc/rear/local.conf
rear mkrescue -v
[..]
Wrote ISO image: /var/lib/rear/output/rear-debian12.iso (185M)

My source system had a 128 GB disk, so I registered an instance on Hetzner cloud with a greater disk size to make things easier.

Now copy the ISO image to the newly created instance and extract its data:

scp rear-debian12.iso root@49.13.193.226:/tmp/
modprobe loop
mount -o loop rear-debian12.iso /mnt/
cp /mnt/isolinux/kernel /tmp/
cp /mnt/isolinux/initrd.cgz /tmp/

Install kexec if not installed already:

apt install kexec-tools

Note down the current gateway configuration, this is required later on to make the REAR recovery console reachable via SSH:

root@testme:~# ip route
default via 172.31.1.1 dev eth0
172.31.1.1 dev eth0 scope link

Reboot the running VPS instance into the REAR recovery image, using roughly the same kernel cmdline:

root@testme:~# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.1.0-13-amd64 root=UUID=5174a81e-5897-47ca-8fe4-9cd19dc678c4 ro consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0

kexec --initrd /tmp/initrd.cgz --command-line="consoleblank=0 systemd.show_status=true console=tty1 console=ttyS0" /tmp/kernel
Connection to 49.13.193.226 closed by remote host.
Connection to 49.13.193.226 closed

Now watch the system on the console as it boots into the REAR system.

Log into the recovery console (root, no password) and fix its default route to make it reachable:

ip addr
[..]
2: enp1s0 ..
$ ip route add 172.31.1.1 dev enp1s0
$ ip route add default via 172.31.1.1
ping 49.13.193.226
64 bytes from 49.13.193.226: icmp_seq=83 ttl=52 time=27.7 ms

The network configuration might differ, the source system in this example used DHCP, as the target does. If REAR detects changed static network configuration it guides you through the setup pretty nicely.

Log in via SSH (REAR will store your SSH public keys in the image) and start the recovery process, following the steps as suggested by REAR:

ssh -l root 49.13.193.226
Welcome to Relax-and-Recover. Run "rear recover" to restore your system !
RESCUE debian12:~ # rear recover
Relax-and-Recover 2.7 / Git
Running rear recover (PID 673 date 2024-01-04 19:20:22)
Using log file: /var/log/rear/rear-debian12.log
Running workflow recover within the ReaR rescue/recovery system
Will do driver migration (recreating initramfs/initrd)
Comparing disks
Device vda does not exist (manual configuration needed)
Switching to manual disk layout configuration (GiB sizes rounded down to integer)
/dev/vda had size 137438953472 (128 GiB) but it does no longer exist
/dev/sda was not used on the original system and has now 163842097152 (152 GiB)
Original disk /dev/vda does not exist (with same size) in the target system
Using /dev/sda (the only available of the disks) for recreating /dev/vda
Current disk mapping table (source => target):
  /dev/vda => /dev/sda
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
[..]
User confirmed recreated disk layout
[..]

This step re-creates your original disk layout and mounts it to /mnt/local/ (this example uses a pretty lame layout, but usually REAR will handle things like lvm/btrfs just nicely):

mount
/dev/sda3 on /mnt/local type ext4 (rw,relatime,errors=remount-ro)
/dev/sda1 on /mnt/local/boot type ext4 (rw,relatime)

Now clone your source systems data to /mnt/local/ with whatever utility you like to use and exit the recovery step. After confirming everything went well, REAR will setup the bootloader (and all other config details like fstab entries and adjusted network configuration) for you as required:

rear> exit
Did you restore the backup to /mnt/local ? Are you ready to continue recovery ?
yes
User confirmed restored files
Updated initramfs with new drivers for this system.
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader...
Determining where to install GRUB2 (no GRUB2_INSTALL_DEVICES specified)
Found possible boot disk /dev/sda - installing GRUB2 there
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 7103) and its descendant processes ...
Running exit tasks

Now reboot the recovery console and watch it boot into your target system’s configuration.

Being able to use this procedure for complete disaster recovery within Hetzner cloud VPS (using off-site backups) gives me a better feeling, too.

Categories: FLOSS Project Planets

John Goerzen: Live Migrating from Raspberry Pi OS bullseye to Debian bookworm

Wed, 2024-01-03 18:33

I’ve been getting annoyed with Raspberry Pi OS (Raspbian) for years now. It’s a fork of Debian, but manages to omit some of the most useful things. So I’ve decided to migrate all of my Pis to run pure Debian. These are my reasons:

  1. Raspberry Pi OS has, for years now, specified that there is no upgrade path. That is, to get to a newer major release, it’s a reinstall. While I have sometimes worked around this, for a device that is frequently installed in hard-to-reach locations, this is even more important than usual. It’s common for me to upgrade machines for a decade or more across Debian releases and there’s no reason that it should be so much more difficult with Raspbian.
  2. As I noted in Consider Security First, the security situation for Raspberry Pi OS isn’t as good as it is with Debian.
  3. Raspbian lags behind Debian – often by 6 months or more for major releases, and days or weeks for bug fixes and security patches.
  4. Raspbian has no direct backports support, though Raspberry Pi 3 and above can use Debian’s backports (per my instructions as Installing Debian Backports on Raspberry Pi)
  5. Raspbian uses a custom kernel without initramfs support

It turns out it is actually possible to do an in-place migration from Raspberry Pi OS bullseye to Debian bookworm. Here I will describe how. Even if you don’t have a Raspberry Pi, this might still be instructive on how Raspbian and Debian packages work.

WARNINGS

Before continuing, back up your system. This process isn’t for the neophyte and it is entirely possible to mess up your boot device to the point that you have to do a fresh install to get your Pi to boot. This isn’t a supported process at all.

Architecture Confusion

Debian has three ARM-based architectures:

  • armel, for the lowest-end 32-bit ARM devices without hardware floating point support
  • armhf, for the higher-end 32-bit ARM devices with hardware float (hence “hf”)
  • arm64, for 64-bit ARM devices (which all have hardware float)

Although the Raspberry Pi 0 and 1 do support hardware float, they lack support for other CPU features that Debian’s armhf architecture assumes. Therefore, the Raspberry Pi 0 and 1 could only run Debian’s armel architecture.

Raspberry Pi 3 and above are capable of running 64-bit, and can run both armhf and arm64.

Prior to the release of the Raspberry Pi 5 / Raspbian bookworm, Raspbian only shipped the armhf architecture. Well, it was an architecture they called armhf, but it was different from Debian’s armhf in that everything was recompiled to work with the more limited set of features on the earlier Raspberry Pi boards. It was really somewhere between Debian’s armel and armhf archs. You could run Debian armel on those, but it would run more slowly, due to doing floating point calculations without hardware support. Debian’s raspi FAQ goes into this a bit.

What I am going to describe here is going from Raspbian armhf to Debian armhf with a 64-bit kernel. Therefore, it will only work with Raspberry Pi 3 and above. It may theoretically be possible to take a Raspberry Pi 2 to Debian armhf with a 32-bit kernel, but I haven’t tried this and it may be more difficult. I have seen conflicting information on whether armhf really works on a Pi 2. (If you do try it on a Pi 2, ignore everything about arm64 and 64-bit kernels below, and just go with the linux-image-armmp-lpae kernel per the ARMMP page)

There is another wrinkle: Debian doesn’t support running 32-bit ARM kernels on 64-bit ARM CPUs, though it does support running a 32-bit userland on them. So we will wind up with a system with kernel packages from arm64 and everything else from armhf. This is a perfectly valid configuration as the arm64 – like x86_64 – is multiarch (that is, the CPU can natively execute both the 32-bit and 64-bit instructions).

(It is theoretically possible to crossgrade a system from 32-bit to 64-bit userland, but that felt like a rather heavy lift for dubious benefit on a Pi; nevertheless, if you want to make this process even more complicated, refer to the CrossGrading page.)

Prerequisites and Limitations

In addition to the need for a Raspberry Pi 3 or above in order for this to work, there are a few other things to mention.

If you are using the GPIO features of the Pi, I don’t know if those work with Debian.

I think Raspberry Pi OS modified the desktop environment more than other components. All of my Pis are headless, so I don’t know if this process will work if you use a desktop environment.

I am assuming you are booting from a MicroSD card as is typical in the Raspberry Pi world. The Pi’s firmware looks for a FAT partition (MBR type 0x0c) and looks within it for boot information. Depending on how long ago you first installed an OS on your Pi, your /boot may be too small for Debian. Use df -h /boot to see how big it is. I recommend 200MB at minimum. If your /boot is smaller than that, stop now (or use some other system to shrink your root filesystem and rearrange your partitions; I’ve done this, but it’s outside the scope of this article.)

You need to have stable power. Once you begin this process, your pi will mostly be left in a non-bootable state until you finish. (You… did make a backup, right?)

Basic idea

The basic idea here is that since bookworm has almost entirely newer packages than bullseye, we can “just” switch over to it and let the Debian packages replace the Raspbian ones as they are upgraded. Well, it’s not quite that easy, but that’s the main idea.

Preparation

First, make a backup. Even an image of your MicroSD card might be nice. OK, I think I’ve said that enough now.

It would be a good idea to have a HDMI cable (with the appropriate size of connector for your particular Pi board) and a HDMI display handy so you can troubleshoot any bootup issues with a console.

Preparation: access

The Raspberry Pi OS by default sets up a user named pi that can use sudo to gain root without a password. I think this is an insecure practice, but assuming you haven’t changed it, you will need to ensure it still works once you move to Debian. Raspberry Pi OS had a patch in their sudo package to enable it, and that will be removed when Debian’s sudo package is installed. So, put this in /etc/sudoers.d/010_picompat:

pi ALL=(ALL) NOPASSWD: ALL

Also, there may be no password set for the root account. It would be a good idea to set one; it makes it easier to log in at the console. Use the passwd command as root to do so.

Preparation: bluetooth

Debian doesn’t correctly identify the Bluetooth hardware address. You can save it off to a file by running hcitool dev > /root/bluetooth-from-raspbian.txt. I don’t use Bluetooth, but this should let you develop a script to bring it up properly.

Preparation: Debian archive keyring

You will next need to install Debian’s archive keyring so that apt can authenticate packages from Debian. Go to the bookworm download page for debian-archive-keyring and copy the URL for one of the files, then download it on the pi. For instance:

wget http://http.us.debian.org/debian/pool/main/d/debian-archive-keyring/debian-archive-keyring_2023.3+deb12u1_all.deb

Use sha256sum to verify the checksum of the downloaded file, comparing it to the package page on the Debian site.
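
For instance:

sha256sum debian-archive-keyring_2023.3+deb12u1_all.deb

The printed hash should match the SHA256 checksum listed for that file on the Debian package page.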

Now, you’ll install it with:

dpkg -i debian-archive-keyring_2023.3+deb12u1_all.deb

Package first steps

From here on, we are making modifications to the system that can leave it in a non-bootable state.

Examine /etc/apt/sources.list and all the files in /etc/apt/sources.list.d. Most likely you will want to delete or comment out all lines in all files there. Replace them with something like:

deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
deb https://deb.debian.org/debian bookworm-backports main non-free-firmware contrib non-free

(you might leave off contrib and non-free depending on your needs)

Now, we’re going to tell it that we’ll support arm64 packages:

dpkg --add-architecture arm64

And finally, download the bookworm package lists:

apt-get update

If there are any errors from that command, fix them and don’t proceed until you have a clean run of apt-get update.

Moving /boot to /boot/firmware

The boot FAT partition I mentioned above is mounted at /boot by Raspberry Pi OS, but Debian’s scripts assume it will be at /boot/firmware. We need to fix this. First:

umount /boot
mkdir /boot/firmware

Now, edit fstab and change the reference to /boot to be to /boot/firmware. Now:

mount -v /boot/firmware
cd /boot/firmware
mv -vi * ..

This mounts the filesystem at the new location, and moves all its contents back to where apt believes it should be. Debian’s packages will populate /boot/firmware later.

Installing the first packages

Now we start by installing the first of the needed packages. Eventually we will wind up with roughly the same set Debian uses.

apt-get install linux-image-arm64
apt-get install firmware-brcm80211=20230210-5
apt-get install raspi-firmware

If you get errors relating to firmware-brcm80211 from any commands, run that install firmware-brcm80211 command and then proceed. There are a few packages that Raspbian marked as newer than the version in bookworm (whether or not they really are), and that’s one of them.

Configuring the bootloader

We need to configure a few things in /etc/default/raspi-firmware before proceeding. Edit that file.

First, uncomment (or add) a line like this:

KERNEL_ARCH="arm64"

Next, in /boot/cmdline.txt you can find your old Raspbian boot command line. It will say something like:

root=PARTUUID=...

Save off the bit starting with PARTUUID. Back in /etc/default/raspi-firmware, set a line like this:

ROOTPART=PARTUUID=abcdef00

(substituting your real value for abcdef00).

This is necessary because the microSD card device name often changes from /dev/mmcblk0 to /dev/mmcblk1 when switching to Debian’s kernel. raspi-firmware will encode the current device name in /boot/firmware/cmdline.txt by default, which will be wrong once you boot into Debian’s kernel. The PARTUUID approach lets it work regardless of the device name.
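
If you want to double-check that value, blkid can print the PARTUUID of a partition directly (the device name below is an example; use your actual root partition):

blkid -s PARTUUID -o value /dev/mmcblk0p2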

Purging the Raspbian kernel

Run:

dpkg --purge raspberrypi-kernel

Upgrading the system

At this point, we are going to run the procedure beginning at section 4.4.3 of the Debian release notes. Generally, you will do:

apt-get -u upgrade
apt full-upgrade

Fix any errors at each step before proceeding to the next. Now, to remove some cruft, run:

apt-get --purge autoremove

Inspect the list to make sure nothing important is going to be removed.

Removing Raspbian cruft

You can list some of the cruft with:

apt list '~o'

And remove it with:

apt purge '~o'

I also don’t run Bluetooth, and it seemed to sometimes hang on boot because I didn’t bother to fix it, so I did:

apt-get --purge remove bluez

Installing some packages

This makes sure some basic Debian infrastructure is available:

apt-get install wpasupplicant parted dosfstools wireless-tools iw alsa-tools
apt-get --purge autoremove

Installing firmware

Now run:

apt-get install firmware-linux

Resolving firmware package version issues

If it gives an error about the installed version of a package, you may need to force it to the bookworm version. For me, this often happened with firmware-atheros, firmware-libertas, and firmware-realtek.

Here’s how to resolve it, with firmware-realtek as an example:

  1. Go to https://packages.debian.org/PACKAGENAME – for instance, https://packages.debian.org/firmware-realtek. Note the version number in bookworm – in this case, 20230210-5.

  2. Now, you will force the installation of that package at that version:

    apt-get install firmware-realtek=20230210-5
  3. Repeat with every conflicting package until done.

  4. Rerun apt-get install firmware-linux and make sure it runs cleanly.

Also, in the end you should be able to:

apt-get install firmware-atheros firmware-libertas firmware-realtek firmware-linux

Dealing with other Raspbian packages

The Debian release notes discuss removing non-Debian packages. There will still be a few of those. Run:

apt list '?narrow(?installed, ?not(?origin(Debian)))'

Deal with them; mostly you will need to force the installation of a bookworm version using the procedure in the section Resolving firmware package version issues above (even if it’s not for a firmware package). For non-firmware packages, you might possibly want to add --mark-auto to your apt-get install command line to allow the package to be autoremoved later if the things depending on it go away.

If you aren’t going to use Bluetooth, I recommend apt-get --purge remove bluez as well. Sometimes it can hang at boot if you don’t fix it up as described above.

Set up networking

We’ll be switching to the Debian method of networking, so we’ll create some files in /etc/network/interfaces.d. First, eth0 should look like this:

allow-hotplug eth0
iface eth0 inet dhcp
iface eth0 inet6 auto

And wlan0 should look like this:

allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

Raspbian is inconsistent about using eth0/wlan0 or renamed interfaces. Run ifconfig or ip addr. If you see a long-named interface such as enx<something> or wlp<something>, copy the eth0 file to one named after the enx interface (or the wlan0 file to one named after the wlp interface), and edit the internal references to eth0/wlan0 in the new file to use the long interface name, as sketched below.
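
For example, with a hypothetical interface named enxb827eb123456:

cd /etc/network/interfaces.d
cp eth0 enxb827eb123456
sed -i 's/eth0/enxb827eb123456/' enxb827eb123456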

If using wifi, verify that your SSIDs and passwords are in /etc/wpa_supplicant/wpa_supplicant.conf. It should have lines like:

network={
    ssid="NetworkName"
    psk="passwordHere"
}

(This is where Raspberry Pi OS put them).

Deal with DHCP

Raspberry Pi OS used dhcpcd, whereas bookworm normally uses isc-dhcp-client. Verify the system is in the correct state:

apt-get install isc-dhcp-client
apt-get --purge remove dhcpcd dhcpcd-base dhcpcd5 dhcpcd-dbus

Set up LEDs

To set up the LEDs to trigger on MicroSD activity as they did with Raspbian, follow the Debian instructions. Run apt-get install sysfsutils. Then put this in a file at /etc/sysfs.d/local-raspi-leds.conf:

class/leds/ACT/brightness = 1
class/leds/ACT/trigger = mmc1

Prepare for boot

To make sure all the /boot/firmware files are updated, run update-initramfs -u. Verify that root in /boot/firmware/cmdline.txt references the PARTUUID as appropriate. Verify that /boot/firmware/config.txt contains the lines arm_64bit=1 and upstream_kernel=1. If not, go back to the section on modifying /etc/default/raspi-firmware and fix it up.
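A quick sanity pass over those checks before rebooting (the grep patterns are just one way to eyeball the settings):

update-initramfs -u
grep root= /boot/firmware/cmdline.txt
grep -E 'arm_64bit|upstream_kernel' /boot/firmware/config.txt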

The moment arrives

Cross your fingers and try rebooting into your Debian system:

reboot

For some reason, I found that the first boot into Debian seems to hang for 30-60 seconds during bootstrap. I’m not sure why; don’t panic if that happens. It may be necessary to power cycle the Pi for this boot.

Troubleshooting

If things don’t work out, hook up the Pi to an HDMI display and see what’s up. If I had anticipated a particular problem, I would have documented it here (a lot of the things documented here are there because I ran into them!), so I can’t give specific advice other than to watch the boot messages on the console. If you don’t even get kernel messages, there is some problem with your partition table or your /boot/firmware FAT partition. Otherwise, you’ve at least got the kernel going and can troubleshoot as usual from there.

Categories: FLOSS Project Planets

John Goerzen: Consider Security First

Tue, 2024-01-02 19:38

I write this in the context of my decision to ditch Raspberry Pi OS and move everything I possibly can, including my Raspberry Pi devices, to Debian. I will write about that later.

But for now, I wanted to comment on something I think is often overlooked and misunderstood by people considering distributions or operating systems: the huge importance of getting security updates in an automated and easy way.

Background

Let’s assume that these statements are true, which I think are well-supported by available evidence:

  1. Every computer system (OS plus applications) that can do useful modern work has security vulnerabilities, some of which are unknown at any given point in time;

  2. During the lifetime of that computer system, some of these vulnerabilities will be discovered. For a (hopefully large) subset of those vulnerabilities, timely patches will become available.

Now then, it follows that applying those timely patches is a critical part of having a system that is as secure as possible. Of course, you have to do other things as well – good passwords, secure practices, etc – but, fundamentally, if your system lacks patches for known vulnerabilities, you’ve already lost at the security ballgame.

How to stay patched

There is something of a continuum of how you might patch your system. It runs roughly like this, from best to worst:

  1. All components are kept up-to-date automatically, with no intervention from the user/operator

  2. The operator is automatically alerted to necessary patches, and they can be easily installed with minimal intervention

  3. The operator is automatically alerted to necessary patches, but they require significant effort to apply

  4. The operator has no way to detect vulnerabilities or necessary patches

It should be obvious that the first situation is ideal. Every other situation relies on the timeliness of human action to keep up-to-date with security patches. This is a fallible situation; humans are busy, take trips, dismiss alerts, miss alerts, etc. That said, it is rare to find any system living truly all the way in that scenario, as you’ll see.

What is “your system”?

A critical point here is: what is “your system”? It includes:

  • Your kernel
  • Your base operating system
  • Your applications
  • All the libraries needed to run all of the above

Some OSs, such as Debian, make little or no distinction between the base OS and the applications. Others, such as many BSDs, have a distinction there. And in some cases, people will compile or install applications outside of any OS mechanism. (It must be stressed that by doing so, you are taking the responsibility of patching them on your own shoulders.)

How do common systems stack up?
  • Debian, with its support for unattended-upgrades, needrestart, debian-security-support, and such, is largely category 1. It can automatically apply security patches, in most cases can restart the necessary services for the patch to take effect, and will alert you when some processes or the system must be manually restarted for a patch to take effect (for instance, a kernel update). Those cases requiring manual intervention are category 2. The debian-security-support package will even warn you of gaps in the system. You can also use debsecan to scan for known vulnerabilities on a given installation. (A minimal configuration sketch follows this list.)

  • FreeBSD has no way to automatically install security patches for things in the packages collection. As with many rolling-release systems, you can’t automate the installation of these security patches with FreeBSD because it is not safe to blindly update packages. It’s not safe to blindly update packages because they may bring along more than just security patches: they may represent major upgrades that introduce incompatibilities, etc. Unlike Debian’s practice of backporting fixes and thus producing narrowly-tailored patches, forcing upgrades to newer versions precludes a “minimal intervention” install. Therefore, rolling release systems are category 3.

  • Things such as Snap, Flatpak, AppImage, Docker containers, Electron apps, and third-party binaries often contain embedded libraries and such for which you have no easy visibility into their status. For instance, if there was a bug in libpng, would you know how many of your containers had a vulnerability? These systems are category 4 – you don’t even know if you’re vulnerable. It’s for this reason that my Debian-based Docker containers apply security patches before starting processes, and also run unattended-upgrades and friends.
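To make the Debian entry above concrete: reaching category 1 is typically one package install plus a two-line apt configuration. A minimal sketch — dpkg-reconfigure -plow unattended-upgrades will generate an equivalent file for you:

apt-get install unattended-upgrades

# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";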

The pernicious library problem

As mentioned in my last category above, hidden vulnerabilities can be a big problem. I’ve been writing about this for years. Back in 2017, I wrote an article focused on Docker containers, but which applies to the other systems like Snap and so forth. I cited a study back then that “Over 80% of the :latest versions of official images contained at least one high severity vulnerability.” The situation is no better now. In December 2023, it was reported that, two years after the critical Log4Shell vulnerability, 25% of apps were still vulnerable to it. Also, only 21% of developers ever update third-party libraries after introducing them into their projects.

Clearly, you can’t rely on these images with embedded libraries to be secure. And since they are black box, they are difficult to audit.

Debian’s policy of always splitting libraries out from packages is hugely beneficial; it allows fine-grained analysis of not just vulnerabilities, but also the dependency graph. If there’s a vulnerability in libpng, you have one place to patch it and you also know exactly what components of your system use it.
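For example, to list exactly which installed packages depend on libpng (the package name libpng16-16 is the bookworm one; adjust for your release):

apt-cache --installed rdepends libpng16-16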

If you use snaps, or AppImages, you can’t know if they contain a deeply embedded vulnerability, nor could you patch it yourself if you even knew. You are at the mercy of upstream detecting and remedying the problem – a dicey situation at best.

Who makes the patches?

Fundamentally, humans produce security patches. Often, but not always, patches originate with the authors of a program and then are integrated into distribution packages. It should be noted that every security team has finite resources; there will always be some CVEs that aren’t patched in a given system for various reasons; perhaps they are not exploitable, or are too low-impact, or have better mitigations than patches.

Debian has an excellent security team; they manage the process of integrating patches into Debian, produce Debian Security Advisories, maintain the Debian Security Tracker (which maintains cross-references with the CVE database), etc.

Some distributions don’t have this infrastructure. For instance, I was unable to find this kind of tracker for Devuan or Raspberry Pi OS. In contrast, Ubuntu and Arch Linux both seem to have active security teams with trackers and advisories.

Implications for Raspberry Pi OS and others

As I mentioned above, I’m transitioning my Pi devices off Raspberry Pi OS (Raspbian). Security is one reason. Although Raspbian is a fork of Debian, and you can install packages like unattended-upgrades on it, they don’t work right because they use the Debian infrastructure, and Raspbian hasn’t modified them to use their own infrastructure. I don’t see any Raspberry Pi OS security advisories, trackers, etc. In short, they lack the infrastructure to support those Debian tools anyhow.

Not only that, but Raspbian lags behind Debian in both new releases and new security patches, sometimes by days or weeks.

A future post will include instructions for migrating Raspberry Pis to Debian.

Categories: FLOSS Project Planets

Jacob Adams: Fixing My Shell

Tue, 2024-01-02 19:00

For an embarrassingly long time, my shell has unnecessarily tried to initialize a console font in every kind of interactive terminal.

This leaves the following error message in my terminal:

Couldn't get a file descriptor referring to the console.

It even shows up twice when running tmux!

Clearly I’ve done something horrible to my configuration, and now I’ve got to clean it up.

How does Shell Initialization Work?

The precise set of files a shell reads at start-up is somewhat complex, and is defined by this excellent chart 1:

For the purposes of what I’m trying to fix, there are two paths that matter.

  • Interactive login shell startup
  • Interactive non-login shell startup

As you can see from the above, trying to distinguish these two paths in bash is an absolute mess. zsh, in contrast, is much cleaner and allows for a clear distinction between these two cases, with login shell configuration files as a superset of configuration files used by non-login shells.

How did we get here?

I keep my configuration files in a config repository.

Some time ago I got quite frustrated at this whole shell initialization thing, and just linked everything together in one profile file:

.mkshrc -> config/profile
.profile -> config/profile

This appears to have created this mess.

Move to ZSH

I’ve wanted to move to zsh for a while, and took this opportunity to do so. So my new configuration files are .zprofile and .zshrc instead of .mkshrc and .profile (though I’m going to retain those symlinks to allow my old configurations to continue working).

mksh is a nice simple shell, but using zsh here allows for more consistency between my home and $WORK environments, and will allow a lot more powerful extensions.

Updating my Prompt

ZSH prompts use a totally different configuration via variable expansion. However, it also uses the PROMPT variable, so I set that to the needed values for zsh.

There’s an excellent ZSH prompt generator at https://zsh-prompt-generator.site that I used to get these variables, though I’m sure they’re in the zsh documentation somewhere as well.

I wanted a simple prompt with user (%n), host (%m), and path (%d). I also wanted a % at the end to distinguish this from other shells.

PROMPT="%n@%m%d%% "

Fixing mksh prompts

This worked, but surprisingly mksh also looks at PROMPT, leaving my mksh prompt as the literal prompt string without expansion.

Fixing this requires setting up a proper shrc and linking it to .mkshrc and .zshrc.

I chose to move my existing aliases script to this file, as it also broke in non-login shells when moved to profile.

Within this new shrc file we can check what shell we’re running via $0:

if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ]

I chose to add plain zsh here in case I run it manually for whatever reason. I also added -zsh to support tmux as that’s what it presents as $0. This also means you’ll need to be careful to quote $0 or you’ll get fun shell errors.

There’s probably a better way to do this, but I couldn’t find something that was compatible with POSIX shell, which is what most of this has to be written in to be compatible with mksh and zsh2.

We can then setup different prompts for each:

if [ "$0" = "/bin/zsh" ] || [ "$0" = "zsh" ] || [ "$0" = "-zsh" ]
then
    PROMPT="%n@%m%d%% "
else
    # Borrowed from
    # http://www.unixmantra.com/2013/05/setting-custom-prompt-in-ksh.html
    PS1='$(id -un)@$(hostname -s)$PWD$ '
fi

Setting Console Font in a Better Way

I’ve been setting console font via setfont in my .profile for a while. I’m not sure where I picked this up, but it’s not the right way. I even tried to only run this in a console with -t but that only checks that output is a terminal, not specifically a console.

if [ -t 1 ]
then
    setfont /usr/share/consolefonts/Lat15-Terminus20x10.psf.gz
fi

This also only runs once the console is logged into, instead of initializing it on boot. The correct way to set this up, on Debian-based systems, is reconfiguring console-setup like so:

dpkg-reconfigure console-setup

From there you get a menu of encoding, character set, font, and then font size to configure for your consoles.

VIM mode

To enable VIM mode for ZSH, you simply need to set:

bindkey -v

This allows you to edit your shell commands with basic VIM keybinds.

Getting back Ctrl + Left Arrow and Ctrl + Right Arrow

Moving around one word at a time with Ctrl and the arrow keys is broken by vim mode unfortunately, so we’ll need to re-enable it:

bindkey "^[[1;5C" forward-word
bindkey "^[[1;5D" backward-word

Better History Search

But of course we’re now using zsh so we can do better than just the same configuration as we had before.

There’s an excellent substring history search plugin that we can just source without a plugin manager3

source $HOME/config/zsh-history-substring-search.zsh

# Keys are weird, should be ^[[ but it's not
bindkey '^[[A' history-substring-search-up
bindkey '^[OA' history-substring-search-up
bindkey '^[[B' history-substring-search-down
bindkey '^[OB' history-substring-search-down

For some reason my system uses ^[OA and ^[OB as up and down keys. It seems ^[[A and ^[[B are the defaults, so I’ve left them in, but I’m confused as to what differences would lead to this. If you know, please let me know and I’ll add a footnote to this article explaining it.
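A quick way to see what your terminal actually emits (a diagnostic aside, not from the original post): ^[OA is the "application cursor keys" form some terminals switch to when keypad-transmit mode is enabled, while ^[[A is the normal form, so binding both covers both modes. To check yours:

cat -v    # press Up/Down, then Enter; the raw escape sequences are echoed (Ctrl-C to quit)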

Back to history search. For this to work, we also need to setup history logging:

export SAVEHIST=1000000
export HISTFILE=$HOME/.zsh_history
export HISTFILESIZE=1000000
export HISTSIZE=1000000

Why did it show up twice for tmux?

Because tmux creates a login shell. Adding:

echo PROFILE

to profile and:

echo SHRC

to shrc confirms this with:

PROFILE
SHRC
SHRC

For now, profile sources shrc so that running twice is expected.

But after this exploration and diagram, it’s clear we don’t need that for zsh. Removing this will break remote bash shells (see above diagram), but I can live without those on my development laptop.

Removing that line results in the expected output for a new terminal:

SHRC

And the full output for a new tmux session or console:

PROFILE
SHRC

So finally we’re back to a normal state!

This post is a bit unfocused but I hope it helps someone else repair or enhance their shell environment.

If you liked this4, or know of any other ways to manage this I could use, let me know at fixmyshell@tookmund.com.

  1. This chart comes from the excellent Shell Startup Scripts article by Peter Ward. I’ve generated the SVG from the graphviz source linked in the article. 

  2. Technically it has to be compatible with Korn shell, but a quick google seems to suggest that that’s actually a subset of POSIX shell. 

  3. I use oh-my-zsh at $WORK but for now I’m going to simplify my personal configuration. If I end up using a lot of plugins I’ll reconsider this. 

  4. Or if you’ve found any typos or other issues that I should fix. 

Categories: FLOSS Project Planets

Matthew Garrett: Dealing with weird ELF libraries

Tue, 2024-01-02 14:04
Libraries are collections of code that are intended to be usable by multiple consumers (if you're interested in the etymology, watch this video). In the old days we had what we now refer to as "static" libraries, collections of code that existed on disk but which would be copied into newly compiled binaries. We've moved beyond that, thankfully, and now make use of what we call "dynamic" or "shared" libraries - instead of the code being copied into the binary, a reference to the library function is incorporated, and at runtime the code is mapped from the on-disk copy of the shared object[1]. This allows libraries to be upgraded without needing to modify the binaries using them, and if multiple applications are using the same library at once it only requires that one copy of the code be kept in RAM.

But for this to work, two things are necessary: when we build a binary, there has to be a way to reference the relevant library functions in the binary; and when we run a binary, the library code needs to be mapped into the process.

(I'm going to somewhat simplify the explanations from here on - things like symbol versioning make this a bit more complicated but aren't strictly relevant to what I was working on here)

For the first of these, the goal is to replace a call to a function (eg, printf()) with a reference to the actual implementation. This is the job of the linker rather than the compiler (eg, if you use the -c argument to tell gcc to simply compile to an object rather than linking an executable, it's not going to care about whether or not every function called in your code actually exists or not - that'll be figured out when you link all the objects together), and the linker needs to know which symbols (which aren't just functions - libraries can export variables or structures and so on) are available in which libraries. You give the linker a list of libraries, it extracts the symbols available, and resolves the references in your code with references to the library.
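Splitting the build into explicit compile and link steps makes this division of labour visible (file names here are hypothetical):

gcc -c app.c -o app.o   # compile only: a call to printf() stays an unresolved reference
gcc app.o -o app        # link: the linker resolves printf() against libc's exported symbols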

But how is that information extracted? Each ELF object has a fixed-size header that contains references to various things, including a reference to a list of "section headers". Each section has a name and a type, but the ones we're interested in are .dynstr and .dynsym. .dynstr contains a list of strings, representing the name of each exported symbol. .dynsym is where things get more interesting - it's a list of structs that contain information about each symbol. This includes a bunch of fairly complicated stuff that you need to care about if you're actually writing a linker, but the relevant entries for this discussion are an index into .dynstr (which means the .dynsym entry isn't sufficient to know the name of a symbol, you need to extract that from .dynstr), along with the location of that symbol within the library. The linker can parse this information and obtain a list of symbol names and addresses, and can now replace the call to printf() with a reference to libc instead.

(Note that it's not possible to simply encode this as "Call this address in this library" - if the library is rebuilt or is a different version, the function could move to a different location)

Experimentally, .dynstr and .dynsym appear to be sufficient for linking a dynamic library at build time - there are other sections related to dynamic linking, but you can link against a library that's missing them. Runtime is where things get more complicated.

When you run a binary that makes use of dynamic libraries, the code from those libraries needs to be mapped into the resulting process. This is the job of the runtime dynamic linker, or RTLD[2]. The RTLD needs to open every library the process requires, map the relevant code into the process's address space, and then rewrite the references in the binary into calls to the library code. This requires more information than is present in .dynstr and .dynsym - at the very least, it needs to know the list of required libraries.

There's a separate section called .dynamic that contains another list of structures, and it's the data here that's used for this purpose. For example, .dynamic contains a bunch of entries of type DT_NEEDED - this is the list of libraries that an executable requires. There's also a bunch of other stuff that's required to actually make all of this work, but the only thing I'm going to touch on is DT_HASH. Doing all this re-linking at runtime involves resolving the locations of a large number of symbols, and if the only way you can do that is by reading a list from .dynsym and then looking up every name in .dynstr that's going to take some time. The DT_HASH entry points to a hash table - the RTLD hashes the symbol name it's trying to resolve, looks it up in that hash table, and gets the symbol entry directly (it still needs to resolve that against .dynstr to make sure it hasn't hit a hash collision - if it has it needs to look up the next hash entry, but this is still generally faster than walking the entire .dynsym list to find the relevant symbol). There's also DT_GNU_HASH which fulfills the same purpose as DT_HASH but uses a more complicated algorithm that performs even better. .dynamic also contains entries pointing at .dynstr and .dynsym, which seems redundant but will become relevant shortly.

So, .dynsym and .dynstr are required at build time, and both are required along with .dynamic at runtime. This seems simple enough, but obviously there's a twist and I'm sorry it's taken so long to get to this point.

I bought a Synology NAS for home backup purposes (my previous solution was a single external USB drive plugged into a small server, which had uncomfortable single point of failure properties). Obviously I decided to poke around at it, and I found something odd - all the libraries Synology ships were entirely lacking any ELF section headers. This meant no .dynstr, .dynsym or .dynamic sections, so how was any of this working? nm asserted that the libraries exported no symbols, and readelf agreed. If I wrote a small app that called a function in one of the libraries and built it, gcc complained that the function was undefined. But executables on the device were clearly resolving the symbols at runtime, and if I loaded them into ghidra the exported functions were visible. If I dlopen()ed them, dlsym() couldn't resolve the symbols - but if I hardcoded the offset into my code, I could call them directly.

Things finally made sense when I discovered that if I passed the --use-dynamic argument to readelf, I did get a list of exported symbols. It turns out that ELF is weirder than I realised. As well as the aforementioned section headers, ELF objects also include a set of program headers. One of the program header types is PT_DYNAMIC. This typically points to the same data that's present in the .dynamic section. Remember when I mentioned that .dynamic contained references to .dynsym and .dynstr? This means that simply pointing at .dynamic is sufficient, there's no need to have separate entries for them.
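The two views are easy to compare with readelf on any ordinary shared object (libfoo.so is a placeholder):

readelf -S libfoo.so                        # section headers: .dynsym, .dynstr, .dynamic
readelf -l libfoo.so                        # program headers: look for the DYNAMIC entry
readelf -d libfoo.so                        # parsed .dynamic: DT_NEEDED, DT_HASH, DT_SYMTAB, DT_STRTAB, ...
readelf --dyn-syms libfoo.so                # dynamic symbols, located via section headers
readelf --use-dynamic --dyn-syms libfoo.so  # the same list, located via program headers only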

The same information can be reached from two different locations. The information in the section headers is used at build time, and the information in the program headers at run time[3]. I do not have an explanation for this. But if the information is present in two places, it should be possible to reconstruct the missing section headers in my weird libraries from the program headers. So that's what this does. It extracts information from the DYNAMIC entry in the program headers and creates equivalent section headers.

There's one thing that makes this more difficult than it might seem. The section header for .dynsym has to contain the number of symbols present in the section. And that information doesn't directly exist in DYNAMIC - to figure out how many symbols exist, you're expected to walk the hash tables and keep track of the largest number you've seen. Since every symbol has to be referenced in the hash table, once you've hit every entry the largest number is the number of exported symbols. This seemed annoying to implement, so instead I cheated, added code to simply pass in the number of symbols on the command line, and then just parsed the output of readelf against the original binaries to extract that information and pass it to my tool.

Somehow, this worked. I now have a bunch of library files that I can link into my own binaries to make it easier to figure out how various things on the Synology work. Now, could someone explain (a) why this information is present in two locations, and (b) why the build-time linker and run-time linker disagree on the canonical source of truth?

[1] "Shared object" is the source of the .so filename extension used in various Unix-style operating systems
[2] You'll note that "RTLD" is not an acronym for "runtime dynamic linker", because reasons
[3] For environments using the GNU RTLD, at least - I have no idea whether this is the case in all ELF environments

Categories: FLOSS Project Planets

Thomas Koch: Good things come ... state folder

Tue, 2024-01-02 11:05
Posted on January 2, 2024 Tags: debian, free software, life

Just a little while ago (10 years) I proposed the addition of a state folder to the XDG basedir specification and expanded the article XDGBaseDirectorySpecification in the Debian wiki. Recently I learned, that version 0.8 (from May 2021) of the spec finally includes a state folder.

Granted, I wasn’t the first to have this idea (2009), nor the one who actually made it happen.

Now, please go ahead and use it! Thank you.
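If you write shell scripts, honoring the new folder is a one-liner — a sketch, with myapp standing in for your program’s name; the spec’s fallback when $XDG_STATE_HOME is unset is ~/.local/state:

STATE_DIR="${XDG_STATE_HOME:-$HOME/.local/state}/myapp"
mkdir -p "$STATE_DIR"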

Categories: FLOSS Project Planets

Thomas Koch: Know your tools - simple backup with rsync

Tue, 2024-01-02 11:05
Posted on June 9, 2022 Tags: debian, free software

I’ve been using rsync for years and still did not know its full powers. I just wanted a quick and dirty simple backup but realised that rsnapshot is not in Debian anymore.

However you can do much of rsnapshot with rsync alone nowadays.

The --link-dest option (manpage) solves the part of creating hardlinks to a previous backup (found here). So my backup program becomes this shell script in ~/backups/backup.sh:

#!/bin/sh

SERVER="${1}"
BACKUP="${HOME}/backups/${SERVER}"
SNAPSHOTS="${BACKUP}/snapshots"
FOLDER=$(date --utc +%F_%H-%M-%S)
DEST="${SNAPSHOTS}/${FOLDER}"
LAST=$(ls -d1 ${SNAPSHOTS}/????-??-??_??-??-??|tail -n 1)

rsync \
    --rsh="ssh -i ${BACKUP}/sshkey -o ControlPath=none -o ForwardAgent=no" \
    -rlpt \
    --delete --link-dest="${LAST}" \
    ${SERVER}::backup "${DEST}"

The script connects to rsync in daemon mode as outlined in section “USING RSYNC-DAEMON FEATURES VIA A REMOTE-SHELL CONNECTION” in the rsync manpage. This allows to reference a “module” as the source that is defined on the server side as follows:

[backup]
path = /
read only = true
exclude from = /srv/rsyncbackup/excludelist
uid = root
gid = root

The important bit is the read only setting, which protects the server against somebody with access to the ssh key overwriting files on the server via rsync and thus gaining full root access.

Finally the command prefix in ~/.ssh/authorized_keys runs rsync as daemon with sudo and the specified config file:

command="sudo rsync --config=/srv/rsyncbackup/config --server --daemon ."

The sudo setup is left as an exercise for the reader as mine is rather opinionated.
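For reference, a minimal sketch — assuming a dedicated user named backup on the server; note that the command in sudoers must match the forced command in authorized_keys exactly:

# /etc/sudoers.d/rsyncbackup (hypothetical; adjust user and paths)
backup ALL=(root) NOPASSWD: /usr/bin/rsync --config=/srv/rsyncbackup/config --server --daemon .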

Unfortunately I have not managed to configure systemd timers in the way I wanted and therefore opened an issue: “Allow retry of timer triggered oneshot services with failed conditions or asserts”. Any help there is welcome!

Categories: FLOSS Project Planets

Thomas Koch: Missing memegen

Tue, 2024-01-02 11:05
Posted on May 1, 2022 Tags: debian, free software, life

Back at $COMPANY we had an internal meme-site. I had some reputation in my team for creating good memes. When I watched episode 3 of season 2 of Yes, Prime Minister yesterday, I really missed a place to post memes.

This is the full scene. Please watch it or even the full episode before scrolling down to the GIFs. I had a good laugh for some time.

With Debian, I could just download the episode from somewhere on the net with youtube-dl and easily create two GIFs using ffmpeg, with and without subtitle:

ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 tmp/tragic.gif

ffmpeg -ss 0:5:59.600 -to 0:6:11.150 -i Downloads/Yes.Prime.Minister.S02E03-1254485068289.mp4 \
    -vf "subtitles=tmp/sub.srt:force_style='Fontsize=60'" tmp/tragic_with_subtitle.gif

And this sub.srt file:

1
00:00:10,000 --> 00:00:12,000
Tragic.

I believe one needs to install the libavfilter-extra variant to burn the subtitle into the GIF.
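On Debian that should be a one-off install — the metapackage name below is my assumption, so check apt search libavfilter on your release:

apt install libavfilter-extra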

Some

space

to

hide

the

GIFs.

The Prime Minister just learned that his predecessor, who was about to publish embarrassing memoirs, died of a sudden heart attack:

I can’t actually think of a meme with this GIF, that the internal thought police community moderation would not immediately take down.

For a moment I thought that it would be fun to have a Meme-Site for Debian members. But it is probably not the right time for this.

Maybe somebody likes the above GIFs though and wants to use them somewhere.

Categories: FLOSS Project Planets

Thomas Koch: lsp-java coming to debian

Tue, 2024-01-02 11:05
Posted on March 12, 2022 Tags: debian

The Language Server Protocol (LSP) standardizes communication between editors and so-called language servers for different programming languages. This reduces the old problem that every editor had to implement many different plugins for all the different programming languages. With LSP, an editor just needs to talk LSP and can immediately provide typical IDE features.

I already packaged the Emacs packages lsp-mode and lsp-haskell for Debian bullseye. Now lsp-java is waiting in the NEW queue.

I’m always worried about downloading and executing binaries from random places on the internet. It should be a matter of hygiene to only run binaries from official Debian repositories. Unfortunately this is not feasible when programming, and many people don’t see a problem with running multiple curl | sh pipes to set up their programming environment.

I prefer to do such stuff only in virtual machines. With Emacs and LSP I can finally have a lightweight textmode programming environment even for Java.

Unfortunately the lsp-java mode does not yet work over tramp. Once this is solved, I could run emacs on my host and only isolate the code and language server inside the VM.

The next step would be to also keep the code on the host and mount it with Virtio FS in the VM. But so far the necessary daemon is not yet in Debian (RFP: #1007152).

In detail, I uploaded these packages:

Categories: FLOSS Project Planets
