Planet Debian
Russ Allbery: Review: The Mimicking of Known Successes
Review: The Mimicking of Known Successes, by Malka Older
Series: Mossa and Pleiti #1
Publisher: Tordotcom
Copyright: 2023
ISBN: 1-250-86051-2
Format: Kindle
Pages: 169

The Mimicking of Known Successes is a science fiction mystery novella, the first of an expected series. (The second novella is scheduled to be published in February of 2024.)
Mossa is an Investigator, called in after a man disappears from the eastward platform on the 4°63' line. It's an isolated platform, five hours away from Mossa's base, and home to only four residential buildings and a pub. The most likely explanation is that the man jumped, but his behavior before he disappeared doesn't seem consistent with that theory. He was bragging about being from Valdegeld University, talking to anyone who would listen about the important work he was doing — not typically the behavior of someone who is suicidal. Valdegeld is the obvious next stop in the investigation.
Pleiti is a Classics scholar at Valdegeld. She is also Mossa's ex-girlfriend, making her both an obvious and a fraught person to ask for investigative help. Mossa is the last person she expected to be waiting for her on the railcar platform when she returns from a trip to visit her parents.
The Mimicking of Known Successes is mostly a mystery, following Mossa's attempts to untangle the story of what happened to the disappeared man, but as you might have guessed there's a substantial sapphic romance subplot. It's also at least adjacent to Sherlock Holmes: Mossa is brilliant, observant, somewhat monomaniacal, and very bad at human relationships. All of this story except for the prologue is told from Pleiti's perspective as she plays a bit of a Watson role, finding Mossa unreadable, attractive, frustrating, and charming in turn. Following more recent Holmes adaptations, Mossa is portrayed as probably neurodivergent, although the story doesn't attach any specific labels.
I have no strong opinions about this novella. It was fine? There's a mystery with a few twists, there's a sapphic romance of the second chance variety, there's a bit of action and a bit of hurt/comfort after the action, and it all felt comfortably entertaining but kind of predictable. Susan Stepney has a "passes the time" review rating, and while that may be a bit harsh, that's about where I ended up.
The most interesting part of the story is the science fiction setting. We're some indefinite period into the future. Humans have completely messed up Earth to the point of making it uninhabitable. We then took a shot at terraforming Mars and messed that planet up to the point of uninhabitability as well. Now, what's left of humanity (maybe not all of it — the story isn't clear) lives on platforms connected by rail lines high in the atmosphere of Jupiter. (Everyone in the story calls Jupiter "Giant" for reasons that I didn't follow, given that they didn't rename any of its moons.) Pleiti's position as a Classics scholar means that she studies Earth and its now-lost ecosystems, whereas the Modern faculty focus on their new platform life.
This background does become relevant to the mystery, although exactly how is not clear at the start.
I wouldn't call this a very realistic setting. One has to accept that people live on platforms attached to artificial rings around the solar system's largest planet, walking around in shirt sleeves with only minor technological support thanks to "atmoshields" of some unspecified capability, and that the native atmosphere plays the role of London fog. Everything feels vaguely Edwardian, down to the occasional human porter and message runner, which matches the story concept but seems unlikely as a plausible future culture. I also disbelieve in humanity's ability to do anything to Earth that would make it less inhabitable than the clouds of Jupiter.
That said, the setting is a lot of fun, which is probably more important. It's fun to try to visualize, and it has that slightly off-balance, occasionally surprising feel of science fiction settings where everyone is recognizably human but the things they consider routine and unremarkable are unexpected by the reader.
This novella also has a great title. The Mimicking of Known Successes is simultaneously a reference to a specific plot point from late in the story, a nod to the shape of the romance, and an acknowledgment of the Holmes pastiche, and all of those references work even better once you know what the plot point is. That was nicely done.
This was not very memorable apart from the setting, but it was pleasant enough. I can't say that I'm inspired to pre-order the next novella in this series, but I also wouldn't object to reading it. If you're in the mood for gender-swapped Holmes in an exotic setting, you could do worse.
Followed by The Imposition of Unnecessary Obstacles.
Rating: 6 out of 10
Shirish Agarwal: Pearls of Luthra, Dahaad, Tetris & Discord.
Pearls of Luthra is the first Brian Jacques book I have read, and I think I am going to be a fan of his work. You do have to be wary of this particular book, though. While it is a beautiful book with quite a few illustrations, I have to warn you that if you are somebody who feels hungry at the very mention of food, you will be hungry throughout. There isn't a single page where food isn't mentioned, and not just any kind of food, but the kind of food geared towards a sweet tooth. So if you fancy tarts or chocolates or anything sweet, you will be right at home. The book also touches upon various teas, wines and liquors, but food is where it truly shines. The tale is very much like a Harry Potter adventure but isn't as dark as HP was. In fact, apart from one death and one missing ear, the rest of our heroes and heroines (and there are quite a few) come through fine. I don't want to give too much away as it's a book to be treasured.
Dahaad

Dahaad (The Roar) is Sonakshi Sinha's entry into OTT/web series. The story is set somewhere in North India, while the exploits are based on a real-life person called Cyanide Mohan, who killed 20 women between 2005 and 2009. In the web series, however, the antagonist's crimes span a period of 12 years and claim 29 women as victims. Apart from that, it's pretty much a copy of what was done by the person above. It's a melting pot of a series, with quite a few stories enmeshed along with the main one. The main plot is about women from lower economic and caste orders whose families want them to be wed but cannot meet the huge demands for dowry. In such a situation, if a person were to give them a bit of attention, promise marriage and ask them to steal a bit and come away with him, they will do it. This was the modus operandi of Cyanide Mohan. He had a car that was not actually his but used it to show off that he was from a richer background, entice the women, have sex and promise marriage; the morning-after pill he then gave them contained cyanide, which the women unwittingly consumed.
This is also how the protagonist played by Sonakshi Sinha frames it to her mother, who is pressuring her to get married as she is getting older. She shows her some of the photographs of the victims and says that while the perpetrator is guilty, so is the overall society that puts women in such vulnerable positions. AFAIK, that is still the state of things. In fact, there is a series called 'Indian Matchmaking' that has all the snobbishness that you want. How many people could have a lifestyle like the ones shown in that? Less than 2% of the population. It's actually shows like that which make the whole thing even more precarious.
Apart from it, the show also shows prejudice about caste and background. I wouldn’t go much into it as it’s worth seeing and experiencing.
Tetris

Tetris is in many ways a story of greed. It's also the story of a lone inventor who had to wait almost 20 odd years to profit from his invention. Forbes does a marvelous job of giving some more background and foreground info about Tetris, the inventor, and the producer who went on to strike it rich. The movie also shows how copyright misrepresentation happens but does nothing to address it. I could talk a whole lot more, but it's better to see the movie and draw your own conclusions. For me it was 4/5.
Discord

Discord became Discord 2.0 and is a blank to me. A blank page. I can't do anything. At first I thought it was a bug and waited a few days, as sometimes web services do fix themselves. But two weeks on it still wasn't fixed, so I decided to look under the hood. One of the tools in Firefox is the Web Developer Tools (Ctrl+Shift+I), which tells you if an element of a page is not appearing, or at least gives you a hint. It gave me the following:
Content Security Policy: Ignoring “'unsafe-inline'” within script-src or style-src: nonce-source or hash-source specified
Content Security Policy: The page’s settings blocked the loading of a resource at data:text/css,%0A%20%20%20%20%20%20%20%2… (“style-src”). data:44:30
Content Security Policy: Ignoring “'unsafe-inline'” within script-src or style-src: nonce-source or hash-source specified
TypeError: AudioContext is not a constructor 138875 https://discord.com/assets/cbf3a75da6e6b6a4202e.js:262 l https://discord.com/assets/f5f0b113e28d4d12ba16.js:1ed46a18578285e5c048b.js:241:118
The cause is that dom.webaudio.enabled is disabled in Firefox, which is why AudioContext is not available to Discord.
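If you want to check the same thing on your own machine, here is a rough way to look for that preference from a shell. This is only a sketch: the profile path is a guess and may differ, and the pref name is simply the one mentioned above.

    # look for the Web Audio pref in the active Firefox profile (path is illustrative)
    grep webaudio ~/.mozilla/firefox/*.default*/prefs.js
    # a profile with it disabled will contain a line like:
    # user_pref("dom.webaudio.enabled", false);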
Then, on a hunch, I searched on Reddit and saw the following. Be careful while visiting the link as it's labelled NSFW, although to my mind there wasn't anything remotely NSFW about it. They mention using another tool, 'AudioContext Fingerprint Defender', which supposedly fakes or spoofs an ID. As this add-on isn't tracked by the Firefox privacy team, it's hard for me to say anything positive or negative about it.
So, in the end, I stopped using Discord, as the alternative was being tracked by them.
Last but not least, I saw this about a week back. Sooner or later this had to happen as Elon tries to make money off Twitter.
John Goerzen: Recommendations for Tools for Backing Up and Archiving to Removable Media
I have several TB worth of family photos, videos, and other data. This needs to be backed up — and archived.
Backups and archives are often thought of as similar. And indeed, they may be done with the same tools at the same time. But the goals differ somewhat:
Backups are designed to recover from a disaster that you can fairly rapidly detect.
Archives are designed to survive for many years, protecting against disaster not only impacting the original equipment but also the original person that created them.
Reflecting on this, it implies that while a nice ZFS snapshot-based scheme that supports twice-hourly backups may be fantastic for that purpose, if you think about things like family members being able to access it if you are incapacitated, or accessibility in a few decades’ time, it becomes much less appealing for archives. ZFS doesn’t have the wide software support that NTFS, FAT, UDF, ISO-9660, etc. do.
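For contrast, the kind of snapshot-based backup scheme meant here looks roughly like the following sketch; the pool and dataset names are made up.

    # take a snapshot and replicate it incrementally to a second pool
    zfs snapshot tank/photos@2023-05-28_1230
    zfs send -i tank/photos@2023-05-28_1200 tank/photos@2023-05-28_1230 | \
        zfs receive backuppool/photos

Fast and convenient for backups, but not something a family member holding a random disk and an unfamiliar OS can easily read decades later.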
This post isn’t about the pros and cons of the different storage media, nor is it about the pros and cons of cloud storage for archiving; these conversations can readily be found elsewhere. Let’s assume, for the point of conversation, that we are considering BD-R optical discs as well as external HDDs, both of which are too small to hold the entire backup set.
What would you use for archiving in these circumstances?
Establishing goals
The goals I have are:
- Archives can be restored using Linux or Windows (even though I don’t use Windows, this requirement will ensure the broadest compatibility in the future)
- The archival system must be able to accommodate periodic updates consisting of new files, deleted files, moved files, and modified files, without requiring a rewrite of the entire archive dataset
- Archives can ideally be mounted on any common OS and the component files directly copied off
- Redundancy must be possible. In the worst case, one could manually copy one drive/disc to another. Ideally, the archiving system would automatically track making n copies of data.
- While a full restore may be a goal, simply finding one file or one directory may also be a goal. Ideally, an archiving system would be able to quickly tell me which discs/drives contain a given file.
- Ideally, preserves as much POSIX metadata as possible (hard links, symlinks, modification date, permissions, etc). However, for the archiving case, this is less important than for the backup case, with the possible exception of modification date.
- Must be easy enough to do, and sufficiently automatable, to allow frequent updates without error-prone or time-consuming manual hassle.
I would welcome your ideas for what to use. Below, I’ll highlight different approaches I’ve looked into and how they stack up.
Basic copies of directories
The initial approach might be one of simply copying directories across. This would work well if the data set to be archived is smaller than the archival media. In that case, you could just burn or rsync a new copy with every update and be done. Unfortunately, this is much less convenient with data of the size I'm dealing with; a single rsync of the whole set onto one device isn't an option. With some datasets, you could manually design some rsyncs to store individual directories on individual devices, but that gets unwieldy fast and isn't scalable.
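As a concrete illustration of that manual split (all paths here are invented), it quickly turns into a hand-maintained mapping of directories to devices:

    rsync -a /data/photos/2000-2009/ /media/archive-disk-1/photos/2000-2009/
    rsync -a /data/photos/2010-2019/ /media/archive-disk-2/photos/2010-2019/
    rsync -a /data/videos/           /media/archive-disk-3/videos/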
You could use something like my datapacker program to split the data across multiple discs/drives efficiently. However, updates will be a problem; you’d have to re-burn the entire set to get a consistent copy, or rely on external tools like mtree to reflect deletions. Not very convenient in any case.
So I won’t be using this.
tar or zip
While you can split tar and zip files across multiple media, they have a lot of issues. GNU tar’s incremental mode is clunky and buggy; zip is even worse. tar files can’t be read randomly, making it extremely time-consuming to extract just certain files out of a tar file.
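For reference, GNU tar's incremental mode works roughly like this (a sketch; paths are examples), with a snapshot file recording what has already been archived:

    # level-0 (full) archive, recording state in photos.snar
    tar --create --listed-incremental=photos.snar -f full.tar /data/photos
    # later runs with the same snapshot file only pick up changes
    tar --create --listed-incremental=photos.snar -f incr1.tar /data/photos

Restoring a consistent tree then means replaying the full archive plus every incremental in order, which is part of what makes this scheme awkward for archives.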
The only thing going for these formats (and especially zip) is the wide compatibility for restoration.
dar
Here we start to get into the more interesting tools. Dar is, in my opinion, one of the best Linux tools that few people know about. Since I first wrote about dar in 2008, it’s added some interesting new features; among them, binary deltas and cloud storage support. So, dar has quite a few interesting features that I make use of in other ways, and could also be quite helpful here:
- Dar can both read and write files sequentially (streaming, like tar), or with random-access (quick seek to extract a subset without having to read the entire archive)
- Dar can apply compression to individual files, rather than to the archive as a whole, facilitating both random access and resilience (corruption in one file doesn't invalidate all subsequent files). Dar also supports numerous compression algorithms including gzip, bzip2, xz, lzo, etc., and can omit compressing already-compressed files.
- The end of each dar file contains a central directory (dar calls this a catalog). The catalog contains everything necessary to extract individual files from the archive quickly, as well as everything necessary to make a future incremental archive based on this one. Additionally, dar can make and work with “isolated catalogs” — a file containing the catalog only, without data.
- Dar can split the archive into multiple pieces called slices. This is best done with fixed-size slices (the --slice and --first-slice options), which let the catalog record the slice number and preserve random access capabilities. With the --execute option, dar can easily wait for a given slice to be burned, etc.
- Dar normally stores an entire new copy of a modified file, but can optionally store an rdiff binary delta instead. This has the potential to be far smaller (think of a case of modifying metadata for a photo, for instance).
Additionally, dar comes with a dar_manager program. dar_manager makes a database out of dar catalogs (or archives). This can then be used to identify the precise archive containing a particular version of a particular file.
All this combines to make a useful system for archiving. Isolated catalogs are tiny, and it would be easy enough to include the isolated catalogs for the entire set of archives that came before (or even the dar_manager database file) with each new incremental archive. This would make restoration of a particular subset easy.
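Putting those pieces together, a dar-based run might look something like the sketch below. This is only my reading of the dar documentation: slice sizes and archive names are invented, and the exact option spelling should be checked against dar(1) and dar_manager(1).

    # full archive of /data, split into ~23 GB slices, per-file compression
    dar -c archive_2023a -R /data -s 23G -z
    # tiny isolated catalog for that archive
    dar -C archive_2023a_cat -A archive_2023a
    # later: incremental archive based on the isolated catalog
    dar -c archive_2023b -R /data -A archive_2023a_cat -s 23G -z
    # dar_manager database to find which archive holds which version of a file
    dar_manager -C archives.dmd
    dar_manager -B archives.dmd -A archive_2023a
    dar_manager -B archives.dmd -A archive_2023b
    dar_manager -B archives.dmd -f photos/2019/img_0001.jpg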
The main thing to address with dar is that you do need dar itself to extract the archive. Every dar release comes with source code and a win64 build. dar also supports building a statically-linked Linux binary. It would therefore be easy to include a win64 binary, a Linux binary, and the source with every archive run. dar is also part of multiple Linux and BSD distributions, which are archived around the Internet. I think this provides reasonable future-proofing to make sure dar archives will still be readable in the future.
The other challenge is user ability. While dar is highly portable, it is fundamentally a CLI tool and will require CLI abilities on the part of users. I suspect, though, that I could write up a few pages of instructions to include and make that a reasonably easy process. Not everyone can use a CLI, but I would expect a person that could follow those instructions could be readily-enough found.
One other benefit of dar is that it could easily be used with tapes. The LTO series is liked by various hobbyists, though it could pose formidable obstacles to non-hobbyists trying to access data in future decades. Additionally, since the archive is a big file, it lends itself to working with par2 to provide redundancy against certain amounts of data corruption.
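A hedged example of the par2 idea, run over the slices of one archive (file names follow the earlier sketch, with 10% recovery data):

    par2 create -r10 archive_2023a.par2 archive_2023a.*.dar
    # later, after suspected corruption:
    par2 verify archive_2023a.par2
    par2 repair archive_2023a.par2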
git-annex
git-annex is an interesting program that is designed to facilitate managing large sets of data and moving it between repositories. git-annex has particular support for offline archive drives and tracks which drives contain which files.
The idea would be to store the data to be archived in a git-annex repository. Then git-annex commands could generate filesystem trees on the external drives (or trees to be burned to read-only media).
In a post about using git-annex for blu-ray backups, an earlier thread about DVD-Rs was mentioned.
This has a few interesting properties. For one, with due care, the files can be stored on archival media as regular files. There are some different options for how to generate the archives; some of them would place the entire git-annex metadata on each drive/disc. With that arrangement, one could access the individual files without git-annex. With git-annex, one could reconstruct the final (or any intermediate) state of the archive appropriately, handling deletions, renames, etc. You would also easily be able to know where copies of your files are.
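Roughly, that workflow would look like the following sketch; the repository and drive paths are invented, and git-annex's options should be double-checked against its documentation:

    # create the annex and record that two copies of everything are wanted
    cd ~/archive && git init . && git annex init "main archive"
    git annex add . && git commit -m "initial import"
    git annex numcopies 2
    # clone onto an external drive and register it as a remote
    git clone ~/archive /media/drive1/archive
    (cd /media/drive1/archive && git annex init "drive1")
    git remote add drive1 /media/drive1/archive
    # push file contents to the drive, then ask where a given file lives
    git annex copy --to=drive1
    git annex whereis photos/2019/img_0001.jpg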
The practice is somewhat more challenging. Hundreds of thousands of files — what I would consider a medium-sized archive — can pose some challenges, running into hours-long execution if used in conjunction with the directory special remote (but only minutes-long with a standard git-annex repo).
Ruling out the directory special remote, I had thought I could maybe just work with my files in git-annex directly. However, I ran into some challenges with that approach as well. I am uncomfortable with git-annex mucking about with hard links in my source data. While it does try to preserve timestamps in the source data, these are lost on the clones. I wrote up my best effort to work around all this.
In a forum post, the author of git-annex comments that “I don’t think that CDs/DVDs are a particularly good fit for git-annex, but it seems a couple of users have gotten something working.” The page he references is Managing a large number of files archived on many pieces of read-only medium. Some of that discussion is a bit dated; for instance, the directory special remote now has the importtree feature that implements what was being asked for there.
git-annex supplies win64 binaries, and git-annex is included with many distributions as well. So it should be nearly as accessible as dar in the future. Since git-annex would be required to restore a consistent recovery image, similar caveats as with dar apply; CLI experience would be needed, along with some written instructions.
Bacula and BareOS
Although primarily tape-based archivers, these do also nominally support drives and optical media. However, they are much more tailored as backup tools, especially with the ability to pull from multiple machines. They require a database and extensive configuration, making them a poor fit for both the creation and future extractability of this project.
Conclusions
I’m going to spend some more time with dar and git-annex, testing them out, and hope to write some future posts about my experiences.
Jonathan Carter: MiniDebConf Germany 2023
This year I attended Debian Reunion Hamburg (aka MiniDebConf Germany) for the second time. My goal for this MiniDebConf was just to talk to people and make the most of the time I have there. No other specific plans or goals. Despite this simple goal, it was a very productive and successful event for me.
Tuesday 23rd:
- Arrived much later than planned after about 18h of travel, went to bed early.
Wednesday 24th:
- Was in a discussion about individual package maintainership.
- Was in a discussion about the nature of Technical Committee.
- Co-signed a copy of The Debian System book along with the other DDs
- Submitted a BoF request for people who are present to bring issues to the attention of the DPL (and to others who are around).
- Noticed I still had a blog entry draft about this event last year, and posted it just to get it done.
- Had a stand-up meeting, was nice to see what everyone was working on.
- Had some event budgeting discussions with Holger.
- Worked a bit on a talk I haven’t submitted yet called “Current events” (it’s slightly punny, get it?) – it’s still very raw but I’m passively working on it just in case we need a backup talk over the weekend.
- Had a discussion over lunch with someone who runs their HPC on Debian and learned about Octopus and Pac.
- TIL (from #debian-python) about pyproject.toml (https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/)
- Was in a discussion about amd64 build times on our buildds and referred them to DSA. I also e-mailed DSA to ask them if there’s anything we can do to improve build times (since it affects both productivity and motivation).
- Did some premium cola tasting with andrewsh
- Had a discussion with Ilu about installers (and luks2 issues in Calamares), accessibility and organisational stuff.
Thursday 25th:
- Spent quite a chunk of the morning in a usrmerge BoF. I’m very impressed by the amount of reading and research the people in the BoF did to gather all the facts/data; it seems that there is now a way forward that will fix usrmerge in Debian in a way that could work for everyone. An extensive summary/proposal will be posted to debian-devel as soon as possible.
- Mind was in zombie mode. So I did something easy and upgraded the host running this blog and a few other hosts to bookworm to see what would break.
- Cheese and wine party, which resulted in a mao party that ran waaaay too late.
Friday 26th:
- Daytrip
- Was very saddened to learn that the Metallica concert in Hamburg that day was not a daytrip option.
- The Museum der Arbeit had lots of interesting machinery and history. There’s also a giant drill bit outside it that was used to create the Elbe tunnels in Hamburg, and we had a really nice dinner at a restaurant named after it.
Saturday 27th:
- Attended talks:
- HTTP all the things – The rocky path from the basement into the “cloud”
- Running Debian on a Smartphone
- debvm – Ephemeral Virtual Debian Machines
- Network Configuration on Debian Systems
- Discussing changes to the Debian key package definition
- Meet the Release Team
- Towards collective decision-making and maintenance in the Debian base system
- Performed some PGP key signing.
- Edited group photo.
Sunday 28th:
- Had a BoF where we had an open discussion about things on our collective minds (Debian Therapy Session).
- Had a session on upcoming legislation in the EU (like the CRA).
- Some web statistics with MrFai.
- Talked to Marc Haber about a DebConf bid for Heidelberg for DebConf 25.
- Closing session.
Monday 29th:
- Started the morning with Helmut and Jörgen convincing me to switch from cowbuilder to sbuild (I’m tentatively sold; the huge new plus is that you don’t need schroot anymore, which trashed two of my systems in the past and effectively made sbuild a no-go for me until now).
- Dealt with more laptop hardware failures, removing a stick of RAM seems to have solved that for now!
That is not good.
- Dealt with some delegation issues for release team and publicity team.
- Attended my last stand-up meeting.
- Wrapped things up, blogged about the event. Probably forgot to list dozens of things in this blog entry. It is fine.
Tuesday 30th:
- Didn’t attend the last day, basically a travel day for me.
Thank you to Holger for organising this event yet again!
Russell Coker: Considering Convergence
In 2013 Kyle Rankin (at the time Linux Journal columnist and CSO of Purism) wrote a Linux Journal article about Linux convergence [1] (which means using a phone and a dock to replace a desktop) featuring the Nokia N900 smart phone and a chroot environment on the Motorola Droid 4 Android phone. Both of them had very limited hardware even by the standards of the day, and neither was a system I’d consider using all the time. None of the Android phones I used at that time were at all comparable to any sort of desktop system I’d want to use.
Hardware for Convergence – Comparing a Phone to a Laptop

The first hardware issue for convergence is docks and other accessories to attach a small computer to hardware designed for larger computers. Laptop docks have been around for decades and for decades I haven’t been using them because they have all been expensive and specific to a particular model of laptop. Having an expensive dock at home and an expensive dock at the office and then replacing them both when the laptop is replaced may work well for some people but wasn’t something I wanted to do. The USB-C interface supports data, power, and DisplayPort video over the same cable and now USB-C docks start at about $20 on eBay and dock functionality is built in to many new monitors. I can take a USB-C device to the office of any large company and know there’s a good chance that there will be a USB-C dock ready for me to use. The fact that USB-C is a standard feature for phones gives obvious potential for convergence.
The next issue is performance. The Passmark benchmark seems like a reasonable way to compare CPUs [2]. It may not be the best benchmark but it has an excellent set of published results for Intel and AMD CPUs. I ran that benchmark on my Librem5 [3] and got a result of 507 for the CPU score. At the end of 2017 I got a Thinkpad X301 [4] which rates 678 on Passmark. So the Librem5 has 3/4 the CPU power of a laptop that was OK for my use in 2018. Given that the X301 was about the minimum specs for a PC that I can use (for things other than serious compiles, running VMs, etc) the Librem 5 has 3/4 the CPU power, only 3G of RAM compared to 6G, and 32G of storage compared to 64G. Here is the Passmark page for my Librem5 [5]. As an aside, my Librem5 is apparently 25% faster than the other results for the same CPU – did the Purism people do something to make their device faster than most?
For me the Librem5 would be at the very low end of what I would consider a usable desktop system. A friend’s N900 (like the one Kyle used) won’t complete the Passmark test apparently due to the “Extended Instructions (NEON)” test failing. But of the rest of the tests most of them gave a result that was well below 10% of the result from the Librem5 and only the “Compression” and “CPU Single Threaded” tests managed to exceed 1/4 the speed of the Librem5. One thing to note when considering the specs of phones vs desktop systems is that the MicroSD cards designed for use in dashcams and other continuous recording devices have TBW ratings that compare well to SSDs designed for use in PCs, so swap to a MicroSD card should work reasonably well and be significantly faster than the hard disks I was using for swap in 2013!
In 2013 I was using a Thinkpad T420 as my main system [6], it had 8G of RAM (the same as my current laptop) although I noted that 4G was slow but usable at the time. Basically it seems that the Librem5 was about the sort of hardware I could have used for convergence in 2013. But by today’s standards and with the need to drive 4K monitors etc it’s not that great.
The N900 hardware specs seem very similar to the Thinkpads I was using from 1998 to about 2003. However, a device for convergence will usually do more things than a laptop (IE phone and camera functionality), and software became significantly more bloated over the 1998 to 2013 time period. A Linux desktop system performed reasonably with 32MB of RAM in 1998, but by 2013 even 2G was limiting.
Software Issues for Convergence

Jeremiah Foster (Director PureOS at Purism) wrote an interesting overview of some of the software issues of convergence [7]. One of the most obvious is that the best app design for a small screen is often very different from that for a large screen. Phone apps usually have a single window that shows a view of only one part of the data that is being worked on (EG an email program that shows a list of messages or the contents of a single message but not both). Desktop apps of any complexity will either have support for multiple windows for different data (EG two messages displayed in different windows) or a single window with multiple different types of data (EG message list and a single message). What we ideally want is all the important apps to support changing modes when the active display is changed to one of a different size/resolution. The Purism people are doing some really good work in this regard. But it is a large project that needs to involve a huge range of apps.
The next thing that needs to be addressed is the OS interface for managing apps and metadata. On a phone you swipe from one part of the screen to get a list of apps while on a desktop you will probably have a small section of a large monitor reserved for showing a window list. On a desktop you will typically have an app to manage a list of items copied to the clipboard while on Android and iOS there is AFAIK no standard way to do that (there is a selection of apps in the Google Play Store to do this sort of thing).
The limitations in phone hardware force changes to the software. Software needs to use less memory because phone RAM can’t be upgraded. The OS needs to be configured for low RAM use which includes technologies like the zram kernel memory compression feature.
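For illustration, zram swap can be set up by hand roughly as follows; the size and compression algorithm here are arbitrary, and in practice a distribution helper such as zram-tools or systemd's zram-generator would normally do this instead:

    modprobe zram
    echo zstd > /sys/block/zram0/comp_algorithm
    echo 2G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    swapon -p 100 /dev/zram0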
Security

When mobile phones first came out they were used for less secret data. Loss of a phone was annoying and expensive but not a security problem. Now phone theft for the purpose of gaining access to resources stored on the phone is becoming a known crime; here is a news report about a thief stealing credit cards and phones to receive the SMS notifications from banks [9]. We should expect that trend to continue: stealing mobile devices for ssh keys, management tools for cloud services, etc is something we should expect to happen.
A problem with mobile phones in current use is that they have one login used for all access from trivial things done in low security environments (EG paying for public transport) to sensitive things done in more secure environments (EG online banking and healthcare). Some applications take extra precautions for this EG the Android app I use for online banking requires authentication before performing any operations. The Samsung version of Android has a system called Knox for running a separate secured workspace [10]. I don’t think that the Knox approach would work well for a full Linux desktop environment, but something that provides some similar features would be a really good idea. Also running apps in containers as much as possible would be a good security feature, this is done by default in Android and desktop OSs could benefit from it.
The Linux desktop security model of logging in to a single account and getting access to everything has been outdated for a long time, probably ever since single-user Linux systems became popular. We need to change this for many reasons and convergence just makes it more urgent.
Conclusion

I have become convinced that convergence is the way of the future. It has the potential to make transporting computers easier, purchasing cheaper (buy just a phone and not buy desktop and laptop systems), and access to data more convenient. The Librem5 doesn’t seem up to the task for my use due to being slow and having short battery life, the PinePhone Pro has more powerful hardware and allegedly has better battery life [11] so it might work for my needs. The PinePhone Pro probably won’t meet the desktop computing needs of most people, but hardware keeps getting faster and cheaper so eventually most people could have their computing needs satisfied with a phone.
The current state of software for convergence and for Linux desktop security needs some improvement. I have some experience with Linux security so this is something I can help work on.
To work on improving this I asked Linux Australia for a grant for me and a friend to get PinePhone Pro devices and a selection of accessories to go with them. Having both a Librem5 and a PinePhone Pro means that I can test software in different configurations which will make developing software easier. Also having a friend who’s working on similar things will help a lot, especially as he has some low level hardware skills that I lack.
Linux Australia awarded the grant and now the PinePhones are in transit. Hopefully I will have a PinePhone in a couple of weeks to start work on this.
- [1] https://www.linuxjournal.com/content/whats-dock
- [2] https://www.passmark.com/products/pt_linux/download.php
- [3] https://etbe.coker.com.au/2022/03/19/more-librem5/
- [4] https://etbe.coker.com.au/2018/01/16/more-thinkpad-x301/
- [5] https://tinyurl.com/2jwnbgb2
- [6] https://etbe.coker.com.au/2013/10/23/thinkpad-t420/
- [7] https://tinyurl.com/yxobjae3
- [8] https://puri.sm/posts/what-is-mobile-pureos/
- [9] https://www.bbc.com/news/uk-england-london-62809151
- [10] https://en.wikipedia.org/wiki/Samsung_Knox
- [11] https://www.pine64.org/pinephonepro/
Related posts:
- More About the Librem 5 I concluded my previous post about the Purism Librem 5...
- Long-term Device Use It seems to me that Android phones have recently passed...
- Huawei Mate9 Warranty Etc I recently got a Huawei Mate 9 phone....
Russ Allbery: Book haul
I think this is partial because I also have a stack of other books that I missed recording. At some point, I should stop using this method to track book acquisitions in favor of one of the many programs intended for this purpose, but it's in the long list of other things I really should do one of these days.
As usual, I have already read and reviewed a few of these. I might be getting marginally better at reading books shortly after I acquire them? Maybe?
Steven Brust — Tsalmoth (sff)
C.L. Clark — The Faithless (sff)
Oliver Darkshire — Once Upon a Tome (non-fiction)
Hernan Diaz — Trust (mainstream)
S.B. Divya — Meru (sff)
Kate Elliott — Furious Heaven (sff)
Steven Flavall — Before We Go Live (non-fiction)
R.F. Kuang — Babel (sff)
Laurie Marks — Dancing Jack (sff)
Arkady Martine — Rose/House (sff)
Madeline Miller — Circe (sff)
Jenny Odell — Saving Time (non-fiction)
Malka Older — The Mimicking of Known Successes (sff)
Sabaa Tahir — An Ember in the Ashes (sff)
Emily Tesh — Some Desperate Glory (sff)
Valerie Valdes — Chilling Effect (sff)
Louis-Philippe Véronneau: Python 3.11, pip and (breaking) system packages
As we get closer to Debian Bookworm's release, I thought I'd share one change in Python 3.11 that will surely affect many people.
Python 3.11 implements the new PEP 668, Marking Python base environments as “externally managed”1. If you use pip regularly on Debian, it's likely you'll eventually hit the externally-managed-environment error:
    error: externally-managed-environment

    × This environment is externally managed
    ╰─> To install Python packages system-wide, try apt install
        python3-xyz, where xyz is the package you are trying to
        install.

        If you wish to install a non-Debian-packaged Python package,
        create a virtual environment using python3 -m venv path/to/venv.
        Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
        sure you have python3-full installed.

        If you wish to install a non-Debian packaged Python application,
        it may be easiest to use pipx install xyz, which will manage a
        virtual environment for you. Make sure you have pipx installed.

        See /usr/share/doc/python3.11/README.venv for more information.

    note: If you believe this is a mistake, please contact your Python installation
    or OS distribution provider. You can override this, at the risk of breaking your
    Python installation or OS, by passing --break-system-packages.
    hint: See PEP 668 for the detailed specification.

With this PEP, Python tools can now distinguish between packages that have been installed by the user with a tool like pip and ones installed using a distribution's package manager, like apt.
This is generally great news: it was previously too easy to break a system by mixing the two types of packages. This PEP will simplify our role as a distribution, as well as improve the overall Python user experience in Debian.
Sadly, it's also likely this change will break some of your scripts, especially CI that (legitimately) install packages via pip alongside system packages. For example, I use the following gitlab-ci snippet to make sure my PRs don't break my build process2:
    build:flit:
      stage: build
      script:
        - apt-get update && apt-get install -y flit python3-pip
        - FLIT_ROOT_INSTALL=1 flit install
        - metalfinder --help

With Python 3.11, this snippet will error out, as pip will refuse to install packages alongside the system's. The fix is to tell pip it's OK to "break" your system packages, either using the --break-system-packages parameter, or the PIP_BREAK_SYSTEM_PACKAGES=1 environment variable3.
This, of course, is not something you should be using in production to restore the old behavior! The "proper" way to fix this issue, as the externally-managed-environment error message aptly (har har) informs you, is to use virtual environments.
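For completeness, here is roughly what the options look like in practice, using the metalfinder package from the CI snippet above as the example; the venv path is arbitrary:

    # the recommended route: a virtual environment (or pipx for applications)
    python3 -m venv ~/.venvs/metalfinder
    ~/.venvs/metalfinder/bin/pip install metalfinder
    pipx install metalfinder
    # the explicit override mentioned above, only where you accept the risk
    pip install --break-system-packages metalfinder
    PIP_BREAK_SYSTEM_PACKAGES=1 pip install metalfinder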
Happy hacking!
-
Kudos to our own Matthias Klose, Stefano Rivera and Elana Hashman, who worked on designing and implementing this PEP! ↩
-
Which is something that bit me before... You push some changes to your git repository, everything seems fine and all the tests pass, so you merge it and make a new git tag. When the time comes to build and upload this tag to PyPi, you find out some minor thing broke your build system (which you weren't testing) and you have to scramble to make a point-release to fix the issue. Sad! ↩
-
Don't go searching for this environment variable in pip's code though, as you won't find it! All of pip's command line options can be passed as env vars using the PIP_<UPPER_LONG_NAME> format. Useful for tools that use pip indirectly, like flit. ↩
Dirk Eddelbuettel: RcppArmadillo 0.12.4.0.0 on CRAN: New Upstream Minor
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1074 other packages on CRAN, downloaded 29.3 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 535 times according to Google Scholar.
This release brings a new upstream release 12.4.0 made by Conrad a day or so ago. I prepared the usual release candidate, tested on the over 1000 reverse depends (which sadly takes almost a day on old hardware), found no issues and sent it to CRAN. Where it got tested again and was once again auto-processed smoothly by CRAN within a few hours on a Friday night which is just marvelous. So this time I tweeted about it too.
The release actually has a relatively small set of changes as a second follow-up release in the 12.* series.
Changes in RcppArmadillo version 0.12.4.0.0 (2023-05-26)

- Upgraded to Armadillo release 12.4.0 (Cortisol Profusion Redux)
- Added norm2est() for finding fast estimates of matrix 2-norm (spectral norm)
- Added vecnorm() for obtaining the vector norm of each row or column of a matrix
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.
If you like my open-source work, you may consider sponsoring me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Valhalla's Things: Late Victorian Combinations
Some time ago, on an early Friday afternoon our internet connection died. After a reasonable time had passed we called the customer service, they told us that they would look into it and then call us back.
On Friday evening we had not heard from them, and I was starting to get worried. At the time in the evening when I would have been relaxing online I grabbed the first Victorian sewing-related book I found on my hard disk and started to read it.
For the record, it wasn’t actually Victorian: it was Margaret J. Blair’s System of Sewing and Garment Drafting from 1904, but I also had available for comparison the earlier and smaller Margaret Blair System of Garment Drafting from 1897.
Anyway, this book had a system to draft a pair of combinations (chemise top + drawers); and months ago I had already tried to draft a pair from another system, but they didn’t really fit and they were dropped low on the priority list, so on a whim I decided to try and draft them again with this new-to-me system.
Around 23:00 in the night the pattern was ready, and I realized that my SO had gone to sleep without waiting for me, as I looked too busy to be interrupted.
The next few days were quite stressful (we didn’t get our internet back until Wednesday) and while I couldn’t work at my day job I didn’t sew as much as I could have done, but by the end of the week I had an almost complete mockup from an old sheet, and could see that it wasn’t great, but it was a good start.
One reason why the mockup took a whole week is that of course I started to sew by machine, but then I wanted flat-felled seams, and felling them by hand is so much neater, isn’t it?
And let me just say, I’m grateful for the fact that I don’t depend on streaming services for media, but I have a healthy mix of DVDs and stuff I had already temporary downloaded to watch later, because handsewing and being stressed out without watching something is not really great.
Anyway, the mockup was a bit short in the crotch, but by the time I could try it on and be sure, I was invested enough in it that I decided to work around the issue by inserting a strip of lace around the waist.
And then I went back to the pattern to fix it properly, and found out that I had drafted the back of the drawers completely wrong, making a seam shorter rather than longer as it should have been. ooops.
I fixed the pattern, and then decided that YOLO and cut the new version directly on some lightweight linen fabric I had originally planned to use in this project.
The result is still not perfect, but good enough, and I finished it with a very restrained amount of lace at the neckline and hems, wore it one day when the weather was warm (loved the linen on the skin) and it’s ready to be worn again when the weather will be back to being warm (hopefully not too soon).
The last problem was taking pictures of this underwear in a way that preserves the decency (and it even had to be outdoors, for the light!).
This was solved by wearing leggings and a matched long sleeved shirt under the combinations, and then promptly forgetting everything about decency and, well, you can see what happened.
The pattern is, as usual, published on my pattern website as #FreeSoftWear.
And then, I started thinking about knits.
In the late Victorian and Edwardian eras knit underwear was a thing, also thanks to the influence of various aspects of the rational dress movement; reformers such as Gustav Jäger advocated for wool underwear, but mail order catalogues from the era such as https://archive.org/details/cataloguefallwin00macy (starting from page 67) have listings for both cotton and wool ones.
From what I could find, back then they would have been either handknit at home or made to shape on industrial knitting machines; patterns for the former are available online, but the latter would probably require a knitting machine that I don’t currently1 have.
However, this is underwear that is not going to be seen by anybody2, and I believe that by using flat knit fabric one can get a decent functional approximation.
In The Stash I have a few meters of a worked cotton jersey with a pretty comfy feel, and to make a long story short: this happened.
I suspect that the linen one will get worn a lot this summer (linen on the skin: nothing else needs to be said), while the cotton one will be stored away for winter. And then maybe I’ll make a couple more, if I find out that I’m using them enough.
Valhalla's Things: Correspondence Book
I write letters. The kind that are written on paper with a dip pen 1 and ink, stamped and sent through the post, spend a few days or weeks maturing like good wine in a depot somewhere2, and then get delivered to the recipient.
Some of them (mostly cards) are to people who will receive them and thank me via xmpp (that sounds odd, but actually works out nicely), but others are proper letters with long texts that I exchange with penpals.
Most of those are fountain pen frea^Wenthusiasts, so I usually use a different ink each time, and try to vary the paper, and I need to keep track of what I’ve used.
Some time ago, I read a Victorian book3 which recommended keeping a correspondence book to register all mail received and sent, the topics, and whether it had been replied to or otherwise acted upon. I don’t have the mail traffic of a Victorian lady (or even a middle class woman), but this looked like something fun to do, and if I added fields for the inks and paper used it would also have a useful side effect.
So I headed over to the obvious program anybody would use for these things (XeLaTeX, of course) and quickly designed a page with fields for the basic things I want to record; it was a bit hurried, and I may improve on it the next time I make one, but I expect this one to last me two or three years, and it is good enough.
I’ve decided to make it A6 sized, so that it doesn’t require a lot of space on my busy desktop, and it could be carried inside a portable desktop, if I ever decide to finish the one for which I’ve made a mockup years ago :)
I’ve also added a few pages for the addresses of my correspondents (and an index of the letters I’ve exchanged with them), and a few empty pages for other notes.
Then I used my a6_book.py script to rearrange the A6 pages into signatures and impress them on A4; to reduce later effort I added an option to order the pages in such a way that if I then cut four A4 sheets in half at a time (the limit of my rotary cutter) the signatures are ready to be folded. It’s not the default because it requires that the number of pages is a multiple of 32 rather than just 16 (and they are padded up with empty pages if they aren’t).
If you’re also interested in making one, here are the files:
After printing (an older version where some of the pages are repeated; whoops, but it only happened 4 times, and it’s not a big deal), it was time to bind this into a book.
I opted for Coptic stitch, so that the book will open completely flat and writing in it will be easier. The covers are 2 mm cardboard covered in linen-look bookbinding paper (sadly I no longer have a source for bookbinding cloth made from actual cloth).
I tried to screenprint a simple design on the cover: the first attempt was unusable (the paper was smaller than the screen, so I couldn’t keep it in the right place and moved as I was screenprinting); on the second attempt I used some masking tape to keep the paper in place, and they were a bit better, but I need more practice with the technique.
Finally, I decided that for such a Victorian thing I would use an iron-gall ink, but it’s Rohrer & Klingner Scabiosa, with a purple undertone, because life’s too short to use blue-black ink :D
And now, I’m off to write an actual letter, rather than writing online about things that are related to letter writing.
Dirk Eddelbuettel: qlcal 0.0.6 on CRAN: More updates from QuantLib
The sixth release of the still new-ish qlcal package arrived at CRAN today.
qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more.
This release brings updates to a few calendars which happened since the QuantLib 1.30 release, and also updates several of the (few) non-calendaring functions.
Changes in version 0.0.6 (2023-05-24)

- Several calendars (India, Singapore, South Africa, South Korea) updated with post-QuantLib 1.3.0 changes (Sebastian Schmidt in #6)
- Three now-unused scheduled files were removed (Dirk in #7)
- A number of non-calendaring files used were synchronised with the current QuantLib repo (Dirk in #8)
Last release, we also added a quick little demo using xts to column-bind calendars produced from each of the different US sub-calendars. This is a slightly updated version of the sketch we tooted a few days ago. The output now is
    > print(Reduce(cbind, Map(makeHol, cals)))
               LiborImpact NYSE GovernmentBond NERC FederalReserve
    2023-01-02        TRUE TRUE           TRUE TRUE           TRUE
    2023-01-16        TRUE TRUE           TRUE   NA           TRUE
    2023-02-20        TRUE TRUE           TRUE   NA           TRUE
    2023-04-07          NA TRUE             NA   NA             NA
    2023-05-29        TRUE TRUE           TRUE TRUE           TRUE
    2023-06-19        TRUE TRUE           TRUE   NA           TRUE
    2023-07-04        TRUE TRUE           TRUE TRUE           TRUE
    2023-09-04        TRUE TRUE           TRUE TRUE           TRUE
    2023-10-09        TRUE   NA           TRUE   NA           TRUE
    2023-11-10        TRUE   NA             NA   NA             NA
    2023-11-23        TRUE TRUE           TRUE TRUE           TRUE
    2023-12-25        TRUE TRUE           TRUE TRUE           TRUE
    >

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.
If you like this or other open-source work I do, you can now sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Jonathan Carter: Upgraded this host to Debian 12 (bookworm)
I upgraded the host running my blog to Debian 12 today. My website has existed in some form since 1997, it changed from pure html to a Python CGI script in the early 2000s, and when blogging became big around then, I migrated to WordPress around 2004.
This WordPress instance ran on Ubuntu up until 2010, and on Debian ever since. Upgrades are just too easy. I did end up hitting one small bug with today’s upgrade though: I run the PHP fast process manager on the Apache MPM event server, and during the upgrade php8.2-fpm wasn’t enabled somehow (contrary to what I would expect). At least a simple 'a2enconf php8.2-fpm' saved my site again after a (very rare) few minutes of downtime.
Bits from Debian: New Debian Developers and Maintainers (March and April 2023)
The following contributors got their Debian Developer accounts in the last two months:
- James Lu (jlu)
- Hugh McMaster (hmc)
- Agathe Porte (gagath)
The following contributors were added as Debian Maintainers in the last two months:
- Soren Stoutner
- Matthijs Kooijman
- Vinay Keshava
- Jarrah Gosbell
- Carlos Henrique Lima Melara
- Cordell Bloor
Congratulations!
Jonathan McDowell: RIP Brenda McDowell
My mother died earlier this month. She’d been diagnosed with cancer back in February 2022 and had been through major surgery and a couple of rounds of chemotherapy, so it wasn’t a complete surprise even if it was faster at the end than expected. That doesn’t make it easy, but I’m glad to be able to say that her immediate family were all with her at home at the end.
I was touched by the number of people who turned up, both to the wake and the subsequent funeral ceremony. Mum had done a lot throughout her life and was settled in Newry, and it was nice to see how many folk wanted to pay their respects. It was also lovely to hear from some old school friends who had fond memories of her.
There are many things I could say about her, but I don’t feel that here is the place to do so. My father and brother did excellent jobs at eulogies at the funeral. However, while I blog less about life things than I did in the past, I did not want it to go unmarked here. She was my Mum, I loved her, and I am sad she is gone.
Scarlett Gately Moore: KDE Gear 23.04.1 Snaps Released! Snapcraft updates and more.
I have completed the 23.04.1 KDE Gear applications release for snaps! With this release comes several new KDE Snaps!
- Kweather
- Krecorder
- Kclock
- Alligator
- Ghostwriter
- Kasts
- Tokodon
Plus, many long-outdated or broken snaps have been updated and/or fixed!
Check them all out here:
https://snapcraft.io/search?q=KDE
I have been busy triaging and squashing bugs in regards to snaps on https://bugs.kde.org
Snapcraft:
Updated the kde-neon extension for the newest content pack.
Made a core22 qmake plugin with tests PR
https://github.com/canonical/craft-parts/pull/463
Future work:
Top of my TO-DO list is still PIM. There are many parts, making it more complex. I am working on it though. Qt6/KF6 is making its way to the top of the list as well. KDE Neon has made significant progress here, so I am in the early stages of updating our build scripts to generate our Qt6/KF6 content snap.
Thanks for stopping by!
https://gofund.me/2c7b1808 All proceeds go to improving my ability to work. Thanks for your consideration!
Jonathan Carter: Debian Reunion MiniDebConf 2022
It wouldn’t be inaccurate to say that I’ve had a lot on my plate in the last few years, and that I have a *huge* backlog of little tasks to finish. Just last week, I finally got to all my keysigning from DebConf22. This week, I’m at MiniDebConf Germany in Hamburg. It’s the second time I’m here! And it’s great already. Last year I drafted a blog entry, but never got around to publishing it. So, in order to mentally tick off yet another thing, here follows a somewhat imperfect (I had to delete a lot of short-hand because I didn’t know what it means anymore), but at least published post about my activities from a year ago.
This week (well, last year) I attended my first ever in-person MiniDebConf and MiniDebCamp in Hamburg, Germany. The last time I was in Germany was 7 years ago for DebConf15 (or at time of publishing, actually, last year… for this same event).
My focus for the week was to work on Debian live related stuff.
In preparation for the week I tried to fix/close as many Calamares bugs as I could, so before the event I closed:
- File calamares upstream issue #1944 ‘Calamares allows me to select a username of “root”‘ for Debian bug #976617.
- File calamares upstream issue #1945 ‘Calamares needs support for high DPI’ for Debian bug #992162.
- Comment on calamares bug #1005212 ‘Calamares installer fails at partitioning disks’ requesting further info.
- Close calamares bug #1009876 ‘There is no /tmp item in the list during the partitioning step in the debian calamares installer’ – /tmp partitions can be created, not a bug, really just a small UI issue.
- Close calamares bug #974998 ‘SegFault when clicked on “Create” in manual partitioning’, not reproducible in bullseye.
- Close calamares bug #971266 ‘Debian fails to start when /home is encrypted during installation’ – this works fine since bullseye.
- Close calamares bug #976617 ‘Calamares allows me to select a username of “root”‘ – has since been fixed upstream.
Monday to Friday we worked together on various issues, and the weekend was for talks.
On Monday morning, I had a nice discussion with Roland Clobus, who has been working on making Debian live images reproducible. He’s also been working on testing Debian Live on openqa.debian.net. We’re planning on integrating his work so that Debian 12 live images will be reproducible. The automated testing on openqa will be ongoing work; one blocker has been that snapshots.debian.org limits connections after a while, so builds there start failing fast.
On Monday afternoon, I went ahead and uploaded the latest Calamares hotfix release (Calamares 3.2.58.1-1), which fixes a UI issue on the partitioning screen where it could get stuck. At 15:00 we had a stand-up meeting where we introduced ourselves and talked a bit about our plans. It was great to see how many people could benefit from each other being there. For example: someone wanting to learn packaging, another wanting to improve packaging documentation, another wanting help with packaging something written in Rust, another wanting to improve Rust packaging in general, and lots of overlap when it comes to reproducible builds! I also helped a few people with some of their packaging issues.
On Monday evening, someone in the videoteam managed to convince me to put together a loopy loop for this MiniDebConf. There really wasn’t enough time to put together something elaborate, but I put something together based on the previous loopy, with some experiments that I’ve been working on for the upcoming DC22 loopy, and we can use this loop to do a call for content for the DC22 loop.
On Tuesday morning I had some chats with urbec and Ilu; on Tuesday afternoon I talked to the MIA team about upcoming removals. I also did some admin on debian.ch payments for hosting. On Tuesday evening I worked on live image stuff (d-i downloader, download module for dmm).
On Wednesday morning I slept a bit late, then had to deal with some DPL admin/legal things. Wednesday afternoon, more chats with people.
On Thursday: talked to a bunch more people about a lot of issues, got loopy into reasonable shape, and edited and published the group photo!
On Friday: prepared my talk slides, and learned about Brave (https://github.com/bbc/brave) – it initially looked like a great compositor for DebConf video stuff (and a possible replacement for OBS), but it turned out it wasn’t really maintained upstream. In the evening we had the Cheese and Wine party, where lots of deliciousness was experienced.
On Saturday, I learned from Felix’s talk that Tensorflow is now in experimental! (And now, in 2023, I checked again and that’s still the case, although it hasn’t made its way into unstable yet; hopefully that improves over the trixie cycle.)
I know most of the people who attended quite well, but it was really nice to also see a bunch of new Debianites that I’ve only seen online before and to properly put some faces to names. We also had a bunch of enthusiastic new contributors and we did some key signing.
Craig Small: Devices with cgroup v2
Docker and other container systems by default restrict access to devices on the host. They used to do this with the cgroup v1 device controller; however, cgroup v2 removed this controller, and the kernel documentation says:
“Cgroup v2 device controller has no interface files and is implemented on top of cgroup BPF.”
– https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst
That is just awesome: nothing to see here, go look at the BPF documents if you have cgroup v2.
With cgroup v1, if you wanted to know what devices were permitted, you would just cat /sys/fs/cgroup/XX/devices.list and you were done!
The kernel documentation is not very helpful: sure, it’s something in BPF and has something to do with cgroup BPF specifically, but what does that mean?
There doesn’t seem to be an easy corresponding method to get the same information. So to see what restrictions a docker container has, we will have to:
- Find what cgroup the programs running in the container belong to
- Find what is the eBPF program ID that is attached to our container cgroup
- Dump the eBPF program to a text file
- Try to interpret the eBPF syntax
The last step is by far the most difficult.
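If you just want the commands up front, here is a rough sketch of the first three steps (the cgroup path is a placeholder and the program ID will differ on your system; each step is walked through in detail below):
# step 1: the container's cgroup lives under the cgroup v2 mount point
CG="/sys/fs/cgroup/system.slice/docker-LONGID.scope"   # replace LONGID with the full container ID
# step 2: list the eBPF programs attached to that cgroup and note the program ID
sudo bpftool cgroup list "$CG"
# step 3: dump that program to text (replace 90 with the ID from step 2)
sudo bpftool prog dump xlated id 90 > myebpf.txt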
Finding a container’s cgroup
All containers have a short ID and a long ID. When you run the docker ps command, you get the short ID. To get the long ID you can either use the --no-trunc flag or just guess from the short ID. I usually do the latter.
$ docker ps
CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS          PORTS     NAMES
a3c53d8aaec2   debian:minicom   "/bin/bash"   19 minutes ago   Up 19 minutes             inspiring_shannon

So the short ID is a3c53d8aaec2 and the long ID is a big ugly hex string starting with that. I generally just paste the relevant part in the next step and hit tab. For this container the cgroup is /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/ – notice that the last directory is “docker-” followed by the long ID (which starts with the short ID, which is why pasting the short ID and hitting tab works).
If you’re not sure of the exact path: “/sys/fs/cgroup” is the cgroup v2 mount point (which can be found with mount -t cgroup2), and the rest is the actual cgroup name. If you know a process running in the container, then the cgroup column in ps will show you.
$ ps -o pid,comm,cgroup 140064
    PID COMMAND         CGROUP
 140064 bash            0::/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope

Either way, you will have your cgroup path.
eBPF programs and cgroups
Next we will need to get the ID of the eBPF program that is attached to our recently found cgroup. To do this, we will need to use bpftool. One thing that threw me for a long time is that when the tool talks about a program or a PROG ID, it is talking about eBPF programs, not your processes! With that out of the way, let’s find the prog ID.
$ sudo bpftool cgroup list /sys/fs/cgroup/system.slice/docker-a3c53d8aaec23c256124f03d208732484714219c8b5f90dc1c3b4ab00f0b7779.scope/
ID       AttachType      AttachFlags     Name
90       cgroup_device   multi

Our cgroup is attached to the eBPF program with ID 90, and the type of program is cgroup_device.
Dumping the eBPF program
Next, we need to get the actual code that is run every time a process in the cgroup tries to access a device. The program takes some parameters and returns either 1 (access allowed) or 0 (permission denied). Don’t use the file option, as it dumps the program in binary format; the text version is hard enough to understand.
sudo bpftool prog dump xlated id 90 > myebpf.txt

Congratulations! You now have the eBPF program in a human-readable (?) format.
Interpreting the eBPF program
The eBPF format as dumped is not exactly user-friendly. It probably helps to first go and look at an example program to see what is going on. You’ll see that the program splits type (lower 16 bits) and access (upper 16 bits) out of the first field and then does comparisons on those values. The dumped eBPF does something similar:
0: (61) r2 = *(u32 *)(r1 +0)
1: (54) w2 &= 65535
2: (61) r3 = *(u32 *)(r1 +0)
3: (74) w3 >>= 16
4: (61) r4 = *(u32 *)(r1 +4)
5: (61) r5 = *(u32 *)(r1 +8)

What we find is that, once we get past the first few lines that filter the given value, the comparisons use:
- r2 is the device type, 1 is block, 2 is character.
- r3 is the device access; it’s used with r1 for comparisons after masking the relevant bits. mknod, read and write are bits 1, 2 and 4 respectively.
- r4 is the major number
- r5 is the minor number
Even for a pretty simple setup, you are going to have around 60 lines of eBPF code to look at. Luckily, you’ll often find the lines for the command options you added near the end, which makes it easier. For example:
63: (55) if r2 != 0x2 goto pc+4
64: (55) if r4 != 0x64 goto pc+3
65: (55) if r5 != 0x2a goto pc+2
66: (b4) w0 = 1
67: (95) exit

This is a container using the option --device-cgroup-rule='c 100:42 rwm'. It is checking whether r2 (device type) is 2 (char), r4 (major device number) is 0x64 or 100, and r5 (minor device number) is 0x2a or 42. If any of those are not true, move to the next section; otherwise return 1 (permit). We have all access modes permitted, so it doesn’t check for them.
The previous example had all permissions for our device with ID 100:42; what about if we only want read access, with the option --device-cgroup-rule='c 100:42 r'? The resulting eBPF is:
63: (55) if r2 != 0x2 goto pc+7
64: (bc) w1 = w3
65: (54) w1 &= 2
66: (5d) if r1 != r3 goto pc+4
67: (55) if r4 != 0x64 goto pc+3
68: (55) if r5 != 0x2a goto pc+2
69: (b4) w0 = 1
70: (95) exit

The code is almost the same, but we are also checking that w3 only has the second bit set, which is the read bit, effectively checking for X == X & 2. It’s a cautious approach: requesting no access still passes, but having multiple bits set will fail.
The device option
docker run allows you to specify files you want to grant your containers access to with the --device flag. This flag actually does two things. The first is to create the device file in the container’s /dev directory, effectively doing a mknod command. The second is to adjust the eBPF program. If the device file we specified actually did have a major number of 100 and a minor of 42, the eBPF would look exactly like the above snippets.
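As an illustration, a hypothetical invocation (the device path and image are just examples, not taken from the setup above) could look like this:
# create /dev/ttyUSB0 inside the container and extend the cgroup eBPF program to allow it
docker run --rm -it --device /dev/ttyUSB0 debian:bookworm bash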
What about privileged?
So we have used the direct cgroup options here, but what does the --privileged flag do? This lets the container have full access to all the devices (if the user running the process is allowed). Like the --device flag, it makes the device files as well, but what does the filtering look like? We still have a cgroup, but the eBPF program is greatly simplified; here it is in full:
0: (61) r2 = *(u32 *)(r1 +0)
1: (54) w2 &= 65535
2: (61) r3 = *(u32 *)(r1 +0)
3: (74) w3 >>= 16
4: (61) r4 = *(u32 *)(r1 +4)
5: (61) r5 = *(u32 *)(r1 +8)
6: (b4) w0 = 1
7: (95) exit

There are the usual setup lines and then: return 1. Everyone is a winner, for all devices and access types!
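If you want to reproduce that dump yourself, one way (image name chosen purely as an example) is to start a throwaway privileged container and then repeat the bpftool steps above against its cgroup:
# all host devices are available inside, and the device filter reduces to "return 1"
docker run --rm -it --privileged debian:bookworm bash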
Jonathan Dowland: neovim plugins and distributions
I've been watching the neovim community for a while, and what seems like a Cambrian explosion of plugins emerging from it. A few weeks back I decided to spend most of a "day of learning" on investigating some of the plugins and technologies that I'd read about: Language Server Protocol, TreeSitter, neorg (a grandiose organiser plugin), etc.
It didn't go so well. I spent most of my time fighting version incompatibilities or tracing through scant documentation or code to figure out what plugin was incompatible with which other.
There's definitely a line where crossing it is spending too much time playing with your tools instead of creating. On the other hand, there's definitely value in honing your tools and learning about new technologies. Everyone's line is probably in a different place. I've come to the conclusion that I don't have the time or inclination (or both) to approach exploring the neovim universe in this way. There exist a number of plugin "distributions" (such as LunarVim): collections of pre-configured and integrated plugins that you can try to use out-of-the-box. Next time I think I'll pick one up and give that a try (independently from my existing configuration) and see which ideas from it I might like to adopt.
shared vimrcs
Some folks upload their vim or neovim configurations in their entirety for others to see. I noticed Jess Frazelle had published hers, so I took a look. I suppose one could evaluate a bunch of plugins and configuration in isolation using a shared vimrc like this, in the same way as a distribution.
bufferline
Amongst the plugins she uses was bufferline, a plugin to re-work neovim's tab bar to behave like the tab bars from more conventional editors. I don't make use of neovim's tabs at all, so I would lose nothing by having the (presently hidden) tab bar reworked, and I thought I'd give it a go.
I had to disable an existing plugin, lightline, which I've had enabled for years but wasn't sure I was getting much value from. Apparently it also messes with the tab bar! Disabling it, at least for now, means I'll find out whether I miss it.
I am already using vim-buffergator as a means of seeing and managing open buffers: a hotkey opens a sidebar with a list of open buffers, to switch between or close. Bufferline gives me a more immediate, always-present view of open buffers, which is faintly useful, but not much. Perhaps I'd like it more if I were coming from an editor that had made it more of an expected feature. Two things I noticed about it aren't especially useful for me: when browsing around vimwiki pages, I quickly open a lot of buffers and the horizontal line fills up very quickly; even when I don't, I habitually have quite a lot of buffers open, and the horizontal line is quickly overwhelmed.
I have found myself closing open buffers with the mouse, which I didn't do before.
vert
Since I have brought up a neovim UI feature (tabs), I thought I'd briefly mention my new favourite neovim built-in command: vert.
Quite a few plugins and commands open up a new window (e.g. git-fugitive, Man, etc.) and they typically do so in a horizontal split. I'm increasingly preferring vertical splits. Prefixing any such command with vert forces the split to be vertical instead.
Bits from Debian: Proxmox Platinum Sponsor of DebConf23
We are pleased to announce that Proxmox has committed to sponsor DebConf23 as a Platinum Sponsor.
Proxmox develops powerful, yet easy-to-use open-source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are based on the great Debian platform, and we are happy that we can give back to the community by sponsoring DebConf23.
With this commitment as Platinum Sponsor, Proxmox is contributing to make possible our annual conference, and directly supporting the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.
Thank you very much Proxmox, for your support of DebConf23!
Become a sponsor too!
DebConf23 will take place from September 10th to 17th, 2023 in Kochi, India, and will be preceded by DebCamp, from September 3rd to 9th.
And DebConf23 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf23 website at https://debconf23.debconf.org/sponsors/become-a-sponsor/.
Sergio Durigan Junior: Using WireGuard to host services at home
It’s been a while since I had this idea to leverage the power of WireGuard to self-host stuff at home. Even though I pay for a proper server somewhere in the world, there are some services that I don’t consider critical to put there, or that I consider too critical to host outside my home.
It’s only NATural
With today’s ISP packages for end users, I find it very annoying how much trouble they create when you try to host anything at home. Dynamic IPs, NAT/CGNAT, port blocking and traffic shaping are only a few examples of methods or limitations that prevent users from making local services reachable in a reliable way from the outside.
WireGuard comes to help
If you already pay for a VPS or a dedicated server somewhere, why not use its existing infrastructure (and public availability) in your favour? That’s what I thought when I started this journey.
My initial idea was to use a reverse proxy to redirect external requests to the service running at my home. But how could I make sure that these requests reach my dynamic-IP-behind-a-NAT-behind-another-NAT? Well, let’s create a tunnel! WireGuard is the perfect tool for that because of many things: it’s stateless, very performant, secure, and requires very little configuration.
Setting up on the server
On the server side (i.e., VPS or dedicated server), you will create the first endpoint. Something like the following should do:
[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.1/32
ListenPort = 51821

[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIps = 10.0.0.2/32
PersistentKeepalive = 10

A few interesting points to note:
- The Peer section contains information about the home service that will be configured below.
- I’m using PersistentKeepalive because I have a dynamic IP at my home. If you have a static IP, you could get rid of PersistentKeepalive and specify an Endpoint here (don’t forget to set a ListenPort below, in the Interface section).
- Now you have an IP you can forward requests to. If we’re talking about HTTP traffic, Apache and nginx are absolutely capable of doing it. If we’re talking about other kinds of traffic, you might want to look into other utilities, like HAProxy or Traefik.
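As a side note, if you still need to generate the key pairs referenced as PRIVATE_KEY_HERE / PUBLIC_KEY_HERE above, the standard WireGuard tooling handles it; a minimal sketch, run once on each machine:
# keep the generated files private, then derive the public key from the private key
umask 077
wg genkey | tee privatekey | wg pubkey > publickey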
At your home, you will configure the peer:
[Interface]
PrivateKey = PRIVATE_KEY_HERE
Address = 10.0.0.2/32

[Peer]
PublicKey = PUBLIC_KEY_HERE
AllowedIps = 10.0.0.1/32
Endpoint = YOUR_SERVER:51821
PersistentKeepalive = 10

A few notes about security
I would be remiss if I didn’t say anything about security, especially because we’re talking about hosting services at home. So, here are a few recommendations:
- Make sure to put your services in a separate local network. Using VLANs is also a good option.
- Don’t run services on your personal (or work!) computer, even if they’ll be running inside a VM.
- Run a firewall on the WireGuard interface and make sure that you only allow traffic over the required ports.
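Finally, assuming you save each side’s configuration as /etc/wireguard/wg0.conf (my assumption for this sketch, not something prescribed above), bringing the tunnel up and checking it is straightforward:
# start the interface defined in /etc/wireguard/wg0.conf and verify handshakes and traffic counters
sudo wg-quick up wg0
sudo wg show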
Have fun!