Planet Debian

Planet Debian - https://planet.debian.org/

Dirk Eddelbuettel: ulid 0.3.1 on CRAN: New Maintainer, Some Polish

Tue, 2024-04-02 19:14

Happy to share that ulid is now (back) on CRAN. It provides universally unique identifiers that are lexicographically sortable, which improves over the better-known UUID generators.
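
For a quick taste, here is a minimal sketch of generating one from the shell (assuming the package is installed from CRAN and using the ulid() alias mentioned in the NEWS below; the identifier shown is illustrative only):

$ Rscript -e 'cat(ulid::ulid(), "\n")'
01HTJ4BMHWDX1N9V6Z8Q2KRT5C

Because the leading bits encode a millisecond timestamp, identifiers generated later sort lexicographically after earlier ones.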

ulid is a neat little package put together by Bob Rudis a few years ago. It had recently drifted off CRAN, so I offered to brush it up and re-submit it. And as tooted earlier today, it took just over an hour to finish that (after the lead-up work I had done, including prior email with CRAN in the loop and the repo transfer from Bob’s to my ulid repo, plus of course a wee bit of actual maintenance; see below for more).

The NEWS entry follows.

Changes in version 0.3.1 (2024-04-02)
  • New Maintainer

  • Deleted several repository files no longer used or needed

  • Added .editorconfig, ChangeLog and cleanup

  • Converted NEWS.md to NEWS.Rd

  • Simplified R/ directory to one source file

  • Simplified src/ removing redundant Makevars

  • Added ulid() alias

  • Updated / edited roxygen and README.md documentation

  • Removed vignette which was identical to README.md

  • Switched continuous integration to GitHub Actions

  • Placed upstream (header-only) library into src/ulid/

  • Renamed single interface file to src/wrapper

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Sven Hoexter: PKIX: pathLen Constrain on Root Certificates

Tue, 2024-04-02 15:07

I recently came across an X.509 P(rivate)KI Root Certificate which had a pathLen constraint set on the (self-signed) Root Certificate. Since that is not commonly seen, I looked around a bit to get a better understanding of how the pathLen basic constraint should be used.

The primary source is RFC 5280 section 4.2.1.9:

The pathLenConstraint field is meaningful only if the cA boolean is asserted and the key usage extension, if present, asserts the keyCertSign bit (Section 4.2.1.3). In this case, it gives the maximum number of non-self-issued intermediate certificates that may follow this certificate in a valid certification path

Since the Root is always self-issued, it doesn't count towards the limit, and since it's the last certificate (or the first, depending on how you count) in a chain, it's pretty much pointless to configure a pathLen constraint directly on a Root Certificate.
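
For illustration, this is how such a constraint shows up when inspecting a certificate with OpenSSL (a quick sketch; the file name is hypothetical and pathlen:0 is just an example value):

$ openssl x509 -in root-ca.crt -noout -text | grep -A1 'Basic Constraints'
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0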

Another relevant resource are the Baseline Requirements of the CA/Browser Forum (currently v2.0.2). Section 7.1.2.1.4 "Root CA Basic Constraints" describes it as NOT RECOMMENDED for a Root CA.

Last but not least there is the awesome x509 Limbo project, which has a section for validating pathLen constraints. Since the RFC 5280-based assumption is that self-signed certs do not count, they do not check the case of such a constraint on the Root itself, nor what implementations do about it. So the assumption right now is that they properly ignore it.

Summary: It's pointless to set the pathLen constraint on the Root Certificate, so just don't do it.

Categories: FLOSS Project Planets

Bits from Debian: Bits from the DPL

Tue, 2024-04-02 13:00

Dear Debianites

This morning I decided to just start writing Bits from DPL and send whatever I have by 18:00 local time. Here it is, barely proofread, along with all its warts and grammar mistakes! It's slightly long and doesn't contain any critical information, so if you're not in the mood, don't feel compelled to read it!

== Get ready for a new DPL! ==

Soon, the voting period will start to elect our next DPL, and my time as DPL will come to an end. Reading the questions posted to the new candidates on [debian-vote], it takes quite a bit of restraint not to answer all of them myself; I think I can see how that aspect contributed to me being reeled in to running for DPL! In total I've run five times (the first time I ran, Sam was elected!).

Good luck to both [Andreas] and [Sruthi], our current DPL candidates! I've already started working on preparing the handover, and there are multiple requests from teams that have come in recently that will have to wait for the new term, so I hope they're both ready to hit the ground running!

  • [debian-vote] Mailing list: https://lists.debian.org/debian-vote/2024/03/threads.html
  • Platform: https://www.debian.org/vote/2024/platforms/tille [Andreas]
  • Platform: https://www.debian.org/vote/2024/platforms/srud [Sruthi]

== Things that I wish could have gone better ==

  • Communication:

Recently, I saw a t-shirt that read:

Adulthood is saying, 'But after this week things will slow down a bit' over and over until you die.

I can relate! With every task, crisis or deadline that appears, I think that once this is over, I'll have some more breathing space to get back to non-urgent but important tasks. "Bits from the DPL" was something I really wanted to get right this last term, and I clearly failed spectacularly. I have two long Bits from the DPL drafts that I never finished; I tend to have prioritised problems of the day over communication. With all the hindsight I have, I'm not sure which is better to prioritise; I do rate communication and transparency very highly, and this is really the top thing that I wish I could've done better over the last four years.

On that note, thanks to people who provided me with some kind words when I've mentioned this to them before. They pointed out that there are many other ways to communicate and be in touch with the community, and they mentioned that they thought that I did a good job with that.

Since I'm still on communication, I think we can all learn to be more effective at it, since it's really so important for the project. Every time I publicly spoke about us spending more money, we got more donations. People out there really like to see how we invest funds into Debian, instead of just letting them heap up. DSA just spent a nice chunk of money on hardware, but we don't have very good visibility on it. It's one thing having it on a public line item in SPI's reporting, but it would be much more exciting if DSA could provide a write-up on all the cool hardware they're buying and what impact it will have on developers, and post it somewhere prominent like debian-devel-announce, Planet Debian or Bits from Debian (from the publicity team).

I don't want to single out DSA there, it's difficult and affects many other teams. The Salsa CI team also spent a lot of resources (time and money wise) to extend testing on AMD GPUs and other AMD hardware. It's fantastic and interesting work, and really more people within the project and in the outside world should know about it!

I'm not going to push my agendas to the next DPL, but I hope that they continue to encourage people to write about their work, and hopefully at some point we'll build enough excitement in doing so that it becomes a more normal part of our daily work.

  • Founding Debian as a standalone entity:

This was my number one goal for the project this last term, which was a carried over item from my previous terms.

I'm tempted to write everything out here, including the problem statement and our current predicaments, what kind of ground work needs to happen, likely constitutional changes that need to happen, and the nature of the GR that would be needed to make such a thing happen, but if I start with that, I might not finish this mail.

In short, I 100% believe that this is still a very high ranking issue for Debian, and perhaps after my term I'd be in a better position to spend more time on this (hmm, is this an instance of "The grass is always better on the other side", or "Next week will go better until I die?"). Anyway, I'm willing to work with any future DPL on this, and perhaps it can in itself be a delegation tasked to properly explore all the options, and write up a report for the project that can lead to a GR.

Overall, I'd rather have us take another few years and do this properly, rather than rush into something that is again difficult to change afterwards. So while I very much wish this could've been achieved in the last term, I can't say that I have any regrets here either.

== My terms in a nutshell ==

  • COVID-19 and Debian 11 era:

My first term in 2020 started just as the COVID-19 pandemic became known to spread globally. It was a tough year for everyone, and Debian wasn't immune to its effects either. Many of our contributors got sick, some lost loved ones (my father passed away in March 2020 just after I became DPL), some lost their jobs (or other earners in their household did), and the effects of social distancing took a mental and even physical health toll on many. In Debian, we tend to do really well when we get together in person to solve problems, and when the in-person DebConf20 was cancelled, we understood that it was necessary, but it was still more bad news in a year that had too much of it already.

I can't remember if there was ever any kind of formal choice or discussion about this at any time, but the DebConf video team just kind of organically and spontaneously became the orga team for an online DebConf, and that led to our first ever completely online DebConf. This was great on so many levels. We got to see each other's faces again, even though it was on screen. We had some teams talk to each other face to face for the first time in years, even though it was just on a Jitsi call. It brought a lasting cultural change to Debian; some teams still have video meetings now where they didn't before, and I think it's a good supplement to our other methods of communication.

We also had a few online Mini-DebConfs that were fun, but DebConf21 was also online, and by then we had all developed online-conference fatigue, and while it was another good online event overall, it did start to feel a bit like a zombieconf. After that, we had some really nice events from the Brazilians, but no big global online community events again. In my opinion online MiniDebConfs can be a great way to develop our community and we should put some further energy into this, but hey! This isn't a platform, so let me back out of talking about the future as I see it...

Despite all the adversity that we faced together, the Debian 11 release ended up being quite good. It happened about a month or so later than what we ideally would've liked, but it was a solid release nonetheless. It turns out that for quite a few people, staying inside for a few months to focus on Debian bugs was quite productive, and Debian 11 ended up being a very polished release.

During this time period we also had to deal with a previous Debian Developer who was expelled for his poor behaviour in Debian, and who continued to harass members of the Debian project and other free software communities after his expulsion. This ended up being quite a lot of work, since we had to take legal action to protect our community and eventually also get the police involved. I'm not going to give him the satisfaction of spending too much time talking about him, but you can read our official statement regarding Daniel Pocock here:

https://www.debian.org/News/2021/20211117

In late 2021 and early 2022 we also discussed our general resolution process, and had two consecutive votes to address some issues that had affected past votes:

  • https://www.debian.org/vote/2021/vote_003
  • https://www.debian.org/vote/2022/vote_001

In my first term I addressed our delegations that were a bit behind; by the end of my last term, all delegation requests are up to date. There's still some work to do, but I'm feeling good that I get to hand this over to the next DPL in a very decent state. Delegation updates can be very deceiving: sometimes a delegation is completely re-written and it was just 1 or 2 hours of work. Other times, a delegation update can contain a single changed line, or a change in one team member, that was the result of days' worth of discussion and hashing out differences.

I also received quite a few requests either to host a service, or to pay a third party directly for hosting. This was quite an admin nightmare: it either meant we had to manually do monthly reimbursements to someone, or have our TOs create accounts/agreements at the multiple providers that people use. So, after talking to a few people about this, we founded the DebianNet team (we could've admittedly chosen a better name, but that can happen later on) to provide hosting at two different hosting providers that we have agreements with, so that people who host things under debian.net have an easy way to do so, and at the same time Debian also has more control if a site maintainer goes MIA.

More info:

https://wiki.debian.org/Teams/DebianNet

You might notice some OpenStack mentioned there; we had some intention to set up a Debian cloud for hosting these things, which could also be used for other additional Debiany things like archive rebuilds, but these plans have so far fallen through. We still consider it a good idea and hopefully it will work out some other time (if you're a large company who can sponsor a few racks and servers, please get in touch!)

  • DebConf22 and Debian 12 era:

DebConf22 was the first time we returned to an in-person DebConf. It was a bit smaller than our usual DebConf - understandably so, considering that there were still COVID risks and people who were at high risk or who had family with high risk factors did the sensible thing and stayed home.

After watching many MiniDebConfs online, I also attended my first ever MiniDebConf in Hamburg. It still feels odd typing that; it feels like I should've been at one before, but my location makes attending them difficult (on a side note, a few of us are working on bootstrapping a South African Debian community and hopefully we can pull off a MiniDebConf in South Africa later this year).

While I was at the MiniDebConf, I gave a talk where I covered the evolution of firmware, from the simple EPROMs that you'd find in old printers to the complicated firmware in modern GPUs that basically contains complete operating systems, complete with drivers for the device they're running on. I also showed my shiny new laptop, and explained that it's impossible to install Debian on that laptop without non-free firmware (you'd get a black display on d-i or Debian live). You couldn't even use an accessibility mode with audio, since even that depends on non-free firmware these days.

Steve, from the image building team, has said for a while that we need to do a GR to vote for this, and after more discussion at DebConf, I kept nudging him to propose the GR, and we ended up voting in favour of it. I do believe that someone out there should be campaigning for more free firmware (unfortunately in Debian we just don't have the resources for this), but, I'm glad that we have the firmware included. In the end, the choice comes down to whether we still want Debian to be installable on mainstream bare-metal hardware.

At this point, I'd like to give a special thanks to the ftpmasters, image building team and installer team who worked really hard to get the changes done that were needed in order to make this happen for Debian 12, and for being really proactive about the remaining niggles that were solved by the time Debian 12.1 was released.

The included firmware contributed to Debian 12 being a huge success, but it wasn't the only factor. I had a list of personal peeves, and as the hard freeze hit, I lost hope that these would be fixed and made peace with the fact that Debian 12 would release with those bugs. I'm glad that lots of people proved me wrong and also proved that it's never too late to fix bugs; everything on my list got eliminated by the time the final freeze hit, which was great! We usually aim to have a release ready about 2 years after the previous release; sometimes there are complications during a freeze and it can take a bit longer. But due to the excellent co-ordination of the release team and heavy lifting from many DDs, the Debian 12 release happened 21 months and 3 weeks after the Debian 11 release. I hope the work from the release team continues to pay off so that we can achieve their goals of having shorter and less painful freezes in the future!

Even though many things were going well, the ongoing usr-merge effort highlighted some social problems within our processes. I started typing out the whole history of usrmerge here, but it's going to be too long for the purpose of this mail. Important questions that did come out of this are: should core Debian packages be team-maintained? And how far should the CTTE really be able to override a maintainer? We had lots of discussion about this at DebConf22, but didn't make much concrete progress. I think that at some point we'll probably have a GR about package maintenance. Also, thank you to Guillem, who very patiently explained a few things to me (after probably having had to do so many times for others already), and to Helmut, who did the same during the MiniDebConf in Hamburg. I think all the technical and social issues here are fixable; it will just take some time and patience, and I have lots of confidence in everyone involved.

UsrMerge wiki page: https://wiki.debian.org/UsrMerge

  • DebConf 23 and Debian 13 era:

DebConf23 took place in Kochi, India. At the end of my Bits from the DPL talk there, someone asked me what the most difficult thing I had to do was during my terms as DPL. I answered that nothing particular stood out, and even the most difficult tasks ended up being rewarding to work on. Little did I know that my most difficult period of being DPL was just about to follow. During the day trip, one of our contributors, Abraham Raji, passed away in a tragic accident. There's really not anything anyone could've done to predict or stop it, but it was devastating to many of us, especially the people closest to him. Quite a number of DebConf attendees went to his funeral, wearing the DebConf t-shirts he designed as a tribute. It still haunts me that I saw his mother scream "He was my everything! He was my everything!". This was by a large margin the hardest day I've ever had in Debian, and I really wasn't OK for even a few weeks after that, and I think the hurt will be with many of us for some time to come. So, a plea again to everyone: please take care of yourself! There are probably more people that love you than you realise.

A special thanks to the DebConf23 team, who did a really good job despite all the uphills they faced (and there were many!).

As DPL, I think that planning for a DebConf is near impossible; all you can do is show up and just jump into things. I planned to work with Enrico to finish up something that will hopefully save future DPLs some time, and that is a web-based DD certificate creator instead of having the DPL do so manually using LaTeX. It already mostly works: you can see the work so far by visiting https://nm.debian.org/person/ACCOUNTNAME/certificate/ and replacing ACCOUNTNAME with your Debian account name, and if you're a DD, you should see your certificate. It still needs a few minor changes and a DPL signature, but at this point I think that will be finished up when the new DPL starts. Thanks to Enrico for working on this!

Since my first term, I've been trying to find ways to improve all our accounting/finance issues. Tracking what we spend on things and getting an annual overview is hard, especially across 3 trusted organisations. The reimbursement process can also be really tedious, especially when you have to provide files in a certain order and combine them into a PDF. So, at DebConf22 we had a meeting with the treasurer team and Stefano Rivera, who said that it might be possible for him to work on a new system as part of his Freexian work. It worked out, and Freexian has funded the development of the system since then, and after DebConf23 we handled the reimbursements for the conference via the new reimbursements site:

https://reimbursements.debian.net

It's still early days, but over time it should be linked to all our TOs, and we'll use the same category codes across the board. So, overall, our reimbursement process becomes a lot simpler, and we'll also be able to get information like how much money we've spent on any category in any period. It will also help us to track how much money we have available and how much we spend on recurring costs; right now that needs manual polling from our TOs. So I'm really glad that this big, long-standing problem in the project is being fixed.

For Debian 13, we're waving goodbye to the kFreeBSD and mipsel ports. But we're also gaining riscv64 and loongarch64 as release architectures! I have 3 different RISC-V based machines on my desk here that I haven't had much time to work with yet; you can expect some blog posts about them soon after my DPL term ends!

As Debian is a Unix-like system, we're affected by the [Year 2038 problem], where systems that use 32-bit time in seconds since 1970 run out of representable time and will wrap around or exhibit other undefined behaviour. A detailed [wiki page] explains how this works in Debian, and currently we're going through a rather large transition to make this possible.

[Year 2038 problem] https://simple.wikipedia.org/wiki/Year_2038_problem
[wiki page] https://wiki.debian.org/ReleaseGoals/64bit-time
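
As a quick illustration of where the limit sits, GNU date can print the last second representable in a signed 32-bit time_t (a sketch, run on a system that already has 64-bit time_t):

$ date -u -d @2147483647
Tue Jan 19 03:14:07 UTC 2038

One second later, the counter value 2147483648 no longer fits in 32 signed bits, which is exactly the rollover this transition is meant to avoid.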

I believe this is the right time for Debian to be addressing this; we're still a bit more than a year away from the Debian 13 release, and this provides enough time to test the implementation before 2038 rolls along.

Of course, big complicated transitions with dependency loops that cause chaos for everyone would still be too easy, so this past weekend (which is a holiday period in much of the West due to Easter) was filled with dealing with an upstream bug in xz-utils, where a backdoor was placed in this key piece of software. An [Ars Technica] article covers it quite well, so I won't go into all the details here. I mention it because I want to give yet another special thanks to everyone involved in dealing with this on the Debian side. Everyone, from the ftpmasters to the security team and others involved, was super calm and professional and made quick, high-quality decisions. This also led to the archive being frozen on Saturday; this is the first time I've seen that happen since I've been a DD, but I'm sure next week will go better!

[Ars Technica] https://arstechnica.com/security/2024/04/what-we-know-about-the-xz-utils-backdoor-that-almost-infected-the-world/

== Looking forward ==

It's really been an honour for me to serve as DPL. It might well be my biggest achievement in my life. Previous DPLs range from prominent software engineers to game developers, or people who have done things like complete an Ironman, run other huge open source projects and sit on big consortiums. Ian Jackson even authored dpkg and is now working on the very interesting [tag2upload service]!

[tag2upload service] https://peertube.debian.social/w/pav68XBWdurWzfTYvDgWRM

I'm a relative nobody, just someone who grew up as a poor kid in South Africa who really cares about Debian a lot. And, above all, I'm really thankful that I didn't do anything major to screw up Debian for good.

Not unlike learning how to use Debian, and also becoming a Debian Developer, I've learned a lot from this and it's been a really valuable growth experience for me.

I know I can't possibly give all the thanks to everyone who deserves it, so here's a big, big thanks to everyone who has worked so hard and put in many, many hours to make Debian better. I consider you all heroes!

-Jonathan

Categories: FLOSS Project Planets

Ben Hutchings: FOSS activity in March 2024

Mon, 2024-04-01 10:51
Categories: FLOSS Project Planets

Colin Watson: Free software activity in March 2024

Mon, 2024-04-01 09:10

My Debian contributions this month were all sponsored by Freexian.

Categories: FLOSS Project Planets

Simon Josefsson: Towards reproducible minimal source code tarballs? On *-src.tar.gz

Mon, 2024-04-01 06:28

While the work to analyze the xz backdoor is in progress, several ideas have been suggested to improve the entire software supply chain ecosystem. Some of those ideas are good, some are at best irrelevant and harmless, and some suggestions are plain bad. I’d like to attempt to formalize one idea (it remains to be seen which category it belongs in), which has been discussed before, but the context in which the idea can be appreciated has not been as clear as it is today.

  1. Reproducible source tarballs. The idea is that published source tarballs should be possible to reproduce independently somehow, and that this should be continuously tested and verified — preferably as part of the upstream project's continuous integration system (e.g., GitHub Actions or a GitLab pipeline). While nominally this looks easy to achieve, there are some complex matters here, for example: what timestamps to use for files in the tarball? I’ve brought up this aspect before.
  2. Minimal source tarballs without generated vendor files. Most GNU Autoconf/Automake-based tarballs ship pre-generated files which are important for bootstrapping on exotic systems that do not have the required dependencies. For the bootstrapping story to succeed, this approach is important to support. However, it has become clear that this practice raises significant costs and risks. Most modern GNU/Linux distributions have all the required dependencies and actually prefer to re-build everything from source code. These pre-generated extra files introduce uncertainty to that process.

My strawman proposal to improve things is to define new tarball format *-src.tar.gz with at least the following properties:

  1. The tarball should allow users to build the project, which is the entire purpose of all this. This means that at least all source code for the project has to be included.
  2. The tarballs should be signed, for example with PGP or minisign.
  3. The tarball should be possible to reproduce bit-by-bit by a third party using upstream’s version controlled sources and a pointer to which revision was used (e.g., git tag or git commit).
  4. The tarball should not require an Internet connection to download things.
    • Corollary: every external dependency either has to be explicitly documented as such (e.g., gcc and GnuTLS), or included in the tarball.
    • Observation: This means including all *.po gettext translations which are normally downloaded when building from version controlled sources.
  5. The tarball should contain everything required to build the project from source using as much externally released versioned tooling as possible. This is the “minimal” property lacking today.
    • Corollary: This means including a vendored copy of OpenSSL or libz is not acceptable: link to them as external projects.
    • Open question: How about non-released external tooling such as gnulib or Autoconf Archive macros? This is a bit more delicate: most distributions just package one current version of gnulib or the Autoconf Archive, not previous versions. While this could change, and distributions could package the gnulib git repository (up to some current version) and the Autoconf Archive git repository — and packages were set up to extract the version they need (gnulib’s ./bootstrap already supports this via the --gnulib-refdir parameter) — this is not normally in place.
    • Suggested Corollary: The tarball should contain content from git submodules such as gnulib, and the necessary Autoconf Archive M4 macros required by the project.
  6. Similar to how the GNU project specifies the ./configure interface, we need a documented interface for how to bootstrap the project. I suggest using the already well-established idiom of running ./bootstrap to set up the package so it can later be built via ./configure. Of course, some projects do not use the autotools ./configure interface and will not follow this aspect either, but just as most build systems that compete with autotools have instructions on how to build the project, they should document similar interfaces for bootstrapping the source tarball to allow building.

If tarballs that achieve the above goals were available from popular upstream projects, distributions could more easily use them instead of current tarballs that include pre-generated content. The advantage would be that the build process is not tainted by “unnecessary” files. We need to develop tools for maintainers to create these tarballs, similar to the make dist that generates today’s foo-1.2.3.tar.gz files.
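
As a rough sketch of what such a tool could do (assuming GNU tar; the project name and tag are hypothetical), the usual reproducibility knobs are a normalized file order, neutral ownership, clamped timestamps and a non-timestamped gzip stream:

$ git clone --branch v1.2.3 https://example.org/foo.git foo-1.2.3
$ export SOURCE_DATE_EPOCH=$(git -C foo-1.2.3 log -1 --format=%ct)
$ tar --sort=name --owner=0 --group=0 --numeric-owner \
      --mtime="@${SOURCE_DATE_EPOCH}" --exclude-vcs \
      -cf - foo-1.2.3 | gzip -n > foo-1.2.3-src.tar.gz

Anyone checking out the same tag with the same tool versions should then arrive at a bit-identical foo-1.2.3-src.tar.gz.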

I think one common argument against this approach will be: why bother with all that, and not just use git-archive outputs? Or avoid the entire tarball approach and move directly towards version-controlled checkouts, referring to upstream releases as a git URL and commit tag or id. My counter-argument is that this optimizes for packagers’ benefit at the cost of upstream maintainers: most upstream maintainers do not want to store gettext *.po translations in their source code repository. A compromise between the needs of maintainers and packagers is useful, so this *-src.tar.gz tarball approach is the indirection we need to solve that.

What do you think?

Categories: FLOSS Project Planets

Arturo Borrero González: Kubecon and CloudNativeCon 2024 Europe summary

Mon, 2024-04-01 05:00

This blog post shares my thoughts on attending Kubecon and CloudNativeCon 2024 Europe in Paris. It was my third time at this conference, and it felt bigger than last year’s in Amsterdam. Apparently it had an impact on public transport. I missed part of the opening keynote because of the extremely busy rush hour tram in Paris.

On Artificial Intelligence, Machine Learning and GPUs

Talks about AI, ML, and GPUs were everywhere this year. While it wasn’t my main interest, I did learn about GPU resource sharing and power usage on Kubernetes. There were also ideas about offering Models-as-a-Service, which could be cool for Wikimedia Toolforge in the future.


On security, policy and authentication

This was probably the main interest for me in the event, given Wikimedia Toolforge was about to migrate away from Pod Security Policy, and we were currently evaluating different alternatives.

In contrast to my previous attendances at Kubecon, where three policy agents, Kyverno, Kubewarden and OpenPolicyAgent (OPA), had a presence in the program schedule, this time only OPA had relevant sessions.

One surprising bit I got from one of the OPA sessions was that it can be used to authorize Linux PAM sessions. Could this be useful for Wikimedia Toolforge?

I attended several sessions related to authentication topics. I discovered the Keycloak software, which looks very promising. I also attended an OAuth2 session which I had a hard time following, because I clearly lack some additional knowledge about how OAuth2 works internally.

I also attended a couple of sessions that ended up being a vendor sales talk.


On container image builds, harbor registry, etc

This topic was also of interest to me because, again, it is a core part of Wikimedia Toolforge.

I attended a couple of sessions regarding container image builds, including topics like general best practices, image minimization, and buildpacks. I learned about kpack, which at first sight felt like a nice simplification of how the Toolforge build service was implemented.

I also attended a session by the Harbor project maintainers where they shared some valuable information on things happening soon or in the future, for example:

  • new harbor command line interface coming soon. Only the first iteration though.
  • harbor operator, to install and manage harbor. Looking for new maintainers, otherwise going to be archived.
  • the project is now experimenting with adding support for hosting more artifacts: Maven, NPM, PyPI. I wonder if they will consider hosting Debian .deb packages.

On networking

I attended a couple of sessions regarding networking.

One session in particular that I paid special attention to was regarding network policies. They discussed new semantics being added to the Kubernetes API.

The different layers of abstractions being added to the API, the different hook points, and override layers clearly resembled (to me at least) the network packet filtering stack of the linux kernel (netfilter), but without the 20 (plus) years of experience building the right semantics and user interfaces.

I very recently missed having semantics for limiting the number of open connections per namespace; see Phabricator T356164: [toolforge] several tools get periods of connection refused (104) when connecting to wikis. This functionality should be available in the lower-level tools, I mean Netfilter. I may submit a proposal upstream at some point, so they consider adding this to the Kubernetes API.

Final notes

In general, I believe I learned many things, and perhaps even more importantly I re-learned some stuff I had forgotten because of a lack of daily exposure. I’m really happy that the cloud native way of thinking was reinforced in me, which I still need because most of my muscle memory for approaching systems architecture and engineering is from the old pre-cloud days. That being said, I felt less engaged with the content of the conference schedule compared to last year. I don’t know if the schedule itself was less interesting, or whether I’m losing interest.

Finally, not an official track in the conference, but we met a bunch of folks from Wikimedia Deutschland. We had a really nice time talking about how wikibase.cloud uses Kubernetes, whether they could run in Wikimedia Cloud Services, and why structured data is so nice.

We in WMCS usually consider ourselves only one or two engineers short of offering the same level of services as Google cloud :-P

Categories: FLOSS Project Planets

Junichi Uekawa: Learning about xz and what is happening is fascinating.

Sun, 2024-03-31 18:02
Learning about xz and what is happening is fascinating. The scope of the potential exploit is very large. The open source software space is filled with unmaintained and unreviewed software.

Categories: FLOSS Project Planets

Russell Coker: Links March 2024

Sun, 2024-03-31 08:51

Bruce Schneier wrote an interesting blog post about his workshop on reimagining democracy and the unusual way he structured it [1]. It would be fun to have a security conference run like that!

Matthias wrote an informative blog post about Wayland, “Wayland really breaks things… Just for now”, which links to a blog debate about the utility of Wayland [2]. Wayland seems pretty good to me.

Cory Doctorow wrote an insightful article about the AI bubble comparing it to previous bubbles [3].

Charles Stross wrote an insightful analysis of the implications if the UK brought back military conscription [4]. Looks like the era of large armies is over.

Charles Stross wrote an informative blog post about the Worldcon in China, covering issues of vote rigging for location, government censorship vs awards, and business opportunities [5].

The Paris Review has an interesting article about speaking to the CIA’s Creative Writing Group [6]. It doesn’t explain why they have a creative writing group that has some sort of semi-official sanction.

LongNow has an insightful article about the threats to biodiversity in food crops and the threat that poses to humans [7].

Bruce Schneier and Albert Fox Cahn wrote an interesting article about the impacts of chatbots on human discourse [8]. If it makes people speak more precisely then that would be great for all Autistic people!

Categories: FLOSS Project Planets

Steinar H. Gunderson: xz backdooring

Sat, 2024-03-30 06:39

Andres Freund found that xz-utils is backdoored, but could not (despite the otherwise excellent analysis) get quite to the bottom of what the payload actually does.

What you would hope for to be posted by others: Further analysis of the payload.

What actually gets posted by others: “systemd is bad.”

Categories: FLOSS Project Planets

Raphaël Hertzog: Freexian is looking to expand its team with more Debian contributors

Fri, 2024-03-29 11:13

It’s been a while since I posted anything on my blog; the truth is that Freexian has been doing very well in the last few years, and I have a hard time allocating time to write articles or even to contribute to my usual Debian projects… the exception being debusine, since that’s part of the Freexian work (have a look at our most recent announcement!).

That being said, given Freexian’s growth and in the hope of reducing my workload, we are looking to extend our team with Debian members of more varied backgrounds and skills, so they can help us in areas like sales / marketing / project management. Have a look at our announcement on debian-jobs@lists.debian.org.

As a mission-oriented company, we are looking to work with people already involved in Debian (or people who were waiting for the right opportunity to get involved). All our collaborators can spend 20% of their paid work time on the Debian projects they care about.

Categories: FLOSS Project Planets

Ravi Dwivedi: A visit to the Taj Mahal

Fri, 2024-03-29 06:13

Note: The currency used in this post is Indian Rupees, which was around 83 INR for 1 US Dollar at that time.

My friend Badri and I visited the Taj Mahal this month. The Taj Mahal is one of the main tourist destinations in India and does not need an introduction, I guess. It is in Agra, in the state of Uttar Pradesh, 188 km from Delhi by train. So, I am writing a post documenting useful information for people who are planning to visit the Taj Mahal. Feel free to ask me questions about visiting it.

Our retiring room at the Old Delhi Railway Station.

We had booked a train from Delhi to Agra. The name of the train was Taj Express, and its scheduled departure time from Hazrat Nizamuddin station in Delhi is 07:08 hours in the morning, and its arrival time at Agra Cantt station is 09:45. So, we booked a retiring room at the Old Delhi railway station for the previous night. This retiring room was hard to find. We woke up at 05:00 in the morning and took the metro to Hazrat Nizamuddin station. We barely reached the station in time, but anyway, the train was not yet at the station; it was late.

We reached Agra at 10:30, checked into our retiring room, took rest, and went out for the Taj Mahal at 13:00 in the afternoon. The Taj Mahal's outer gate is 5 km away from Agra Cantt station. As we were going out of the railway station, we were chased by an autorickshaw driver who offered to take us to the Taj Mahal for 150 INR for both of us. I asked him to bring it down to 60 INR, and after some back and forth, he agreed to drop us off for 80 INR. But I said we wouldn't pay anything above 60 INR. He agreed with that amount but said that he would need to fill up with more passengers. When we saw that he wasn't making any effort to bring in more passengers, we walked away.

As soon as we got out of the railway station complex, another autorickshaw driver came to us and offered to drop us off at the Taj Mahal for 20 INR each if we shared with other passengers, or 100 INR if we reserved the auto for ourselves. We agreed to go with 20 INR per person, but he started the autorickshaw as soon as we hopped in. I thought that the third person in the auto was another passenger sharing the ride with us, but later we got to know he was with the driver. Upon reaching the outer gate of the Taj Mahal, I gave him 40 INR (for both of us), and he asked for 100 INR instead, saying we had reserved the auto, even though I clearly stated before taking the auto that we wanted to share it, not reserve it. I think this was a scam. We walked away, and he didn't insist further.

The Taj Mahal entrance was about 500 m from the outer gate. We went there and bought offline tickets just outside the West gate. For Indians, the ticket for going inside the Taj Mahal complex is 50 INR, and a visit to the mausoleum costs 200 INR extra.

Photo captions: Security outside the Taj Mahal complex. The red-coloured building that is the entrance from where you can see the Taj Mahal. The Taj Mahal. Shoe covers for going inside the mausoleum. The Taj Mahal from a side angle.

We came out of the Taj Mahal complex at 18:00 and stopped for some tea and snacks. I also bought a fridge magnet for 30 INR. Then we walked back towards Agra Cantt station, as we had a train for Jaipur at midnight. We were hoping to find a restaurant along the way, but we didn’t find any that we found interesting, so we just ate at the railway station. During the return trip, we noticed there was a bus stand near the station, which we didn’t know about. It turns out you can catch a bus to Taj Mahal from there. You can click here to check out the location of that bus stand on OpenStreetMap.

Expenses

These were our expenses per person:

  • Retiring room at Delhi Railway Station for 12 hours: ₹131

  • Train ticket from Delhi to Agra (Taj Express): ₹110

  • Retiring room at Agra Cantt station for 12 hours: ₹450

  • Auto-rickshaw to Taj Mahal: ₹20

  • Taj Mahal ticket (including going inside the mausoleum): ₹250

  • Food: ₹350

Important information for visitors
  • Taj Mahal is closed on Friday.

  • There are plenty of free-of-cost drinking water taps inside the Taj Mahal complex.

  • The ticket price for Indians is ₹50; for foreigners and NRIs it is ₹1100, and for people from SAARC/BIMSTEC countries it is ₹540. The mausoleum costs ₹200 extra for everyone.

  • A visit inside the mausoleum requires covering your shoes or removing them. Shoe covers cost ₹10 per person inside the complex, but are probably included free of charge with foreigner tickets. We could not find a place to keep our shoes, but some people managed to enter barefoot, indicating there must be some place to keep them.

  • Mobile phones and cameras are allowed inside the Taj Mahal, but not eatables.

  • We went there on March 10th, and the weather was pleasant. So, we recommend going around that time.

  • Regarding the timings, I found this written near the ticket counter: “Taj Mahal opens 30 minutes before sunrise and closes 30 minutes before sunset during normal operating days,” so the timings are vague. But we came out of the complex at 18:00 hours. I would interpret that to mean the Taj Mahal is open from 07:00 to 18:00, and the ticket counter closes at around 17:00. During the winter, the timings might differ.

  • The cheapest way to reach the Taj Mahal is by bus, from the bus stand near Agra Cantt station mentioned above (see the OpenStreetMap link)

Bye for now. See you in the next post :)

Categories: FLOSS Project Planets

Patryk Cisek: Sanoid on TrueNAS

Thu, 2024-03-28 21:18
syncoid to TrueNAS

In my homelab, I have 2 NAS systems: a Linux (Debian) box and a TrueNAS Core (FreeBSD-based) box. On my Linux box, I use Jim Salter’s sanoid to periodically take snapshots of my ZFS pool. I also want to have a proper backup of the whole pool, so I use syncoid to transfer those snapshots to another machine. Sanoid itself is responsible only for taking new snapshots and pruning old ones you no longer care about.
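
A minimal sketch of that division of labour (the pool and dataset names are hypothetical; the snapshot policy itself lives in /etc/sanoid/sanoid.conf):

# On the Debian box: take new snapshots and prune expired ones per policy
$ sudo sanoid --cron
# Replicate the snapshot chain to the TrueNAS box over SSH
$ sudo syncoid tank/data root@truenas:backup/data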
Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 262 released

Thu, 2024-03-28 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 262. This version includes the following changes:

[ Chris Lamb ]
* Factor out Python version checking in test_zip.py. (Re: #362)
* Also skip some zip tests under 3.10.14 as well; a potential regression may have been backported to the 3.10.x series. The underlying cause is still to be investigated. (Re: #362)

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

Joey Hess: the vulture in the coal mine

Thu, 2024-03-28 18:37

Turns out that VPS provider Vultr's terms of service were quietly changed some time ago to give them a "perpetual, irrevocable" license to use content hosted there in any way, including modifying it and commercializing it "for purposes of providing the Services to you."

This is very similar to changes that Github made to their TOS in 2017. Since then, Github has been rebranded as "The world’s leading AI-powered developer platform". The language in their TOS now clearly lets them use content stored in Github for training AI. (Probably this is their second line of defense if the current attempt to legitimise copyright laundering via generative AI fails.)

Vultr is currently in damage-control mode, accusing their concerned customers of spreading "conspiracy theories" (-- founder David Aninowsky) and updating the TOS to remove some of the problem language. It still allows them to "make derivative works", though, so it could still allow their AI division to scrape VPS images for training data.

Vultr claims this was the legalese version of technical debt, that it only ever applied to posts in a forum (not supported by the actual TOS language), and basically that they and their lawyers are incompetent but not malicious.

Maybe they are indeed incompetent. But even if I give them the benefit of the doubt, I expect that many other VPS providers, especially ones targeting non-corporate customers, are watching this closely. If Vultr is not significantly harmed by customers jumping ship, if the latest TOS change is accepted as good enough, then other VPS providers will know that they can try this TOS trick too. And if Vultr's AI division does well, others will wonder to what extent it is due to having all this juicy training data.

For small self-hosters, this seems like a good time to make sure you're using a VPS provider you can actually trust to not be eyeing your disk image and salivating at the thought of stripmining it for decades of emails. Probably also worth thinking about moving to bare metal hardware, perhaps hosted at home.

I wonder if this will finally make it worthwhile to mess around with VPS TPMs?

Categories: FLOSS Project Planets

Scarlett Gately Moore: Kubuntu, KDE Report. In Loving Memory of my Son.

Thu, 2024-03-28 13:54

Personal:

As many of you know, I lost my beloved son on March 9th. This has hit me really hard, but I am staying strong and holding on to all the wonderful memories I have. He grew up to be an amazing man, a devoted Christian and a wonderful father. He was loved by everyone who knew him and will be truly missed by us all. I have had folks ask me how they can help. He left behind his 7-year-old son Mason. Mason was Billy’s world, and I would like to make sure Mason is taken care of. I have set up a gofundme for Mason, and all proceeds will go to his future care.

https://gofund.me/25dbff0c

Work report

Kubuntu:

Bug bashing! I am triaging allthebugs for Plasma which can be seen here:

https://bugs.launchpad.net/plasma-5.27/+bug/2053125

I am happy to report many of the remaining bugs have been fixed in the latest bug fix release 5.27.11.

I prepared https://kde.org/announcements/plasma/5/5.27.11/ and Rik uploaded it to the archive, thank you. Unfortunately, this and several other key fixes are stuck in transition due to the time_t64 transition, which you can read about here: https://wiki.debian.org/ReleaseGoals/64bit-time . It is the biggest transition in Debian/Ubuntu history, and it couldn’t come at a worse time. We are aware our ISO installer is currently broken; calamares is one of the things stuck in this transition. There is a workaround in the comments of the bug report: https://bugs.launchpad.net/ubuntu/+source/calamares/+bug/2054795

Fixed an issue with plasma-welcome.

Found the fix for emojis and Aaron has kindly moved this forward with the fontconfig maintainer. Thanks!

I have received a laptop (https://kfocus.org/spec/spec-ir14.html) and it is truly a great machine; it is now my daily driver. A big thank you to the Kfocus team! I can’t wait to show it off at https://linuxfestnorthwest.org/.

KDE Snaps:

You will see the activity in this area ramp back up as the KDEneon Core project is finally a go! I will participate in the project with part-time status and get everyone on the Enokia team up to speed with my snap knowledge, help prepare the qt6/kf6 transition, package Plasma, and most importantly focus on documentation for future contributors.

I have created the (now split) Qt 6 (with KDE patchset support) and KDE Frameworks 6 SDK and runtime snaps. I have made the kde-neon-6 extension, and the PR is in: https://github.com/canonical/snapcraft/pull/4698 . Future work on the extension will include support for multiple version tracks and core24.

I have successfully created our first qt6/kf6 snap: Ark. It will show up in the store once all the required bits have been merged and published.

Thank you for stopping by.

~Scarlett

Categories: FLOSS Project Planets

Steinar H. Gunderson: git grudge

Wed, 2024-03-27 13:56

Small teaser:

Probably won't show up in aggregators (try this link instead).

Categories: FLOSS Project Planets

Emmanuel Kasper: Adding a private / custom Certificate Authority to the firefox trust store

Tue, 2024-03-26 14:43

Today at $WORK I needed to add the private company Certificate Authority (CA) to Firefox, and I found the steps unnecessarily complex. Time to blog about that; I also made a Debian wiki article out of this post, so that future generations can update the information when Firefox 742 is released on Debian 17.

The cacert certificate authority is not included in Debian or Firefox, and is thus a good example for adding a private CA. Note that this does not mean I specifically endorse that CA.

  • Test that SSL connections to a site signed by the private CA are failing
$ gnutls-cli wiki.cacert.org:443
...
- Status: The certificate is NOT trusted. The certificate issuer is unknown.
*** PKI verification of server certificate failed...
*** Fatal error: Error in the certificate.
  • Download the private CA
$ wget http://www.cacert.org/certs/root_X0F.crt
  • Test that a connection works with the private CA
$ gnutls-cli --x509cafile root_X0F.crt wiki.cacert.org:443
...
- Status: The certificate is trusted.
- Description: (TLS1.2-X.509)-(ECDHE-SECP256R1)-(RSA-SHA256)-(AES-256-GCM)
- Session ID: 37:56:7A:89:EA:5F:13:E8:67:E4:07:94:4B:52:23:63:1E:54:31:69:5D:70:17:3C:D0:A4:80:B0:3A:E5:22:B3
- Options: safe renegotiation,
- Handshake was completed
...
  • Add the private CA to the Debian trust store located in /etc/ssl/certs/ca-certificates.crt
$ sudo cp root_X0F.crt /usr/local/share/ca-certificates/cacert-org-root-ca.crt
$ sudo update-ca-certificates --verbose
...
Adding debian:cacert-org-root-ca.pem
...
  • Verify that we can connect without passing the private CA on the command line
$ gnutls-cli wiki.cacert.org:443
...
- Status: The certificate is trusted.
  • At that point most applications are able to connect to systems with a certificate signed by the private CA (curl, the GNOME built-in browser, …). However, Firefox uses its own trust store and will still display a security error when connecting to https://wiki.cacert.org. To make Firefox trust the Debian trust store, we need to add a so-called security device: in fact, an extra library wrapping the Debian trust store. The library will wrap the Debian trust store in the PKCS#11 industry format that Firefox supports.

  • Install the PKCS#11 wrapping library and command-line tools

$ sudo apt install p11-kit p11-kit-modules
  • Verify that the private CA is accessible via PKCS#11
$ trust list | grep --context 2 'CA Cert'
pkcs11:id=%16%B5%32%1B%D4%C7%F3%E0%E6%8E%F3%BD%D2%B0%3A%EE%B2%39%18%D1;type=cert
    type: certificate
    label: CA Cert Signing Authority
    trust: anchor
    category: authority
  • Now we need to add a new security device in Firefox pointing to the PKCS#11 trust store. The PKCS#11 trust store is located in /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
$ dpkg --listfiles p11-kit-modules | grep trust
/usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so
  • In Firefox (tested with version 115 ESR), go to Settings -> Privacy & Security -> Security -> Security Devices.
    Then click “Load”; in the popup window use “My local trust” as the module name and /usr/lib/x86_64-linux-gnu/pkcs11/p11-kit-trust.so as the module filename. After adding the module, you should see it in the list of Security Devices, with /etc/ssl/certs/ca-certificates.crt as its description.

  • Now restart Firefox and you should be able to browse https://wiki.cacert.org without security errors

Categories: FLOSS Project Planets

Jonathan Dowland: a bug a day

Mon, 2024-03-25 12:58

I recently became a maintainer of/committer to IkiWiki, the software that powers my site. I also took over maintenance of the Debian package. Last week I cut a new upstream point release, 3.20200202.4, and a corresponding Debian package upload, consisting only of a handful of low-hanging-fruit patches from other people, largely to exercise both processes.

I've been discussing IkiWiki's maintenance situation with some other users for a couple of years now. I've also weighed up the pros and cons of moving to a different static-site-generator (a term that describes what IkiWiki is, but was actually coined more recently). It turns out IkiWiki is exceptionally flexible and powerful: I estimate the cost of moving to something modern(er) and fashionable such as Jekyll, Hugo or Hakyll as unreasonably high, in part because they are surprisingly rigid and inflexible in some key places.

Like most mature software, IkiWiki has a bug backlog. Over the past couple of weeks, as a sort-of "palate cleanser" around work pieces, I've tried to triage one IkiWiki bug per day: either upstream or in the Debian Bug Tracker. This is a really lightweight task: it can be as simple as "find a bug reported in Debian, copy it upstream, tag it upstream, mark it forwarded", perhaps taking 5-10 minutes.
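
For instance, marking a Debian bug as forwarded can be done with bts from devscripts (a sketch; the bug number and upstream URL are made up):

$ bts forwarded 123456 https://ikiwiki.info/bugs/example_bug/
$ bts tags 123456 + upstream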

As I go, I'll often stumble across something that has already been fixed but not recorded as such.

Despite this minimal level of work, I'm quite satisfied with the cumulative progress. It's notable to me how much my perspective has shifted by becoming a maintainer: I'm considering everything through a different lens to that of being just one user.

Eventually I will put some time aside to scratch some of my own itches (html5 by default; support dark mode; duckduckgo plugin; use the details tag...) but for now this minimal exercise is of broader use.

Categories: FLOSS Project Planets
