Planet Debian

Planet Debian - https://planet.debian.org/

C.J. Adams-Collier: IPv6 with CenturyLink Fiber

Sat, 2023-02-04 19:49

In case you want to know how to configure IPv6 using CenturyLink’s 6rd tunneling service, here is how I do it.

There are four files to update with my script:

There are a couple of environment variables in /etc/default/centurylink-6rd that you will want to set. Mine looks like this:

LAN_IFACE="ens18,ens21,ens22"
HEADER_FILE=/etc/radvd.conf.header

The script will configure radvd to advertise v6 routes to a /64 of clients on each of the interfaces delimited by a comma in LAN_IFACE. Current interface limit is 9. I can patch something to go up to 0xff, but I don’t want to at this time.

If you have some static configuration that you want to preserve, place it into the file pointed to by ${HEADER_FILE} and it will be prepended to the generated /etc/radvd.conf file. The up script will remove the file and re-create it every time your ppp link comes up, so keep that in mind and don’t manually modify the file and expect it to persist. You’ve been warned :-)

So here’s the script. It’s also linked above if you want to curl it.

#!/bin/bash
#
# Copyright 2023 Collier Technologies LLC
# https://wp.c9h.org/cj/?p=1844
#
#
# These variables are for the use of the scripts run by run-parts
# PPP_IFACE="$1"
# PPP_TTY="$2"
# PPP_SPEED="$3"
# PPP_LOCAL="$4"
# PPP_REMOTE="$5"
# PPP_IPPARAM="$6"

${PPP_IFACE:="ppp0"}

if [[ -z ${PPP_LOCAL} ]]; then
    PPP_LOCAL=$(curl -s https://icanhazip.com/)
fi

6rd_prefix="2602::/24"
printf "%x%x:%x%x\n" $(echo $PPP_LOCAL | tr . ' ')

# This configuration option came from CenturyLink:
# https://www.centurylink.com/home/help/internet/modems-and-routers/advanced-setup/enable-ipv6.html
border_router=205.171.2.64

TUNIFACE=6rdif

ip tunnel del ${TUNIFACE} 2>/dev/null
ip -l 5 -6 addr flush scope global dev ${IFACE}

ip tunnel add ${TUNIFACE} mode sit local ${PPP_LOCAL} remote ${border_router}
ip tunnel 6rd dev ${TUNIFACE} 6rd-prefix "${ipv6_network}0/64" 6rd-relay_prefix ${border_router}/32
ip -6 route add default dev ${TUNIFACE}

rm /etc/radvd.conf

i=0
DEFAULT_FILE="/etc/default/centurylink-6rd"
if [[ -f ${DEFAULT_FILE} ]]; then
    source ${DEFAULT_FILE}
    if [[ -f ${HEADER_FILE} ]]; then
        cp ${HEADER_FILE} /etc/radvd.conf
    fi
else
    LAN_IFACE="setThis"
fi

for IFACE in $( echo ${LAN_IFACE} | tr , ' ' ) ; do
    ipv6_network=$(printf "2602:%02x:%02x%02x:%02x0${i}::" $(echo ${PPP_LOCAL} | tr . ' '))
    ip addr add ${ipv6_network}1/64 dev ${TUNIFACE}
    ip link set up dev ${TUNIFACE}
    ip -6 route replace "${ipv6_network}/64" dev ${IFACE} metric 1
    echo "
interface ${IFACE} {
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    AdvLinkMTU 1280;
    prefix ${ipv6_network}/64 {
        AdvOnLink on;
        AdvAutonomous on;
        AdvRouterAddr on;
        AdvValidLifetime 86400;
        AdvPreferredLifetime 86400;
    };
};
" >> /etc/radvd.conf
    let "i=i+1"
done

sudo systemctl restart radvd
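
If you want a quick sketch of wiring this up: the ip-up.d path below is an assumption on my part based on the run-parts comment in the script header, so adjust it for your ppp setup.

# hypothetical install location, inferred from the run-parts comment above
sudo install -m 0755 centurylink-6rd /etc/ppp/ip-up.d/centurylink-6rd

# defaults file read by the script, using the example values from above
sudo tee /etc/default/centurylink-6rd <<'EOF'
LAN_IFACE="ens18,ens21,ens22"
HEADER_FILE=/etc/radvd.conf.header
EOF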

Jonathan Dowland: FreedomBox

Sat, 2023-02-04 13:44
personal servers

Moxie Marlinspike, former CEO of Signal, wrote a very interesting blog post about "web3", the crypto-scam1. It's worth a read if you are interested in that stuff. This blog post, however, is not about crypto-scams; but I wanted to quote from the beginning of the article:

People don’t want to run their own servers, and never will. The premise for web1 was that everyone on the internet would be both a publisher and consumer of content as well as a publisher and consumer of infrastructure.

We’d all have our own web server with our own web site, our own mail server for our own email, our own finger server for our own status messages, our own chargen server for our own character generation. However – and I don’t think this can be emphasized enough – that is not what people want. People do not want to run their own servers.

What's interesting to me about this is I feel that he's right: the vast, vast majority of people almost certainly do not want to run their own servers. Yet, I decided to. I started renting a Linux virtual server2 close to 20 years ago3, but more recently, decided to build and run a home NAS, which was a critical decision for getting my personal data under control.

FreedomBox and Debian

I am almost entirely dormant within the Debian project these days, and that's unlikely to change in the near future, at least until I wrap up some other commitments. I do sometimes mull over what I would do within Debian, if/when I return to the fold. And one thing I could focus on, since I am running my own NAS, would be software support for that sort of thing.

FreedomBox is a project that bills itself as a private server for non-experts: in other words, it's almost exactly the thing that Marlinspike states people don't want. Nonetheless, it is an interesting project. And, it's a Debian Pure Blend: which is to say (quoting the previous link) a subset of Debian that is tailored to be used out-of-the-box in a particular situation or by a particular target group. So FreedomBox is a candidate project for me to get involved with, especially if (or, more sensibly, assuming that) I end up using some of it myself.

But, that's not the only possibility, especially after a really, really good conversation I had earlier today with old friends Neil McGovern and Chris Boot…

  1. crypto-scam is my characterisation, not Marlinspike's.
  2. hosting, amongst other things, the site you are reading
  3. The Linux virtual servers replaced an ancient beige Pentium that was running as an Internet server from my parent's house in the 3-4 years before that.

Jonathan Dowland: The Horror Show!

Sat, 2023-02-04 13:22

I was passing through London on Friday and I had time to get to The Horror Show! exhibition at Somerset House, over by Victoria Embankment. I learned about the exhibition from Gazelle Twin’s website: she wrote music for the final part of the exhibit, with Maxine Peake delivering a monologue over the top.

I loved it. It was almost like David Keenan’s book England’s Hidden Reverse had been made into an exhibition. It’s divided into three themes: Monster, Ghost and Witch, although the themes are loose with lots of cross over and threads running throughout.

Thatcher (right)

Derek Jarman's Blue

The show is littered with artefacts from culturally significant works from a recently-bygone age. There’s a strong theme of hauntology. The artefacts that stuck out to me include some of Leigh Bowery’s costumes; the Spitting Image doll of Thatcher; the cover of a Radio Times featuring the cult BBC drama Threads; Nigel Kneale’s “The Stone Tape” VHS alongside more recent artefacts from Inside Number 9 and an M. R. James adaptation by Mark Gatiss (a clear thread of inspiration there); various bits relating to Derek Jarman including the complete “Blue” screening in a separate room; Mica Levi’s eerie score to “Under the Skin” playing in the “Witch” section, and various artefacts and references to the underground music group Coil. Too much to mention!

Having said that, the things which left the most lasting impression are some of the stand-alone works of art: the charity box boy model staring fractured and refracted through a glass door (above); the glossy drip of blood running down a wall; the performance piece on a Betamax tape; the self portrait of a drowned man; the final piece, "The Neon Hieroglyph".

Jonathan Jones at the Guardian liked it.

The show runs until the 19th February and is worth a visit if you can.


Tim Retout: AlmaLinux and SBOMs

Sat, 2023-02-04 11:37

At CentOS Connect yesterday, Jack Aboutboul and Javier Hernandez presented a talk about AlmaLinux and SBOMs [video], where they are exploring a novel supply-chain security effort in the RHEL ecosystem.

Now, I have unfortunately ignored the Red Hat ecosystem for a long time, so if you are in a similar position to me: CentOS used to produce debranded rebuilds of RHEL; but Red Hat changed the project round so that CentOS Stream now sits in between Fedora Rawhide and RHEL releases, allowing the wider community to try out/contribute to RHEL builds before their release. This is credited with making early RHEL point releases more stable, but left a gap in the market for debranded rebuilds of RHEL; AlmaLinux and Rocky Linux are two distributions that aim to fill that gap.

Alma are generating and publishing Software Bill of Materials (SBOM) files for every package; these are becoming a requirement for all software sold to the US federal government. What’s more, they are sending these SBOMs to a third party (CodeNotary) who store them in some sort of Merkle tree system to make it difficult for people to tamper with later. This should theoretically allow end users of the distribution to verify the supply chain of the packages they have installed?

I am currently unclear on the differences between CodeNotary/ImmuDB vs. Sigstore/Rekor, but there’s an SBOM devroom at FOSDEM tomorrow so maybe I’ll soon be learning that. This also makes me wonder if a Sigstore-based approach would be more likely to be adopted by Fedora/CentOS/RHEL, and whether someone should start a CentOS Software Supply Chain Security SIG to figure this out, or whether such an effort would need to live with the build system team to be properly integrated. It would be nice to understand the supply-chain story for CentOS and RHEL.

As I write this, I’m also reflecting that perhaps it would be helpful to explain what happens next in the SBOM consumption process; i.e. can this effort demonstrate tangible end user value, like enabling AlmaLinux to integrate with a vendor-neutral approach to vulnerability management? Aside from the value of being able to sell it to the US government!


Scarlett Gately Moore: KDE Snaps, snapcraft, Debian packages.

Fri, 2023-02-03 13:54
Sunset, Witch Wells Arizona

Another busy week!

In the snap world, I have been busy trying to solve the problem of core20 snaps needing security updates now that focal is no longer supported in KDE Neon. So I have created a PPA at https://launchpad.net/~scarlettmoore/+archive/ubuntu/kf5-5.99-focal-updates/+packages

This of course presents more work, as kf5 5.99.0 requires qt5 5.15.7. Sooo this is a WIP.

Snapcraft kde-neon-extension is moving along as I learn the python ways of formatting, and fixing some issues in my tests.

In the Debian world, I am sad to report that Mycroft AI has gone bust; however, the packaging efforts are not in vain, as the project has been forked to https://github.com/orgs/OpenVoiceOS/repositories and should be relatively easy to migrate.

I have spent some time verifying that the libappimage in buster is NOT vulnerable to CVE-2020-25265, as the affected code hadn’t been introduced yet.

Skanpage and plasma-bigscreen both have source uploads, so they can migrate to testing and hopefully make it into bookworm!

As many of you know, I am seeking employment. I am a hard worker who thrives on learning new things. I am a self-starter, a knowledge sponge, and eager to be an asset to < insert your company here > !

Meanwhile, with interview processes much longer than I remember and the industry exploding in layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my gofundme. Thank you for your consideration.

I still have a ways to go to cover my bills this month, I will continue with my work until I cannot, I hate asking, but please consider a donation. Thank you!

GoFundMe


Ian Jackson: derive-adhoc: powerful pattern-based derive macros for Rust

Thu, 2023-02-02 19:34
tl;dr

Have you ever wished that you could write a new derive macro without having to mess with procedural macros?

Now you can!

derive-adhoc lets you write a #[derive] macro, using a template syntax which looks a lot like macro_rules!.

It’s still 0.x - so unstable, and maybe with sharp edges. We want feedback!

And, the documentation is still very terse. It doesn’t omit anything, but it is severely lacking in examples, motivation, and so on. It will suit readers who enjoy dense reference material.



Matt Brown: 2023 Writing Plan

Thu, 2023-02-02 16:07

To achieve my goal of publishing one high-quality piece of writing per week this year, I’ve put together a draft writing plan and a few organisational notes.

Please let me know what you think - what’s missing? what would you like to read more/less of from me?

I aim for each piece of writing to generate discussion, inspire further writing, and raise my visibility and profile with potential customers and peers. Some of the writing will be opinion, but I expect a majority of it will take a “learning by teaching” approach - aiming to explain and present useful information to the reader while helping me learn more!

Topic Backlog

The majority of my writing is going to fit into 4 series, allowing me to plan out a set of posts and narrative rather than having to come up with something novel to write about every week. To start with for Feb, my aim is to get an initial post in each series out the door. Long-term, it’s likely that the order of posts will reflect my work focus (e.g. if I’m spending a few weeks deep-diving into a particular product idea then expect more writing on that), but I will try and maintain some variety across the different series as well.

This backlog will be maintained as a living page at https://www.mattb.nz/w/queue.

Thoughts on SRE

This series of posts will be pitched primarily at potential consulting customers who want to understand how I approach the development and operations of distributed software systems. Initial topics to cover include:

  • What is SRE? My philosophy on how it relates to DevOps, Platform Engineering and various other “hot” terms.
  • How SRE scales up and down in size.
  • My approach to managing oncall responsibilities, toil and operational work.
  • How to grow an SRE team, including the common futility of SRE “transformations”.
  • Learning from incidents, postmortems, incident response, etc.

Business plan drafts

I have an ever-growing list of potential software opportunities and products which I think would be fun to build, but which generally don’t ever leave my head due to lack of time to develop the idea, or being unable to convince myself that there’s a viable business case or market for it.

I’d like to start sharing some very rudimentary business plan sketches for some of these ideas as a way of getting some feedback on my assessment of their potential. Whether that’s confirmation that it’s not worth pursuing, an expression of interest in the product, or potential partnership/collaboration opportunities - anything is better than the idea just sitting in my head.

Initial ideas include:

  • Business oriented Mastodon hosting.
  • PDF E-signing - e.g. A Docusign competitor, but with a local twist through RealMe or drivers license validation.
  • A framework to enable simple, performant per-tenant at-rest encryption for SaaS products - stop the data leaks.

Product development updates

For any product ideas that show merit and develop into a project, and particularly for the existing product ideas I’ve already committed to exploring, I plan to document my product investigation and market research findings as a way of structuring and driving my learning in the space.

To start with this will involve:

  • A series of explanatory posts diving into how NZ’s electricity system works with a particular focus on how operational data that will be critical to managing a more dynamic grid flows (or doesn’t flow!) today, and what opportunities or needs exist for generating, managing or distributing data that might be solvable with a software system I could build.
  • A series of product reviews and deep dives into existing farm management software and platforms in use by NZ farmers today, looking at the functionality they provide, how they integrate and generally testing the anecdotal feedback I have to date that they’re clunky, hard to use and not well integrated.
  • For co2mon.nz the focus will be less on market research and more on exploring potential distribution channels (e.g. direct advertising vs partnership with air conditioning suppliers) and pricing models (e.g. buy vs rent).

Debugging walk-throughs

Being able to debug and fix a system that you’re not intimately familiar with is a valuable skill and something that I’ve always enjoyed, but it’s also a skill that I observe many engineers are uncomfortable with. There’s a set of techniques and processes that I’ve honed and developed over the years for doing this which I think make the task of debugging an unfamiliar system more approachable.

The idea is that each post will take a problem or situation I’ve encountered, from the initial symptom or problem report, and walk through the process of how to narrow down and identify the trigger or root cause of the behaviour, discussing the techniques used and their pros and cons along the way. In addition to learning about the process of debugging itself, the aim is to illustrate lessons that can be applied when designing and building software systems that facilitate and improve our experiences in the operational stage of a system’s lifecycle, where debugging takes place.

Miscellaneous topics

In addition to the regular series above, stand-alone posts on other topics may include:

  • The pros/cons I see of bootstrapping a business vs taking VC or other funding.
  • Thoughts on remote work and hiring staff.
  • AI - a confessional on how I didn’t think it would progress in my lifetime, but maybe I was wrong.
  • Reflections on 15 years at Google and thoughts on subsequent events since my departure.
  • AWS vs GCP. Fight! Or with less click-bait, a level-headed comparison of the pros/cons I see in each platform.
Logistics

Discussion and comments

A large part of my motivation for writing regularly is to seek feedback and generate discussion on these topics. Typically this is done by including comment functionality within the website itself. I’ve decided not to do this - on-site commenting creates extra infrastructure to maintain, and limits the visibility and breadth of discussion to existing readers and followers.

To provide opportunities for comment and feedback I plan to share and post notification and summarised snippets of selected posts to various social media platforms. Links to these social media posts will be added to each piece of writing to provide a path for readers to engage and discuss further while enabling the discussion and visibility of the post to grow and extend beyond my direct followers and subscribers.

My current thinking is that I’ll distribute via the following platforms:

  • Mastodon @matt@mastodon.nz - every post.
  • Twitter @xleem - selected posts. I’m trying to reduce Twitter usage in favour of Mastodon, but there’s no denying that it’s still where a significant number of people and discussions are happening.
  • LinkedIn - probably primarily for posts in the business plan series, and notable milestones in the product development process.

In all cases, my aim will be to post a short teaser or summary paragraph that poses a question or relays an interesting fact, to give some immediate value and signal to readers as to whether they want to click through, rather than simply spamming links into the feed.

Feedback

In addition to social media discussion I also plan to add a direct feedback path, particularly for readers who don’t have time or inclination to participate in written discussion, by providing a simple thumbs up/thumbs down feedback widget to the bottom of each post, including those delivered via RSS and email.

Organisation

To enable subscription to subsets of my writing (particularly for places like Planet Debian, etc where the more business focused content is likely to be off-topic), I plan to place each post into a set of categories:

  • Business
  • Technology
  • General

In addition to the categories, I’ll also use more free-form tags to group writing with linked themes or that falls within one of the series described above.


Matthew Garrett: Blocking free API access to Twitter doesn't stop abuse

Thu, 2023-02-02 05:21
In one week from now, Twitter will block free API access. This prevents anyone who has written interesting bot accounts, integrations, or tooling from accessing Twitter without paying for it. A whole number of fascinating accounts will cease functioning, people will no longer be able to use tools that interact with Twitter, and anyone using a free service to do things like find Twitter mutuals who have moved to Mastodon or to cross-post between Twitter and other services will be blocked.

There's a cynical interpretation to this, which is that despite firing 75% of the workforce Twitter is still not profitable and Elon is desperate to not have Twitter go bust and also not to have to tank even more of his Tesla stock to achieve that. But let's go with the less cynical interpretation, which is that API access to Twitter is something that enables bot accounts that make things worse for everyone. Except, well, why would a hostile bot account do that?

To interact with an API you generally need to present some sort of authentication token to the API to prove that you're allowed to access it. It's easy enough to restrict issuance of those tokens to people who pay for the service. But, uh, how do the apps work? They need to be able to communicate with the service to tell it to post tweets, retrieve them, and so on. And the simple answer to that is that they use some hardcoded authentication tokens. And while registering for an API token yourself identifies that you're not using an official client, using the tokens embedded in the clients makes it look like you are. If you want to make it look like you're a human, you're already using tokens ripped out of the official clients.

The Twitter client API keys are widely known. Anyone who's pretending to be a human is using those already and will be unaffected by the shutdown of the free API tier. Services like movetodon.org do get blocked. This isn't an anti-abuse choice. It's one that makes it harder to move to other services. It's one that blocks a bunch of the integrations and accounts that bring value to the platform. It's one that hurts people who follow the rules, without hurting the ones who don't. This isn't an anti-abuse choice, it's about trying to consolidate control of the platform.


John Goerzen: Using Yggdrasil As an Automatic Mesh Fabric to Connect All Your Docker Containers, VMs, and Servers

Wed, 2023-02-01 23:18

Sometimes you might want to run Docker containers on more than one host. Maybe you want to run some at one hosting facility, some at another, and so forth.

Maybe you’d like to run VMs at various places, and let them talk to Docker containers and bare metal servers wherever they are.

And maybe you’d like to be able to easily migrate any of these from one provider to another.

There are all sorts of very complicated ways to set all this stuff up. But there’s also a simple one: Yggdrasil.

My blog post Make the Internet Yours Again With an Instant Mesh Network explains some of the possibilities of Yggdrasil in general terms. Here I want to show you how to use Yggdrasil to solve some of these issues more specifically. Because Yggdrasil is always encrypted, some of the security lifting is done for us.

Background

Often in Docker, we connect multiple containers to a single network that runs on a given host. That much is easy. Once you start talking about containers on multiple hosts, then you start adding layers and layers of complexity. Once you start talking multiple providers, maybe multiple continents, then the complexity can increase. And, if you want to integrate everything from bare metal servers to VMs into this – well, there are ways, but they’re not easy.

I’m a believer in the KISS principle. Let’s not make things complex when we don’t have to.

Enter Yggdrasil

As I’ve explained before, Yggdrasil can automatically form a global mesh network. This is pretty cool! As most people use it, they join it to the main Yggdrasil network. But Yggdrasil can be run entirely privately as well. You can run your own private mesh, and that’s what we’ll talk about here.

All we have to do is run Yggdrasil inside each container, VM, server, or whatever. We handle some basics of connectivity, and bam! Everything is host- and location-agnostic.

Setup in Docker

The installation of Yggdrasil on a regular system is pretty straightforward. Docker is a bit more complicated for several reasons:

  • It blocks IPv6 inside containers by default
  • The default set of permissions doesn’t permit you to set up tunnels inside a container
  • It doesn’t typically pass multicast (broadcast) packets

Normally, Yggdrasil could auto-discover peers on a LAN interface. However, aside from some esoteric Docker networking approaches, Docker doesn’t permit that. So my approach is going to be setting up one or more Yggdrasil “router” containers on a given Docker host. All the other containers talk directly to the “router” container and it’s all good.

Basic installation

In my Dockerfile, I have something like this:

FROM jgoerzen/debian-base-security:bullseye
RUN echo "deb http://deb.debian.org/debian bullseye-backports main" >> /etc/apt/sources.list && \
    apt-get --allow-releaseinfo-change update && \
    apt-get -y --no-install-recommends -t bullseye-backports install yggdrasil
...
COPY yggdrasil.conf /etc/yggdrasil/
RUN set -x; \
    chown root:yggdrasil /etc/yggdrasil/yggdrasil.conf && \
    chmod 0750 /etc/yggdrasil/yggdrasil.conf && \
    systemctl enable yggdrasil

The magic parameters to docker run to make Yggdrasil work are:

--cap-add=NET_ADMIN --sysctl net.ipv6.conf.all.disable_ipv6=0 --device=/dev/net/tun:/dev/net/tun

This example uses my docker-debian-base images, so if you use them as well, you’ll also need to add their parameters.

Note that it is NOT necessary to use --privileged. In fact, due to the network namespaces in use in Docker, this command does not let the container modify the host’s networking (unless you use --net=host, which I do not recommend).

The --sysctl parameter was the result of a lot of banging my head against the wall. Apparently Docker tries to disable IPv6 in the container by default. Annoying.
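
Putting it all together, a full docker run invocation for a router container might look something like this (the container and image names are placeholders, plus whatever parameters your base image needs as noted above):

docker run -d --name yggdrasil-router \
  --cap-add=NET_ADMIN \
  --sysctl net.ipv6.conf.all.disable_ipv6=0 \
  --device=/dev/net/tun:/dev/net/tun \
  -p 12345:12345 \
  my-yggdrasil-router-image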

Configuration of the router container(s)

The idea is that the router node (or more than one, if you want redundancy) will be the only ones to have an open incoming port. Although the normal Yggdrasil case of directly detecting peers in a broadcast domain is more convenient and more robust, this can work pretty well too.

You can, of course, generate a template yggdrasil.conf with yggdrasil -genconf like usual. Some things to note for this one:

  • You’ll want to change Listen to something like Listen: ["tls://[::]:12345"] where 12345 is the port number you’ll be listening on.
  • You’ll want to disable the MulticastInterfaces entirely by just setting it to [] since it doesn’t work anyway.
  • If you expose the port to the Internet, you’ll certainly want to firewall it to only authorized peers. Setting AllowedPublicKeys is another useful step.
  • If you have more than one router container on a host, each of them will both Listen and act as a client to the others. See below.
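
Putting those notes together, the relevant part of a router-side yggdrasil.conf might look roughly like this (the AllowedPublicKeys entry is just a placeholder, and IfName is optional but handy for firewalling as discussed below):

{
  Listen: ["tls://[::]:12345"]
  MulticastInterfaces: []
  AllowedPublicKeys: [
    "0000000000000000000000000000000000000000000000000000000000000000"
  ]
  IfName: ygg0
}
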
Configuration of the non-router nodes

Again, you can start with a simple configuration. Some notes here:

  • You’ll want to set Peers to something like Peers: ["tls://routernode:12345"] where routernode is the Docker hostname of the router container, and 12345 is its port number as defined above. If you have more than one local router container, you can simply list them all here. Yggdrasil will then fail over nicely if any one of them goes down.
  • Listen should be empty.
  • As above, MulticastInterfaces should be empty.
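
Correspondingly, a minimal sketch of the non-router side (again, only the relevant keys):

{
  Peers: ["tls://routernode:12345"]
  Listen: []
  MulticastInterfaces: []
  IfName: ygg0
}
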
Using the interfaces

At this point, you should be able to ping6 between your containers. If you have multiple hosts running Docker, you can simply set up the router nodes on each to connect to each other. Now you have direct, secure, container-to-container communication that is host-agnostic! You can also set up Yggdrasil on a bare metal server or VM using standard procedures and everything will just talk nicely!

Security notes

Yggdrasil’s mesh is aggressively greedy. It will peer with any node it can find (unless told otherwise) and will find a route to anywhere it can. There are two main ways to make sure your internal comms stay private: by restricting who can talk to your mesh, and by firewalling the Yggdrasil interface. Both can be used, and they can be used simultaneously.

By disabling multicast discovery, you eliminate the chance for random machines on the LAN to join the mesh. By making sure that you firewall off (outside of Yggdrasil) who can connect to a Yggdrasil node with a listening port, you can authorize only your own machines. And, by setting AllowedPublicKeys on the nodes with listening ports, you can authenticate the Yggdrasil peers. Note that part of the benefit of the Yggdrasil mesh is normally that you don’t have to propagate a configuration change to every participatory node – that’s a nice thing in general!

You can also run a firewall inside your container (I like firehol for this purpose) and aggressively firewall the IPs that are allowed to connect via the Yggdrasil interface. I like to set a stable interface name like ygg0 in yggdrasil.conf, and then it becomes pretty easy to firewall the services. The Docker parameters that allow Yggdrasil to run are also sufficient to run firehol.

Naming Yggdrasil peers

You probably don’t want to hard-code Yggdrasil IPs all over the place. There are a few solutions:

  • You could run an internal DNS service
  • You can do a bit of scripting around Docker’s --add-host command to add things to /etc/hosts
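
As a sketch of the second option (the container and image names here are placeholders, and it assumes iproute2 and a stable ygg0 interface name inside the router container):

# Grab the router container's Yggdrasil address from its ygg0 interface...
YGG_ADDR=$(docker exec yggdrasil-router ip -6 addr show dev ygg0 scope global \
           | awk '/inet6/ {print $2}' | cut -d/ -f1 | head -n1)
# ...and hand it to another container as a hostname entry.
docker run --add-host "routernode:${YGG_ADDR}" my-app-image
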
Other hints & conclusion

Here are some other helpful use cases:

  • If you are migrating between hosts, you could leave your reverse proxy up at both hosts, both pointing to the target containers over Yggdrasil. The targets will be automatically found from both sides of the migration while you wait for DNS caches to update and such.
  • This can make services integrate with local networks a lot more painlessly than they might otherwise.

This is just an idea. The point of Yggdrasil is expanding our ideas of what we can do with a network, so here’s one such expansion. Have fun!

Note: This post also has a permanent home on my website, where it may be periodically updated.


Dirk Eddelbuettel: RInside 0.2.18 on CRAN: Maintenance

Wed, 2023-02-01 19:17

A new release 0.2.18 of RInside arrived on CRAN and in Debian today. This is the first release in ten months since the 0.2.17 release. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release brings a contributed change to how the internal REPL is called: Dominick found the current form more reliable when embedding R on Windows. We also updated a few other things around the package.

The list of changes since the last release:

Changes in RInside version 0.2.18 (2023-02-01)
  • The random number initialization was updated as in R.

  • The main REPL is now running via 'run_Rmainloop()'.

  • Small routine update to package and continuous integration.

My CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Valhalla's Things: How To Verify Debian's ARM Installer Images

Wed, 2023-02-01 19:00
Posted on February 2, 2023

Thanks to Vagrant on the debian-arm mailing list I’ve found that there is a chain of verifiability for the images usually used to install Debian on ARM devices.

It’s not trivial, so I’m writing it down for future reference when I’ll need it again.

  1. Download the images from https://ftp.debian.org/debian/dists/bullseye/main/installer-armhf/current/images/ (choose either hd-media or netboot, then SD-card-images and download the firmware.* file for your board as well as partition.img.gz).

  2. Download the checksums file https://ftp.debian.org/debian/dists/bullseye/main/installer-armhf/current/images/SHA256SUMS

  3. Download the Release file from https://ftp.debian.org/debian/dists/bullseye/ ; for convenience, use the inline-signed InRelease variant.

  4. Verify the Release file:

    gpg --no-default-keyring \
        --keyring /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
        --verify InRelease
  5. Verify the checksums file:

    awk '/installer-armhf\/current\/images\/SHA256SUMS/ {print $1 " SHA256SUMS"}' InRelease | tail -n 1 | sha256sum -c

    (I know, I probably can use awk instead of that tail, but it’s getting late and I want to publish this).

  6. Verify the actual files, for hd-media:

    grep hd-media SHA256SUMS \
      | sed 's#hd-media/SD-card-images/##' \
      | sha256sum -c \
      | grep -v "No such file or directory" \
      | grep -v "FAILED open or read" 2> /dev/null

    and for netboot:

    grep netboot SHA256SUMS \
      | sed 's#netboot/SD-card-images/##' \
      | sha256sum -c \
      | grep -v "No such file or directory" \
      | grep -v "FAILED open or read" 2> /dev/null

    and check that all of the files you wanted are there with an OK; of course change hd-media with netboot as needed.

And I fully agree that fewer steps would be nice, but this is definitely better than nothing!


Simon Josefsson: Apt Archive Transparency: debdistdiff & apt-canary

Wed, 2023-02-01 15:56

I’ve always found the operation of apt software package repositories to be a mystery. There appears to be a lack of transparency into which people have access to important apt package repositories out there, how the automatic non-human update mechanism is implemented, and what changes are published. I’m thinking of big distributions like Ubuntu and Debian, but also the free GNU/Linux distributions like Trisquel and PureOS that are derived from the more well-known distributions.

As far as I can tell, anyone who has the OpenPGP private key trusted by an apt-based GNU/Linux distribution can sign a modified Release/InRelease file, and if my machine somehow downloads that version of the release file, my machine could be made to download and install packages that the distribution didn’t intend me to install. Further, it seems that anyone who has access to the main HTTP server, or any of its mirrors, or is anywhere on the network between them and my machine (when plaintext HTTP is used), can either stall security updates on my machine (on a per-IP basis), or use it to send my machine (again, on a per-IP basis to avoid detection) a modified Release/InRelease file if they had been able to obtain the private signing key for the archive. These are mighty powers that warrant oversight.

I’ve always put off learning about the processes to protect the apt infrastructure, mentally filing it under “so many people rely on this infrastructure that enough people are likely to have invested time reviewing and improving these processes”. Simultaneously, I’ve always followed the more free-software friendly Debian-derived distributions such as gNewSense and have run it on some machines. I’ve never put them into serious production use, because the trust issues with their apt package repositories have been a big question mark for me. Even the simple question of “is someone updating the apt repository” is not easy to understand on a running gNewSense system. At some point in time the gNewSense cron job to pull in security updates from Debian must have stopped working, and I wouldn’t have had any good mechanism to notice that. Most likely it happened without any public announcement. I’ve recently switched to Trisquel on production machines, and these questions have come back to haunt me.

The situation is unsatisfying and I looked into what could be done to improve it. While I could try to understand who are the key people involved in each project, and may even learn what hardware component (running non-free firmware? GnuPG private keys on disk? Smartcard? TPM? YubiKey? HSM?) is used, or what software is involved to update and sign apt repositories, I’m not certain that would scale to securing my machines against attacks on this infrastructure. Even people with the best intentions and the state of the art hardware and software can have problems.

To increase my trust in Trisquel I set out to understand how it worked. To make it easier to sort out what the interesting parts of the Trisquel archive to audit further were, I created debdistdiff to produce human readable text output comparing one apt archive with another apt archive. There is a GitLab CI/CD cron job that runs this every day, producing output comparing Trisquel vs Ubuntu and PureOS vs Debian. Working with these output files has made me learn more about how the process works, and I even stumbled upon something that is likely a bug where Trisquel aramo was imported from Ubuntu jammy while it contained a couple of packages (e.g., gcc-8, python3.9) that were removed for the final Ubuntu jammy release.

After working on auditing the Trisquel archive manually that way, I realized that whatever I could tell from comparing Trisquel with Ubuntu, it would only be something based on a current snapshot of the archives. Tomorrow it may look completely different. What felt necessary was to audit the differences of the Trisquel archive continuously. I was quite happy to have developed debdistdiff for one purpose (comparing two different archives like Trisquel and Ubuntu) and discovered that the tool could be used for another purpose (comparing the Trisquel archive at two different points in time). At this time I realized that I needed a log of all different apt archive metadata to be able to produce an audit log of the differences in time for the archive. I created manually curated git repositories with the Release/InRelease and the Packages files for each architecture/component of the well-known distributions Trisquel, Ubuntu, Debian and PureOS. Eventually I wrote scripts to automate this, which are now published in the debdistget project.

At this point, one of the early questions, about per-IP substitution of Release files, was lingering in my mind. However with the tooling I now had available, coming up with a way to resolve this was simple! Merely have apt compute a SHA256 checksum of the just downloaded InRelease file, and see if my git repository had the same file. At this point I started reading the Apt source code, and now I had more doubts about the security of my systems than I ever had before. Oh boy how the name Apt has never before felt more… Apt?! Oh well, we must leave some exercises for the students. Eventually I realized I wanted to touch as little of the apt code base as possible, and noticed the SigVerify::CopyAndVerify function called ExecGPGV which called apt-key verify which called GnuPG’s gpgv. By setting Apt::Key::gpgvcommand I could get apt-key verify to call a tool other than gpgv. See where I’m going? I thought wrapping this up would now be trivial but for some reason the hash checksum I computed locally never matched what was on my server. I gave up and started working on other things instead.

Today I came back to this idea, and started to debug exactly how the local files looked that I got from apt and how they differed from what I had in my git repositories, that came straight from the apt archives. Eventually I traced this back to SplitClearSignedFile which takes an InRelease file and splits it into two files, probably mimicking the (old?) way of distributing both Release and Release.gpg. So the clearsigned InRelease file is split into one cleartext file (similar to the Release file) and one OpenPGP signature file (similar to the Release.gpg file). But why didn’t the cleartext variant of the InRelease file hash to the same value as the hash of the Release file? Sadly they differ by the final newline.

Having solved this technicality, wrapping the pieces up was easy, and I came up with a project apt-canary that provides a script apt-canary-gpgv that verifies the local apt release files against something I call an “apt canary witness” file stored at a URL somewhere.
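
To give an idea of how this hooks in, the apt side can be as small as a one-line drop-in; the file name and install path below are my own choices for illustration:

// e.g. /etc/apt/apt.conf.d/99apt-canary
Apt::Key::gpgvcommand "/usr/bin/apt-canary-gpgv";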

I’m now running apt-canary on my Trisquel aramo laptop, a Trisquel nabia server, and a Talos II ppc64el Debian machine. This means I have solved the per-IP substitution worries (or at least made them less likely to occur, since an attacker would have to send the same malicious release files to both GitLab and my system), and it allows me to have an audit log of all release files that I actually use for installing and downloading packages.

What do you think? There are clearly a lot of work and improvements to be made. This is a proof-of-concept implementation of an idea, but instead of refining it until perfection and delaying feedback, I wanted to publish this to get others to think about the problems and various ways to resolve them.

Btw, I’m going to be at FOSDEM’23 this weekend, helping to manage the Security Devroom. Catch me if you want to chat about this or other things. Happy Hacking!


Julian Andres Klode: Ubuntu 2022v1 secure boot key rotation and friends

Wed, 2023-02-01 08:40

This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

taking a step back: how does secure boot on Ubuntu work?

Booting on Ubuntu involves three components after the firmware:

  1. shim
  2. grub
  3. linux

Each of these is a PE binary signed with a key. The shim is signed by Microsoft’s 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA.

In Ubuntu’s case, the CA certificate is sharded: Multiple people each have a part of the key and they need to meet to be able to combine it and sign things, such as new code signing certificates.

BootHole

When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, fwupds by their hashes.

This generated a very large vendor dbx which caused lots of issues as shim exported them to a UEFI variable, and not everyone had enough space for such large variables. Sigh.

We decided we want to rotate our signing key next time.

This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.

Spring 2022 CVEs

We still were not ready for travel in 2021, but during BootHole we developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable.

We actually missed rotating the shim this cycle as a new vulnerability was reported immediately after it, and we decided to hold on to it.

2022 key rotation and the fall CVEs

This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh.

Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs setup to sign with them.

We also submitted a shim 15.7 with the old keys revoked which came back at around the same time.

Now we were in a hurry. The 22.04.2 point release was scheduled for around middle of February, and we had nothing signed with the new keys yet, but our new shim which we need for the point release (so the point release media remains bootable after the next round of CVEs), required new keys.

So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering

grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04), and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks.

(Actually, we also had a backport of the CVEs for 2.04 based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.)

Kernels are a different story: There are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So for our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we’d simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured this would be the safest option. This however caused concern, because it could be that apt decides to remove the kernel metapackage.

I explored checking the kernels at runtime and aborting if we don’t have a trusted kernel in preinst. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:

  1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
  2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.

Ultimately we believed the danger to be too large given that no kernels had yet been released to users. If we had kernels pushed out for 1-2 months already, this would have been a viable choice.

So in the end, I ended up modifying the shim packaging to install both the latest shim and the previous one, and an update-alternatives alternative to select between the two:

In its post-installation maintainer script, shim-signed checks whether all kernels with a version greater or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with a priority of 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred.

Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP.

Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it’s not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.

regressions

Of course, the first version I uploaded still had some remaining hardcoded “shimx64” in the scripts and so failed to install on arm64, where “shimaa64” is used. And if that were not enough, I also forgot to include support for gzip compressed kernels there. Sigh, I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts).

shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images, but no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we need to be careful to roll this out for image building.

another grub update for OOM issues.

We had two grubs to release: First there was the security update for the recent set of CVEs, then there also was an OOM issue for large initrds which was blocking critical OEM work.

We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the Red Hat patches to the loader we take from there. This ended up being a fairly large patch set and I was hesitant to tie the security update to that, so I ended up pushing the security update everywhere first, and then pushed the OOM fixes this week.

With the OOM patches, you should be able to boot initrds of between 400 MB and 1 GB; it also depends on the memory layout of your machine and your screen resolution and background images. So the OEM team had success testing 400 MB in real life, and I tested up to (I think) 1.2 GB in qemu; I ran out of FAT space then and stopped going higher :D

other features in this round
  • Intel TDX support in grub and shim
  • Kernels are allocated as CODE now, not DATA, as per the upstream mm changes, which might fix boot on the X13s
am I using this yet?

The new signing keys are used in:

  • shim-signed 1.54 on 22.10+, 1.51.3 on 22.04, 1.40.9 on 20.04, 1.37~18.04.13 on 18.04
  • grub2-signed 1.187.2~ or newer (binary packages grub-efi-amd64-signed or grub-efi-arm64-signed), 1.192 on 23.04.
  • fwupd-signed 1.51~ or newer
  • various linux updates. Check apt changelog linux-image-unsigned-$(uname -r) to see if Revoke & rotate to new signing key (LP: #2002812) is mentioned in there to see if it signed with the new key.
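
For example, a quick sketch of that last check for the running kernel:

apt changelog linux-image-unsigned-$(uname -r) 2>/dev/null | grep -m1 "Revoke & rotate"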

If you were able to install shim-signed, your grub and fwupd-efi will have the correct version as that is ensured by packaging. However your shim may still point to the old one. To check which shim will be used by grub-install, you can check the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in latest:

$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
  link best version is /usr/lib/shim/shimx64.efi.signed.latest
  link currently points to /usr/lib/shim/shimx64.efi.signed.latest
  link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50

If it does not, but you have installed a new kernel compatible with the new shim, you can switch immediately to the new shim after rebooting into the kernel by running dpkg-reconfigure shim-signed. You’ll see in the output if the shim was updated, or you can check the output of update-alternatives as you did above after the reconfiguration has finished.

For the out of memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).

how do I test this (while it’s in proposed)?
  1. upgrade your kernel to proposed and reboot into that
  2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.
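
If you have never enabled the proposed pocket before, a rough sketch (replace lunar with your release series, and consider proper apt pinning on production systems):

echo "deb http://archive.ubuntu.com/ubuntu lunar-proposed main restricted universe multiverse" | \
  sudo tee /etc/apt/sources.list.d/proposed.list
sudo apt update
sudo apt install -t lunar-proposed shim-signed grub-efi-amd64-signed fwupd-signed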

If you already upgraded your shim before your kernel, don’t worry:

  1. upgrade your kernel and reboot
  2. run dpkg-reconfigure shim-signed

And you’ll be all good to go.

deep dive: uploading signed boot assets to Ubuntu

For each signed boot asset, we build one version in the latest stable release and the development release. We then binary copy the built binaries from the latest stable release to older stable releases. This process ensures two things: We know the next stable release is able to build the assets and we also minimize the number of signed assets.

OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing.

The entire workflow looks something like this:

  1. Upload the unsigned package to one of the following “build” PPAs:

  2. Upload the signed package to the same PPA

  3. For stable release uploads:

    • Copy the unsigned package back across all stable releases in the PPA
    • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
  4. Submit a request to canonical-signing-jobs to sign the uploads.

    The signing job helper copies the binary -unsigned packages to the primary-2022v1 PPA where they are signed, creating a signing tarball, then it copies the source package for the -signed package to the same PPA which then downloads the signing tarball during build and places the signed assets into the -signed deb.

    Resulting binaries will be placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed

  5. Review the binaries themselves

  6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public.

    This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private “proposed” PPA.

  7. Binary copy from proposed-public to the proposed queue(s) in the primary archive

Lots of steps!

WIP

As of writing, only the grub updates have been released, other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead like later release series.


Junichi Uekawa: February.

Tue, 2023-01-31 19:19
February. Working through crosvm dependencies and found that cargo-debstatus does not dump all dependencies; seems like it skips over optional ones. Haven't tracked down what is going on yet but at least it seems like crosvm does not have all dependencies and can't build yet.


Paul Wise: FLOSS Activities January 2023

Tue, 2023-01-31 19:02
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages cycle/pygopherd and ask about guile-2.2 reintroduction bugs
  • Debian IRC: fix topic/info of obsolete channel
  • Debian wiki: unblock IP addresses, approve accounts, approve domains.
Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC
Sponsors

The celery, docutils, pyemd work was sponsored. All other work was done on a volunteer basis.


Jonathan McDowell: Enabling retrogaming with Kodi on Debian

Tue, 2023-01-31 12:00

For some reason my son has started to be really into watching playthroughs of Mario and similar games on Youtube. I don’t understand the appeal, but it’s less distracting as background than Paw Patrol, so I’m not complaining. He’s not quite at the stage he’s ready to play the games himself, but it’s coming. So I figured it would be neat to sort out some retrogaming bits ready for when that happens.

I already have a Kodi box underneath the TV; it doesn’t get as much use these days as a lot of our viewing is through commercial streaming services, but it’s got all of our DVDs ripped so is still useful. Recent version of Kodi have support for games as well, so I decided it would be perfect if I could tie in to that. However. The normal ways of doing this seems to be to download someone’s pre-rolled setup, and I’d much rather be able to get the bits I need from Debian, as that’s what the machine is running (it does a few minor things other than Kodi).

The best retrogaming environment out there seems to be RetroArch. It’s available for Linux/OS X/Windows and RetroPie provides a nice easy standalone setup if you’re not interested in the Kodi side. If you are then game.libretro provides a wrapper for libretro cores under Kodi. This seemed like the right track.

Unfortunately RetroArch and related packages were in need of some love in Debian. So I ended up engaging in some yak shaving to try and get to where I wanted to be. First up was RetroArch itself, which was over 4 years out of date, at 1.7.3. It turned out that it wanted an updated assets package, which contains the necessary icons etc. for the interface.

RetroArch is only a frontend. To actually play games you need a suitable core. The first one I tried was genesisplusgx (I have fond memories of Sonic from the Master System era), which again was several years out of date. I pulled recent git (I wish folk would tag releases at least every now and then) and updated things. And successfully managed to play Sonic (badly, I am way out of practice).

genesisplusgx is in non-free, due to a prohibition on commercial distribution. So it’s not actually part of Debian. I switched my attention to libretro-bsnes-mercury, which would then allow SNES emulation and is part of main. Again, not too hard to update, some packaging cleanups, and I was playing Super Mario. Again, badly.

That meant I knew I had working emulation with libretro cores. It was time to integrate with Kodi. That meant taking game.libretro, filing an ITP and doing a bunch of bits to get it ready to upload (including introducing a retroarch-dev binary package that contains the appropriate include files as part of retroarch). It sat in NEW for a while (including an initial reject because I’d missed an attribution in the debian/copyright), and was accepted yesterday.

There’s a final piece of the puzzle, and that’s the Kodi config that ties together the libretro core with game.libretro and presents the emulator to Kodi as a fully fledged add-on. The kodi-game folk have a neat tool, kodi-game-scripting, which automates the heavy lifting of producing this config. I’ve done some local modifications that make it a bit more useful for producing config that can be embedded in the Debian libretro-* packages directly, which I should upload somewhere but it’s all a bit rough ‘n ready at present. It was enough to allow me to produce some kodi-game-libretro-bsnes-* packages as part of libretro-bsnes-mercury.

With that complete, all the packages needed for playing SNES games under Kodi are now present in Debian. I need to upload libretro-bsnes-mercury to unstable (it went to experimental while waiting for kodi-game-libretro to be accepted), and kodi-game-libretro needs another source-only upload, but once that’s done both should be in good shape to migrate to testing and be part of the upcoming bookworm release.
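As a hedged illustration of the end state (not from the post itself; the binary package names below are assumptions based on the source packages discussed above, and some pieces were still waiting on uploads at the time of writing), installing the pieces on a system where everything has migrated should look roughly like:

# Sketch only: package availability and exact binary names are assumptions
sudo apt install kodi retroarch kodi-game-libretro \
    libretro-bsnes-mercury-balanced kodi-game-libretro-bsnes-mercury-balanced
# then enable the emulator add-on inside Kodi and point it at your ROM files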

What else is there to do? I’d like to get Kodi config included in the other libretro packages that are already part of Debian. That’s going to need the Controller Topology Project to be packaged so that the controller details are available (I was lucky in that the SNES controller is already part of the Kodi package). I need to work out if I can turn kodi-game-scripting into some sort of dh helper to help automate things. But I’ve done some local testing with genesisplusgx and it works fine as expected.

The other thing is that games are not yet first class citizens in Kodi; the normal browser interface you get for movies, music and TV shows is not available for games. Currently I’ve been trying out the ROM Collection Browser though I find its automated scraping isn’t as good as I’d like. A friend has recommended the Advanced Emulator Launcher but I haven’t taken a look at it. Either way I’d like to ultimately get one of them packaged up as well, though not in time for bookworm.

Anyway. My hope is that these updated and new packages prove useful to someone else. You can tell I’m biased towards 90s era consoles, but if you’ve enough CPU grunt there are a bunch of more recent cores available too. Big thanks to the Debian FTP Master team for letting these through NEW so close to release, and to all the upstream devs - RetroArch is a great framework from my perspective as a user, and the Kodi Game folk have done massive amounts of work that made my life much easier when preparing things for Debian.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: #39: Faster Feedback Systems – A Continuous Integration Example

Mon, 2023-01-30 22:11

Welcome to the 39th post in the relatively randomly recurring rants, or R4 for short. Today’s post picks up where the previous post #38: Faster Feedback Systems left off. As we argued in #38, the need for fast feedback loops is fairly universal and widespread. Fairly shortly after we posted #38, useR! 2022 happened and one presentation had the key line

Waiting 1-24 minutes for a build to finish can be a massive time suck.

which we would like to come back to today. Furthermore, the inimitable @b0rk had a fabulous tweet just weeks later making the same point: shortening your feedback loop is key to a successful debugging strategy.

So in sum: shorter is better. Nobody likes to wait. And shorter, i.e. faster, is a key and recurrent theme in the R4 series. Today we have a fairly nice illustration of two aspects we have stressed before:

  • Fewer dependencies make for faster installation times (apart from other desirable robustness aspects); and

  • Using binaries makes for faster installation time as it removes the need for compilations.

The combined effects can be staggering as we show below. The example is motivated by a truly “surprising” (we are being generous here) comment we received as an aside when discussing the eternal topic of whether R users do, or do not, have a choice when picking packages, or approaches, or verses. To our surprise, we were told that “packages are not substitutable”. Which is both demonstrably false (see below) and astonishing as it came from an academic. I.e. someone trained and paid to abstract superfluous detail away and recognise and compare ‘core’ features of items under investigation. Truly stunning. But I digress.

CRAN by now has many packages, slowly moving in on 20,000 in total, and is a unique success we have commented on time and time again before. By now many packages shadow or duplicate each other, reinvent one another, or revise/review. One example is the pair of packages accessing PostgreSQL databases. There are several, but two are key. The older one is RPostgreSQL which has been around since Sameer Kumar Prayaga wrote it as a Google Summer of Code student in 2008 under my mentorship. The package has now been maintained well (if quietly) for many years by Tomoaki Nishiyama. The other entrant is more recent, though not new, and is RPostgres by Kirill Müller and others. We will for the remainder of this post refer to these two as the tiny and the tidy version, as either can be seen as a representative of a certain ‘verse’.

The aforementioned comment on non-substitutability struck us as eminently silly, so we set out to prove just how absurd it really is. So about a year ago we set up a pair of GitHub repos with minimal code, called lim-tiny and lim-tidy. Our conjecture was (and is!) that less is more – just as post #34 titled Less Is More argued with respect to package dependencies. Each of the repos just does one thing: a query to a (freely accessible but remote) PostgreSQL database. The tiny version just retrieves a data frame using only the dependencies needed for RPostgreSQL, namely DBI and nothing else. The tidy version retrieves a tibble and has access to everything else that comes when installing RPostgres: DBI, dplyr, and magrittr – plus of course their respective dependencies. We were able to let the code run in (very default) GitHub Actions on a weekly schedule without intervention, apart from one change to the SQL query when the remote server (providing public bioinformatics data) changed its schema slightly, plus one update to the action yaml code version. No other changes.
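To make the dependency comparison concrete in shell terms, here is a hedged sketch that times installing each stack as pre-built Debian binaries; the r-cran-* package names are assumptions mapped from the CRAN packages named above, on a system with r2u or c2d4u configured:

# Sketch: time the two dependency stacks as pre-built binary packages
time sudo apt-get install -y r-cran-rpostgresql r-cran-dbi                             # "tiny" stack
time sudo apt-get install -y r-cran-rpostgres r-cran-dbi r-cran-dplyr r-cran-magrittr  # "tidy" stack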

We measure the total time a standard continuous integration run takes: the default GitHub Actions setup in the tidy case (which relies on RSPM/PPM binaries, caching, …, and does not rebuild from source each time), and our own r-ci script (introduced in #32) for CI with R in the tiny case. The latter switched to using r2u during the course of this experiment but already had access to binaries via c2d4u – so it is always binaries-based (see e.g. #37 or #13 for more). The chart shows the evolution of these run-times over the course of the year with one weekly run each.

[Chart: tiny vs tidy CI timing impact]

This study reveals a few things:

  • “Yes, Veronica, we have a choice”: one year of identical results retrieving a data set via SQL from a PostgreSQL server. We can often choose between alternative packages to get (or process) our data.
  • There is a Free Lunch (TM) here: RPostgreSQL consistently dominates RPostgres when measured in CI run-time for the same task.
  • While there is (considerable) variability (likely stemming from heterogeneous setups at GitHub Actions) the tiny approach is on average about twice as fast as the tidy approach.
  • You do have a choice. Would you rather wait x minutes for CI, or 2x?

The key point there is that while the net time to fire off a single PostgreSQL query is likely (near-)identical, the net cost of continuous integration is not. In this setup it is about twice the run-time, simply because ‘Less Is More’ here comes out to being about twice as fast. And that is a valid (and concrete, and verifiable) illustration of the overall implicit cost of adding dependencies when creating and using development, test, or run-time environments.

Faster feedback loops make for faster builds and faster debugging. Consider using fewer dependencies, and/or using binaries as provided by r2u.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Arturo Borrero González: Debian and the adventure of the screen resolution

Mon, 2023-01-30 12:21

I read somewhere a nice meme about Linux: Do you want an operating system or do you want an adventure? I love it, because it is so true. What you are about to read is my adventure to set a usable screen resolution in a fresh Debian testing installation.

The context is that I have two different Lenovo Thinkpad laptops with 16” screens and NVIDIA graphics cards. They are both installed with the latest Debian testing. I use the closed-source NVIDIA drivers (they seem to work better than the nouveau module). The desktop manager and environment that I use is lightdm + XFCE4. The native screen resolution on both machines is very high: 3840x2160 (or 4K UHD if you will).

The thing is that both laptops show an identical problem: when freshly installed with the Debian default config, the native resolution is in use. For a 16” screen laptop, this high resolution means that the font is tiny. Therefore, the raw native resolution renders the machine almost unusable.

This is a picture of what you get by running htop in the console (tty1, the terminal you would get by hitting CTRL+ALT+F1) with the default install:

Everything in the system is affected by this:

  1. the grub menu is unreadable. Thankfully the right option is selected by default.
  2. the tty console, with the boot splash by systemd, is unreadable as well. There are some colors, so you at least see some systemd stuff happening in ‘green’.
  3. when lightdm starts, the resolution stays very high. You can barely click the login button.
  4. when XFCE4 starts, it is a pain to navigate the menu and click the right buttons to set a more reasonable resolution.

The adventure begins after installing the system. Each of these four points must be fixed by hand by the user.

XFCE4

Point #4 is the easiest. Navigate with the mouse pointer to the tiny Applications menu, then Settings, then Displays. This is more or less the same in every other desktop operating system. There are no further actions required to persist this setting. Thank you, XFCE4.

lightdm

Point #3, about lightdm, is more tricky to solve. It involves running xrandr when lightdm sets up the display. Nobody will tell you this trick. You have to search for it on the internet. Thankfully it is a common problem, and a person who knows what to search for can find good results.

The file /etc/lightdm/lightdm.conf needs to contain something like this:

[LightDM]
[Seat:*]
# set up correct display resolution
display-setup-script=sh -c -- "xrandr -s 1920x1080"

By the way, depending on your system hardware setup, you may also need an additional call to xrandr here. If you want to plug in an HDMI monitor, chances are you require something like xrandr --setprovideroutputsource NVIDIA-G0 modesetting && xrandr --auto to instruct the NVIDIA graphics card to work well with the kernel graphics system.

In my case, one of my laptops requires it, so I have:

[LightDM]
[Seat:*]
# don't ask me to type my username
greeter-hide-users=false
# set up correct display resolution, and prepare NVIDIA card for HDMI output
display-setup-script=sh -c "xrandr -s 1920x1080 && xrandr --setprovideroutputsource NVIDIA-G0 modesetting && xrandr --auto"

grub

Point #1, about the grub menu, is also not trivial to solve, but the solution is widely documented on the internet. Grub allows you to set arbitrary graphical modes. On Debian systems, adding something like GRUB_GFXMODE=1024x768 to /etc/default/grub and then running sudo update-grub should do the magic.
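A minimal sketch of that change, with the resolution value only as an example:

# append the mode to /etc/default/grub and regenerate the grub configuration
echo 'GRUB_GFXMODE=1024x768' | sudo tee -a /etc/default/grub
sudo update-grub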

console

So we get to point #2 about the tty1 console. For months, I’ve been investing my scarce personal time into trying to solve this annoyance. There are a lot of conflicting information about this on the internet. Plenty of misleading solutions, essays about framebuffer, kernel modeset, and other stuff I don’t want to read just to get my tty1 in a readable status.

People point in different directions, like using GRUB_GFXPAYLOAD_LINUX=keep in /etc/default/grub. Which sounds like a good solution, but doesn’t work: my best guess is that the kernel indeed keeps the resolution as told by grub, but the moment systemd loads the nvidia driver, it enables 4K on the display and the console gets the high resolution.

Actually, for a few weeks, I blamed plymouth. Because the plymouth service is loaded early by systemd, it could be responsible for setting some of the display settings. It actually contains an (undocumented) DeviceScale configuration option that is seemingly aimed at high resolution scenarios. I played with it to no avail.

Some folks on IRC suggested reconfiguring the console-font package, which until then was unknown to me. Running sudo dpkg-reconfigure console-font would indeed show a menu to select some preferences for the console, including font size. But apparently a freshly installed system already uses the biggest font available, so this was a dead end.

Another option I evaluated for a few days was touching the kernel framebuffer settings. I honestly don’t understand this, and all the solutions pointing to fbset didn’t work for me anyway. This is the default framebuffer configuration on one of the laptops:

user@debian:~$ fbset -i

mode "3840x2160"
    geometry 3840 2160 3840 2160 32
    timings 0 0 0 0 0 0 0
    accel true
    rgba 8/16,8/8,8/0,0/0
endmode

Frame buffer device information:
    Name        : i915drmfb
    Address     : 0
    Size        : 33177600
    Type        : PACKED PIXELS
    Visual      : TRUECOLOR
    XPanStep    : 1
    YPanStep    : 1
    YWrapStep   : 0
    LineLength  : 15360
    Accelerator : No

Playing with these numbers, I was able to modify the geometry of the console, only to reduce the panel to a tiny square in the console display (with equally small fonts anyway). If it was possible to scale or resize the panel in another way, I was unable to work out how to do so from the associated docs.

One day, out of despair, I tried disabling kernel modesetting (or KMS). It indeed got me a more readable tty1, only to prevent the whole graphic stack from starting, with Xorg complaining about the lack of kernel modeset.

After lots of wasted time, I decided to blame the NVIDIA graphic card. Because why not: a closed source module in my system looks fishy. I registered in their official forum and wrote a message about my suspicion on the module, asking for advice on how to modify the driver default resolution. I was hoping that something like modprobe nvidia my_desired_resolution=1920x1080 could exist. Apparently not :-(

I was about to give up. I had walked every corner of the known internet. I even tried summoning the ancient gods, I used ChatGPT. I asked the AI god for mercy, for a working solution… to no avail.

Then I decided to change the kind of queries I was issuing to the search engines (don’t ask me, I no longer remember). Eventually I landed on this askubuntu.com page. The question described the exact same problem I was experiencing. Finally, that was encouraging! I was not alone in my adventure after all!

The solution section included a font size I hadn’t seen before in my previous tests: 16x32. More excitement!

I did all the steps. I installed the xfonts-terminus package, and in the file /etc/default/console-setup I put:

ACTIVE_CONSOLES="/dev/tty[1-6]"
CHARMAP="ISO-8859-15"
CODESET="guess"
FONTFACE="Terminus"
FONTSIZE="16x32"
VIDEOMODE=

Then I ran setupcon from a tty, and… the miracle happened! I finally got a bigger font in the tty1 console! It turned out that a potential solution was playing with console-setup all along, which I had tried without success before. I’m not even sure if the additional package was required.
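Condensed into commands, the recipe that worked looks roughly like this (as noted above, the extra font package may not even be strictly required):

sudo apt install xfonts-terminus
# edit /etc/default/console-setup with the FONTFACE/FONTSIZE values shown above,
# then re-apply the console font settings:
setupcon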

This is how my console looks now:

The truth is… the solution is satisfying only to a degree. I’m a person with good eyesight and can work with these slightly larger fonts. I’m not sure whether I can get even larger fonts using this method, honestly.

After some searching, I discovered that some folks had already managed to describe the problem in detail and filed a proper bug report in Debian, see #595696… opened more than 10 years ago.

2023 is the year of linux on the desktop

Nope.

I honestly don’t see how this disconnected pile of settings can all be reconciled together. Can we please have a systemd-whatever that homogenizes all of this mess?

I’m referring to grub + kernel drivers + console + lightdm + XFCE4.

Next adventure

When I lock the desktop (with CTRL+ALT+L), close the laptop lid to suspend it, then reopen it and type the login info into the lightdm greeter, the desktop environment never loads: black screen.

I have already tried the first few search results without luck. Perhaps the nvidia card is to blame this time? Perhaps poorly coupled power management by the different system software pieces?

Who knows what’s going on here. This will probably be my next Debian desktop adventure.

Categories: FLOSS Project Planets

Russell Coker: Links January 2023

Mon, 2023-01-30 08:17

The Intercept has an amusing and interesting article about senior Facebook employees testifying that they don’t know where Facebook stores all its data on users [1]. One lesson all programmers can learn from this is to document all these things in an orderly manner.

Cory Doctorow wrote a short informative article about inflation from a modern monetary theory perspective [2].

Russ Allbery wrote an insightful blog post about effective altruism and respect for disadvantaged people [3]. GiveDirectly sounds good.

The Conversation has an interesting article about the Google and Apple app stores providing different versions of apps for users in different regions [4]. Apparently there are specific versions to comply with GDPR and versions that differ in adverts. The hope that GDPR would affect enough people to become essentially a world-wide standard was apparently overly optimistic. We need political lobbying in all countries for laws like the GDPR to force the app stores to give us the better versions of apps.

Arya Voronova wrote an informative article about USB-C and extension or data blocker cables [5]. USB just keeps getting more horrible in technology while getting more useful in functionality. Laptops and phones catching fire will probably become more common in future.

John McBride wrote an insightful article about the problems in the security of the software supply chain [6]. His main suggestion for addressing the problems is “If you are on a team that relies on some piece of open source software, allocate real engineering time to contributing”; the problem with this is that real engineering time means real money and companies don’t want to do that. Maybe having companies contribute moderate amounts of money to a foundation that hires people would be a viable option.

Tom’s Guide has an interesting article describing problems with the Tesla [7]. It doesn’t cover things like autopilot driving over children and bikers but instead covers issues with the user interface that make it less pleasant to drive and also take concentration away from the road.

The BBC has an interesting article about the way mathematical skill is correlated with the way language is used to express numbers [8]. Every country with a lesser way of expressing numbers should switch to some variation of the East-Asian way.

Science 2.0 has an interesting blog post about the JP Aerospace plans to use airships to get most of the way through the atmosphere and then a plane to get to orbit [9]. It’s a wild idea but seems plausible. The idea of going to space in balloons seems considerably scarier to me than current spacecraft.

Interesting list of red team and physical entry gear with links to YouTube videos showing how to use them [10].

The Verge has an informative summary of the way Elon mismanaged Twitter after taking it over [11].

Related posts:

  1. Links January 2021 Krebs on Security has an informative article about web notifications...
  2. Links January 2020 C is Not a Low Level Language [1] is an...
  3. Links January 2014 Fast Coexist has an interesting article about the art that...
Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 234 released

Sun, 2023-01-29 19:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 234. This version includes the following changes:

[ FC Stegerman ]
* test_text_proper_indentation requires at least file version 5.44.
  (Closes: reproducible-builds/diffoscope#329)

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets
