Planet Debian

Planet Debian - https://planet.debian.org/

Ian Jackson: What your vote is worth - a back of the envelope calculation

Sat, 2024-06-01 05:40

tl;dr: Your vote really counts!

Each vote in a UK General Election is worth maybe £100,000 - to you and all your fellow citizens taken together. If you really care about the welfare of everyone affected by actions of the UK government, then it’s worth that to you too.

Introduction

It seems a common perception that one vote, in amongst all those millions, doesn’t really matter. So maybe it’s not worth voting. But, voting is (largely) what determines what the government does - and the government is big. It’s as big as all the people.

If you are the kind of person who cares about what happens to everyone in your polity and indeed everyone its actions affect, then even your one vote is very important indeed.

A method for back of the envelope calculation

It would be nice to give a quantitative estimate. Many things in our society are measured in money, so let’s try taking a stab at calculating the money value of your vote.

The argument I’m going to make is this: the government (by which I include the legislature), which is selected by our votes, decides how to spend the national budget.

So, basically, I’m going to divide the budget, by the electorate.

UK Parliament

UK Parliamentary elections decide not only the House of Commons, but, through that, the government. The upper house, the House of Lords, has very limited influence. So I think it’s fair to regard the Parliamentary election as, simply, controlling that budget.

Being lazy, I’m going to use Wikipedia data. We have the size of the electorate, for 2019, 47.6 million. But your influence isn’t shared with the whole electorate, only with the other people who also vote. Turnout in 2019 was 67.3%. The 2019 budget isn’t listed but I’ll just average the 2018 and March 2020 figures £842bn and £873bn, so £857 billion. (Strictly speaking I should add up the budgets for the period of the Parliament, but that seems like a lot of effort.)

There’s a discrepancy in the timescale we need to account for. Your vote influences the budgets for several years, depending how long it is until the next election. Taking Wikipedia’s list of elections this century there’ve been 7 in 24 years. So that’s an average of about 3.4y.

So, multiplying it through, we have (£857b * (24 / 7)) / (47.6M * 67.3%), giving a guess at the value of your UK General Election vote:

£92,000.

European Parliament

2022 budget for the European Union (Wikipedia again) was €170.6 bn.

The last election, in 2019, had a turnout of 198,352,638. Each EU Parliament lasts 5 years.

The Parliament, however, shares responsibility for the budget with the European Council, which is controlled, ultimately, by national governments. We have to pick a numerical value for the Parliament’s share of the influence. Over the past years the Parliament has gradually been more willing to exercise its powers in this area. I’m going to arbitrarily call its share 50%.

The calculation, then, is €170.6 bn * 5 * 50% / 198M, giving a guess at the value of your EU Parliamentary Election vote:

€2150.

This much smaller figure reflects simply that the EU doesn’t spend very much money, for a polity of its size. (Those stories in the British press giving the impression that the EU is massively wasteful are, simply, lies.)

The interaction of this calculation with the Council’s share of the influence, and with national budgets, is a bit of a question, but given the much smaller amounts involved, it doesn’t seem worth thinking about that too hard.
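For anyone who wants to redo the sums or try different assumptions, here is the whole back-of-the-envelope calculation as a short Python snippet (using exactly the figures quoted above; the 50% Parliament share is, as noted, an arbitrary assumption):

# Value of one vote, using the figures quoted in the text.

uk_budget = 857e9            # GBP: average of the 2018 and March 2020 budgets
uk_years = 24 / 7            # average years between UK elections this century
uk_voters = 47.6e6 * 0.673   # 2019 electorate times 2019 turnout
print(uk_budget * uk_years / uk_voters)   # ~91,700 GBP, i.e. roughly £92,000

eu_budget = 170.6e9          # EUR: 2022 EU budget
eu_years = 5                 # length of one EU Parliament
ep_share = 0.5               # assumed Parliament share of influence (arbitrary)
eu_voters = 198_352_638      # 2019 turnout
print(eu_budget * eu_years * ep_share / eu_voters)   # ~2,150 EUR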

Only if you care about other people as much as yourself!

All of this is only true for you if you value and want to help everyone in your society. That includes immigrants, women, unemployed people, disabled people, people who are much poorer or richer than you, etc.

If you think about it in purely personal terms, your vote is hardly worth anything - because while the effect of your vote, overall, is very large, that effect is shared by everyone in your polity. So if you only care about yourself, voting is a total waste of time. The more selfish and xenophobic and racist and so on you are - caring only about people like yourself - the less your vote is worth.

This is why voting is rightly seen as a civic duty. I just spent £30 to courier my EP vote to Den Haag. That only makes sense because I’m very willing to spend that £30 to try to improve the spending of the €2000 or so that’s my share of the EU budget.

This is a very rough analysis

These calculations neglect a lot of very important things: politics isn’t just about the allocation of resources. It’s also about values, and bad politics can seriously harm people.

Arguably many of those effects of your vote are much more important than just how the budget is set and spent.

It would be interesting to see an attempt at a similar analysis but for taking into account life and death questions like hate crime, traffic violence, healthcare, refugees’ welfare, and so on. I’m not sure how to approach that. Maybe some real social scientists have done so? References welcome.

Also, even on its own terms, this analysis is very rough and ready. We haven’t modelled the ability of the government to change its tax rates; perhaps we should be multiplying GDP (or some other better measure) by the 90th percentile total tax rate amongst “countries like this one”. The amount of influence that can be wielded by one vote is probably nonlinear in the size of the political faction, but IDK in which direction. In unfair voting systems like the UK’s, some people’s votes are worth much more than others. In a very marginal constituency, which is a target seat, your vote might be worth tens of millions. In a safe seat, it might “only” be worth a few thousand. And in practical terms you don’t get to choose precisely the policies you want; you have to pick a party, which is sometimes very much a question of the lesser evil.

So, there is much I haven’t modelled. But the key point stands:

Conclusion

Although your vote is diluted by everyone else’s votes, together, we control the government, which affects us all. So if you care about the whole of society, the big numbers in the divisor, and the numerator, cancel out.

You can think of your vote as controlling one citizen’s worth of government activity.

edited 2024-06-01 09:40 Z to fix a grammar botch




Russ Allbery: Review: I Shall Wear Midnight

Sat, 2024-06-01 00:29

Review: I Shall Wear Midnight, by Terry Pratchett

Series: Discworld #38
Publisher: Harper
Copyright: 2010
Printing: 2011
ISBN: 0-06-143306-3
Format: Trade paperback
Pages: 355

I Shall Wear Midnight is the 38th Discworld novel and the 4th Tiffany Aching novel. This is not a good place to start reading.

Tiffany has finished her training and has returned to her home on the Chalk, taking up her duties as the local witch. There are a lot of those, because there's a lot that needs doing. In some cases, such as taking away the pain of the old Duke, they involve things that require magic and that only Tiffany can do. In many other cases, other people could pick up some of the work, but they lack Tiffany's sense of duty and willingness to pay attention.

The people of the Chalk have always been a bit suspicious of witches, in part because the job was done for so long by Tiffany's grandmother and no one thought she was a witch. (She was a witch.) Of late, however, that suspicion seems to be getting worse. It comes to a head when Tiffany is accused of theft and worse by the old Duke's maid, a woman with very fixed ideas about the evils of witches. Tiffany has to sort out what's going on and clear herself, all while navigating her now-awkward relationship with the Duke's son Roland, his unimpressive fiancee, and his spectacularly annoying aunt.

Ah, this is the stuff. This is exactly the Tiffany Aching novel that I have been hoping Pratchett would write. It's pure, snarky competence porn from start to finish.

"I'm a witch. It's what we do. When it's nobody else's business, it's my business."

One of the things that I adore about this series is how well Pratchett shows the different ways in which one can be a witch. Granny Weatherwax out-thinks everyone and nudges (or shoves) people in the right direction, but her natural tendency is to be icy and a bit frightening. Nanny Ogg is that person you can't help but talk to, who may seem happy-go-lucky and hedonistic but who can effortlessly change the mood of a room. And Tiffany is stubborn duty and blunt practicality, which fits the daughter of shepherds. In previous books, we've watched Tiffany as a student, learning the practicalities of being a witch. This is the book where she realizes how much she knows and how much easier the world is to navigate when she's in her own territory.

There is a wonderful scene, late in this book, where Pratchett shows Nanny Ogg at her best, doing the kinds of things that only Nanny Ogg can do. Both Tiffany and the reader are in awe.

I should have learned this, she thought. I wanted to learn fire, and pain, but I should have learned people.

And it's true that Nanny Ogg can do things that Tiffany can't. But what makes this book so great is that it shows how Tiffany's personality and her training come together with her knowledge of the Chalk. She may not know people, in general, but she knows her neighbors and how they think. She doesn't manage them the way that Nanny Ogg would; she's better at solving different kinds of problems, in different ways. But they're the right ways, and the right problems, for her home.

This is another Discworld novel with a forgettable villain that's more of a malevolent force of nature than a character in its own right. It's also another Discworld novel where Pratchett externalizes a human tendency into a malevolent force that can possess people. I have mixed feelings about this narrative approach. That externalization of evil into (in essence) demons has been repeatedly used to squirm out of responsibility and excuse atrocities, and it neatly avoids having to wrestle with the hard questions of prejudice and injustice and why apparently good people do awful things.

I think some of those weaknesses persist even in Pratchett's hands, but I think what he was attempting with that approach in this book is to show how almost no one is immune to nastier ideas that spread through society. Rather than using the externalization of evil as an excuse, he's using it as a warning. With enough exposure to those ideas, they start sounding tempting and partly credible even to people who would never have embraced them earlier. Pratchett also does a good job capturing the way prejudice can start from thoughtless actions that have more to do with the specific circumstances of someone's life than any coherent strategy.

Still, the one major complaint I have about this book is that the externalization of evil is an inaccurate portrayal of the world, and this catches up with Pratchett at the ending. Postulating an external malevolent force reduces evil to something that can be puzzled out and decisively defeated, thus resolving the problem. Sadly, this is not how humans actually work.

I'll forgive that structural flaw, though, because the rest of this book is so good. It's rare that a plot twist in a Discworld novel surprises me — twisty plots are not Pratchett's strength — but this one did. I will not spoil the surprise, but one of the characters is not quite who they seem to be, and Tiffany's reactions once she figures that out are one of my favorite parts of this book. Pratchett is making a point about assumptions, observation, and the importance of being willing to change one's mind about someone when you know more, and I thought it was very well done.

But, most of all, I enjoyed reading about Tiffany being calm, competent, determined, and capable. There's also a bit of an unexpected romance plot that's one of my favorite types: the person who notices that you're doing a lot of work and quietly steps in and starts helping while paying attention to what's needed and not taking over. And it's full of the sort of pithy moral wisdom that makes Discworld such a delight to read.

"There have been times, lately, when I dearly wished that I could change the past. Well, I can't, but I can change the present, so that when it becomes the past it will turn out to be a past worth having."

This was just what I wanted. Highly recommended.

Followed by Snuff in publication order. The next (and last, sadly) Tiffany Aching book is The Shepherd's Crown.

Rating: 9 out of 10


Russell Coker: Links May 2024 (late)

Fri, 2024-05-31 21:42

VoltageDivide has an interesting article on Unconventional Uses of FPGAs [1]. Tagline – Every sensor is a temperature sensor, nearly everything is a resistor or a conductor if you try hard enough and anything is an antenna. Datasheets are just a suggestion, and finally, often we pretend things are ideal, when they often are not.

Interesting blog post about the way npm modules that depend on everything exposed flaws in the entire npm system [2]. The conclusion should have included “use a fake name for doing unusual tests”.

Krebs on Security has an interesting article about MFA bombing [3]. Looks like Apple has some flaws in their MFA system; other companies developing MFA should learn from this.

Joey wrote an informative blog post about the Vultr hosting company wanting to extract data from VMs run for clients to train ML [4]. If your email is stored on such a VM it could be “generated” by an AI system.

John Goerzen wrote an interesting post looking at the causes of the xz issue from a high level [5].

Interesting article about self-proclaimed Autistic pro-natalists [6]. They seem somewhat abusive to their kids and are happy to associate with neo-Nazis. :(

Joey Hess wrote an interesting blog post about the possibility of further undiscovered attacks on xz [7]. Going back to an earlier version seems like a good idea.

The Guardian has an interesting article about Amazon’s 2 pizza rule and the way the company is structured [8]. It’s interesting how they did it, but we really need to have it broken up via anti-trust legislation.

John Goerzen wrote an informative post about Facebook censorship and why we should all move to Mastodon [9]. Facebook needs to be broken up under anti-trust laws.

Kobold Letters is an attack on HTML email that results in the visual representation of the email changing when it is forwarded [10]. You could have the original email hide some sections which are revealed when the recipient forwards it, for a CEO impersonation attack.

Related posts:

  1. Links January 2024 Long Now has an insightful article about domestication that considers...
  2. Links March 2024 Bruce Schneier wrote an interesting blog post about his workshop...
  3. Links April 2024 Ron Garret wrote an insightful refutation to 2nd amendment arguments...

Junichi Uekawa: June already.

Fri, 2024-05-31 20:56
June already. Thinking about what to do in Debconf.


Dirk Eddelbuettel: RcppArmadillo 0.12.8.4.0 on CRAN: Upstream Bugfix

Fri, 2024-05-31 17:57

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1151 other packages on CRAN, downloaded 34.6 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 584 times according to Google Scholar.

Conrad released a new upstream bugfix yesterday (to improve views of sparse matrices). We uploaded it yesterday too, but it once again took a day for the hard-working CRAN maintainers to concur that the two NOTEs from reverse-dependency checking over 1100 packages were in fact false positives. And so it appeared on CRAN earlier today. We also increased the versioned dependency on Rcpp to match the use of optional entry-point headers Rcpp/Light, Rcpp/Lighter and Rcpp/Lightest. No other changes were made.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.4.0 (2024-05-30)
  • Upgraded to Armadillo release 12.8.4 (Cortisol Injector)

    • Faster handling of sparse submatrix views
  • Update versioned Depends on Rcpp to 1.0.8 or later to match use of Light/Lighter/Lightest headers.

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Bits from Debian: New Debian Developers and Maintainers (March and April 2024)

Fri, 2024-05-31 12:00

The following contributors got their Debian Developer accounts in the last two months:

  • Patrick Winnertz (winnie)
  • Fabian Gruenbichler (fabiang)

The following contributors were added as Debian Maintainers in the last two months:

  • Juri Grabowski
  • Tobias Heider
  • Jean Charles Delépine
  • Guilherme Puida Moreira
  • Antoine Le Gonidec
  • Arthur Barbosa Diniz

Congratulations!


Joachim Breitner: Blogging on Lean

Fri, 2024-05-31 08:47

This blog has become a bit quiet since I joined the Lean FRO. One reason is of course that I can now improve things about Lean, rather than blog about what I think should be done (which, by contraposition, means I shouldn’t blog about what can be improved…). A better reason is that some of the things I’d otherwise write here are now published on the official Lean blog, in particular two lengthy technical posts explaining aspects of Lean that I worked on:

It would not be useful to re-publish them here, because the technology Verso behind the Lean blog, created by my colleague David Thrane Christiansen, enables fancy features such as type-checked code snippets, including output and lots of information on hover. So I’ll be content with just cross-linking my posts from here.


Petter Reinholdtsen: The 2024 LinuxCNC Norwegian developer gathering

Fri, 2024-05-31 01:45

The LinuxCNC project is still going strong. And I believe this great software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods would do even better with more in-person developer gatherings, so we plan to organise such a gathering this summer too.

The Norwegian LinuxCNC developer gathering takes place the weekend of Friday July 5th to Sunday July 7th this year, and is open to everyone interested in contributing to LinuxCNC and free software manufacturing. Up-to-date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, as well as leftover money from last year's gathering from Redpill-Linpro and NUUG Foundation, we have enough sponsor funds to pay for food, and probably also shelter, for the people traveling from afar to join us. If you would like to join the gathering, get in touch and add your details on the pad.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


Reproducible Builds (diffoscope): diffoscope 269 released

Thu, 2024-05-30 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 269. This version includes the following changes:

[ Chris Lamb ]
* Allow Debian testing continuous integration builds to fail right now.

[ Sergei Trofimovich ]
* Amend 7zip version test for older versions that include the "[64]" string. (Closes: reproducible-builds/diffoscope#376)

You can find out more by visiting the project homepage.


Matthew Palmer: GitHub's Missing Tab

Wed, 2024-05-29 20:00

Visit any GitHub project page, and the first thing you see is something that looks like this:

“Code”, that’s fairly innocuous, and it’s what we came here for. The “Issues” and “Pull Requests” tabs, with their count of open issues, might give us some sense of “how active” the project is, or perhaps “how maintained”. Useful information for the casual visitor, undoubtedly.

However, there’s another user community that visits this page on the regular, and these same tabs mean something very different to them.

I’m talking about the maintainers (or, more commonly, maintainer, singular). When they see those tabs, all they see is work. The “Code” tab is irrelevant to them – they already have the code, and know it possibly better than they know their significant other(s) (if any). “Issues” and “Pull Requests” are just things that have to be done.

I know for myself, at least, that it is demoralising to look at a repository page and see nothing but work. I’d be surprised if it didn’t contribute in some small way to maintainers just noping the fudge out.

A Modest Proposal

So, here’s my thought. What if instead of the repo tabs looking like the above, they instead looked like this:

My conception of this is that it would, essentially, be a kind of “yearbook”, that people who used and liked the software could scribble their thoughts on. With some fairly straightforward affordances elsewhere to encourage its use, it could be a powerful way to show maintainers that they are, in fact, valued and appreciated.

There are a number of software packages I’ve used recently, that I’d really like to say a general “thanks, this is awesome!” to. However, I’m not about to make the Issues tab look even scarier by creating an “issue” to say thanks, and digging up an email address is often surprisingly difficult, and wouldn’t be a public show of my gratitude, which I believe is a valuable part of the interaction.

You Can’t Pay Your Rent With Kudos

Absolutely you cannot. A means of expressing appreciation in no way replaces the pressing need to figure out a way to allow open source developers to pay their rent. Conversely, however, the need to pay open source developers doesn’t remove the need to also show those people that their work is appreciated and valued by many people around the world.

Anyway, who knows a senior exec at GitHub? I’ve got an idea I’d like to run past them…


Antoine Beaupré: Playing with fonts again

Wed, 2024-05-29 17:38


I am getting increasingly frustrated by Fira Mono's lack of italic support so I am looking at alternative fonts again.

This time I seem to be settling on either Commit Mono or Space Mono. For now I'm using Commit Mono because it's a little more compressed than Fira and does have an italic version. I don't like how Space Mono's parentheses (()) are "squarish": they feel visually ambiguous with the square brackets ([]), a big no-no for my primary use case (code).

So here I am using a new font, again. It required changing a bunch of configuration files in my home directory (which is in a private repository, sorry) and Emacs configuration (thankfully that's public!).

One gotcha is that I realized I didn't actually have a global font configuration in Emacs, as some faces define their own font family, which overrides the frame defaults.

This is what it looks like, before:

Fira Mono

After:

Commit Mono

(Notice how those screenshots are not sharp? I'm surprised too. The originals look sharp on my display; I suspect this has something to do with the Wayland transition. I've tried with both grim and flameshot, for what it's worth.)

They are pretty similar! Commit Mono feels a bit more vertically compressed, maybe too much so, actually -- the line height feels too low. But it's heavily customizable, so that's relatively easy to fix if it's really a problem. Its weight is also a little heavier and wider than Fira, which I find a little distracting right now, but maybe I'll get used to it.

All characters seem properly distinguishable, although, if I really wanted to nitpick, I'd say the © and ® are too different, with the latter (REGISTERED SIGN) being way too small, basically unreadable here. Since I see this sign approximately never, it probably doesn't matter at all.

I like how the ampersand (&) is more traditional, although I'll miss the exotic one Fira produced... I like how the back quotes (`, GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As I mentioned before, I like how the bar of the "f" aligns with the tops of the other letters, something that really annoys me in Fira Mono now that I've noticed it (it's not aligned!).

Here's the test sheet I've made up to test various characters. I could have sworn I had a good one like this lying around somewhere but couldn't find it so here it is, I guess.

ASCII test
abcdefghijklmnopqrstuvwxyz1234567890-=
ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()_+

ambiguous characters
&iIL7l1!|[](){}/\oO0DQ8B;:,./?~`'"$

all characters in a sentence, uppercase
the quick fox jumps over the lazy dog
THE QUICK FOX JUMPS OVER THE LAZY DOG

same, in french
voix ambiguë d'un cœur qui, au zéphyr, préfère les jattes de kiwis.
VOIX AMBIGUË D'UN CŒUR QUI, AU ZÉPHYR, PRÉFÈRE LES JATTES DE KIWIS.

Box drawing alignment tests:                                          █
                                                                      ▉
  ╔══╦══╗  ┌──┬──┐  ╭──┬──╮  ╭──┬──╮  ┏━━┳━━┓  ┎┒┏┑   ╷  ╻ ┏┯┓ ┌┰┐    ▊ ╱╲╱╲╳╳╳
  ║┌─╨─┐║  │╔═╧═╗│  │╒═╪═╕│  │╓─╁─╖│  ┃┌─╂─┐┃  ┗╃╄┙  ╶┼╴╺╋╸┠┼┨ ┝╋┥    ▋ ╲╱╲╱╳╳╳
  ║│╲ ╱│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╿ │┃  ┍╅╆┓   ╵  ╹ ┗┷┛ └┸┘    ▌ ╱╲╱╲╳╳╳
  ╠╡ ╳ ╞╣  ├╢   ╟┤  ├┼─┼─┼┤  ├╫─╂─╫┤  ┣┿╾┼╼┿┫  ┕┛┖┚     ┌┄┄┐ ╎ ┏┅┅┓ ┋ ▍ ╲╱╲╱╳╳╳
  ║│╱ ╲│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╽ │┃  ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋ ▎
  ║└─╥─┘║  │╚═╤═╝│  │╘═╪═╛│  │╙─╀─╜│  ┃└─╂─┘┃  ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋ ▏
  ╚══╩══╝  └──┴──┘  ╰──┴──╯  ╰──┴──╯  ┗━━┻━━┛  └╌╌┘ ╎    ┗╍╍┛ ┋  ▁▂▃▄▅▆▇█

MIDDLE DOT, BULLET, HORIZONTAL ELLIPSIS: ·•…
curly ‘single’ and “double” quotes
ACUTE ACCENT, GRAVE ACCENT: ´`
EURO SIGN: €
unicode A1-BF: ¡¢£¤¥¦§¨©ª«¬­®¯°±²³´µ¶·¸¹º»¼½¾¿
HYPHEN-MINUS, MINUS SIGN, EN, EM DASH, HORIZONTAL BAR, LOW LINE
--------------------------------------------------
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
––––––––––––––––––––––––––––––––––––––––––––––––––
——————————————————————————————————————————————————
――――――――――――――――――――――――――――――――――――――――――――――――――
__________________________________________________

So there you have it: I got completely nerd-swiped by typography again. Now I can go back to writing a too-long proposal again.

Sources and inspiration for the above:

  • the unicode(1) command, to lookup individual characters to disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a math symbol)

  • searchable list of characters and their names - roughly equivalent to the unicode(1) command, but in one page; amazingly, the /usr/share/unicode database doesn't have any one file like this

  • bits/UTF-8-Unicode-Test-Documents - full list of UTF-8 characters

  • UTF-8 encoded plain text file - nice examples of edge cases, curly quotes example and box drawing alignment test which, incidentally, showed me I needed specific faces customisation in Emacs to get the Markdown code areas to display properly, also the idea of comparing various dashes

  • sample sentences in many languages - unused, "Sentences that contain all letters commonly used in a language"

  • UTF-8 sampler - unused, similar


Russell Coker: Creating a Micro Users’ Group

Tue, 2024-05-28 18:08

Fosdem had a great lecture Building an Open Source Community One Friend at a Time [1]. I recommend that everyone who is involved in the FOSS community watches this lecture to get some ideas.

For some time I’ve been periodically inviting a few friends to visit for lunch, chat about Linux, maybe do some coding, and watch some anime between coding. It seems that I have accidentally created a micro users’ group.

LUGs were really big in the mid to late 90s and still quite vibrant in the early 2000s. But they seem to have decreased in popularity even before Covid19, and since Covid19 a lot of people have stopped attending large meetings to avoid health risks. I think that a large part of the decline of users’ groups has been due to the success of YouTube. Being able to choose from thousands of hours of lectures about computers on YouTube is a disincentive to spending the time and effort needed to attend a meeting with content that’s probably not your first choice of topic. Attending a formal meeting where someone you don’t know has arranged a lecture might not have a topic that’s really interesting to you. Having lunch with a couple of friends and watching a YouTube video that one of your friends assures you is really good is something more people will find interesting.

In recent times homeschooling [2] has become more widely known. The same factors that allow learning about computers at home also make homeschooling easier. The difference between the traditional LUG model of having everyone meet at a fixed time for a lecture and a micro LUG of a small group of people having an informal meeting is similar to the difference between traditional schools and homeschooling.

I encourage everyone to create their own micro LUG. All you have to do is choose a suitable time and place and invite some people who are interested. Have a BBQ in a park if the weather is good, meet at a cafe or restaurant, or invite people to visit you for lunch on a weekend.

Related posts:

  1. Creating a Micro Conference The TEDxVolcano The TED conference franchise has been extended to...
  2. BLUG This weekend I went to the Ballarat install-fest, mini-conf, and...
  3. Recruiting at a LUG Meeting I’m at the main meeting of Linux Users of Victoria...

Sahil Dhiman: A Late, Late Debconf23 Post

Mon, 2024-05-27 14:00

After much procrastination, I have gotten around to completing the DebConf23 (DC23), Kochi blog post. I kind of lost the original etherpad, started before DebConf23 for jotting down things, so I started afresh with whatever I could remember, months after the actual conference ended. So things might only be as accurate as my memory.

DebConf23 was the 24th annual Debian Conference, held in Infopark, Kochi, India, from the 10th to the 17th of September 2023. It was preceded by DebCamp from the 3rd to the 9th of September 2023.

The first formal bid to host DebConf in India was made during DebConf18 in Hsinchu, Taiwan by Raju Dev, but it didn’t come our way. At the next DebConf, DebConf19 in Curitiba, Brazil, with help and support from Sruthi, Utkarsh and the whole team, India got the opportunity to host DebConf22, which eventually became DebConf23 for the reasons you all know.

I initially met the local team on the sidelines of DebConf20, which was also my first DebConf. DC20 introduced me to how things work in Debian. Having recently switched to Debian, the video team’s call-for-volunteers email pulled me in. Things stuck, and I kept hanging out and helping the local Indian DC team with various stuff. We did manage to organize multiple events leading up to DebConf23, including MiniDebConf India 2021 Online, MiniDebConf Palakkad 2022, MiniDebConf Tamil Nadu 2023 and DebUtsav Kochi 2023, which gave us quite a bit of experience and workout. Many local organizers from these conferences later joined various DebConf teams during the conference to help out.

For DebConf23, I was originally part of the publicity team, because that was my usual thing, but after a team redistribution exercise Sruthi and Praveen moved me to the sponsorship team, as we didn’t have to do much publicity anyhow and sponsorship was one of those things I could get involved in remotely. The sponsorship team had to take care of raising funds by reaching out to sponsors, managing invoices and fulfillment. Praveen joined the sponsorship team as well. We had help from the international sponsorship team, Anisa, Daniel and various TOs, who took care of reaching out to international orgs, while we took care of reaching out to Indian organizations for sponsorship. It was a really proud moment when my present employer, Unmukti (makers of hopbox), came aboard as a Bronze sponsor. Fundraising seemed to be hit hard by the tech industry slowdown and layoffs, though; many of our yesteryear sponsors couldn’t sponsor.

We had biweekly local team meetings, which turned weekly as we neared the event. This was in addition to the bi-weekly global team meeting.

Pathu, DebConf23 mascot

To describe the venue: the conference happened in Infopark, Kochi, with the main conference hall being Athulya Hall, and food, accommodation and two smaller halls in the Four Points Hotel, right outside Infopark. We got the Athulya Hall as part of venue sponsorship from Infopark. The distance between the two was around 300 meters. Halls were named Anamudi, Kuthiran and Ponmudi after hills and mountain areas in the host state of Kerala. Other than Anamudi, which was the main hall, I couldn’t remember the names of the halls; I still can’t. Four Points was big and expensive, and we had, as expected, cost overruns. Due to how DebConf functions, an Indian university wasn’t suitable to host a conference of this scale.

Four Point's Infinity Pool at Night

I landed in Kochi on the first day of DebCamp, 3rd September. As usual, I met Abraham first, and the better part of the next hour was spent on meet and greet. It was my first IRL DebConf, so I met many old friends and new folks. I got a room to myself. Abraham lived nearby and hadn’t taken the accommodation, so I asked him to join; he finally did from the second day onwards. All through the conference, room 928 became in-famous for various reasons, and I had various roommates for company. During the DebCamp days, we would get up for breakfast, go back to sleep, and only get active past lunch, hacking and helping in the hack lab for the day, followed by fun late-night discussions and parties.

Nilesh, Chirag and Apple at DC23

The team even managed to arrange a press conference, and we got an opportunity to go to the Press Club, Ernakulam. Sruthi and Jonathan gave the speech and answered questions from journalists, and the event got media coverage as a result.

Ernakulam Press Club

Every night, the team used to have 9 PM meetings for retrospection and planning for the next day, which were always dotted with new problems. Every day, we used to hijack the Silent Hacklab for the meeting and gently ask the only people there at the time to give us space.

DebConf in itself is a well-oiled machine. The network was brought up from scratch. The video team built the recording, audio mixing, live-streaming, editing and transcoding infrastructure on site. A gaming rig served as router and gateway. We got a dual internet connection: a 1 Gbps sponsored leased line from Kerala Vision and a paid backup 100 Mbps connection from a different provider. IPv6 was added through HE’s Tunnelbroker. Overall the network worked fine, as we additionally had the hotel Wi-Fi, so the conference network wasn’t stretched much. I must highlight that DebConf is my only conference where almost everything, and every piece of software, is developed in-house for the conference and modified according to need on the fly. Even the event recording cameras, audio check, direction, recording and editing are all done with in-house software by volunteer attendees (in some cases remote ones as well), all trained on the sidelines of the conference. The core recording and mixing equipment is owned by Debian and travels to each venue. The rest is sourced locally.

Gaming Rig which served as DC23 gateway router

It was fun seeing how almost all the things were coordinated over text on IRC. If a talk/event was missing a talkmeister, a director or a camera person, a quick text on the #debconf channel would be enough for someone to volunteer. The video team had a dedicated support channel for each conference venue for any issues and was quick to respond and fix stuff.

Network information. Screengrab from closing ceremony

It rained for the initial days, which gave us cool weather. The swag team had decided to hand out umbrellas in the swag kit, which turned out to be quite useful. The swag kit was praised for quality and selection - many thanks to Anupa, Sruthi and others. It was fun wearing different color T-shirts, all designed by Abraham: red for volunteers, light green for the video team, green for the core team i.e. staff, and yellow for conference attendees.

With highvoltage

We were already acclimatized by the time DebConf really started, as we had been talking, hacking and hanging out for the last 7 days, but the rush really started with the start of DebConf. More people joined on the first and second day of the conference. As has been the tradition, an opening talk was prepared by Sruthi and the local team (and I highly recommend getting more insights into the process). DebConf day 1 also saw the job fair, where Canonical and FOSSEE, IIT Bombay had stalls for community interactions, which, judging by the crowd, turned out to be quite a hit.

For me, association with DebConf (and Debian) started with volunteering for the video team, so I was going to continue doing that this conference as well. I usually volunteer for talks/events I’m interested in anyhow. Handling the camera, talkmeister-ing and direction are fun activities, though I didn’t do sound this time around. Sound seemed difficult, and I didn’t want to spoil someone’s stream and recording. Talk attendance varied a lot: for the Bits from the DPL talk the hall was full, but for some there were barely enough people to handle the volunteering tasks. That’s what usually happens, though. DebConf is more of a place to come together and collaborate, so talk attendance is sometimes an afterthought.

Audience in highvoltage's Bits from DPL talk

I didn’t submit any talk proposals this time around, as just being in the orga team was too much work already, and I knew the talk preparation would get delayed to the last moment and I would have to rush through it.

Enrico's talk

From Day 2 onward, more sponsor stalls were introduced in the hallway area: Hopbox by Unmukti, MostlyHarmless and Deeproot (joint stall), and FOSSEE. The MostlyHarmless stall had nice mechanical keyboards and other fun gadgets. Whenever I got the time, I would go and do some typing races to enjoy the nice, clicky keyboards.

As the DebConf tradition dictates, we had a Cheese and Wine party. Everyone brought in cheese and other delicacies from their region. Then there was yummy Sadya. Sadya is a traditional vegetarian Malayali lunch served on banana leaves. There were loads of different dishes served, the names of most of which I couldn’t pronounce or recollect properly, but everything was super delicious.

Day four was the day trip, and I chose to go to Athirappilly Waterfalls and the jungle safari. Pictures describe the beauty better than words could. The journey was a bit long though.

Athirappilly Falls
Tea Gardens

Late that day, we heard the news that Abraham had gone missing. We lost Abraham. He had worked really hard all through the years for Debian and for making this conference happen. Talks were cancelled for the next day and Jonathan addressed everyone. We went to Abraham’s home the next day to meet his family; the team had arranged buses to Abraham’s place. It is unfortunate that I only got an opportunity to visit his place after he was gone.

Days went by slowly after that. The last day was marked by a small conference dinner. Some of the people had already left. All through that day and the next, we kept saying goodbye to friends with whom we had spent almost a fortnight together.

Group photo with all DebConf T-shirts chronologically

This was my 2nd trip to Kochi. Vistara Airways’ UK886 has become the default flight now. I have almost learned how to travel in and around Kochi by metro, water metro, airport shuttle and auto. Things are quite accessible in Kochi, but the metro is a bit expensive compared to Delhi. I left Kochi on the 19th. My flight out was due to leave around 8 PM, so I had the whole day and nothing to do. A direct option would have taken less than 1 hour, but as I had time, I chose to take the long way to the airport. I took an auto rickshaw to Kakkanad Water Metro station, then the water metro to Vyttila Water Metro station. Vyttila serves as an intermobility hub which connects water metro, metro and bus in one place. I switched to the metro at Vyttila Metro station and rode to Aluva Metro station. There, I had lunch and then boarded the airport feeder bus to reach Kochi Airport. All in all, I did auto rickshaw > water metro > metro > feeder bus to reach the airport. It was fun and scenic. I must say, public transport and intermodal integration is quite good, and one can transition seamlessly from one mode to the next.

Kochi Water Metro
Scenes from Kochi Water Metro

DebConf23 served its purpose of getting existing Debian people together, as well as getting new people interested and contributing to Debian. People who came are still contributing to Debian, and that’s amazing.

Streaming video stats. Screengrab from closing ceremony

The conference wasn’t without its fair share of trouble. There were multiple money transfer woes, and being in India didn’t help. Many thanks to the multiple organizations who were proactive in helping out. On top of this, there was conference visa uncertainty and other issues which troubled the visa team a lot.

Kudos to everyone who made this possible. Surely, I’m going to miss names, so thank you all - you know how much you have done to make this event possible.

Now, DebConf24 is scheduled for Busan, South Korea, and work is already in full swing. As usual, I’m helping with the fundraising part and plan to attend too. Let’s see if I can make it or not.

DebConf23 Group Photo.
Credit - Aigars Mahinovs

In the end, we kept on saying, no DebConf at this scale would come back to India for the next 10 or 20 years. It’s too much trouble to be frank. It was probably the peak that we might not reach again. I would be happy to be proven wrong though :)


Russell Coker: USB-A vs USB-C

Sun, 2024-05-26 18:31

USB-A is the original socket for USB at the PC end. There are 2 variants of it: the first is for USB 1.1 to USB 2, and the second is for USB 3, which adds extra pins in a plug- and socket-compatible manner – you can plug a USB-A device into a USB-A socket without worrying about the speeds of each end, as long as you don’t need USB 3 speeds.

The differences between USB-A and USB-C are:

  1. USB-C has the same form factor as Thunderbolt and the Thunderbolt protocol can run over it if both ends support it.
  2. USB-C generally supports higher power modes for charging (like 130W for Dell laptops, monitors, and plugpacks) but there’s no technical reason why USB-A couldn’t do it. You can buy chargers that do 60W over USB-A which could power one of our laptops via a USB-A to USB-C cable. So high power USB-A is theoretically possible but generally you won’t see it.
  3. USB-C has “DisplayPort alternate mode” which means using some of the wires for DisplayPort.
  4. USB-C is more likely to support the highest speeds than USB-A sockets for “super speed” etc. This is not a difference in the standards just a choice made by manufacturers.

While USB-C tends to support higher power delivery modes in actual implementations, for connecting to a PC the PC end seems to only support lower power modes regardless of port. I think it would be really good if workstations could connect to monitors via USB-C and provide power, DisplayPort, and keyboard, mouse, etc over the same connection. But unfortunately the PC and monitor ends don’t appear to support such things.

If you don’t need any of those benefits in the list above (IE you are using USB for almost anything we do other than connecting a laptop to a dock/monitor/charger) then USB-A will do the job just as well as USB-C. The choice of which type to use should be based on price and which ports are available, EG My laptop has 2*USB-C ports and 2*USB-A so given that one USB-C port is almost always used for the monitor or for charging I don’t really want to use USB-C for anything else to avoid running out of ports.

When buying USB devices you can’t always predict which systems you will need to connect them to. Currently there are a lot of systems without USB-C that are working well and have no need to be replaced. I haven’t yet seen a system where the majority of ports are USB-C but that will probably happen in the next few years. Maybe in 2027 there will be PCs on sale with only two USB-A sockets, forcing people who don’t want to use a USB hub to save both of them for keyboard and mouse. Currently USB-C keyboards and mice are available on AliExpress but they are expensive and I haven’t seen them in Australian stores. Most computer users don’t wear out keyboards or mice, so a lot of USB-A keyboards and mice will be in service for a long time. As an aside there are still many PCs with PS/2 keyboard and mouse ports in service, so these things don’t go away for a long time.

There is one corner case where USB-C is convenient which is when you want to connect a mass storage device for system recovery or emergency backup, want a high speed, and don’t want to spend time figuring out which of the ports are “super speed” (which can be difficult at the back of a PC with poor lighting). With USB-C you can expect a speed of at least 5Gbit/s and don’t have to worry about accidentally connecting to a USB 2 port as is the situation with USB-A.

For my own use the only times that I prefer USB-C over USB-A are for devices to connect to phones. Eventually I’ll get a laptop that only has USB-C ports and this will change, but even then adaptors are possible.

For someone who doesn’t know the details of how things works it’s not unreasonable to just buy the newest stuff and assume it’s better as it usually is. But hopefully blog posts like this can help people make more informed decisions.

Related posts:

  1. Dell 32″ 4K Monitor and DisplayPort Switch After determining that the Philips 43″ monitor was too large...
  2. Cheap Peripherals for Work A problem with a lot of the purchase of peripherals...
  3. Thinkpad X1 Carbon Gen 6 In February I reviewed a Thinkpad X1 Carbon Gen 1...

Guido Günther: Don't unblank in my back pack please

Sun, 2024-05-26 06:39
Since phoc 0.39.0 it is possible to configure which keys unidle your phone (which results in unblanking the screen). The current default is that all keys unblank, which is usually fine for e.g. laptops but not the desired result for phones and tablets, where this depends on the position and function of keys. Volume keys and other exposed keys usually shouldn’t unblank - maybe with the exception of some Home buttons on devices that have those.

Gunnar Wolf: How computers make books • from graphics rendering, search algorithms, and functional programming to indexing and typesetting

Fri, 2024-05-24 20:11
This post is a review for Computing Reviews for How computers make books • from graphics rendering, search algorithms, and functional programming to indexing and typesetting, a book published by Manning.

If we look at the age-old process of creating books, how many different areas can a computer help us with? And how can each of them be used to teach computer science (CS) fundamentals to a nontechnical audience? This is the premise of John Whitington’s enticing book and the result is quite amazing.

The book immediately drew my attention when looking at the titles available for review. After all, my initiation into computing as a kid was learning the LaTeX typesetting system while my father worked on his first book on scientific language and typography [1]. Whitington picks 11 different technical aspects of book production, from how dots of ink are transferred to a white page and how they are made into controllable, recognizable shapes, all the way to forming beautiful typefaces and the nuances of properly addressing white-space to present aesthetically pleasing paragraphs, building it all into specific formats aimed at different ends.

But if we dig beyond just the chapter titles, we will find a very interesting book on CS that, without ever using technical language or notation, presents aspects as varied as anti-aliasing, vector and raster images, character sets such as ASCII and Unicode, an introduction to programming, input methods for different writing systems, efficient encoding (compression) methods, both for text and images, lossless and lossy, and recursion and dithering methods. To my absolute surprise, while the author thankfully spared the reader the syntax usually associated with LISP-related languages, the programming examples clearly stem from the LISP school, presenting solutions based on tail recursion. Of course, it is no match for Donald Knuth’s classic book on this same topic [2], but could very well be a primer for readers to approach it.

The book is light and easy to read, and keeps a very informal, nontechnical tone throughout. My only complaint relates to reading it in PDF format; the topic of this book, and the care with which the images were provided by the author, warrant high resolution. The included images are not only decorative but an integral part of the book. Maybe this is specific to my review copy, but all of the raster images were in very low resolution.

This book is quite different from what readers may usually expect, as it introduces several significant topics in the field. CS professors will enjoy it, of course, but also readers with a humanities background, students new to the field, or even those who are just interested in learning a bit more.

References
  1. Sánchez y Gándara, A.; Magariños Lamas, F.; Wolf, K. B., Manual de lenguaje y tipografía científica en castellano. Trillas, Mexico City, Mexico, 1986, https://www.fis.unam.mx/~bwolf/manual.html

  2. Knuth, D. E., Digital typography. CSLI Lecture Notes. CSLI Publications, Stanford, CA, 1999, https://www-cs-faculty.stanford.edu/~knuth/dt.html


Julian Andres Klode: Observations in Debian dependency solving

Fri, 2024-05-24 04:57

In my previous blog, I explored The New APT 3.0 solver. Since then I have been at work in the test suite making tests pass and fixing some bugs.

You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don’t actually have any pure literals in there). We can control it in a bunch of ways:

  1. We can mark packages as “install” or “reject”
  2. We can order actions/clauses. When backtracking the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.

This is about all that we really want to do. We can’t, if we reach a conflict, go and say “oh, but this conflict was introduced by that upgrade, and it seems more important, so let’s not backtrack on the upgrade request but on this dependency instead.”

This forces us to think about lowering the dependency problem into this form, such that not only do we get formally correct solutions, but also semantically correct ones. This is nice because we can apply a systematic way to approach the issue rather than introducing ad-hoc rules in the old solver which had a “which of these packages should I flip the opposite way to break the conflict” kind of thinking.
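To make this concrete, here is a minimal sketch of such a naive solver in Python. This is purely illustrative and not APT’s actual code or data model; it just shows the three control knobs above: forced choices are clauses with a single alternative, clauses are processed in order, and alternatives are tried left to right with chronological backtracking on conflict.

def solve(clauses, conflicts, chosen=None):
    # clauses: a list of alternatives (each a list of "pkg=version"
    # strings), processed in order. conflicts: a set of frozensets
    # whose members must never all be chosen together.
    # Returns a set of choices, or None if unsatisfiable.
    chosen = set() if chosen is None else chosen
    if not clauses:
        return chosen
    head, rest = clauses[0], clauses[1:]
    if any(c in chosen for c in head):       # clause already satisfied
        return solve(rest, conflicts, chosen)
    for candidate in head:                   # try alternatives left to right
        trial = chosen | {candidate}
        if any(pair <= trial for pair in conflicts):
            continue                         # conflict: try the next choice
        result = solve(rest, conflicts, trial)
        if result is not None:
            return result
    return None                              # chronological backtracking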

Now our test suite has a whole bunch of these semantics encoded in it, and I’m going to share some problems and ideas for how to solve them. I can’t wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let’s be honest).

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade that essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but it (unlike its apt-get counterpart) allows the solver to install new packages.

Now, consider the following package is installed:

X Depends: A (= 1) | B

An upgrade from A=1 to A=2 is available. What should happen?

The classic solver would choose to remove X in a dist-upgrade, and then upgrade A, so its answer is quite clear: keep back the upgrade of A.

The new solver however sees two possible solutions:

  1. Install B to satisfy X Depends A (= 1) | B.
  2. Keep back the upgrade of A

Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So

  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1) and sees it is satisfied already and is content.

  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
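Fed through the toy solver sketched earlier, the two orderings reproduce exactly this behavior (same illustrative encoding; installing two versions of A at once is modelled as a conflict):

conflicts = {frozenset({"A=1", "A=2"})}    # at most one version of A

# Dependency seen first: A=1 is picked, the upgrade clause is
# then already satisfied, so the upgrade is kept back.
dep_first = [["A=1", "B"], ["A=2", "A=1"], ["X"]]
print(solve(dep_first, conflicts))   # {'A=1', 'X'} (set order may vary)

# Upgrade seen first: A=2 is picked, so the dependency falls back to B.
upg_first = [["A=2", "A=1"], ["A=1", "B"], ["X"]]
print(solve(upg_first, conflicts))   # {'A=2', 'B', 'X'}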

We have two ways to approach this issue:

  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade, a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.
Recommends are hard too

See, if you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases.

But let’s explore what the behavior of an X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:

  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)

This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, “promotions”:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.

This neatly solves the problem for us. We will never break Recommends that are satisfied.

Likewise, we already have a Recommends demotion rule:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).
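As a hedged sketch (names and data shapes here are invented for illustration, not APT’s API), both rules could be expressed as a single predicate that computes the effective strength of a Recommends against the currently installed set:

def effective_kind(kind, alternatives, installed):
    # Promote a currently satisfied Recommends to a hard Depends;
    # demote an unsatisfied one so it is treated like a Suggests.
    if kind != "Recommends":
        return kind
    if any(alt in installed for alt in alternatives):
        return "Depends"       # promotion: must stay satisfied
    return "Suggests"          # demotion: not further evaluated

installed = {"A=1", "X"}
print(effective_kind("Recommends", ["A=1"], installed))   # Depends
print(effective_kind("Recommends", ["C=1"], installed))   # Suggests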

Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn’t autoremove them, but treat them as optional?

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like

X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B

In both cases, installing X should upgrade an A < 2 rather than install B. But a naive SAT solver might not: if your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1), and when it sees the Depends/Recommends it will switch to B.

We can solve this again as in the previous example by ordering the “keep A installed” requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.
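Run through the toy solver from above, the two orderings show the effect (illustrative only):

conflicts = {frozenset({"A=1", "A=2"})}    # one version of A at a time

# Dependency ordered first: A=2 is picked and also satisfies
# the “keep A installed” clause, so A is upgraded as intended.
deps_first = [["A=2", "B"], ["A=1", "A=2"]]
print(solve(deps_first, conflicts))   # {'A=2'}

# “Keep A installed” ordered first: A=1 is picked, so the versioned
# dependency has to fall back to B, the behavior we want to avoid.
keep_first = [["A=1", "A=2"], ["A=2", "B"]]
print(solve(keep_first, conflicts))   # {'A=1', 'B'}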

version narrowing instead of version choosing

A different approach to the issue of version selection is to not select a version until the very last moment. So rather than selecting a version to satisfy A (>= 2), we translate

Depends: A (>= 2)

into two rules:

  1. The package selection rule:

    Depends: A

    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2) in an example with two versions of A).

  2. The version narrowing rule:

    Conflicts: A (<< 2)

    This would outright reject a choice of A (= 1).

So now we have 3 kinds of clauses:

  1. package selection
  2. version narrowing
  3. version selection

If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions.
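
As a sketch, assuming plain integer versions purely for illustration, the translation could look like this:

def translate(package, versions, minimum):
    """Translate 'Depends: package (>= minimum)' into the two rules."""
    # 1. Package selection: the version choice clause.
    selection = [f"{package} (= {v})" for v in sorted(versions, reverse=True)]
    # 2. Version narrowing: outright reject too-old versions.
    narrowing = [f"Conflicts: {package} (= {v})" for v in versions if v < minimum]
    return selection, narrowing

sel, nar = translate("A", [1, 2], minimum=2)
print(" | ".join(sel))  # A (= 2) | A (= 1)     -- any version of A will do
print(nar)              # ['Conflicts: A (= 1)'] -- narrows the choice
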

This still leaves one issue: What if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He’d expect us to fall back to B if A (= 3) is not installable, and not to A (= 2). But we’d like to enqueue A and reject all choices other than 3 and 2. I think it’s fair to say: “Don’t do that, then” here.

Implementing strict pinning correctly

APT knows a single candidate version per package, which makes the solver relatively deterministic: it will only ever pick the candidate or an installed version. This also happens to significantly reduce the search space, which is good - less backtracking. An up-to-date system will only ever have one installable version per package, so we never actually have to choose versions.

But of course, APT allows you to specify a non-candidate version of a package to install, for example:

apt install foo/oracular-proposed

The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you can see with apt-cache policy.

The solver, however, currently validates allowed version choices against the policy directly, hence finds that these versions are not allowed, and craps out. This is an interesting problem, because the solver should not depend on the pkgDepCache: its initialization (Building dependency tree...) accounts for about half of APT’s runtime (up to the Y/n prompt), and I’d really like to get rid of it.

But currently the frontend does go via the pkgDepCache: it marks the packages in there, building up what you could call a transaction, and we then translate that into the new solver; once the solver is done, the result is translated back into the pkgDepCache.

The current implementation of “allowed versions” works by reducing the search space: in every dependency, we outright ignore any non-allowed versions. So if version 3 of A is not allowed, a Depends: A would be translated into A (= 2) | A (= 1).

However, this has two disadvantages: (1) if we show you why A could not be installed, you don’t even see A (= 3) in the list of choices, and (2) you need to keep the pkgDepCache around for the temporary overrides.

So instead of enforcing the allowed-version rule by filtering, a more reasonable model is to apply it by marking every other version as not allowed when discovering the package in the translation layer from the pkgDepCache. This doesn’t really increase the search space either, but it solves both problems: overrides work, and you get a reasonable error message that lists all versions of A.
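
A sketch of the difference between the two models (toy data, not APT's structures):

versions_of_A = [3, 2, 1]
allowed = {2, 1}  # say version 3 is excluded by strict pinning

# Filtering model: the clause shrinks, so A (= 3) can never show up
# in an error message explaining why A was not installable.
filtered_clause = [v for v in versions_of_A if v in allowed]

# Marking model: keep the full clause, but assert the disallowed
# versions as rejected when the package is first discovered.
full_clause = list(versions_of_A)
rejected = [v for v in versions_of_A if v not in allowed]

print(filtered_clause)        # [2, 1]        -- A (= 3) is invisible
print(full_clause, rejected)  # [3, 2, 1] [3] -- visible, but rejected
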

Pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group

A | B | C | D

we try them in order, and if one fails, we undo everything it did, and move on to the next one. However, this isn’t perhaps the best choice of operation.

I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared in all versions) when marking a package for install, but we don’t do this here: We have already lowered the representation of the dependency group into a list of versions, so we’d need to extract the package back out of it.

This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:

  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A

Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway).
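
As an illustration of this staged enqueueing, with dependencies given as plain sets (a toy model, not APT's representation):

deps = {
    "A": {"libc", "libfoo", "libbar"},
    "B": {"libc", "libfoo"},
    "C": {"libc", "libfoo", "libbaz"},
    "D": {"libc", "libqux"},
}

remaining = ["A", "B", "C", "D"]
while remaining:
    common = set.intersection(*(deps[p] for p in remaining))
    print(f"decision level for {remaining}: enqueue {sorted(common)}")
    remaining.pop()  # decide not to install the last alternative right now
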

The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly

#A * (#B+#C+#D)

Checking a single dependency group (i.e. “is X|Y in B?”) meanwhile has linear cost: we need to compare the memory contents of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X count as different dependencies, of course, but that is to be expected – they are. Any dependency group with the same alternatives in the same order will have the same memory layout.

So really the cost is roughly N^4. This isn’t nice.

You can apply various heuristics here on how to improve that, or you can even apply binary logic:

  1. Enqueue common dependencies of A|B|C|D
  2. Move into the left half, enqueue the common dependencies of A|B
  3. Again divide and conquer and select A.

This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one.
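
A sketch of the binary variant (again with toy dependency sets): halve the list of alternatives, enqueueing the common dependencies of each half on the way down to the first, preferred choice.

def descend(alternatives, deps):
    while len(alternatives) > 1:
        common = set.intersection(*(deps[p] for p in alternatives))
        print(f"enqueue common deps of {alternatives}: {sorted(common)}")
        alternatives = alternatives[: (len(alternatives) + 1) // 2]
    print(f"select {alternatives[0]}")

descend(["A", "B", "C", "D"], {
    "A": {"libc", "libfoo"},
    "B": {"libc", "libfoo"},
    "C": {"libc"},
    "D": {"libc"},
})
# enqueue common deps of ['A', 'B', 'C', 'D']: ['libc']
# enqueue common deps of ['A', 'B']: ['libc', 'libfoo']
# select A
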

Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.


Freexian Collaborators: Discover release 0.3.0 of the debusine software factory (by Colin Watson)

Thu, 2024-05-23 20:00

Debusine is a Free Software project developed by Freexian to manage scheduling and distribution of Debian-related tasks to a network of worker machines. It was started some time back, but its development pace has recently increased significantly thanks to funding from the Sovereign Tech Fund. You can read more about it in its documentation.

For more background, Enrico Zini and Carles Pina i Estany gave a talk on Debusine in November 2023 at the mini-DebConf in Cambridge.

We described the work from our first funded milestone in a post to debian-devel-announce in March.

We’ve recently finished work on our second funded milestone, culminating in releasing version 0.3.0 to unstable. Our focus on this milestone was on new building blocks to allow us to automatically orchestrate QA tasks in bulk. Full details are in our release history document. As usual, debusine.debian.net is up to date with our latest work.

Collections

In the previous milestone, debusine could store artifacts and run tasks against those artifacts. However, on its own this required the user to do a lot of manual work, because the only way to refer to an artifact was by its ID.

We now have the concept of a collection, which can store references to other artifacts (or indeed to other collections) with some attached metadata. These are structured by category, so for example a debian:suite collection contains references to source and binary package artifacts with their names, versions, and architectures as metadata. This allows us to look up artifacts using a simple query language instead of just by ID.

At the moment, the main visible effect of this is that our Getting started with debusine tutorial no longer needs users of debusine.debian.net to create their own build environments before being able to submit other work requests: they can refer to existing environments using something like debian/match:codename=trixie:variant=sbuild instead.

We also have a basic user interface allowing you to browse existing collections, accessible via the relevant workspace (such as the default System workspace).

Workflows

We’ve always known that individual tasks were just a starting point: real-world requirements often involve chaining many tasks together, as many Debian developers already do using the Salsa CI pipeline. debusine intends to approach a similar problem from a different angle, defining common workflows that can be applied at the scale of a whole distribution without being tightly coupled to where each package’s code is hosted.

In time we intend to define a way for users to specify their own workflows, but rather than getting too bogged down in this we started by building a couple of predefined workflows into debusine. The update_environments workflow is used to create multiple build environments in bulk, while the sbuild workflow builds a source package for all the architectures that it supports and for which debusine has workers. (debusine.debian.net currently has amd64 and arm64 workers, supporting the amd64, arm64, armel, armhf, and i386 architectures between them.)

Upcoming work will build on this by adding more workflows that chain tasks together in various ways, such as workflows that build a package and run QA tasks on the results, or a workflow that builds a package and uploads the result to an upload queue.

Next steps

Our next planned milestone involves expanding debusine’s capability as a build daemon. For that, we already know that there are a number of specific extra workflow steps we need to add, and we’ve reached out to some members of Debian’s buildd team to ask for feedback on what they consider necessary. We hope to be able to replace some of Freexian’s own build infrastructure with debusine in the near future.


Reproducible Builds (diffoscope): diffoscope 268 released

Thu, 2024-05-23 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 268. This version includes the following changes:

[ Chris Lamb ]
* Drop apktool from Build-Depends; we can still test our APK code via autopkgtests. (Closes: #1071410)
* Fix tests for 7zip version 24.05.
* Add a versioned dependency for at least version 5.4.5 for the xz tests; they fail under (at least) xz 5.2.8. (Closes: reproducible-builds/diffoscope#374)

[ Vagrant Cascadian ]
* Relax Chris' versioned xz test dependency (5.4.5) to also allow version 5.4.1.

You can find out more by visiting the project homepage.


Evgeni Golov: Upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

Wed, 2024-05-22 15:19

Warning to the Planet Debian readers: the following post might shock you, if you're used to Debian's smooth upgrades using only the package manager.

Leapp?!

Contrary to distributions like Debian and Fedora, RHEL can't be upgraded using the package manager alone.

Instead there is a tool called Leapp that takes care of orchestrating the update and also includes a set of checks whether a system can be upgraded at all. Have a look at the RHEL documentation about upgrading if you want more details on the process itself.

You might have noticed that the title of this post says "CentOS Stream" but here I am talking about RHEL. This is mostly because Leapp was originally written with RHEL in mind.

Upgrading CentOS 7 to EL8

When people started pondering upgrading their CentOS 7 installations, AlmaLinux started the ELevate project to allow upgrading CentOS 7 to CentOS Stream 8 but also to AlmaLinux 8, Rocky 8 or Oracle Linux 8.

ELevate was essentially Leapp with patches to allow working on CentOS, which has different package signature keys, different OS release versioning, etc.

Sadly these patches were never merged back into Leapp.

Making Leapp work with CentOS Stream 8 (and other distributions)

At some point I noticed that things weren't moving and EL8 to EL9 upgrades were coming closer (and I had my own systems that I wanted to be able to upgrade in place).

Annoyed-Evgeni-Development is best development? Not sure, but it produced a set of patches that allowed some movement:

However, this is not yet the end of the story. At least “convert dot-less CentOS versions to X.999” is open, and another followup would be needed if we go that route. But I don't expect this to be merged soon, as the patch is technically wrong - yet it makes things mostly work.

The big problem here is that CentOS Stream doesn't have X.Y versioning, just X as it's a constant stream with no point releases. Leapp however relies on X.Y versioning to know which package changes it needs to perform. Pretending CentOS Stream 8 is "RHEL" 8.999 works if you assume that Stream is always ahead of RHEL.

This is however a CentOS only problem. I still need to properly test that, but I'd expect things to work fine with upstream Leapp on AlmaLinux/Rocky if you feed it the right signature and repository data.

Actually upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

Like I've already teased in my HPE rant, I've actually used that code to upgrade virt01.conova.theforeman.org to CentOS Stream 9. I've also used it to upgrade a server at home that's responsible for running important containers like Home Assistant and UniFi. So it's absolutely battle tested and production grade! It's also hungry for kittens.

As mentioned above, you can't just use upstream Leapp, but I have a Copr: evgeni/leapp.

# dnf copr enable evgeni/leapp
# dnf install leapp leapp-upgrade-el8toel9

Apart from the software, we'll also need to tell it which repositories to use for the upgrade.

# vim /etc/leapp/files/leapp_upgrade_repositories.repo
[c9-baseos]
name=CentOS Stream $releasever - BaseOS
metalink=https://mirrors.centos.org/metalink?repo=centos-baseos-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1

[c9-appstream]
name=CentOS Stream $releasever - AppStream
metalink=https://mirrors.centos.org/metalink?repo=centos-appstream-9-stream&arch=$basearch&protocol=https,http
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
gpgcheck=1
repo_gpgcheck=0
metadata_expire=6h
countme=1
enabled=1

Depending on the setup and installed packages, more repositories might be needed. Just make sure that the $stream substitution is not used as Leapp doesn't override that and you'd end up with CentOS Stream 8 repos again.

Once all that is in place, we can call leapp preupgrade and let it analyze the system.

Ideally, the output will look like this:

# leapp preupgrade
…
============================================================
                      REPORT OVERVIEW
============================================================

Reports summary:
    Errors:                      0
    Inhibitors:                  0
    HIGH severity reports:       0
    MEDIUM severity reports:     0
    LOW severity reports:        3
    INFO severity reports:       3

Before continuing consult the full report:
    A report has been generated at /var/log/leapp/leapp-report.json
    A report has been generated at /var/log/leapp/leapp-report.txt

============================================================
                   END OF REPORT OVERVIEW
============================================================

But trust me, it won't ;-)

As mentioned above, Leapp analyzes the system before the upgrade. Some checks can completely inhibit the upgrade, while others will just be logged as "you'd better have a look".

Firewalld Configuration AllowZoneDrifting Is Unsupported

EL7 and EL8 shipped with AllowZoneDrifting=yes, but since EL9 this is not supported anymore. As this can potentially break the networking of the system, the upgrade gets inhibited.

Newest installed kernel not in use

Admit it, you also don't reboot into every new kernel available! Well, Leapp won't let that pass and inhibits the upgrade.

Cannot perform the VDO check of block devices

In EL8 there are two ways to manage VDO: using the dedicated vdo tool and via LVM. If your system uses LVM (it should!) but not VDO, you probably don't have the vdo package installed. But then Leapp can't check if your LVM devices really aren't VDO without the vdo tooling and will inhibit the upgrade. So you gotta install vdo for it to find out that you don't use VDO…

LUKS encrypted partition detected

Yeah. Sorry. Using LUKS? Straight into the inhibit corner!

But hey, if you don't use LUKS for / you can probably get away with deleting the inhibitwhenluks actor. That worked for me, but remember the kittens!

Really upgrading CentOS Stream 8 to CentOS Stream 9 using Leapp

The headings are getting silly, huh?

Anyway, once leapp preupgrade is happy and doesn't throw any inhibitors anymore, the actual (real?) upgrade can be done by calling leapp upgrade.

This will download all necessary packages and create an intermediate initramfs that contains all the things needed for the upgrade and ask you to reboot.

Once booted, the upgrade itself takes somewhere between 5 and 10 minutes. Then another minute or 5 to relabel your disks with the new SELinux policy.

And three reboots (into the upgrade initramfs, into SELinux relabel, into real OS) of a ProLiant DL325 - 5 minutes each? 😿

And then for good measure another one, to flip SELinux from permissive to enforcing.

Are we done yet? Nope.

There are a few post-upgrade tasks you get to do yourself. Yes, the switching of SELinux back to enforcing is one of them. Please don't forget it.

Using the system after the upgrade

A customer once said "We're not running those systems for the sake of running systems, but for the sake of running some application on top of them". This is very true.

libvirt doesn't support Spice/QXL

In EL9, support for Spice/QXL was dropped, so if you try to boot a VM using it, libvirt will nicely error out with

Error starting domain: unsupported configuration: domain configuration does not support video model 'qxl'

Interestingly, because multiple parts of the VM are invalid, you can't edit it in virt-manager (at least the one in Fedora 39) as removing/fixing one part requires applying the new configuration which is still invalid.

So virsh edit <vm> it is!

Look for entries like

<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
  <address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<graphics type='spice' autoport='yes'>
  <listen type='address'/>
</graphics>
<audio id='1' type='spice'/>
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
  <address type='usb' bus='0' port='2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
  <address type='usb' bus='0' port='3'/>
</redirdev>

and either just delete them or (better) replace them with VNC/cirrus:

<graphics type='vnc' port='-1' autoport='yes'>
  <listen type='address'/>
</graphics>
<audio id='1' type='none'/>
<video>
  <model type='cirrus' vram='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>

Podman needs re-login to private registries

One of the machines I've updated runs Podman and pulls containers from GitHub which are marked as private. To do so, I have a personal access token that I've used to login to ghcr.io. After the CentOS Stream 9 upgrade (which included an upgrade to Podman 5), pulls stopped working with authentication/permission errors. No idea what exactly happened, but a simple podman login fixed this issue quickly.

$ echo ghp_token | podman login ghcr.io -u <user> --password-stdin

shim has an el8 tag

One of the documented post-upgrade tasks is to verify that no EL8 packages are installed, and to remove those if there are any.

However, when you do this, you'll notice that the shim-x64 package has an EL8 version: shim-x64-15-15.el8_2.x86_64.

That's because the same build is used in both CentOS Stream 8 and CentOS Stream 9. Confusing, but it should really not be uninstalled if you want the machine to boot ;-)

Are we done yet?

Yes! That's it. Enjoy your CentOS Stream 9!

