Planet Debian

Planet Debian - https://planet.debian.org/

Dima Kogan: Talking to ROS from outside a LAN

Fri, 2023-10-27 02:25

Alright, so let's say we have some machines in a LAN doing ROS stuff, and we have another machine outside the LAN that wants to listen in (to get a realtime visualization, say). This is an extremely common scenario, but they created enough hoops to make it not work. Let's say we have 3 computers:

  • router: the bridge between the two networks. This has two NICs. The inner IP is 10.0.1.1 and the outer IP is 12.34.56.78
  • inner: a machine in the LAN that's doing ROS stuff. IP 10.0.1.99
  • outer: a machine outside that LAN that wants to listen in. IP 12.34.56.99

Let's say the router is doing ROS stuff. It's running the ROS master and some nodes like this:

ROS_IP=10.0.1.1 roslaunch whatever

If you omit the ROS_IP, it'll pick the hostname (router here), which may or may not work, depending on how the DNS is set up. Here we set it to 10.0.1.1 to make it possible for the inner machine to communicate (we'll see why in a bit). An aside: ROS should use the IP by default instead of the name, because the IP will work even if the DNS isn't set up. If there are multiple extant IPs, it should throw an error. But all that would be way too user-friendly.

OK. So we have a ROS master on 10.0.1.1 on the default port: 11311. The inner machine can rostopic echo and all that. Great.

What if I try to listen in from outer? I say

ROS_MASTER_URI=http://12.34.56.78:11311 rostopic list

This connects to the router on that port, and it works well: I get the list of available topics. Here this works because the router is the router. If inner was running the ROS master then we'd need to do a forward for port 11311. In any case, this works and we understand it.
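Such a forward could be as simple as this (a sketch, using the addresses from the setup above):

ssh -L 11311:10.0.1.99:11311 router
ROS_MASTER_URI=http://localhost:11311 rostopic list    # in another shell on outer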

So clearly we can talk to the ROS master. Right? Wrong! Let's actually listen in on a specific topic on outer:

ROS_MASTER_URI=http://12.34.56.78:11311 rostopic echo /some/topic

This does not work. No errors are reported. It just sits there, which looks like no data is coming in on that topic. But this is a lie: it's actually broken.

The diagnosis

So this is our problem. It's a very common use case, and there are plenty of internet people asking about it, with no specific solutions. I debugged it, and the details follow.

To figure out what's going on, I made a syscall log on a machine inside the LAN, where a simple rostopic echo does work:

sudo sysdig -A -s 2000 proc.name=rostopic and fd.type contains ipv

This shows us all the communication between inner running rostopic and the server. It's really chatty. It's all TCP. There are multiple connections to the router on port 11311. rostopic also starts up multiple TCP servers on the client that listen for connections; these are likely to be broken if the client sits outside the LAN and a machine inside the LAN tries to talk to them, but thankfully in my limited testing nothing actually tried to talk to them. The conversations on port 11311 are really long, but here's the punchline.

The client (inner here) tells the router:

POST /RPC2 HTTP/1.1
Host: 10.0.1.1:11311
Accept-Encoding: gzip
Content-Type: text/xml
User-Agent: Python-xmlrpc/3.11
Content-Length: 390

<?xml version='1.0'?>
<methodCall>
  <methodName>registerSubscriber</methodName>
  <params>
    <param>
      <value><string>/rostopic_2447878_1698362157834</string></value>
    </param>
    <param>
      <value><string>/some/topic</string></value>
    </param>
    <param>
      <value><string>*</string></value>
    </param>
    <param>
      <value><string>http://inner:38229/</string></value>
    </param>
  </params>
</methodCall>

Yes. It's laughably chatty. Then the router replies:

HTTP/1.1 200 OK
Server: BaseHTTP/0.6 Python/3.8.10
Date: Thu, 26 Oct 2023 23:15:28 GMT
Content-type: text/xml
Content-length: 342

<?xml version='1.0'?>
<methodResponse>
  <params>
    <param>
      <value><array><data>
        <value><int>1</int></value>
        <value><string>Subscribed to [/some/topic]</string></value>
        <value><array><data>
          <value><string>http://10.0.1.1:45517/</string></value>
        </data></array></value>
      </data></array></value>
    </param>
  </params>
</methodResponse>

Then this sequence of system calls happens in the rostopic process (an excerpt from the sysdig log):

> connect fd=10(<4>) addr=10.0.1.1:45517
< connect res=-115(EINPROGRESS) tuple=10.0.1.99:47428->10.0.1.1:45517 fd=10(<4t>10.0.1.99:47428->10.0.1.1:45517)
< getsockopt res=0 fd=10(<4t>10.0.1.99:47428->10.0.1.1:45517) level=1(SOL_SOCKET) optname=4(SO_ERROR) val=0 optlen=4

So the client makes an outgoing TCP connection to the address given to it by the ROS master above: 10.0.1.1:45517. This IP is only accessible from within the LAN, so from outer the same connection cannot work. Furthermore, some sort of single-port-forwarding scheme wouldn't work here either, since the port number is dynamic.

To confirm what we think is happening: the sequence of syscalls when trying to rostopic echo from outer does indeed show the failing connect():

connect fd=10(<4>) addr=10.0.1.1:45517
connect res=-115(EINPROGRESS) tuple=10.0.1.1:46204->10.0.1.1:45517 fd=10(<4t>10.0.1.1:46204->10.0.1.1:45517)
getsockopt res=0 fd=10(<4t>10.0.1.1:46204->10.0.1.1:45517) level=1(SOL_SOCKET) optname=4(SO_ERROR) val=-111(ECONNREFUSED) optlen=4

That's the breakage mechanism: the ROS master asks us to communicate on an address we can't talk to.
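The same mechanics can be reproduced without rostopic by speaking XML-RPC directly to the documented ROS master and slave APIs. A minimal Python sketch, assuming the addresses and topic name from the setup above; run from inner it prints the internal addresses, and run from outer the requestTopic() call is exactly where it breaks:

import xmlrpc.client

master = xmlrpc.client.ServerProxy('http://12.34.56.78:11311')
caller = '/debug_probe'   # arbitrary caller id, required by the ROS APIs

# Ask the master who publishes the topic. getSystemState() returns
# (code, message, (publishers, subscribers, services)).
code, msg, (pubs, subs, srvs) = master.getSystemState(caller)
node = next(nodes[0] for topic, nodes in pubs if topic == '/some/topic')

# The node URI the master hands back is already built from ROS_IP ...
code, msg, node_uri = master.lookupNode(caller, node)

# ... and so is the TCPROS endpoint that node then offers us.
slave = xmlrpc.client.ServerProxy(node_uri)
code, msg, proto = slave.requestTopic(caller, '/some/topic', [['TCPROS']])
print(node_uri, proto)    # e.g. http://10.0.1.1:45517/ ['TCPROS', '10.0.1.1', ...]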

Debugging this is easy with sysdig:

sudo sysdig -A -s 400 evt.buffer contains '"Subscribed to"' and proc.name=rostopic

This prints out all syscalls seen by the rostopic command that contain the string Subscribed to, so you can see the different addresses the ROS master hands out in response to different commands.

OK. So can we get the ROS master to give us an address that we can actually talk to? Sorta. Remember that we invoked the master with

ROS_IP=10.0.1.1 roslaunch whatever

The ROS_IP environment variable is exactly the address that the master gives out. So in this case, we can fix it by doing this instead:

ROS_IP=12.34.56.78 roslaunch whatever

Then the outer machine will be asked to talk to 12.34.56.78:45517, which works. Unfortunately, if we do that, then the inner machine won't be able to communicate.

So some sort of ssh port forward cannot fix this: we need a lower-level tunnel, like a VPN or something.

And another rant. Here rostopic tried to connect to an unreachable address, which failed. But rostopic knows the connection failed! It should throw an error message to the user. Something like this would be wonderful:

ERROR! Tried to connect to 10.0.1.1:45517 ($ROS_IP:dynamicport), but connect() returned ECONNREFUSED

That would be immensely helpful. It would tell the user that something went wrong (instead of no data being sent), and it would give a strong indication of the problem and how to fix it. But that would be asking too much.

The solution

So we need a VPN-like thing. I just tried sshuttle, and it just works.

Start the ROS node in the way that makes connections from within the LAN work:

ROS_IP=10.0.1.1 roslaunch whatever

Then on the outer client:

sshuttle -r router 10.0.1.0/24

This connects to the router and does some hackery to make all connections from the client to 10.0.1.x transparently route into the LAN. On all ports. rostopic echo then works. I haven't done any thorough testing, but hopefully it's reliable and has low overhead; I don't know.

I haven't tried it but almost certainly this would work even with the ROS master running on inner:

  1. Tell ssh how to connect to inner. Dropping this into ~/.ssh/config should do it:

     Host inner
         HostName 10.0.1.99
         ProxyJump router

  2. Do the magic thing: sshuttle -r inner 10.0.1.0/24

I'm sure any other VPN-like thing would work also.


Jonathan McDowell: PSA: OpenPGP key updated in Debian keyring

Thu, 2023-10-26 16:53

This is a Public Service Announcement that my new OpenPGP key has now been updated in the active Debian keyring. I believe the only team that needs to be informed about this to manually update their systems is DSA, and I’ve filed an RT ticket to give them a heads up.

Thanks to all the folk who signed my new key, both at the Debian UK BBQ, and DebConf.


Phil Hands: Sleep Apnoea

Wed, 2023-10-25 18:40

I just noticed that I wrote this a decade ago, and then never got round to posting it, so thought I might kick it off now to mark my tentative return to blogging.

At the recent 2015 Cambridge-UK Mini-DebConf (generously hosted by ARM), I gave an impromptu Lightning Talk about Sleep Apnoea (video here).

Obstructive Sleep Apnoea (OSA - the form I'm on about) is a sleep disorder where one repeatedly stops breathing while asleep, normally when snoring, but not necessarily. The consequence of this is that in order to resume breathing one must wake up momentarily. These events are not remembered, but they ruin the quality of your sleep.

If you find that you're often quite tired, you should probably give the Epworth Sleepiness Scale a try -- if it suggests you have a problem: Get thee to a doctor for a check-up!

The good news is that if you do turn out to have OSA it's fairly easy to treat (CPAP or more recently APAP being the favoured treatment), and that when treated you should be able to get good quality sleep that will result in you being much more awake, and much more cheerful.

If you might be an Apnoeac (or a sufferer of some other sleep disorder, for that matter), get yourself treated, and you'll be able to use the extra hours of daily concentration working on Debian, thus making the world a better place.


Sven Hoexter: Curing vpnc-scripts Symptoms

Wed, 2023-10-25 10:05

I stick to some very archaic workflows; e.g. to connect to some corp VPN I just run sudo vpnc-connect and later on sudo vpnc-disconnect. In the past that also restored my resolv.conf; currently it doesn't. According to a colleague that's also the case on Ubuntu.

Taking a step back, the sane way would be to use the NetworkManager vpnc plugin, but that does not work with this specific case because we use uncool VPN tech which requires the Enable weak authentication setting for vpnc. There is a feature request open for that one at https://gitlab.gnome.org/GNOME/NetworkManager-vpnc/-/issues/11

Taking another step back, I thought it shouldn't be that hard to add a checkbox and a boolean, and render out another config flag or line in a config file. But this mix of XML and C turned out to be less intuitive than I thought. So let's quickly look elsewhere.

What happens is that the backup files in /var/run/vpnc/ are created by the vpnc-scripts script called vpnc-script, but not moved back, because it adds some pid as a suffix and the pid is not the final pid of the vpnc process. Basically it cannot find the backup when it tries to restore it. So I decided to replace the pid-guessing code with a suffix made up of the gateway IP and the tun interface name. No idea if that is stable in all circumstances (what if the gateway is given as a DNS name?) or with several connections to different gateways. But good enough for myself, so here is my patch:

vpnc-scripts [master]$ cat debian/patches/replace-pid-detection
Index: vpnc-scripts/vpnc-script
===================================================================
--- vpnc-scripts.orig/vpnc-script
+++ vpnc-scripts/vpnc-script
@@ -91,21 +91,15 @@
 OS="`uname -s`"

 HOOKS_DIR=/etc/vpnc

-# Use the PID of the controlling process (vpnc or OpenConnect) to
-# uniquely identify this VPN connection. Normally, the parent process
-# is a shell, and the grandparent's PID is the relevant one.
-# OpenConnect v9.0+ provides VPNPID, so we don't need to determine it.
-if [ -z "$VPNPID" ]; then
-    VPNPID=$PPID
-    PCMD=`ps -c -o cmd= -p $PPID`
-    case "$PCMD" in
-        *sh) VPNPID=`ps -o ppid= -p $PPID` ;;
-    esac
+# This whole script is called twice via vpnc-connect. On the first run
+# the variables are empty. Catch that and move on when they're there.
+if [ -n "$VPNGATEWAY" ]; then
+    BACKUPID="${VPNGATEWAY}_${TUNDEV}"
+    DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${BACKUPID}
+    DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${BACKUPID}
+    RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${BACKUPID}
 fi
-DEFAULT_ROUTE_FILE=/var/run/vpnc/defaultroute.${VPNPID}
-DEFAULT_ROUTE_FILE_IPV6=/var/run/vpnc/defaultroute_ipv6.${VPNPID}
-RESOLV_CONF_BACKUP=/var/run/vpnc/resolv.conf-backup.${VPNPID}

 SCRIPTNAME=`basename $0`

 # some systems, eg. Darwin & FreeBSD, prune /var/run on boot

Or rolled into a debian package at https://sven.stormbind.net/debian/vpnc-scripts/

The colleague decided to stick to NetworkManager, moved the vpnc binary aside and added a wrapper which invokes vpnc with --enable-weak-authentication. The beauty is, all of this will break on updates, so at some point someone has to understand GTK4 to fix the NetworkManager plugin for good.


Russ Allbery: Review: Going Infinite

Tue, 2023-10-24 23:08

Review: Going Infinite, by Michael Lewis

Publisher: W.W. Norton & Company
Copyright: 2023
ISBN: 1-324-07434-5
Format: Kindle
Pages: 255

My first reaction when I heard that Michael Lewis had been embedded with Sam Bankman-Fried working on a book when Bankman-Fried's cryptocurrency exchange FTX collapsed into bankruptcy after losing billions of dollars of customer deposits was "holy shit, why would you talk to Michael Lewis about your dodgy cryptocurrency company?" Followed immediately by "I have to read this book."

This is that book.

I wasn't sure how Lewis would approach this topic. His normal (although not exclusive) area of interest is financial systems and crises, and there is lots of room for multiple books about cryptocurrency fiascoes using someone like Bankman-Fried as a pivot. But Going Infinite is not like The Big Short or Lewis's other financial industry books. It's a nearly straight biography of Sam Bankman-Fried, with just enough context for the reader to follow his life.

To understand what you're getting in Going Infinite, I think it's important to understand what sort of book Lewis likes to write. Lewis is not exactly a reporter, although he does explain complicated things for a mass audience. He's primarily a storyteller who collects people he finds fascinating. This book was therefore never going to be like, say, Carreyrou's Bad Blood or Isaac's Super Pumped. Lewis's interest is not in a forensic account of how FTX or Alameda Research were structured. His interest is in what makes Sam Bankman-Fried tick, what's going on inside his head.

That's not a question Lewis directly answers, though. Instead, he shows you Bankman-Fried as Lewis saw him and was able to reconstruct from interviews and sources and lets you draw your own conclusions. Boy did I ever draw a lot of conclusions, most of which were highly unflattering. However, one conclusion I didn't draw, and had been dubious about even before reading this book, was that Sam Bankman-Fried was some sort of criminal mastermind who intentionally plotted to steal customer money. Lewis clearly doesn't believe this is the case, and with the caveat that my study of the evidence outside of this book has been spotty and intermittent, I think Lewis has the better of the argument.

I am utterly fascinated by this, and I'm afraid this review is going to turn into a long summary of my take on the argument, so here's the capsule review before you get bored and wander off: This is a highly entertaining book written by an excellent storyteller. I am also inclined to believe most of it is true, but given that I'm not on the jury, I'm not that invested in whether Lewis is too credulous towards the explanations of the people involved. What I do know is that it's a fantastic yarn with characters who are too wild to put in fiction, and I thoroughly enjoyed it.

There are a few things that everyone involved appears to agree on, and therefore I think we can take as settled. One is that Bankman-Fried, and most of the rest of FTX and Alameda Research, never clearly distinguished between customer money and all of the other money. It's not obvious that their home-grown accounting software (written entirely by one person! who never spoke to other people! in code that no one else could understand!) was even capable of clearly delineating between their piles of money. Another is that FTX and Alameda Research were thoroughly intermingled. There was no official reporting structure and possibly not even a coherent list of employees. The environment was so chaotic that lots of people, including Bankman-Fried, could have stolen millions of dollars without anyone noticing. But it was also so chaotic that they could, and did, literally misplace millions of dollars by accident, or because Bankman-Fried had problems with object permanence.

Something that was previously less obvious from news coverage but that comes through very clearly in this book is that Bankman-Fried seriously struggled with normal interpersonal and societal interactions. We know from multiple sources that he was diagnosed with ADHD and depression (Lewis describes it specifically as anhedonia, the inability to feel pleasure). The ADHD in Lewis's account is quite severe and does not sound controlled, despite medication; for example, Bankman-Fried routinely played timed video games while he was having important meetings, forgot things the moment he stopped dealing with them, was constantly on his phone or seeking out some other distraction, and often stimmed (by bouncing his leg) to a degree that other people found it distracting.

Perhaps more tellingly, Bankman-Fried repeatedly describes himself in diary entries and correspondence to other people (particularly Caroline Ellison, his employee and on-and-off secret girlfriend) as being devoid of empathy and unable to access his own emotions, which Lewis supports with stories from former co-workers. I'm very hesitant to diagnose someone via a book, but, at least in Lewis's account, Bankman-Fried nearly walks down the symptom list of antisocial personality disorder in his own description of himself to other people. (The one exception is around physical violence; there is nothing in this book or in any of the other reporting that I've seen to indicate that Bankman-Fried was violent or physically abusive.) One of the recurrent themes of this book is that Bankman-Fried never saw the point in following rules that didn't make sense to him or worrying about things he thought weren't important, and therefore simply didn't.

By about a third of the way into this book, before FTX is even properly started, very little about its eventual downfall will seem that surprising. There was no way that Sam Bankman-Fried was going to be able to run a successful business over time. He was extremely good at probabilistic trading and spotting exploitable market inefficiencies, and extremely bad at essentially every other aspect of living in a society with other people, other than a hit-or-miss ability to charm that worked much better with large audiences than one-on-one. The real question was why anyone would ever entrust this man with millions of dollars or decide to work for him for longer than two weeks.

The answer to those questions changes over the course of this story. Later on, it was timing. Sam Bankman-Fried took the techniques of high frequency trading he learned at Jane Street Capital and applied them to exploiting cryptocurrency markets at precisely the right time in the cryptocurrency bubble. There was far more money than sense, the most ruthless financial players were still too leery to get involved, and a rising tide was lifting all boats, even the ones that were piles of driftwood. When cryptocurrency inevitably collapsed, so did his businesses. In retrospect, that seems inevitable.

The early answer, though, was effective altruism.

A full discussion of effective altruism is beyond the scope of this review, although Lewis offers a decent introduction in the book. The short version is that a sensible and defensible desire to use stronger standards of evidence in evaluating charitable giving turned into a bizarre navel-gazing exercise in making up statistical risks to hypothetical future people and treating those made-up numbers as if they should be the bedrock of one's personal ethics. One of the people most responsible for this turn is an Oxford philosopher named Will MacAskill. Sam Bankman-Fried was already obsessed with utilitarianism, in part due to his parents' philosophical beliefs, and it was a presentation by Will MacAskill that converted him to the effective altruism variant of extreme utilitarianism.

In Lewis's presentation, this was like joining a cult. The impression I came away with feels like something out of a science fiction novel: Bankman-Fried knew there was some serious gap in his thought processes where most people had empathy, was deeply troubled by this, and latched on to effective altruism as the ethical framework to plug into that hole. So much of effective altruism sounds like a con game that it's easy to think the participants are lying, but Lewis clearly believes Bankman-Fried is a true believer. He appeared to be sincerely trying to make money in order to use it to solve existential threats to society, he does not appear to be motivated by money apart from that goal, and he was following through (in bizarre and mostly ineffective ways).

I find this particularly believable because effective altruism as a belief system seems designed to fit Bankman-Fried's personality and justify the things he wanted to do anyway. Effective altruism says that empathy is meaningless, emotion is meaningless, and ethical decisions should be made solely on the basis of expected value: how much return (usually in safety) does society get for your investment. Effective altruism says that all the things that Sam Bankman-Fried was bad at were useless and unimportant, so he could stop feeling bad about his apparent lack of normal human morality. The only thing that mattered was the thing that he was exceptionally good at: probabilistic reasoning under uncertainty. And, critically to the foundation of his business career, effective altruism gave him access to investors and a recruiting pool of employees, things he was entirely unsuited to acquiring the normal way.

There's a ton more of this book that I haven't touched on, but this review is already quite long, so I'll leave you with one more point.

I don't know how true Lewis's portrayal is in all the details. He took the approach of getting very close to most of the major players in this drama and largely believing what they said happened, supplemented by startling access to sources like Bankman-Fried's personal diary and Caroline Ellison's personal diary. (He also seems to have gotten extensive information from the personal psychiatrist of most of the people involved; I'm not sure if there's some reasonable explanation for this, but based solely on the material in this book, it seems to be a shocking breach of medical ethics.) But Lewis is a storyteller more than he's a reporter, and his bias is for telling a great story. It's entirely possible that the events related here are not entirely true, or are skewed in favor of making a better story. It's certainly true that they're not the complete story.

But, that said, I think a book like this is a useful counterweight to the human tendency to believe in moral villains. This is, frustratingly, a counterweight extended almost exclusively to higher-class white people like Bankman-Fried. This is infuriating, but that doesn't make it wrong. It means we should extend that analysis to more people.

Once FTX collapsed, a lot of people became very invested in the idea that Bankman-Fried was a straightforward embezzler. Either he intended from the start to steal everyone's money or, more likely, he started losing money, panicked, and stole customer money to cover the hole. Lots of people in history have done exactly that, and lots of people involved in cryptocurrency have tenuous attachments to ethics, so this is a believable story. But people are complicated, and there's also truth in the maxim that every villain is the hero of their own story. Lewis is after a less boring story than "the crook stole everyone's money," and that leads to some bias. But sometimes the less boring story is also true.

Here's the thing: even if Sam Bankman-Fried never intended to take any money, he clearly did intend to mix customer money with Alameda Research funds. In Lewis's account, he never truly believed in them as separate things. He didn't care about following accounting or reporting rules; he thought they were boring nonsense that got in his way. There is obvious criminal intent here in any reading of the story, so I don't think Lewis's more complex story would let him escape prosecution. He refused to follow the rules, and as a result a lot of people lost a lot of money. I think it's a useful exercise to leave mental space for the possibility that he had far less obvious reasons for those actions than that he was a simple thief, while still enforcing the laws that he quite obviously violated.

This book was great. If you like Lewis's style, this was some of the best entertainment I've read in a while. Highly recommended; if you are at all interested in this saga, I think this is a must-read.

Rating: 9 out of 10


Iustin Pop: OS updates are damn easy nowadays!

Tue, 2023-10-24 16:20

I’m baffled at how simple and reliable operating system updates have become.

Upgraded Debian bullseye to bookworm, across a few systems, easy. On VMs it's so fast that installing the base system from scratch would probably take about the same time.

But Linux/Debian OFC works well. Shall we look at MacOS? Takes longer, but just runs and reboots a couple of times and then, bam, it’s up and with windows restored.

Surely Windows is the outlier? Nah, I finally said yes to the “Upgrade to Win 11?” prompt, and it took a while to download (why are Win/Mac updates so heavy and slow to download? Debian just flies!), then it rebooted a few times, and again, bam, it's up and GoG and Steam still work.

I swear, there was a time when updating the OS felt like an accomplishment. Now, except for Raspberry Pi OS (“upgrades not supported, reinstall!” — though I bet they also work), upgrading an actual OS is just like a new Android/iOS version.

And yes, get off my lawn! I still have a lower digit count Slashdot ID 😅


Russell Coker: Bluetooth Versions and PineTime

Mon, 2023-10-23 07:40

I’ve done some tests with the PineTime [1] on different Android phones. On a Huawei Mate 10 Pro (from 2017 with Bluetooth 4.2) it has very slow transfer speeds for updating the firmware (less than 1KB/s) and unreliable connection to the phone. On a Huawei Nova 7i (from 2020 with Bluetooth 4.2) it has slow transfer speeds (about 2KB/s) and a more reliable connection to the phone. On a Pixel 4 XL (from 2019 with Bluetooth 5.0) it has very fast speeds for updating the firmware and also a reliable link.

Version 5 of the Bluetooth standard [2] was released in 2016 so it’s a little disappointing that the Mate 10 Pro doesn’t support it and very disappointing that the Nova 7i doesn’t support it either. Bluetooth 5 adds higher speeds and longer range for LE (Low Energy) modes which are used for things like smart watches.

It’s extremely disappointing that the PinePhonePro [3] only supports Bluetooth 4.1. It’s a phone released in 2021 that doesn’t even have Bluetooth 4.2 which was released in 2014.

For laptops, the Thinkpad X1 Carbon 7th Gen released in 2019 [4] was the first in the X1 Carbon series to have Bluetooth 5. So I will probably be limited in my ability to use my personal laptop or PinePhone for testing Linux software that talks to the PineTime, and I'll have to use a laptop borrowed from work.


Jonathan Dowland: cherished

Mon, 2023-10-23 06:47

If I think back to technology I've used and really cherished, quite often it's audio-related: Minidisc players, Walkmans, MP3 players, headphones. These pieces of technology served as vessels to access music, which of course I often have a fond emotional connection to. And so I think the tech has benefited from that, and in some way the fondness or emotional connection to the music has rubbed off on the technology used to access it.

Put another way, no matter how well engineered it was, how easy it was to use or how well it did the job, I doubt I'd have fond memories, years later, of a toilet brush.

I wonder if the same "bleeding" of fondness applies to brands, too. If so, and if you were a large tech company, it would be worth having some audio gear in your portfolio. I think Sony must have benefited from this. Apple too.

on-ear phones

For listening on-the-go, I really like on-ear headphones, as opposed to over-ear. I have some lovely over-ear phones for listening-at-rest, but they get my head too hot when I'm active. The on-ears are a nice compromise between comfort and quality of over-ear, and portability of in-ear. Most of the ones I've owned have folded up nicely into a coat pocket too.

My current Bose pair are from 2019 and might be towards the end of their life. They replaced some AKG K451s, which were also discontinued. Last time I looked (2019) the Sony offerings in this product category were not great. That might have changed. But I fear that the manufacturers have collectively decided this product category isn't worth investing in.


Russ Allbery: Review: Going Postal

Sun, 2023-10-22 23:54

Review: Going Postal, by Terry Pratchett

Series: Discworld #33
Publisher: Harper
Copyright: October 2004
Printing: November 2014
ISBN: 0-06-233497-2
Format: Mass market
Pages: 471

Going Postal is the 33rd Discworld novel. You could probably start here if you wanted to; there are relatively few references to previous books, and the primary connection (to Feet of Clay) is fully re-explained. I suspect that's why Going Postal garnered another round of award nominations. There are arguable spoilers for Feet of Clay, however.

Moist von Lipwig is a con artist. Under a wide variety of names, he's swindled and forged his way around the Disc, always confident that he can run away from or talk his way out of any trouble. As Going Postal begins, however, it appears his luck has run out. He's about to be hanged.

Much to his surprise, he wakes up after his carefully performed hanging in Lord Vetinari's office, where he's offered a choice. He can either take over the Ankh-Morpork post office, or he can die. Moist, of course, immediately agrees to run the post office, and then leaves town at the earliest opportunity, only to be carried back into Vetinari's office by a relentlessly persistent golem named Mr. Pump. He apparently has a parole officer.

The clacks, Discworld's telegraph system first seen in The Fifth Elephant, has taken over most communications. The city is now dotted with towers, and the Grand Trunk can take them at unprecedented speed to even far-distant cities like Genua. The post office, meanwhile, is essentially defunct, as Moist quickly discovers. There are two remaining employees, the highly eccentric Junior Postman Groat who is still Junior because no postmaster has lasted long enough to promote him, and the disturbingly intense Apprentice Postman Stanley, who collects pins.

Other than them, the contents of the massive post office headquarters are a disturbing mail sorting machine designed by Bloody Stupid Johnson that is not picky about which dimension or timeline the sorted mail comes from, and undelivered mail. A lot of undelivered mail. Enough undelivered mail that there may be magical consequences.

All Moist has to do is get the postal system running again. Somehow. And not die in mysterious accidents like the previous five postmasters.

Going Postal is a con artist story, but it's also a startup and capitalism story. Vetinari is, as always, solving a specific problem in his inimitable indirect way. The clacks were created by engineers obsessed with machinery and encodings and maintenance, but it's been acquired by... well, let's say private equity, because that's who they are, although Discworld doesn't have that term. They immediately did what private equity always did: cut out everything that didn't extract profit, without regard for either the service or the employees. Since the clacks are an effective monopoly and the new owners are ruthless about eliminating any possible competition, there isn't much to stop them. Vetinari's chosen tool is Moist.

There are some parts of this setup that I love and one part that I'm grumbly about. A lot of the fun of this book is seeing Moist pulled into the mission of resurrecting the post office despite himself. He starts out trying to wriggle out of his assigned task, but, after a few early successes and a supernatural encounter with the mail, he can't help but start to care. Reformed con men often make good protagonists because one can enjoy the charisma without disliking the ethics. Pratchett adds the delightfully sharp-witted and cynical Adora Belle Dearheart as a partial reader stand-in, which makes the process of Moist becoming worthy of his protagonist role even more fun.

I think that a properly functioning postal service is one of the truly monumental achievements of human society and doesn't get nearly enough celebration (or support, or pay, or good working conditions). Give me a story about reviving a postal service by someone who appreciates the tradition and social role as much as Pratchett clearly does and I'm there. The only frustration is that Going Postal is focused more on an immediate plot, so we don't get to see the larger infrastructure recovery that is clearly needed. (Maybe in later books?)

That leads to my grumble, though. Going Postal and specifically the takeover of the clacks is obviously inspired by corporate structures in the later Industrial Revolution, but this book was written in 2004, so it's also a book about private equity and startups. When Vetinari puts a con man in charge of the post office, he runs it like a startup: do lots of splashy things to draw attention, promise big and then promise even bigger, stumble across a revenue source that may or may not be sustainable, hire like mad, and hope it all works out.

This makes for a great story in the same way that watching trapeze artists or tightrope walkers is entertaining. You know it's going to work because that's the sort of book you're reading, so you can enjoy the audacity and wonder how Moist will manage to stay ahead of his promises. But it is still a con game applied to a public service, and the part of me that loves the concept of the postal service couldn't stop feeling like this is part of the problem.

The dilemma that Vetinari is solving is a bit too realistic, down to the requirement that the post office be self-funding and not depend on city funds and, well, this is repugnant to me. Public services aren't businesses. Societies spend money to build things that they need to maintain society, and postal service is just as much one of those things as roads are. The ability of anyone to send a letter to anyone else, no matter how rural the address is, provides infrastructure on which a lot of important societal structure is built. Pratchett made me care a great deal about Ankh-Morpork's post office (not hard to do), and now I want to see it rebuilt properly, on firm foundations, without splashy promises and without a requirement that it pay for itself. Which I realize is not the point of Discworld at all, but the concept of running a postal service like a startup hits maybe a bit too close to home.

Apart from that grumble, this is a great book if you're in the mood for a reformed con man story. I thought the gold suit was a bit over the top, but I otherwise thought Moist's slow conversion to truly caring about his job was deeply satisfying. The descriptions of the clacks are full of askew Discworld parodies of computer networking and encoding that I enjoyed more than I thought I would. This is also the book that introduced the now-famous (among Pratchett fans at least) GNU instruction for the clacks, and I think that scene is the most emotionally moving bit of Pratchett outside of Night Watch.

Going Postal is one of the better books in the Discworld series to this point (and I'm sadly getting near the end). If you have less strongly held opinions about management and funding models for public services, or at least are better at putting them aside when reading fantasy novels, you're likely to like it even more than I did. Recommended.

Followed by Thud!. The thematic sequel is Making Money.

Rating: 8 out of 10


Ravi Dwivedi: Software Freedom Day at sflc.in

Sun, 2023-10-22 17:55

Software Freedom Law Center, India, also known as sflc.in, organized an event to celebrate Software Freedom Day on 30th September 2023. Sahil, Contrapunctus, Suresh and I joined. The venue was the SFLC India office in Delhi, on the second floor of what looked like someone's apartment :). I also met Chirag, Orendra, Surbhi and others.

My plan was to have a stall on LibreOffice and the prav app to raise awareness about these projects. I didn't already have a printed QR code for downloading the prav app, so I asked the people at sflc.in if they could get one printed, and they very kindly made a color printout for me. So, I got a stall in their main room. Surbhi had an Inkscape stall next to mine and gave me company. People came and asked about the prav project, and I realized I was still too tired (after a long Kerala trip) to explain the idea behind the prav project and LibreOffice. We got a few prav app installs during the event, which is cool.

My stall. Photo credits: Tejaswini.

Sahil had a Debian stall and Contrapunctus had an OpenStreetMap stall. After about an hour, Revolution OS was screened for all of us to watch, along with popcorn. The documentary gave an overview of the history of the Free Software Movement. The office had a kitchen where fresh chai was being made and served to us, and the organizers ordered a lot of good snacks for us.

Snacks and tea at the front desk. CC-BY-SA 4.0 by Ravi Dwivedi.

I came out of the movie hall to get more tea and snacks from the front desk. A beautiful painting was hanging on the wall opposite the front desk, and Tejaswini (from sflc.in) revealed that she had made it. The tea was really good, as it was freshly made in the kitchen.

After the movie, we played a game of pictionary. We were divided into two teams. The game went as follows: a person from a team is selected and given a term related to freedom-respecting software, written on a piece of paper concealed from the other participants. That person then draws something on the board (no logos, no letters) without speaking. If their team correctly guesses the term, the team moves one step ahead on the leader board. The team that reaches the finish line first wins.

I recall some fun pictionary rounds. For instance, the drawing in the picture below seems far from the word “Wireguard”, and yet someone from the team guessed it. Our team won in the end \o/.

Pictionary drawing nowhere close to the intended word Wireguard :), which was guessed. Photo by Ravi Dwivedi, CC-BY-SA 4.0.

Then we posed for a group picture. At the end, SFLC.in had a delicious cake in store for us. They also had some merchandise — handbags, T-shirts, etc. — which we could take in exchange for a donation to SFLC.in. I “bought” a handbag with “Ban Plastic, not Internet” written on it. I hope it gives people around me a powerful message :).

Group photo. Photo credits: Tejaswini. Tasty cake. CC-BY-SA 4.0 by Ravi Dwivedi. Merchandise by sflc.in. CC-BY-SA 4.0 by Ravi Dwivedi.

All in all, a nice event by sflc.in :)


Steinar H. Gunderson: Some defaults I don't understand

Sun, 2023-10-22 13:00

I'll rant about just one in mpv today: why would you not enable hardware decoding by default, if it's available? Is that really the best choice for a user who hasn't given a preference?

You should never just copy someone's defaults uncritically, but here is my HTPC's ~/.config/mpv/mpv.conf for reference:

sub-filter-sdh=yes
sub-scale=0.75
audio-spdif=ac3,dts,dts-hd,eac3,truehd
audio-device=alsa/hdmi:CARD=HDMI,DEV=0
fullscreen
audio-delay=0.2
hwdec=vaapi

Aigars Mahinovs: Figuring out finances part 3

Sun, 2023-10-22 13:00

So now that I have something that looks very much like a budgeting setup going, I am going to .. delete it! Why? Well, at the end of the last part of this, the Firefly III instance was running on a tiny Debian server in a Docker container right next to another Docker container that is running the main user of this server - a Home Assistant instance that has been managing my home for several years already. So why change that?

See, there is one bit of knowledge that is very crucial to your Home Assistant experience, and it is not really emphasised enough in the Home Assistant documentation. Back when I was getting into Home Assistant, both the main documentation and basically all the guides around were just coming off the hype of Docker disrupting everything, and that is a big reason why everyone suggested installing and running Home Assistant as a Docker container on top of any kind of stable OS. In fact I ran it for years on my TerraMaster NAS, just so that I wouldn't have a separate home server running 24/7 at home and could keep everything inside the very compact NAS case.

So here is the thing you NEED to know — Home Assistant Container is the DEMO version of Home Assistant! If you want the full Home Assistant experience and want to use the knowledge of the huge community around the HA space, you have to use Home Assistant OS. Ideally on dedicated hardware. Ideally on the HA Green box, but any tiny PC would also work great. A Raspberry Pi 4+ is common, but quite weak as the network size grows, and the SD card used for storage in particular ages very fast. Get a real small x86 PC with at least 4GB RAM and an NVMe SSD (eMMC is fine too). You want an Ethernet port and a few free USB ports. I would also suggest immediately getting the HA SkyConnect adapter that can do Zigbee networking and will do Matter soon (tm). I am making do with a SonOff Zigbee gateway, but it is quite hacky to get working, and your whole Zigbee communication breaks down if the WiFi goes down — suboptimal.

So I took a backup of the Home Assistant instance using its built-in tools, took an export of my fully configured Firefly III instance, and proceeded to wipe the drive of the NUC. That was not a smart idea. :D

On the Home Assistant side I was really frustrated by the documentation, which is focused on users that are (likely) using Windows and writing to an SD card for something like a Raspberry Pi to get Home Assistant OS running. It recommends downloading Etcher to write the image to the boot medium. That is a really weird piece of software that managed to crash consistently when I tried to run it from Debian Live or Ubuntu Live on my NUC. It took me way too long to give up and try something much simpler — dd.

xzcat haos_generic-x86-64-11.0.img.xz | dd of=/dev/mmcblk0 bs=1M

That just worked, perfectly and really fast. If you want to use a GUI in a live environment, just using gnome-disk-utility ("Disks" in the Gnome menu) and its "Restore Disk Image ..." action on a partition works just as well. It even supports decompressing the XZ images directly while writing.
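Whichever tool you use, it pays to double-check the target device name before writing anything; a quick sanity check of the kind I mean (device name as in the dd example above):

lsblk -o NAME,SIZE,MODEL    # confirm /dev/mmcblk0 really is the NUC's eMMC
sync                        # after writing, flush caches before rebooting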

But that image is small; will it not leave a ton of unused disk space behind the fixed install partition? Yes, it will ... until first boot. HA OS takes over the empty space after its install partition on the first boot-up and just grows its main partition to take up all the remaining space. Smart.

After the first boot is completed, the first-boot wizard can be accessed via your web browser, and one of the prominent buttons there is restoring from backup. So you just give it the backup file and wait. Sadly the restore does not give any kind of progress indication, so your only way to figure out when it is done is opening the same web address in another browser tab and refreshing periodically — after restoring from backup it just boots into the same config it had before: all the settings, all the devices, all the history is preserved. Even authentication tokens are preserved, so if you had Home Assistant Mobile installed on your phone (both for remote access and to send location info and phone state, like charging, to HA to trigger automations) then it will just suddenly start working again without further action needed from your side. That is an almost perfect backup/restore experience.

The first thing you get for using the OS version of HA is easy automatic updates that also automatically take a backup before upgrading, so if anything breaks you can roll back with one click. There is also a command-line tool that allows you to upgrade, but also to downgrade, ha-core and other modules. I had to use it today as HA version 23.10.4 actually broke support for the Sonoff bridge that I am using to control Zigbee devices, which are like 90% of all smart devices in my home. Really helpful stuff, but not a must-have.

What is a must-have, and what you can (really) only get with Home Assistant Operating System, are Addons. Some addons are just normal servers you can run alongside HA on the same HA OS server, like MariaDB or Plex or a file server. That is not the most important bit, but even there the software comes pre-configured for a home server setup and has a very simple config UI to pre-configure key settings, like users, passwords and database accesses for MariaDB — you can literally, in a few clicks and a few strings, make several users, each with its own access to its own database. A couple more clicks and the DB is running and will be kept restarted in case of failures.

But the real gems in the Home Assistant Addon Store are modules that extend Home Assistant core functionality in ways that would be really hard or near impossible to configure in Home Assistant Container manually, especially because no documentation has ever existed for such manual config — everyone just tells you to install the addon from the HA Addon store or from HACS. Or you can read the addon metadata in various repos and figure out what containers it actually runs, with what settings and configs, and what hooks it puts into the HA Core to make them cooperate. And then do it all over again when a new version breaks everything 6 months later, when you have already forgotten everything. Among the addons that show up immediately after installation are the new Matter server, a MariaDB and MQTT server (that other addons can use for data storage and message exchange), Z-Wave support, ESPHome integration, a very handy File manager that includes editors to edit Home Assistant configs directly in the browser, and an SSH/Terminal addon that provides both SSH connections and a web-based terminal giving access to the OS itself and to a command-line interface — for example, to do package downgrades if needed or see detailed logs. And this is also where you can get the features that are the focus this year for HA developers — voice enablers.

However, that is only the beginning. Like in Debian, you can add additional repositories to expand your list of available addons. Unlike Debian, most of the amazing software that is available for Home Assistant is outside the main, official addon store. For now I have added the most popular addon repository — HACS (Home Assistant Community Store) — and the repository maintained by Alexbelgium. The first includes things like NodeRED (a workflow-based automation programming UI), Tailscale/Wirescale for VPN servers, motionEye for CCTV control, and Plex for home streaming. HACS also includes a lot of HA UI enhancement modules, like themes, custom UI control panels like Mushroom or mini-graph-card, and integrations that provide more advanced functions but also require more knowledge to use, like Local Tuya — harder to set up, but allowing fully local control of (normally) cloud-based devices. And it has AppDaemon — basically a Python-based automation framework where you put in Python scripts that get run in a special environment where they get fed events from Home Assistant and can trigger back events that can control everything HA can, and also do anything Python can do. This I will need to explore later.

And the repository by Alex includes the thing that is actually the focus of this blog post (I know :D) — a Firefly III addon and a Firefly Importer addon that you can add to your Home Assistant OS with a few clicks. It also has all kinds of addons for NAS management, photo/video servers, book servers, and Portainer, which lets us set up and run any Docker container inside the HA OS structure. HA OS will detect this and warn you about unsupported processes running on your HA OS instance (a nice security feature!), but you can just dismiss that. This will be very helpful very soon.

This whole environment of OS and containers and apps really made me think — what was missing in Debian that made the talented developers behind all of that spend the immense time and effort to set up a completely new OS and app infrastructure and develop a completely parallel developer community for Home Assistant apps, interfaces and configurations? Is there anything that can still be done to bring the HA community and the general open source and Debian communities closer together? HA devs are not doing anything wrong: they are using the best open source can provide, they bring it to people who could not install and use it otherwise, and they are contributing fixes and improvements as well. But there must be some way to do this better, together.

So I installed MariaDB, created a user and database for Firefly, installed Firefly III, and configured it to use the MariaDB via the web config UI. When I went into the Firefly III web UI I was confronted with the normal wizard to set up a new instance. And no reference to any backup restore. Hmm, ok. Maybe that goes via the Importer? So I made an access token again, configured the Importer to use it, and configured the Nordlinger bank connection settings. Then I tried to import the export that I had downloaded from Firefly III before. The importer did not auto-recognize the format. Turns out it is just a list of transactions ... It can only be barely useful if you first manually create all the asset accounts with the same names as before, and even then you'll again have to deal with the problem of transfers showing up twice. And all of your categories (that have not been used yet) are gone, your automation rules and bills are gone, your budgets and piggy banks are gone. Boooo. It will be easier for me to recreate my account data from bank exports again than to salvage the data in that transaction export.

Turns out the Firefly III documentation explicitly recommends making a mysqldump of your own and not relying on anything in the app itself for backup purposes. Kind of sad this was not mentioned on the export page, which sure looked a lot like a backup :D
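For reference, that recommended backup is at least a one-liner. A sketch, assuming the database and user created above are both named firefly:

mysqldump -u firefly -p firefly > firefly-backup-$(date +%F).sql

Restoring is the mirror image: mysql -u firefly -p firefly < firefly-backup-....sql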

After doing all that work all over again I needed to make something new, so as not to feel like I had wasted days of work for no real gain. So I started solving a problem I have had for a while already — how do I add cash transactions to the system when I am out of the house with just my phone in hand? So far my workaround has been just sending myself messages in WhatsApp with the amount and description of any cash expenses. Two solutions are possible: an app and a bot.

There are actually multiple Android-based phone apps that work with the Firefly III API to do full financial management from the phone. However, after trying one out, that is not what I will be using most of the time. First of all, this requires your Firefly III instance to be accessible from the Internet — either via direct API access using some port forwarding, secured with HTTPS and good access tokens, or via a VPN server redirect that is installed on both HA and your phone. Tailscale was really easy to get working. But the power has its drawbacks — adding a new cash transaction requires opening the app, choosing the new transaction view, entering the description and amount, choosing "Cash" as the source account and optionally choosing a destination expense account, choosing a category and budget, and then submitting the form to the server. Sadly none of that really works if you have no Internet, or bad Internet, at the place where you are using cash. And it's just too many steps. Annoying.

An easier alternative is setting up a Telegram bot — it runs in a custom Docker container right next to your Firefly (via Portainer), and you talk to it via a custom Telegram chat channel that you create very easily and quickly. Then you can just tell it "Coffee 5" and it will create a transaction from the (default) cash account of 5€ with the description "Coffee". This also works if you are offline at the moment — the bot will receive the message once you get back online. You can use the Telegram bot menu system to edit the transaction, to add categories or expense accounts, but that part only works if you are online. And the Firefly instance does not have to be exposed to the Internet at all. Really nifty.

So next week I will need to write up all the regular payments as bills in Firefly (again) and then I can start writing a Python script to predict my (financial) future!


Ian Jackson: DigiSpark (ATTiny85) - Arduino, C, Rust, build systems

Sun, 2023-10-22 12:04

Recently I completed a small project, including an embedded microcontroller. For me, using the popular Arduino IDE, and C, was a mistake. The experience with Rust was better, but still very exciting, and not in a good way.

Here follows the rant.

Introduction

In a recent project (I’ll write about the purpose, and the hardware, in another post) I chose to use a DigiSpark board. This is a small board with a USB-A tongue (but not a proper plug) and an ATTiny85 microcontroller. This chip has 8 pins and is quite small really, but it was plenty for my application. By choosing something popular, I hoped for convenient hardware and an uncomplicated experience.

Convenient hardware, I got.

Arduino IDE

The usual way to program these boards is via an IDE. I thought I’d go with the flow and try that. I knew these were closely related to actual Arduinos and saw that the IDE package arduino was in Debian.

But it turns out that the Debian package’s version doesn’t support the DigiSpark. (AFAICT from the list it offered me, I’m not sure it supports any ATTiny85 board.) Also, disturbingly, its “board manager” seemed to be offering to “install” board support, suggesting it would download “stuff” from the internet and run it. That wouldn’t be acceptable for my main laptop.

I didn’t expect to be doing much programming or debugging, and the project didn’t have significant security requirements: the chip, in my circuit, has only a very narrow ability do anything to the real world, and no network connection of any kind. So I thought it would be tolerable to do the project on my low-security “video laptop”. That’s the machine where I’m prepared to say “yes” to installing random software off the internet.

So I went to the upstream Arduino site and downloaded a tarball containing the Arduino IDE. After unpacking that in /opt it ran and produced a pointy-clicky IDE, as expected. I had already found a 3rd-party tutorial saying I needed to add a magic URL (from the DigiSpark’s vendor) in the preferences. That indeed allowed it to download a whole pile of stuff. Compilers, bootloader clients, god knows what.

However, my tiny test program didn’t make it to the board. Half-buried in a too-small window was an error message about the board’s bootloader (“Micronucleus”) being too new.

The boards I had came pre-flashed with micronucleus 2.2, which is hardly new. But even so, the official Arduino IDE (or maybe the DigiSpark’s board package?) still contains an older version. So now we have all the downsides of curl|bash-ware, but we’re lacking the “it’s up to date” and “it just works” upsides.

Further digging found some random forum posts which suggested simply downloading a newer micronucleus and manually stuffing it into the right place: one overwrites a specific file, in the middle of the heaps of stuff that the Arduino IDE’s board support downloader squirrels away in your home directory. (In my case, the home directory of the untrusted shared user on the video laptop.)

So, “whatever”. I did that. And it worked!

Having demo’d my ability to run code on the board, I set about writing my program.

Writing C again

The programming language offered via the Arduino IDE is C.

It’s been a little while since I started a new thing in C, after having spent so much of the last several years writing Rust. C’s primitiveness quickly started to grate, and the program couldn’t easily be as DRY as I wanted (Don’t Repeat Yourself, see Wilson et al, 2012, §4, p.6). But, I carried on; after all, this was going to be quite a small job.

Soon enough I had a program that looked right and compiled.

Before testing it in circuit, I wanted to do some QA. So I wrote a simulator harness that #included my Arduino source file, and provided imitations of the few Arduino library calls my program used. As a side advantage, I could build and run the simulation on my main machine, in my normal development environment (Emacs, make, etc.). The simulator runs confirmed the correct behaviour. (Perhaps there would have been some more faithful simulation tool, but the Arduino IDE didn’t seem to offer it, and I wasn’t inclined to go further down that kind of path.)
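
A minimal sketch of what such a harness can look like (hypothetical, not my actual code; the stubbed calls and filenames are assumptions):

/* sim.c - hypothetical simulator harness for an Arduino sketch. It stubs
 * the few Arduino calls the sketch uses, then #includes the sketch source
 * so it compiles as ordinary host-side C. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

enum { LOW = 0, HIGH = 1, INPUT = 0, OUTPUT = 1 };

static uint32_t sim_ms;                        /* simulated clock */
static void delay(uint32_t ms) { sim_ms += ms; }
static void pinMode(int pin, int mode) { (void)pin; (void)mode; }
static void digitalWrite(int pin, int value) {
    printf("%8" PRIu32 " ms: pin %d <- %d\n", sim_ms, pin, value);
}

#include "program.ino"   /* the real Arduino source; filename illustrative */

int main(void) {
    setup();                                   /* the Arduino entry points */
    for (int i = 0; i < 1000; i++) loop();
    return 0;
}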

So I got the video laptop out, and used the Arduino IDE to flash the program. It didn’t run properly. It hung almost immediately. Some very ad-hoc debugging via led-blinking (like printf debugging, only much worse) convinced me that my problem was as follows:

Arduino C has 16-bit ints. My test harness was on my 64-bit Linux machine. C was autoconverting things (when building for the microcontroller). The way the Arduino IDE ran the compiler didn’t pass the warning options necessary to spot narrowing implicit conversions. Those warnings aren’t the default in C in general because C compilers hate us all for compatibility reasons.
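
A hedged illustration of the failure mode (illustrative code, not from my program): the same expression is correct on a 64-bit host but broken on the AVR, where int is 16 bits.

#include <stdint.h>

uint32_t ms_per_hour(void)
{
    int minutes = 60;
    /* The multiplication is performed in int. On the ATTiny85 that is
     * 16 bits, so 3600000 overflows (signed overflow: undefined
     * behaviour) before the widening assignment. On 64-bit Linux the
     * same expression is fine - which is why a host-side simulator
     * won't catch it. */
    uint32_t ms = minutes * 60 * 1000;
    return ms;
}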

I don’t know why those warnings are not the default in the Arduino IDE, but my guess is that they didn’t want to bother poor novice programmers with messages from the compiler explaining how their program is quite possibly wrong. After all, users don’t like error messages so we shouldn’t report errors. And novice programmers are especially fazed by error messages so it’s better to just let them struggle on their own with the arcane mysteries of undefined behaviour in C?

The Arduino IDE does offer a dropdown for “compiler warnings”. The default is None. Setting it to All didn’t produce anything about my integer overflow bugs. And, the output was very hard to find anyway because the “log” window has a constant stream of strange messages from javax.jmdns, with hex DNS packet dumps. WTF.

Other things that were vexing about the Arduino IDE: it has fairly fixed notions (which don’t seem to be documented) about how your files and directories ought to be laid out, and magical machinery for finding things you put “nearby” its “sketch” (as it calls them) and sticking them in its ear, causing lossage. It has a tendency to become confused if you edit files under its feet (e.g. with git checkout). It wasn’t really very suited to a workflow where principal development occurs elsewhere.

And, important settings such as the project’s clock speed, or even the target board, or the compiler warning settings to use weren’t stored in the project directory along with the actual code. I didn’t look too hard, but I presume they must be in a dotfile somewhere. This is madness.

Apparently there is an Arduino CLI too. But I was already quite exasperated, and I didn’t like the idea of going so far off the beaten path, when the whole point of using all this was to stay with popular tooling and share fate with others. (How do these others cope? I have no idea.)

As for the integer overflow bug:

I didn’t seriously consider trying to figure out how to control in detail the C compiler options passed by the Arduino IDE. (Perhaps this is possible, but not really documented?) I did consider trying to run a cross-compiler myself from the command line, with appropriate warning options, but that would have involved providing (or stubbing, again) the Arduino/DigiSpark libraries (and bugs could easily lurk at that interface).
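
For the record, the sort of invocation I mean is roughly this (a sketch; the flags are standard avr-gcc/GCC ones, -Wconversion being the one that reports narrowing implicit conversions, and the include path stands in for the stubbed libraries):

avr-gcc -mmcu=attiny85 -Os -Wall -Wextra -Wconversion \
        -I/path/to/arduino/stubs -c program.c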

Instead, I thought, “if only I had written the thing in Rust”. But that wasn’t possible, was it? Does Rust even support this board?

Rust on the DigiSpark

I did a cursory web search and found a very useful blog post by Dylan Garrett.

This encouraged me to think it might be a workable strategy. I looked at the instructions there. It seemed like I could run them via the privsep arrangement I use to protect myself when developing using upstream cargo packages from crates.io.

I got surprisingly far surprisingly quickly. It did, rather startlingly, cause my rustup to download a random recent Nightly Rust, but I have six of those already for other Reasons. Very quickly I got the “trinket” LED blink example, referenced by Dylan’s blog post, to compile. Manually copying the file to the video laptop allowed me to run the previously-downloaded micronucleus executable and successfully run the blink example on my board!

I thought a more principled approach to the bootloader client might allow a more convenient workflow. I found the upstream Micronucleus git releases and tags, and had a look over its source code, release dates, etc. It seemed plausible, so I compiled v2.6 from source. That was a success: now I could build and install a Rust program onto my board, from the command line, on my main machine. No more pratting about with the video laptop.

I had got further, more quickly, with Rust, than with the Arduino IDE, and the outcome and workflow were superior.

So, basking in my success, I copied the directory containing the example into my own project, renamed it, and adjusted the path references.

That didn’t work. Now it didn’t build. Even after I copied over .cargo/config.toml and rust-toolchain.toml it didn’t build, producing a variety of exciting messages, depending what precisely I tried. I don’t have detailed logs of my flailing: the instructions say to build it by cd’ing to the subdirectory, and, given that what I was trying to do was to not follow those instructions, it didn’t seem sensible to try to prepare a proper repro so I could file a ticket. I wasn’t optimistic about investigating it more deeply myself: I have some experience of fighting cargo, and it’s not usually fun. Looking at some of the build control files, things seemed quite complicated.

Additionally, not all of the crates are on crates.io. I have no idea why not. So, I would need to supply “local” copies of them anyway. I decided to just git subtree add the avr-hal git tree.

(That seemed better than the approach taken by the avr-hal project’s cargo template, since that template involves a cargo dependency on a foreign git repository. Perhaps it would be possible to turn them into path dependencies, but given that I had evidence of file-location-sensitive behaviour, which I didn’t feel like I wanted to spend time investigating, using that seemed like it would possibly have invited more trouble. Also, I don’t like package templates very much. They’re a form of clone-and-hack: you end up stuck with whatever bugs or oddities exist in the version of the template which was current when you started.)

Since I couldn’t get things to build outside avr-hal, I edited the example, within avr-hal, to refer to my (one) program.rs file outside avr-hal, with a #[path] instruction. That’s not pretty, but it worked.
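
A minimal sketch of that arrangement (the path and names are illustrative, not my actual layout):

// In the example crate inside avr-hal: pull in a source file that
// lives outside the avr-hal tree. The relative path is illustrative.
#[path = "../../../../my-project/src/program.rs"]
mod program;

fn main() {
    program::run();   // assumes program.rs provides a pub fn run()
}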

I also had to write a nasty shell script to work around the lack of good support in my nailing-cargo privsep tool for builds where cargo must be invoked in a deep subdirectory, and/or Cargo.lock isn’t where it expects, and/or the target directory containing build products is in a weird place. It also has to filter the output from cargo to adjust the pathnames in the error messages. Otherwise, running both cd A; cargo build and cd B; cargo build from a Makefile produces confusing sets of error messages, some of which contain filenames relative to A and some relative to B, making it impossible for my Emacs to reliably find the right file.
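
The wrapper is roughly this shape (a hedged sketch; directory names and the exact rewrite are assumptions, not my actual script):

#!/bin/sh
# Hypothetical sketch: run cargo in a subdirectory, then rewrite the
# relative pathnames in its diagnostics so they resolve from the top
# level. (A real version would also need to propagate cargo's exit
# status through the pipe, and handle the Cargo.lock/target oddities.)
set -e
subdir="avr-hal/examples/digispark"
cd "$subdir"
cargo build 2>&1 | sed "s|--> |--> $subdir/|"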

RIIR (Rewrite It In Rust)

Having got my build tooling sorted out I could go back to my actual program.

I translated the main program, and the simulator, from C to Rust, more or less line-by-line. I made the Rust version of the simulator produce the same output format as the C one. That let me check that the two programs had the same (simulated) behaviour. Which they did (after fixing a few glitches in the simulator log formatting).

Emboldened, I flashed the Rust version of my program to the DigiSpark. It worked right away!

RIIR had caused the bug to vanish. Of course, to rewrite the program in Rust, and get it to compile, it was necessary to be careful about the types of all the various integers, so that’s not so surprising. Indeed, it was the point. I was then able to refactor the program to be a bit more natural and DRY, and improve some internal interfaces. Rust’s greater power, compared to C, made those cleanups easier, making them worthwhile.

However, when doing real-world testing I found a weird problem: my timings were off. Measured, the real program was too fast by a factor of slightly more than 2. A bit of searching (and searching my memory) revealed the cause: I was using a board template for an Adafruit Trinket. The Trinket has a clock speed of 8MHz. But the DigiSpark runs at 16.5MHz. (This is discussed in a ticket against one of the C/C++ libraries supporting the ATTiny85 chip.)

The Arduino IDE had offered me a choice of clock speeds. I have no idea how that dropdown menu took effect; I suspect it was adding prelude code to adjust the clock prescaler. But my attempts to mess with the CPU clock prescaler register by hand at the start of my Rust program didn’t bear fruit.

So instead, I adopted a bodge: since my code has (for code structure reasons, amongst others) only one place where it deals with the underlying hardware’s notion of time, I simply changed my delay function to adjust the passed-in delay values, compensating for the wrong clock speed.
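
The idea, as a hedged sketch (names are illustrative; the real delay comes from the board HAL):

// The Trinket template assumes 8 MHz but the DigiSpark runs at
// 16.5 MHz, so requested delays finish too quickly unless scaled
// up by 16.5/8 = 2.0625.
const ACTUAL_HZ: u64 = 16_500_000;
const ASSUMED_HZ: u64 = 8_000_000;

fn delay_ms(ms: u32) {
    // Scale in u64 so the intermediate product cannot overflow.
    let scaled = (ms as u64 * ACTUAL_HZ / ASSUMED_HZ) as u32;
    hal_delay_ms(scaled);   // the underlying HAL delay, stubbed here
}

fn hal_delay_ms(_ms: u32) { /* provided by the board HAL in the real code */ }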

There was probably a more principled way. For example I could have (re)based my work on either of the two unmerged open MRs which added proper support for the DigiSpark board, rather than abusing the Adafruit Trinket definition. But, having a nearly-working setup, and an explanation for the behaviour, I preferred the narrower fix to reopening any cans of worms.

An offer of help

As will be obvious from this posting, I’m not an expert in dev tools for embedded systems. Far from it. This area seems like quite a deep swamp, and I’m probably not the person to help drain it. (Frankly, much of the improvement work ought to be done, and paid for, by hardware vendors.)

But, as a full Member of the Debian Project, I have considerable gatekeeping authority there. I also have much experience of software packaging, build systems, and release management. If anyone wants to try to improve the situation with embedded tooling in Debian, and is willing to do the actual packaging work, I would be happy to advise, and to review and sponsor your contributions.

An obvious candidate: it seems to me that micronucleus could easily be in Debian. Possibly a DigiSpark board definition could be provided to go with the arduino package.

Unfortunately, IMO Debian’s Rust packaging tooling and workflows are very poor, and the first of my suggestions for improvement wasn’t well received. So if you need help with improving Rust packages in Debian, please talk to the Debian Rust Team yourself.

Conclusions

Embedded programming is still rather a mess and probably always will be.

Embedded build systems can be bizarre. Documentation is scant. You’re often expected to download “board support packages” full of mystery binaries, from the board vendor (or others).

Dev tooling is maddening, especially if aimed at novice programmers. You want version control? Hermetic tracking of your project’s build and install configuration? Actually to be told by the compiler when you write obvious bugs? You’re way off the beaten track.

As ever, Free Software is under-resourced and the maintainers are often busy, or (reasonably) have other things to do with their lives.

All is not lost

Rust can be a significantly better bet than C for embedded software:

The Rust compiler will catch a good proportion of programming errors, and an experienced Rust programmer can arrange (by suitable internal architecture) to catch nearly all of them. When writing for a chip in the middle of some circuit, where debugging involves staring at an LED or a multimeter, that’s precisely what you want.

Rust embedded dev tooling was, in this case, considerably better. Still quite chaotic and strange, and less mature, perhaps. But: significantly fewer mystery downloads, and significantly fewer crazy deviations from the language’s normal build system. Overall, less bad software supply chain integrity.

The ATTiny85 chip, and the DigiSpark board, served my hardware needs very well. (More about the hardware aspects of this project in a future posting.)



Categories: FLOSS Project Planets

Jamie McClelland: Users without passwords

Sun, 2023-10-22 08:27

About fifteen years ago, while debugging a database problem, I was horrified to discover that we had two root users - one with the password I had been using and one without a password. Nooo!

So, I wrote a simple maintenance script that searched for and deleted any user in our database without a password. I even made it part of our puppet recipe - since the database server was in use by users and I didn’t want anyone using SQL statements to change their password to an empty value.
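
Something along these lines (a hypothetical reconstruction - the original script isn’t shown, and the relevant column names changed across MySQL/MariaDB versions):

-- Delete any account with an empty password (old-style mysql.user schema).
DELETE FROM mysql.user WHERE Password = '';
FLUSH PRIVILEGES;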

Then I forgot about it.

Recently, I upgraded our MariaDB databases to Debian bullseye, which inserted the mariadb.sys user which… doesn’t have a password set. It seems to be locked down in other ways, but my dumb script didn’t know about that and happily deleted the user.

Who needs that mariadb.sys user anyway?

Apparently we all do. On one server, I can’t login as root anymore. On another server I can login as root, but if I try to list users I get an error:

ERROR 1449 (HY000): The user specified as a definer ('mariadb.sys'@'localhost') does not exist

The Internet is full of useless advice. The most common is to simply insert that user. Except…

MariaDB [mysql]> CREATE USER `mariadb.sys`@`localhost` ACCOUNT LOCK PASSWORD EXPIRE;
ERROR 1396 (HY000): Operation CREATE USER failed for 'mariadb.sys'@'localhost'
MariaDB [mysql]>

Yeah, that’s not going to work.

It seems like we are dealing with two changes. One, the old mysql.user table was replaced by the global_priv table, with mysql.user turned into a view for backwards compatibility.

And two, for sensible reasons the default definer for this view has been changed from the root user to a user that, ahem, is unlikely to be changed or deleted.

Apparently I can’t add the mariadb.sys user because it would alter the user view, which has a definer that doesn’t exist. Although I’m not sure if this really is the reason.

Fortunately, I found an excellent suggestion for changing the definer of a view. My modified version of the answer is: run the following command, which will generate a SQL statement:

SELECT CONCAT("ALTER DEFINER=root@localhost VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'mariadb.sys@localhost';

Then, execute the statement.

And then also update the mysql.proc table:

UPDATE mysql.proc SET definer = 'root@localhost' WHERE definer = 'mariadb.sys@localhost';

And lastly, I had to run:

DELETE FROM tables_priv WHERE User = 'mariadb.sys';
FLUSH privileges;

Wait, was the tables_priv entry the whole problem all along? Not sure. But now I can run:

CREATE USER `mariadb.sys`@`localhost` ACCOUNT LOCK PASSWORD EXPIRE;
GRANT SELECT, DELETE ON `mysql`.`global_priv` TO `mariadb.sys`@`localhost`;

And reverse the other statements:

SELECT CONCAT("ALTER DEFINER=`mariadb.sys`@localhost VIEW ", table_name, " AS ", view_definition, ";") FROM information_schema.views WHERE table_schema='mysql' AND definer = 'root@localhost';

[Execute the output.]

UPDATE mysql.proc SET definer = 'mariadb.sys@localhost' WHERE definer = 'root@localhost';

And while we’re on the topic of borked MariaDB authentication, here are the steps to change the root password if you can’t get in at all:

systemctl stop mariadb
mariadbd-safe --skip-grant-tables --skip-networking &
mysql -u root
[mysql]> FLUSH PRIVILEGES;
[mysql]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'new_password';
mariadb-admin shutdown
systemctl start mariadb
Categories: FLOSS Project Planets

Daniel Lange: Removing the New Event Button from Thunderbird v115 Calendar

Sun, 2023-10-22 08:25

Thunderbird in Debian stable (Bookworm) has received Thunderbird v115.3.1 as a security update.

With it comes "Supernova", a UI redesign. There is a Mozilla blogpost with a walk-through of the new UI.

Unfortunately it features a super eye-catching "New Message" button that - thankfully - can be disabled. Even the whole space above the email folder pane can be recovered by disabling the folder pane header at Burger Menu (☰) -> View -> Folders -> Folder Pane Header.

Unfortunately there is no way to remove the same eye-catching "New Event" button for the Calendar view via a UI setting.

This needs a user CSS file that overrides the button to be non-visible.

To make it process the user CSS, Thunderbird needs a config setting to be enabled:

  1. Burger Menu (☰) -> Settings -> General
  2. Scroll down all the way
  3. Click the Config editor... button on the bottom right
  4. Accept that hell will freeze over because you configure software
  5. Search for toolkit.legacyUserProfileCustomizations.stylesheets
  6. Toggle the value to true to enable the user CSS

You can manually add user_pref("toolkit.legacyUserProfileCustomizations.stylesheets", true); to ~/.thunderbird/abcdefgh.default/prefs.js to the same effect (do this while Thunderbird is not running; replace abcdefgh with your Thunderbird profile ID).

Now create a new directory ~/.thunderbird/abcdefgh.default/chrome/, again replacing abcdefgh with your profile ID.

Inside the new directory create a userChrome.css file with the following content:

/* Hide Calendar New Event button */
#primaryButtonSidePanel {
    display: none !important;
}

Restart Thunderbird. And enjoy less visual obstruction when using the Calendar.

Categories: FLOSS Project Planets

Russell Coker: Brother MFC-J4440DW Printer

Sun, 2023-10-22 00:07

I just had to setup a Brother MFC-J4440DW for a relative. They were replacing an old HP laser printer that mysteriously stopped printing as dark as it should, I don’t know whether the HP printer had worn out or if the HP firmware decided to hobble it to make them buy a new printer. In either case HP is well known for shady behaviour with their printer firmware and should be avoided.

The new Brother printer has problems when using wifi and auto DNS. I don’t know how much of that was due to the printer itself and how much was due to the wifi AP provided by Foxtel. Anyway it works better with Ethernet and a fixed address (the wifi AP didn’t allow me to set a fixed address). I think the main thing was configuring CUPS to connect via the IP address and not use Avahi etc.
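
For reference, a driverless-IPP setup along these lines should do it (the queue name and URI path are assumptions, not the exact commands I used):

lpadmin -p Brother_MFC_J4440DW -E -v ipp://10.0.0.3/ipp/print -m everywhere
lpoptions -d Brother_MFC_J4440DW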

One problem I had with printing was that programs like Chrome and LibreOffice would hang for about a minute before printing, that turned out to be due to /etc/cups/lpoptions having the old printer (which had been removed) listed as the default. It would be nice if the web configuration for cups would change that when I set the default printer.

CUPS doesn’t seem to support USB printing for this printer. If it is possible to get this printer to print via USB then I welcome a comment describing how to do it.

Scanning only seems to work on Ethernet not on USB; the command for scanning that I ended up with was "scanimage -d escl:http://10.0.0.3:80". Again I welcome comments from anyone who has had success in scanning via USB. There are probably some Linux users who would find it really inconvenient to set up a network interface specifically for printing. It’s easy for me as I have a pile of spare ethernet cards and a box of cables but some people would have to buy this. Also it’s disappointing that Brother didn’t include an Ethernet cable or a USB cable in the box. But if that makes it cheaper I can deal with that. The resolution for scanning is only 832*1163 and it’s black and white; I think that generally scanning in printers is a bad idea, taking a photo with a phone is a better way of scanning documents.

Generally this printer works well and is cheap at only $299, a price for disposable hardware by today’s standards.

There are Debian packages from Brother for the printer. The scanner package looks like it just configures scanimage, and I’m not sure whether the stock version of CUPS in Debian will do it without the Brother package. One thing I found interesting is that the package mfcj4440dwpdrv has the following shell code in the postinst to label files for SE Linux:

if [ "$(which semanage 2> /dev/null)" != '' ];then semanage fcontext -a -t cupsd_rw_etc_t '/opt/brother/Printers/mfcj4440dw/inf(/.*)?' semanage fcontext -a -t bin_t '/opt/brother/Printers/mfcj4440dw/lpd(/.*)?' semanage fcontext -a -t bin_t '/opt/brother/Printers/mfcj4440dw/cupswrapper(/.*)?' if [ "$(which restorecon 2> /dev/null)" != '' ];then restorecon -R /opt/brother/Printers/mfcj4440dw fi fi

This is the first time I’ve seen a Debian package from a hardware vendor with SE Linux specific code. I can’t just add those rules to the Debian policy: semanage fails when asked to add an identical context spec, so duplicating the rules there would break the postinst.

In the latest policy I’m uploading to Debian/Unstable (version 2.20231010-1) there are the following 3 lines to deal with this; the first was already there for some time and the other 2 I just added:

/opt/brother/Printers/([^/]+/)?inf(/.*)?        gen_context(system_u:object_r:cupsd_rw_etc_t,s0)
/opt/brother/Printers/[^/]+/lpd(/.*)?           gen_context(system_u:object_r:bin_t,s0)
/opt/brother/Printers/[^/]+/cupswrapper(/.*)?   gen_context(system_u:object_r:bin_t,s0)

The Brother employee(s) who added the SE Linux code to their package are welcome to connect to me on LinkedIn.

Related posts:

  1. Brother MFC-9120CN Color LASER Printer I have just bought a Brother MFC-9120CN Multi-Function Color LED...
  2. Scanning with a MFC-9120CN on Bullseye I previously wrote about getting a Brother MFC-9120CN multifunction printer/scanner...
  3. Lexmark Supposedly Supports Linux I wanted to get a Lexmark Prestige Pro805 printer to...
Categories: FLOSS Project Planets

Dirk Eddelbuettel: qlcal 0.0.8 on CRAN: QuantLib 1.32 Updates

Sat, 2023-10-21 03:05

The eighth release of the still fairly new qlcal package arrived at CRAN today.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more.

This release brings updates from the just-released QuantLib 1.32 version. It also avoids a nag from R during build (“only specify C++14 if you really need it”) by switching to a versioned depends on R 4.2.0 or later. This implies C++14 or later as the default. If you need qlcal on an older R, grab the sources, edit DESCRIPTION to remove this constraint and set the standard as before in src/Makevars (or src/Makevars.win).
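
Concretely, that edit looks something like this (the Depends line is an illustration of a relaxed constraint, not the package’s actual prior contents; CXX_STD is the standard R mechanism in src/Makevars):

# DESCRIPTION: relax the versioned dependency, e.g.
Depends: R (>= 3.5.0)

# src/Makevars (and src/Makevars.win): request the standard explicitly
CXX_STD = CXX14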

Changes in version 0.0.8 (2023-10-21)
  • A small set of updates from QuantLib 1.32 have been applied

  • The explicit C++14 compilation standard has been replaced with an implicit one by relying on R (>= 4.2.0)

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Russell Coker: More About the PineTime

Sat, 2023-10-21 01:20

Since my initial review of the PineTime 10 days ago [1] I’ve used it in more situations. My initial tests were done connecting to a Huawei Nova 7i [2]; I am now using it with a Huawei Mate 10 Pro. I’ve also upgraded the PineTime from version 1.11 (from memory) of the Infinitime software that runs on the watch to version 1.13 [3]. To upgrade it I had to download the file pinetime-mcuboot-app-dfu-1.13.0.zip to the Android phone and then use the File Installer option of the GadgetBridge Android app to upload it. The zip file does NOT need to be extracted first; I don’t know if GadgetBridge extracts it before upload or if the PineTime firmware has a copy of unzip, but it just works.

Version 1.13 is purported to use less battery. I haven’t directly verified this as I turned on the new feature of measuring my pulse 24*7, which significantly increases battery use. The end result is that the battery is being used up at about the same rate as before; adding a new battery-hungry feature while reducing battery use for other things to compensate is a good thing and strongly suggests that battery use has decreased overall.

I have noticed that now with a different phone and different version of the firmware it doesn’t reconnect as reliably. Sometimes I need to turn bluetooth off and on on the watch before it works (which indicates an issue with the firmware) and sometimes I need to turn bluetooth off and on on the phone, which indicates a phone issue. Also I often unlock my phone to find the GadgetBridge notification saying that it’s disconnected and it usually connects fine, but I get the impression it’s often disconnected. Does the Mate 10 Pro have a problem that triggers a bug in the PineTime? Does the 1.13 version of InfiniTime have a problem that triggers a bug in the Mate 10 Pro? Are they both independently buggy? Is the new version of InfiniTime just disconnecting when it’s not doing stuff to save battery and triggering bugs that weren’t obvious before?

I’ve tested the media control which basically works, sometimes it gets out of sync and displays the name of the previous track which is annoying. The PineTime is IP67 rated and there are reports on Reddit of people wearing it in the shower and swimming pool. I wouldn’t recommend those things although it should work OK. It might be an option for controlling music when in the bath or when having a pool party.

When the watch is running normally and displays a new notification it’s not possible to swipe it away. You have to go to the notifications menu afterwards to swipe them which I find annoying. Also the notification of an inbound call remains in the notification list indefinitely while I think a more appropriate action is to have it disappear in an amount of time where it’s already been answered or gone to voicemail. Voicemail timeouts are as low as 15 seconds so having the notification disappear after 1 minute would be reasonable.

I have configured my PineTime to take 2 taps on the screen to wake up. I previously had it set to 1 tap and had problems with accidentally doing something it registered as a tap while in bed and waking me up. Also I found that if I want to turn the screen on when my hands are dirty so I don’t want to touch it with a finger then tapping it on my nose works well. Apparently it is programmed to ignore taps on large areas so I can’t wake it with my elbow.

I’ve set up a PineTime for an elderly relative who is greatly enjoying it. I don’t expect them to flash new firmware or do any other complex things, but they are doing well with using the device. They are considering getting a different band as they don’t like rubber. I’m sure their local jeweler has some leather and metal bands that could fit. There is a design on Thingiverse for a PineTime case [4]; this could be used for making an adaptor to fit a PineTime to a greatly different type of band, an instrument console, etc.

Generally I think the PineTime is an OK smart watch for someone who’s not into FOSS for its own sake. My relative could have been happy with a slightly cheaper watch, but it’s still significantly cheaper than the Samsung and Apple options so it’s not particularly expensive. A benefit for them is that having the same type of SmartWatch as me, they will get better tech support.

Related posts:

  1. The PineTime I have just got a PineTime smart watch [1] from...
  2. PinePhone Status 4 months ago I got my PinePhonePro [1]. Since then...
  3. Long-term Device Use It seems to me that Android phones have recently passed...
Categories: FLOSS Project Planets

Iustin Pop: How to set a per-app locale in MacOS

Fri, 2023-10-20 17:11

After spending ~20+ years with a Linux desktop, I’m trying to expand my desktop setup to include MacOS (well, desktop/laptop, I mean end user in general). And to my surprise, there’s no clear repository of MacOS info. Man pages yes, some StackOverflow, some Apple forums, but no canonical version. Or, I didn’t find it, please enlighten me 🙏

Another issue is that Apple apparently changes behaviour without clearly documenting it. In this specific case, the “region” part of the locale went through significant churn lately.

So, my goal:

  • keep system wide locale set to English default, with region Switzerland.
  • this region=CH means that date and number formats are also changed to CH version, e.g. one thousand + ½ is displayed as 1'000,50.
  • however, for a single app, I want a “standard” en_US locale, with the above number shown/formatted as 1,000.50; well, it’s more about parsing than formatting, but that’s irrelevant.

In Linux, this would simply mean running the app with the correct environment variables. But MacOS deprecated this a while back (it used to work). After reading what I could, the solution is quite easy, just not obvious:

% defaults read .GlobalPreferences | grep en_
    AKLastLocale = "en_CH";
    AppleLocale = "en_CH";
% defaults read -app FooBar
(has no AppleLocale key)
% defaults write -app FooBar AppleLocale en_US

And that’s it. Now, the defaults man page says the global-global is NSGlobalDomain; I don’t know where I got the .GlobalPreferences. But I only needed to know the key name (in this case, AppleLocale - of course it couldn’t be LC_ALL/LANG).

One day I’ll know MacOS better, but I’ve been trying to learn more for 2+ years now, and it’s not a smooth ride. Old dog, new tricks, right?

Categories: FLOSS Project Planets
