Planet Debian

Planet Debian - https://planet.debian.org/

Jonathan McDowell: Building a desktop to improve my work/life balance

Thu, 2021-12-02 15:00

It’s been over 20 months since the first COVID lockdown kicked in here in Northern Ireland and I started working from home. Even when the strict lockdown was lifted the advice here has continued to be “If you can work from home you should work from home”. I’ve been into the office here and there (for new starts given you need to hand over a laptop and sort out some login details it’s generally easier to do so in person, and I’ve had a couple of whiteboard sessions that needed the high bandwidth face to face communication), but day to day is all from home.

Early on I commented that work had taken over my study. This has largely continued to be true. I set my work laptop on the stand on a Monday morning and it sits there until Friday evening, when it gets switched for the personal laptop. I have a lovely LG 34UM88 21:9 Ultrawide monitor, and my laptops are small and light so I much prefer to use them docked. Also my general working pattern is to have a lot of external connections up and running (build machine, test devices, log host) which means a suspend/resume cycle disrupts things. So I like to minimise moving things about.

I spent a little bit of time trying to find a dual laptop stand so I could have both machines setup and switch between them easily, but I didn’t find anything that didn’t seem to be geared up for DJs with a mixer + laptop combo taking up quite a bit of desk space rather than stacking laptops vertically. Eventually I realised that the right move was probably a desktop machine.

Now, I haven’t had a desktop machine since before I moved to the US, realising at the time that having everything on my laptop was much more convenient. I decided I didn’t want something too big and noisy. Cheap GPUs seem hard to get hold of these days - I’m not a gamer so all I need is something that can drive a ~ 4K monitor reliably enough. Looking around the AMD Ryzen 7 5700G seemed to be a decent CPU with one of the better integrated GPUs. I spent some time looking for a reasonable Mini-ITX case + motherboard and then I happened upon the ASRock DeskMini X300. This turns out to be perfect; I’ve no need for a PCIe slot or anything more than an m.2 SSD. I paired it with a Noctua NH-L9a-AM4 heatsink + fan (same as I use in the house server), 32GB DDR4 and a 1TB WD SN550 NVMe SSD. Total cost just under £650 inc VAT + delivery (and that’s a story for another post).

A desktop solves the problem of fitting both machines on the desk at once, but there’s still the question of smoothly switching between them. I read Evgeni Golov’s article on a simple KVM switch for €30. My monitor has multiple inputs, so that’s sorted. I did have a cheap USB2 switch (all I need for the keyboard/trackball) but it turned out to be pretty unreliable at the host detecting the USB change. I bought a UGREEN USB 3.0 Sharing Switch Box instead and it’s turned out to be much more dependable. The problem is that the LG 34UM88 turns out to have a poor DDC implementation, so while I can flip the keyboard easily with the UGREEN box I also have to manually select the monitor input. Which is a bit annoying, but not terrible.

The important question is whether this has helped. I built all this at the end of October, so I’ve had a month to play with it. Turns out I should have done it at some point last year. At the end of the day, instead of either sitting “at work” for a bit longer or completely avoiding the study, I’m able to lock the work machine and flick to my personal setup. Even sitting in the same seat, that “disconnect”, and the knowledge that I won’t see work Slack messages or emails come in and feel I should respond, really helps. It also means I have access to my personal setup during the week without incurring a hit at the start of the working day when I have to set things up again. So it’s much easier to dip into some personal tech stuff in the evening than it was previously. And since I don’t need to set up my personal config again each time, I can pick up where I left off. All of which is really nice.

It’s also got me thinking about other minor improvements I should make to my home working environment to try and improve things. One obvious thing now that winter is here again is to improve my lighting; I have a good overhead LED panel but it’s terribly positioned for video calls, being just behind me. So I think I’m looking for some sort of strip light I can put behind the large monitor to give a decent degree of backlight (possibly bouncing off the white wall). There are lots of cheap options I’m not convinced about, and a few ridiculously priced ones from photographer friends; suggestions welcome.

Categories: FLOSS Project Planets

Paul Tagliamonte: Processing IQ data formats (Part 1/5) 🐀

Thu, 2021-12-02 12:00
🐀 This post is part of a series called "PACKRAT". If this is the first post you've found, it'd be worth reading the intro post first and then looking over all posts in the series.

When working with SDRs, information about the signals your radio is receiving is communicated by streams of IQ data. IQ is short for “In-phase” and “Quadrature”, which means 90 degrees out of phase. Values in the IQ stream are commonly treated as complex numbers because doing so helps greatly when processing the IQ data for meaning.

I won’t get too deep into what IQ is or why complex numbers are used (mostly since I don’t think I fully understand it well enough to explain it yet), but here are some basics in case this is your first interaction with IQ data, before going off and reading more.

Before we get started — at any point, if you feel lost in this post, it's OK to take a break to do a bit of learning elsewhere on the internet. I'm still new to this, so I'm sure my overview in one paragraph here won't help clarify things too much. This took me months to sort out on my own. It's not you, really! I particularly enjoyed reading visual-dsp.switchb.org when it came to learning about how IQ represents signals, and Software-Defined Radio for Engineers for a more general reference.

Each value in the stream is taken at a precisely spaced sampling interval (the inverse of the sampling rate of the radio). Jitter in that sampling interval, or a drift between the requested and actual sampling rate (usually represented in PPM, or parts per million – how many parts out of every million the actual rate deviates from the requested rate), can cause errors in frequency. In the case of a PPM error, one radio may think it’s at 100.1MHz and the other may think it’s at 100.2MHz, and jitter will result in added noise in the resulting stream.
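To make the PPM arithmetic concrete, here's a small sketch (in Go, matching the flavor of the pseudocode later in this post; the function name and example values are mine, not from any SDR library):

```go
package main

import "fmt"

// freqError returns the absolute frequency error, in Hz, that a clock
// off by `ppm` parts per million causes at a given tuned frequency.
func freqError(tunedHz, ppm float64) float64 {
	return tunedHz * ppm / 1e6
}

func main() {
	// A 10 PPM error while tuned to 100 MHz works out to 1 kHz:
	fmt.Printf("%.0f Hz\n", freqError(100e6, 10)) // prints "1000 Hz"
}
```

The same relative error matters more the higher you tune, which is why PPM (rather than a fixed Hz offset) is the natural unit for clock drift.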

A single IQ sample is both the real and imaginary values, together. The complex number (both parts) is the sample. The number of samples per second is the number of real and imaginary value pairs per second.

Each sample is reading the electrical energy coming off the antenna at that exact time instant. We’re looking to see how that goes up and down over time to determine what frequencies we’re observing around us. If the IQ stream is only real-valued measures (e.g., float values rather than complex values reading voltage from a wire), you can still send and receive signals, but those signals will be mirrored across your 0Hz boundary. That means if you’re tuned to 100MHz, and you have a nearby transmitter at 99.9MHz, you’d see it at 100.1MHz. If you want to get an intuitive understanding of this concept before getting into the heavy math, a good place to start is looking at how Quadrature encoders work. Using complex numbers means we can see “up” in frequency as well as “down” in frequency, and understand that those are different signals.

The reason why we need negative frequencies is that our 0Hz is the center of our SDR’s tuned frequency, not actually at 0Hz in nature. Generally speaking, it’s doing loads in hardware (and firmware!) to mix the raw RF signals with a local oscillator to a frequency that can be sampled at the requested rate (fundamentally the same concept as a superheterodyne receiver), so a frequency of ‘-10MHz’ means that signal is 10 MHz below the center of our SDR’s tuned frequency.

The sampling rate dictates how much bandwidth is representable in the data stream. You’ll sometimes see this expressed via the Nyquist frequency, which is one half of the sampling rate. Intuitively, if you think about the amount of bandwidth observable as being 1:1 with the sampling rate of the stream, and the middle of your bandwidth is 0 Hz, you only have enough space to go up in frequency for half of your bandwidth – or half of your sampling rate. The same goes for going down in frequency.
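As a rough sketch of that intuition (the center frequency and sample rate here are illustrative values I picked, not anything specific to one radio):

```go
package main

import "fmt"

// nyquist returns half the sample rate: the furthest we can go "up"
// (or "down") in frequency from the center of the IQ stream.
func nyquist(sampleRate float64) float64 {
	return sampleRate / 2
}

func main() {
	center := 100e6 // tuned to 100 MHz (illustrative)
	rate := 2.048e6 // 2.048 million samples per second (illustrative)
	fmt.Printf("observable: %.3f MHz to %.3f MHz\n",
		(center-nyquist(rate))/1e6, (center+nyquist(rate))/1e6)
	// prints "observable: 98.976 MHz to 101.024 MHz"
}
```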

Float 32 / Complex 64

IQ samples that are being processed by software are commonly processed as an interleaved pair of 32 bit floating point numbers, or a 64 bit complex number. The first float32 is the real value, and the second is the imaginary value.

I#0 Q#0 I#1 Q#1 I#2 Q#2

The complex number 1+1i is represented as 1.0 1.0 and the complex number -1-1i is represented as -1.0 -1.0. Unless otherwise specified, all the IQ samples and pseudocode to follow assumes interleaved float32 IQ data streams.

Example interleaved float32 file (10Hz Wave at 1024 Samples per Second)

RTL-SDR

IQ samples from the RTL-SDR are encoded as a stream of interleaved unsigned 8 bit integers (uint8 or u8). The first sample is the real (in-phase or I) value, and the second is the imaginary (quadrature or Q) value. Together each pair of values makes up a complex number at a specific time instant.

I#0 Q#0 I#1 Q#1 I#2 Q#2

The complex number 1+1i is represented as 0xFF 0xFF and the complex number -1-1i is represented as 0x00 0x00. The complex number 0+0i is not easily representable – since half of 0xFF is 127.5.

Complex Number   Representation
 1+1i            []uint8{0xFF, 0xFF}
-1+1i            []uint8{0x00, 0xFF}
-1-1i            []uint8{0x00, 0x00}
 0+0i            []uint8{0x80, 0x80} or []uint8{0x7F, 0x7F}

And finally, here’s some pseudocode to convert an rtl-sdr style IQ sample to a floating point complex number:

...
in = []uint8{0x7F, 0x7F}
real = (float32(in[0]) - 127.5) / 127.5
imag = (float32(in[1]) - 127.5) / 127.5
out = complex(real, imag)
...
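A runnable Go version of the pseudocode above might look like this (`rtlToComplex` is my name for the helper, not anything from an rtl-sdr library):

```go
package main

import "fmt"

// rtlToComplex converts one interleaved rtl-sdr uint8 I/Q pair into a
// complex64, centering the unsigned range on 127.5 as described above.
func rtlToComplex(i, q uint8) complex64 {
	return complex(
		(float32(i)-127.5)/127.5,
		(float32(q)-127.5)/127.5,
	)
}

func main() {
	fmt.Println(rtlToComplex(0xFF, 0xFF)) // prints "(1+1i)"
	fmt.Println(rtlToComplex(0x00, 0x00)) // prints "(-1-1i)"
}
```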

Example interleaved uint8 file (10Hz Wave at 1024 Samples per Second)

HackRF

IQ samples from the HackRF are encoded as a stream of interleaved signed 8 bit integers (int8 or i8). The first sample is the real (in-phase or I) value, and the second is the imaginary (quadrature or Q) value. Together each pair of values makes up a complex number at a specific time instant.

I#0 Q#0 I#1 Q#1 I#2 Q#2

Formats that use signed integers have one quirk due to two’s complement: the absolute value of the smallest representable negative number is one more than the largest positive number. int8 values can range from -128 to 127, which means there’s a bit of ambiguity in how +1, 0 and -1 are represented. You can either create a perfectly symmetric range of values between +1 and -1 (but then 0 is not representable), allow more possible values in the negative range, or allow values slightly above (or just below) the maximum of the range.

Within my implementation, my approach has been to scale based on the max integer value of the type, so the lowest possible signed value is actually slightly smaller than -1. Generally, if your code is seeing values that low, the difference between -1 and slightly less than -1 isn’t very significant, even with only 8 bits. Just a curiosity to be aware of.
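A quick way to see the quirk numerically (this just evaluates the scaling described above):

```go
package main

import "fmt"

func main() {
	// Scaling by the max positive value (127) puts the most negative
	// int8 a hair below -1, while +127 scales to exactly 1.
	fmt.Println(float32(-128) / 127) // a little less than -1
	fmt.Println(float32(127) / 127)  // exactly 1
}
```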

Complex Number   Representation
 1+1i            []int8{127, 127}
-1+1i            []int8{-128, 127}
-1-1i            []int8{-128, -128}
 0+0i            []int8{0, 0}

And finally, here’s some pseudocode to convert a hackrf style IQ sample to a floating point complex number:

...
in = []int8{-5, 112}
real = float32(in[0]) / 127
imag = float32(in[1]) / 127
out = complex(real, imag)
...

Example interleaved int8 file (10Hz Wave at 1024 Samples per Second)

PlutoSDR

IQ samples from the PlutoSDR are encoded as a stream of interleaved signed 16 bit integers (int16 or i16). The first sample is the real (in-phase or I) value, and the second is the imaginary (quadrature or Q) value. Together each pair of values makes up a complex number at a specific time instant.

Almost no SDRs capture at a 16 bit depth natively; often you’ll see 12 bit integers (as is the case with the PlutoSDR) being sent around as 16 bit integers. This leads to the next question: are values LSB or MSB aligned? The PlutoSDR sends data LSB aligned (which is to say, the largest real or imaginary value in the stream will not exceed 4095), but expects data being transmitted to it to be MSB aligned (which is to say the lowest set bit possible is the 5th bit in the number, or values can only be set in increments of 16).

As a result, the quirk observed with the HackRF (that the range of values between 0 and -1 is different than the range of values between 0 and +1) does not impact us so long as we do not use the whole 16 bit range.

Complex Number   Representation
 1+1i            []int16{32767, 32767}
-1+1i            []int16{-32768, 32767}
-1-1i            []int16{-32768, -32768}
 0+0i            []int16{0, 0}

And finally, here’s some pseudocode to convert a PlutoSDR style IQ sample to a floating point complex number, including moving the sample from LSB to MSB aligned:

...
in = []int16{-15072, 496}
// shift left 4 bits (16 bits - 12 bits = 4 bits)
// to move from LSB aligned to MSB aligned.
in[0] = in[0] << 4
in[1] = in[1] << 4
real = float32(in[0]) / 32767
imag = float32(in[1]) / 32767
out = complex(real, imag)
...
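A runnable Go sketch of the same conversion, with the alignment shift folded into a helper (`plutoToComplex` is a name I made up for illustration):

```go
package main

import "fmt"

// plutoToComplex converts one LSB-aligned 12-bit-in-int16 I/Q pair from
// the PlutoSDR into a complex64, shifting to MSB alignment first.
func plutoToComplex(i, q int16) complex64 {
	// 16 bits - 12 bits = 4 bits of shift to go from LSB to MSB aligned.
	i <<= 4
	q <<= 4
	return complex(float32(i)/32767, float32(q)/32767)
}

func main() {
	// 2047 and -2048 are the extremes of the 12 bit range.
	fmt.Println(plutoToComplex(2047, -2048)) // very close to (1-1i)
}
```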

Example interleaved i16 file (10Hz Wave at 1024 Samples per Second)

Next Steps

Now that we can read (and write!) IQ data, we can get started first on the transmitter, which we can (in turn) use to test receiving our own BPSK signal, coming next in Part 2!


Paul Tagliamonte: Intro to PACKRAT (Part 0/5) 🐀

Thu, 2021-12-02 11:00

Hello! Welcome. I’m so thrilled you’re here.

Some of you may know this (as I’ve written about in the past), but if you’re new to my RF travels, I’ve spent nights and weekends over the last two years doing some self-directed learning on how radios work. I’ve gone from a very basic understanding of wireless communications, all the way through the process of learning about and implementing a set of libraries to modulate and demodulate data using my now formidable stash of SDRs. I’ve been implementing all of the RF processing code from first principles, purely based on other primitives I’ve written myself, to prove to myself that I understand each concept before moving on.

I’ve just finished a large personal milestone – I was able to successfully send a cURL HTTP request through a network interface into my stack of libraries, through my own BPSK implementation, framed in my own artisanal hand crafted Layer 2 framing scheme, demodulated by my code on the other end, and sent into a Linux network interface. The combination of the Layer 1 PHY and Layer 2 Data Link is something that I’ve been calling “PACKRAT”.

$ curl http://44.127.0.8:8000/
* Connected to 44.127.0.8 (44.127.0.8) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:1313
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
* HTTP/1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Length: 236
<
 ____   _    ____ _  ______      _  _____
|  _ \ / \  / ___| |/ /  _ \    / \|_   _|
| |_) / _ \| |   | ' /| |_) |  / _ \ | |
|  __/ ___ \ |___| . \|  _ <  / ___ \| |
|_| /_/   \_\____|_|\_\_| \_\/_/   \_\_|
* Closing connection 0

In an effort to “pay it forward” to thank my friends for their time walking me through huge chunks of this, and those who publish their work, I’m now spending some time documenting how I was able to implement this protocol. I would never have gotten as far as I did without the incredible patience and kindness of friends spending time working with me, and educators publishing their hard work for the world to learn from. Please accept my deepest thanks and appreciation.

The PACKRAT posts are written from the perspective of a novice radio engineer, but experienced software engineer. I’ll be leaving out a lot of the technical details on the software end and specific software implementation, focusing on the general gist of the implementation in the radio critical components exclusively. The idea here is this is intended to be a framework – a jumping off point – for those who are interested in doing this themselves. I hope that this series of blog posts will come to be useful to those who embark on this incredibly rewarding journey after me.

This is the first post in the series, and it will contain links to all the posts to follow. This is going to be the landing page I link others to – as I publish additional posts, I’ll be updating the links on this page. The posts will also grow a tag, which you can check back on, or follow along with here.

Tau

Tau (𝜏) is a much more natural expression of the mathematical constant used for circles, and I use it rather than Pi (π). You may see me use Tau in code or text – Tau is the same as 2π, so if you see a Tau and don’t know what to do, feel free to mentally or textually replace it with 2π. I just hate always writing 2π everywhere – and only using π (or worse yet – 2π/2) when I mean half of a circle (or, 𝜏/2).

Pseudo-code

Basically none of the code contained in this series is valid on its own. It’s very loosely Go, and only meant to express concepts in terms of software. The examples in the posts shouldn’t be taken on their own as working snippets to process IQ data, but rather used to guide implementations to process the data in question. I’d love to invite all readers to “play at home” with the examples, and try to work through the example data captures!

Captures

Speaking of captures, I’ve included live on-the-air captures of PACKRAT packets, as transmitted from my implementation, in different parts of these posts. This means you can go through the process of building code to parse and receive PACKRAT packets, and then build a transmitter that is validated by your receiver. It’s my hope folks will follow along at home and experiment with software to process RF data on their own!

Posts in this series

Steve Kemp: It has been some time..

Thu, 2021-12-02 10:00

I realize it has been quite some time since I last made a blog-post, so I guess the short version is "I'm still alive", or as Granny Weatherwax would have said:

I ATE'NT DEAD

Of course if I die now this would be an awkward post!

I can't think of anything terribly interesting I've been doing recently, mostly being settled in my new flat and tinkering away with things. The latest "new" code was something for controlling mpd via a web-browser:

This is a simple HTTP server which allows you to minimally control mpd running on localhost:6600. (By minimally I mean literally "stop", "play", "next track", and "previous track").

I have all my music stored on my desktop, I use mpd to play it locally through a pair of speakers plugged into that computer. Sometimes I want music in the sauna, or in the bedroom. So I have a couple of bluetooth speakers which are used to send the output to another room. When I want to skip tracks I just open the mpd-web site on my phone and tap the button. (I did look at android mpd-clients, but at the same time it seemed like installing an application for this was a bit overkill).

I guess I've not been doing so much "computer stuff" outside work for a year or so. I guess lack of time, lack of enthusiasm/motivation.

So looking forward to things? I'll be in the UK for a while over Christmas, barring surprises. That should be nice as I'll get to see family, take our child to visit his grandparents (on his birthday no less) and enjoy playing the "How many Finnish people can I spot in the UK?" game.


Dirk Eddelbuettel: drat 0.2.2 on CRAN: Package Maintenance

Thu, 2021-12-02 08:33

A fresh and new minor release of drat arrived on CRAN overnight. This is another small update relative to the 0.2.0 release in April followed by a 0.2.1 update in July. This release follows the changes made in digest yesterday. We removed the YAML file (and badge) for the disgraced former continuous integration service we shall not name (yet that we all used to use). And we converted the vignette from using the minidown package to the (fairly new) simplermarkdown package which is so much more appropriate for our use of the minimal water.css style.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases is the better way to distribute code. See below for a few custom reference examples.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for two-plus decades how to do this. And you can too: drat is easy to use, documented by six vignettes and just works.

Detailed information about drat is at its documentation site.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.2 (2021-12-01)
  • Travis artifacts and badges have been pruned

  • Vignettes now use simplermarkdown

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page as well as at the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Junichi Uekawa: December.

Wed, 2021-12-01 18:56
December. The world is turbulent and I am still worried where we are going.


Thorsten Alteholz: My Debian Activities in November 2021

Wed, 2021-12-01 10:33

FTP master

This month I accepted 564 and rejected 93 packages. The overall number of packages that got accepted was 591.

Debian LTS

This was my eighty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 40h. During that time I did LTS and normal security uploads of:

  • [DLA 2820-1] atftp security update for two CVEs
  • [DLA 2821-1] axis security update for one CVE
  • [DLA 2822-1] netkit-rsh security update for two CVEs
  • [DLA 2825-1] libmodbus security update for two CVEs
  • [#1000408] for libmodbus in Buster
  • [#1000485] for btrbk in Bullseye
  • [#1000486] for btrbk in Buster

I also started to work on pgbouncer to get an update for each release and had to process packages from NEW on security-master.

Further, I worked on a script to automatically publish DLAs posted to debian-lts-announce on the Debian website. The script can be found on salsa. It only publishes posts from people on a whitelist. At the moment it is running on a computer at home. You might run your own copy, or just send me an email to be put on the whitelist as well.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the forty-first ELTS month.

During my allocated time I uploaded:

  • ELA-517-1 for atftp
  • ELA-519-1 for qtbase-opensource-src
  • ELA-520-1 for libsdl1.2
  • ELA-521-1 for libmodbus

Last but not least I did some days of frontdesk duties.

Debian Printing

Unfortunately I did not do as much as I wanted this month. At least I looked at some old bugs and uploaded new upstream versions of …

I hope this will improve in December again. New versions of cups and hplip are on my TODO-list.

Debian Astro

This month I uploaded new versions of …

Other stuff

I improved packaging or fixed bugs of:


Dirk Eddelbuettel: digest 0.6.29 on CRAN: Package Maintenance

Wed, 2021-12-01 07:31

Release 0.6.29 of the digest package arrived at CRAN earlier today, and will be uploaded to Debian shortly.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, and blake3 algorithms) permitting easy comparison of R language objects. It is mature and widely-used, as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release only contains two smaller internal changes. We removed the YAML file (and badge) for the disgraced former continuous integration service we shall not name (yet that we all used to use). And we converted the vignette from using the minidown package to the (fairly new) simplermarkdown package which is so much more appropriate for our use of the minimal water.css style.

My CRANberries provides the usual summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Russ Allbery: Review: A World Without Email

Wed, 2021-12-01 00:07

Review: A World Without Email, by Cal Newport

Publisher: Portfolio/Penguin Copyright: 2021 ISBN: 0-525-53657-4 Format: Kindle Pages: 264

A World Without Email is the latest book by computer science professor and productivity writer Cal Newport. After a detour to comment on the drawbacks of social media in Digital Minimalism, Newport is back to writing about focus and concentration in the vein of Deep Work. This time, though, the topic is workplace structure and collaborative process rather than personal decisions.

This book is a bit hard for me to review because I spoiled myself for the contents by listening to a lot of Newport's podcast, where he covers the same material. I therefore didn't enjoy it as much as I otherwise would have because the ideas were familiar. I recommend the book over the podcast, though; it's tighter, more coherent, and more comprehensive.

The core contention of this book is that knowledge work (roughly, jobs where one spends significant time working on a computer processing information) has stumbled into a superficially tempting but inefficient and psychologically harmful structure that Newport calls the hyperactive hive mind. This way of organizing work is a local maximum: it feels productive, it's flexible and very easy to deploy, and most minor changes away from it make overall productivity worse. However, the incentive structure is all wrong. It prioritizes quick responses and coordination overhead over deep thinking and difficult accomplishments.

The characteristic property of the hyperactive hive mind is free-flowing, unstructured communication between co-workers. If you need something from someone else, you ask them for it and they send it to you. The "email" in the title is not intended literally; Slack and related instant messaging apps are even more deeply entrenched in the hyperactive hive mind than email is. The key property of this workflow is that most collaborative work is done by contacting other people directly via ad hoc, unstructured messages.

Newport's argument is that this workflow has multiple serious problems, not the least of which is that it makes us miserable. If you have read his previous work, you will correctly expect this to tie into his concept of deep work. Ad hoc, unstructured communication creates a constant barrage of unimportant small tasks and interrupts, most of which require several asynchronous exchanges before your brain can stop tracking the task. This creates constant context-shifting, loss of focus and competence, and background stress from ever-growing email inboxes, unread message notifications, and the semi-frantic feeling that you're forgetting something you need to do.

This is not an original observation, of course. Many authors have suggested individual ways to improve this workflow: rules about how often to check one's email, filtering approaches, task managers, and other personal systems. Newport's argument is that none of these individual approaches can address the problem due to social effects. It's all well and good to say that you should unplug from distractions and ignore requests while you concentrate, but everyone else's workflow assumes that their co-workers are responsive to ad hoc requests. Ignoring this social contract makes the job of everyone still stuck in the hyperactive hive mind harder. They won't appreciate that, and your brain will not be able to relax knowing that you're not meeting your colleagues' expectations.

In Newport's analysis, the necessary solution is a comprehensive redesign of how we do knowledge work, akin to the redesign of factory work that came with the assembly line. It's a collective problem that requires a collective solution. In other industries, organizing work for efficiency and quality is central to the job of management, but in knowledge work (for good historical reasons) employees are mostly left to organize their work on their own. That self-organization has produced a system that doesn't require centralized coordination or decisions and provides a lot of superficial flexibility, but which may be significantly inferior to a system designed for how people think and work.

Even if you find this convincing (and I think Newport makes a good case), there are reasons to be suspicious of corporations trying to make people more productive. The assembly line made manufacturing much more efficient, but it also increased the misery of workers so much that Henry Ford had to offer substantial raises to retain workers. As one of Newport's knowledge workers, I'm not enthused about that happening to my job.

Newport recognizes this and tries to address it by drawing a distinction between the workflow (how information moves between workers) and the work itself (how individual workers solve problems in their area of expertise). He argues that companies need to redesign the former, but should leave the latter to each worker. It's a nice idea, and it will probably work in industries like tech with substantial labor bargaining power. I'm more cynical about other industries.

The second half of the book is Newport's specific principles and recommendations for designing better workflows that don't rely on unstructured email. Some of this will be familiar (and underwhelming) to anyone who works in tech; Newport recommends ticket systems and thinks agile, scrum, and kanban are pointed in the right direction. But there are some other good ideas in here, such as embracing specialization.

Newport argues (with some evidence) that the drastic reduction in secretarial jobs, on the grounds that workers with computers can do the same work themselves, was a mistake. Even with new automation, this approach increased the range of tasks required in every other job. Not only was this a drain on the time of other workers, it caused more context switching, which made everyone less efficient and undermined work quality. He argues for reversing that trend: where the work cannot be automated, hire more support workers and more specialized workers in general, stop expecting everyone to be their own generalist admin, and empower support workers to create better systems rather than using the hyperactive hive mind model to answer requests.

There's more here, ranging from specifics of how to develop a structured process for a type of work to the importance of enabling sustained concentration on a task. It's a less immediately actionable book than Newport's previous writing, but I welcome the partial shift in focus to more systemic issues. Newport continues to be relentlessly apolitical, but here it feels less like he's eliding important analysis and more like he thinks the interests of workers and good employers are both served by the approach he's advocating.

I will warn that Newport leans heavily on evolutionary psychology in his argument that the hyperactive hive mind is bad for us. I think he has some good arguments about the anxiety that comes with not responding to requests from others, but I'm not sure intrusive experiments on spectacularly-unusual remnant hunter-gatherer groups, who are treated like experimental animals, are the best way of making that case. I realize this isn't Newport's research, but I think he could have made his point with more directly relevant experiments.

He also continues his obsession with the superiority of in-person conversation over written communication, and while he has a few good arguments, he has a tendency to turn them into sweeping generalizations that are directly contradicted by, well, my entire life. It would be nice if he were more willing to acknowledge that it's possible to express deep emotional nuance and complex social signaling in writing; it simply requires a level of practice and familiarity (and shared vocabulary) that's often missing from the workplace.

I was muttering a lot near the start of this book, but thankfully those sections are short, and I think the rest of his argument sits on a stronger foundation.

I hope Newport continues moving in the direction of more systemic analysis. If you enjoyed Deep Work, you will probably find A World Without Email interesting. If you're new to Newport, this is not a bad place to start, particularly if you have influence on how communication is organized in your workplace. Those who work in tech will find some bits of this less interesting, but Newport approaches the topic from a different angle than most agile books and covers a broader range of ideas.

Recommended if you like reading this sort of thing.

Rating: 7 out of 10

Categories: FLOSS Project Planets

Paul Wise: FLOSS Activities November 2021

Tue, 2021-11-30 21:52
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes
Issues
Review
Administration
  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages
  • Debian wiki: unblock IP addresses, approve accounts
Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC
Sponsors

The SPTAG, visdom, gensim, purple-discord, plac, fail2ban, uvloop work was sponsored by my employer. All other work was done on a volunteer basis.

Categories: FLOSS Project Planets

Russell Coker: Links November 2021

Tue, 2021-11-30 08:31

The Guardian has an amusing article by Sophie Elmhirst about Libertarians buying a cruise ship to make a “seasteading” project off the coast of Panama [1]. It turns out that you need permits etc to do this and maintaining a ship is expensive. Also you wouldn’t want to mine cryptocurrency in a ship cabin as most cabins are small and don’t have enough air conditioning to remain pleasant if you dump 1kW or more into the air.

NPR has an interesting article about the reaction of the NRA to the Columbine shootings [2]. Seems that some NRA person isn’t a total asshole and is sharing their private information, maybe they are dying and are worried about going to hell.

David Brin wrote an insightful blog post about the singleton hypothesis where he covers some of the evidence of autocratic societies failing [3]. I think he makes a convincing point about a single centralised government for human society not being viable. But something like the EU on a world wide scale could work well.

Ken Shirriff wrote an interesting blog post about reverse engineering the Yamaha DX7 synthesiser [4].

The New York Times has an interesting article about a Baboon troop that became less aggressive after the alpha males all died at once from tuberculosis [5]. They established a new more peaceful culture that has outlived the beta males who avoided tuberculosis.

The Guardian has an interesting article about how sequencing the genomes of the entire population can save healthcare costs while improving the health of the population [6]. This is something wealthy countries should offer for free to the world population. At a bit under $1000 per test that’s only about $7 trillion to test everyone, and of course the price should drop significantly if there were billions of tests being done.

The Strategy Bridge has an interesting article about SciFi books that have useful portrayals of military strategy [7]. The co-author is Major General Mick Ryan of the Australian Army which is noteworthy as Major General is the second highest rank in use by the Australian Army at this time.

Vice has an interesting article about the co-evolution of penises and vaginas and how a lot of that evolution is based on avoiding impregnation from rape [8].

Cory Doctorow wrote an insightful Medium article about the way that governments could force interoperability through purchasing power [9].

Cory Doctorow wrote an insightful article for Locus Magazine about imagining life after capitalism and how capitalism might be replaced [10]. We need a Star Trek future!

Arstechnica has an informative article about new developments in the rowhammer category of security attacks on DRAM [11]. It seems that DDR4 with ECC is the best current mitigation technique and that DDR3 with ECC is harder to attack than non-ECC RAM. So the thing to do is use ECC on all workstations and avoid doing security critical things on laptops because they can’t use ECC RAM.

Related posts:

  1. Links April 2021 Dr Justin Lehmiller’s blog post comparing his official (academic style)...
  2. Links October 2021 Bloomburg has an insightful article about Juniper, the NSA, and...
  3. Links September 2021 Matthew Garrett wrote an interesting and insightful blog post about...
Categories: FLOSS Project Planets

Steinar H. Gunderson: Commitcoin

Tue, 2021-11-30 06:00

How do you get a git commit with an interesting commit ID (or “SHA”)? Of course, interesting is in the eye of the beholder, but let's define it as having many repeated hex nibbles, e.g. “000” in the commit would be somewhat interesting and “8888888888888888888888888” would be very interesting. This is pretty similar to the dreaded cryptocoin mining; we have no simple way of forcing a given SHA-1 hash unless someone manages a complete second-preimage break, so we must brute-force. (And hopefully without boiling the planet in the process; we'd have to settle for a bit shorter runs than in the example above.)

Git commit IDs are SHA-1 checksums of what they contain; the tree object (“what does the commit contain”), the parents, the commit message and some dates. Of those, let's use the author date as the nonce (I chose to keep the committer date truthful, so as to not be accused of forging history too much). We can set up a shell script to commit with --amend, sweeping GIT_AUTHOR_DATE over the course of a day or so and having EDITOR=true in order not to have to close the editor all the time.
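The hashing itself is easy to reproduce outside git. Here is a minimal Python sketch of how a commit ID is derived and of sweeping the author date as a nonce; the tree ID (the well-known empty tree), parent ID, author name, and timestamps are all placeholders, not taken from the post:

```python
import hashlib

def commit_sha(tree, parent, author_ts, committer_ts=1638262800,
               message="Mining for pretty hashes"):
    # A commit ID is SHA-1 over the header "commit <size>\0" followed by
    # the raw commit object text (tree, parents, author, committer, message).
    body = (
        f"tree {tree}\n"
        f"parent {parent}\n"
        f"author A U Thor <author@example.com> {author_ts} +0000\n"
        f"committer A U Thor <author@example.com> {committer_ts} +0000\n"
        f"\n{message}\n"
    ).encode()
    return hashlib.sha1(b"commit %d\x00" % len(body) + body).hexdigest()

def longest_run(sha):
    # Length of the longest run of repeated hex nibbles: our "interestingness".
    best, run = 1, 1
    for prev, cur in zip(sha, sha[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# Sweep the author date over one day, keeping the most interesting hash.
TREE = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"   # the empty tree
PARENT = "0" * 40                                   # placeholder parent
best = max((commit_sha(TREE, PARENT, ts)
            for ts in range(1638230400, 1638316800)),
           key=longest_run)
```

A real commit may differ in details (timezone, multiple or zero parents, GPG signatures), so this only illustrates the mechanics, but it hashes identically to what git would produce for the same object text.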

It turns out this is pretty slow (unsurprisingly!). So we discover that actually launching the “editor” takes a long time, and --no-edit is much faster. We can also move to a tmpfs in order not to block on fsync and block allocation (eatmydata would also work, but doesn't fix the filesystem overhead). At this point, we're at roughly 50 commits/sec or so. So we can sweep through the entire day of author dates, and if nothing interesting comes up, we can just try again (as we also get a new committer date, we've essentially reset our random generator).

But we can do much better than this. A commit in git is many different things: load the index, see if we need to add something, then actually make the commit object and finally update HEAD and whatever branch we might be on. Of those, we only really need to make the commit object and see what hash it ended up with! So we change our script to use git commit-tree instead, and whoa, we're up to 300 commits/sec.

Now we're bottlenecked at the time it takes to fork and launch the git binary—so we can hack the git sources and move the date sweep into builtin/commit-tree.c. This is radically faster; about 100 times as fast! Now what takes time is compressing and creating the commit object.

But OK, my 5950X has 16 cores, right, so we can just split the range in 16 and have different cores test different ranges? Wrong! Because now, the entire sweep takes less than a second, so we no longer get the different committer date and the cores are testing the same SHA over and over. (In effect, our nonce space is too small.) We cheat a bit and add extra whitespace to the end of the commit message to get a larger parameter space; the core ID determines how many spaces.

At this point, you can make commits so fast that the problem essentially becomes that you run out of space, and need to run git prune every few seconds. So the obvious next step would be to not compress and write out the commits at all… and then, I suppose, optimize the routines to not call any git stuff anymore, and then have GPUs do the testing, and of course, finally we'll have Gitcoin ASICs, and every hope of reaching the 1.5 degree goal is lost…

Did I say Gitcoin? No, unfortunately that name was already taken. So I'll call it Commitcoin. And I'm satisfied with a commit containing dddddddd, even though it's of course possible to do much better—hardness is only approximately 2^26 commits to get a commit as interesting as that.
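As a rough sanity check on that hardness figure, a back-of-envelope union bound (ignoring overlapping runs, so only approximate): demanding one specific nibble eight times in a row takes about 2^27 expected tries, while accepting a run of eight of any nibble takes about 2^23, which puts the quoted 2^26 in the right ballpark:

```python
from math import log2

NIBBLES = 40                      # hex digits in a SHA-1
RUN = 8                           # target run length, e.g. "dddddddd"
positions = NIBBLES - RUN + 1     # 33 possible starting offsets

# One specific nibble repeated 8 times somewhere in the digest:
p_fixed = positions * (1 / 16) ** RUN
# A run of 8 of *any* nibble (first digit free, 7 must match it):
p_any = positions * (1 / 16) ** (RUN - 1)

tries_fixed = 1 / p_fixed         # ~1.3e8, about 2^27
tries_any = 1 / p_any             # ~8.1e6, about 2^23
```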

(Cryptobros, please stay out of my inbox. I'm not interested.)

Categories: FLOSS Project Planets

Russell Coker: Your Device Has Been Improved

Tue, 2021-11-30 04:41

I’ve just started a Samsung tablet downloading a 770MB update, the description says:

  • Overall stability of your device has been improved
  • The security of your device has been improved

Technically I have no doubt that both those claims are true and accurate. But according to common understanding of the English language I think they are both misleading.

By “stability improved” they mean “fixed some bugs that made it unstable”, and no technical person would imagine that after a certain number of such updates the number of bugs will ever reach zero and the tablet will be perfectly reliable. In fact you should consider yourself lucky if they fix more bugs than they add. It’s not THAT uncommon for phones and tablets to be bricked (rendered unusable by software) by an update. In the past I got a Huawei Mate9 as a warranty replacement for a Nexus 6P because an update caused so many Nexus 6P phones to fail that they couldn’t be replaced with an identical phone [1].

By “security improved” they usually mean “fixed some security flaws that were recently discovered to make it almost as secure as it was designed to be”. Note that I deliberately say “almost as secure” because it’s sometimes impossible to fix a security flaw without making significant changes to interfaces which requires more work than desired for an old product and also gives a higher probability of things going wrong. So it’s sometimes better to aim for almost as secure or alternatively just as secure but with some features disabled.

Device manufacturers (and most companies in the Android space make the same claims while having the exact same bugs to deal with; Samsung is no different from the others in this regard) are not making devices more secure or more reliable than when they were initially released. They are aiming to make them almost as secure and reliable as when they were released. They don’t have much incentive to try too hard in this regard: Samsung won’t suffer if I decide my old tablet isn’t reliable enough and buy a new one, which will almost certainly be from Samsung because they make nice tablets.

As a thought experiment, consider if car repairers did the same thing. “Getting us to service your car will improve fuel efficiency”, great how much more efficient will it be than when I purchased it?

As another thought experiment, consider if car companies stopped providing parts for car repair a few years after releasing a new model. This is effectively what phone and tablet manufacturers have been doing all along, software updates for “stability and security” are to devices what changing oil etc is for cars.

Related posts:

  1. Long-term Device Use It seems to me that Android phones have recently passed...
  2. Android Device Service Life In recent years Android devices have been the most expensive...
  3. Xen and eth device renaming Recently I rebooted one of my Debian Xen servers and...
Categories: FLOSS Project Planets

Evgeni Golov: Getting access to somebody else's Ansible Galaxy namespace

Mon, 2021-11-29 03:00

TL;DR: adding features after the fact is hard, normalizing names is hard, it's patched, carry on.

I promise, the longer version is more interesting and fun to read!

Recently, I was poking around Ansible Galaxy and almost accidentally got access to someone else's namespace. I was actually looking for something completely different, but accidental finds are the best ones!

If you're asking yourself: "what the heck is he talking about?!", let's slow down for a moment:

  • Ansible is a great automation engine built around the concept of modules that do things (mostly written in Python) and playbooks (mostly written in YAML) that tell which things to do
  • Ansible Galaxy is a place where people can share their playbooks and modules for others to reuse
  • Galaxy Namespaces are a way to allow users to distinguish who published what and reduce name clashes to a minimum

That means that if I ever want to share how to automate installing vim, I can publish evgeni.vim on Galaxy and other people can download that and use it. And if my evil twin wants their vim recipe published, it will end up being called evilme.vim. Thus while both recipes are called vim they can coexist, can be downloaded to the same machine, and used independently.

How do you get a namespace? It's automatically created for you when you login for the first time. After that you can manage it: you can upload content, allow others to upload content, and other things. You can also request additional namespaces; this is useful if you want one for an Organization or similar entities, which don't have a login for Galaxy.

Apropos login, Galaxy uses GitHub for authentication, so you don't have to store yet another password, just smash that octocat!

Did anyone actually click on those links above? If you did (you didn't, right?), you might have noticed another section in that document: Namespace Limitations. That says:

Namespace names in Galaxy are limited to lowercase word characters (i.e., a-z, 0-9) and ‘_’, must have a minimum length of 2 characters, and cannot start with an ‘_’. No other characters are allowed, including ‘.’, ‘-‘, and space. The first time you log into Galaxy, the server will create a Namespace for you, if one does not already exist, by converting your username to lowercase, and replacing any ‘-‘ characters with ‘_’.

For my login evgeni this is pretty boring, as the generated namespace is also evgeni. But for the GitHub user Evil-Pwnwil-666 it will become evil_pwnwil_666. This can be a bit confusing.

Another confusing thing is that Galaxy supports two types of content: roles and collections, but namespaces are only for collections! So it is Evil-Pwnwil-666.vim if it's a role, but evil_pwnwil_666.vim if it's a collection.

I think part of this split is because collections were added much later and have a much more well thought design of both the artifact itself and its delivery mechanisms.

This is by the way very important for us! Due to the fact that collections (and namespaces!) were added later, there must be code that ensures that users who were created before also get a namespace.

Galaxy does this (and I would have done it the same way) by hooking into the login process, and after the user is logged in it checks if a Namespace exists and if not it creates one and sets proper permissions.

And this is also exactly where the issue was!

The old code looked like this:

# Create lowercase namespace if case insensitive search does not find match
qs = models.Namespace.objects.filter(
    name__iexact=sanitized_username).order_by('name')
if qs.exists():
    namespace = qs[0]
else:
    namespace = models.Namespace.objects.create(**ns_defaults)
namespace.owners.add(user)

See how namespace.owners.add is always called? Even if the namespace already existed? Yepp!

But how can we exploit that? Any user either already has a namespace (and owns it) or doesn't have one that could be owned. And given users are tied to GitHub accounts, there is no way to confuse Galaxy here. Now, remember how I said one could request additional namespaces, for organizations and stuff? Those will have owners, but the namespace name might not correspond to an existing user!

So all we need is to find an existing Galaxy namespace that is not a "default" namespace (aka a specially requested one) and get a GitHub account that (after the funny name conversion) matches the namespace name.

Thankfully Galaxy has an API, so I could dump all existing namespaces and their owners. Next I filtered that list to have only namespaces where the owner list doesn't contain a username that would (after conversion) match the namespace name. I found a few. And for one of them (let's call it the_target), the corresponding GitHub username (the-target) was available! Jackpot!
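That filtering step is simple enough to sketch in Python. The data below is invented for illustration; only the name conversion rule comes from the Galaxy documentation quoted earlier:

```python
def galaxy_name(github_login):
    # Galaxy's documented conversion: lowercase, '-' replaced by '_'
    return github_login.lower().replace('-', '_')

def candidate_namespaces(namespaces):
    # namespaces maps a namespace name to the GitHub logins of its owners.
    # A namespace is a candidate if no owner's converted login matches its
    # name -- i.e. it was specially requested rather than auto-created.
    return [ns for ns, owners in namespaces.items()
            if all(galaxy_name(owner) != ns for owner in owners)]

# Made-up example data, mirroring the cases described in the post:
dump = {
    "evgeni": ["evgeni"],                    # auto-created, owner matches
    "evil_pwnwil_666": ["Evil-Pwnwil-666"],  # auto-created after conversion
    "the_target": ["some-org-admin"],        # requested namespace: candidate
}
```

Running candidate_namespaces(dump) flags only the_target; the remaining step is checking which candidate names are still free on GitHub.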

I've registered a new GitHub account with that name, logged in to Galaxy and had access to the previously found namespace.

This felt like sufficient proof that my attack worked and I mailed my findings to the Ansible Security team. The issue was fixed in d4f84d3400f887a26a9032687a06dd263029bde3 by moving the namespace.owners.add call to the "new namespace" branch.

And this concludes the story of how I accidentally got access to someone else's Galaxy namespace (which was revoked after the report, no worries).

Categories: FLOSS Project Planets

Dima Kogan: GL_image_display

Mon, 2021-11-29 02:53

I just spent an unspeakable number of days typing to produce something that sounds very un-impressive: an FLTK widget that can display an image. The docs and code live here. The big difference from the usual image-drawing widget is that this one uses OpenGL internally, so after the initial image load, the common operations (drawing, redrawing, panning and zooming) are very fast. I have high-resolution images in my projects, and this will make my tools much nicer.

Three separate interfaces are available:

  • C: core library and GLUT application
  • C++: FLTK widget
  • Python: FLTK widget for pyfltk

The FLTK widgets have built-in interactive panning/zooming, and the library can draw line overlays. So nice applications can be built quickly.

I already added some early disabled-by-default support into the mrcal-stereo tool to visualize the rectification and report sensitivities:

Nice!

Categories: FLOSS Project Planets

Russ Allbery: Fall haul

Sun, 2021-11-28 22:45

It's been a while since I've posted one of these, and I also may have had a few moments of deciding to support authors by buying their books even if I'm not going to get a chance to read them soon. There's also a bit of work reading in here.

Ryka Aoki — Light from Uncommon Stars (sff)
Frederick R. Chromey — To Measure the Sky (non-fiction)
Neil Gaiman, et al. — Sandman: Overture (graphic novel)
Alix E. Harrow — A Spindle Splintered (sff)
Jordan Ifueko — Raybearer (sff)
Jordan Ifueko — Redemptor (sff)
T. Kingfisher — Paladin's Hope (sff)
TJ Klune — Under the Whispering Door (sff)
Kiese Laymon — How to Slowly Kill Yourself and Others in America (non-fiction)
Yuna Lee — Fox You (romance)
Tim Mak — Misfire (non-fiction)
Naomi Novik — The Last Graduate (sff)
Shelley Parker-Chan — She Who Became the Sun (sff)
Gareth L. Powell — Embers of War (sff)
Justin Richer & Antonio Sanso — OAuth 2 in Action (non-fiction)
Dean Spade — Mutual Aid (non-fiction)
Lana Swartz — New Money (non-fiction)
Adam Tooze — Shutdown (non-fiction)
Bill Watterson — The Essential Calvin and Hobbes (strip collection)
Bill Willingham, et al. — Fables: Storybook Love (graphic novel)
David Wong — Real-World Cryptography (non-fiction)
Neon Yang — The Black Tides of Heaven (sff)
Neon Yang — The Red Threads of Fortune (sff)
Neon Yang — The Descent of Monsters (sff)
Neon Yang — The Ascent to Godhood (sff)
Xiran Jay Zhao — Iron Widow (sff)

Categories: FLOSS Project Planets

Wouter Verhelst: GR procedures and timelines

Sun, 2021-11-28 14:04

A vote has been proposed in Debian to change the formal procedure in Debian by which General Resolutions (our name for "votes") are proposed. The original proposal is based on a text by Russ Allbery, which changes a number of rules to be less ambiguous and, frankly, less weird.

One thing Russ' proposal does, however, which I am absolutely not in agreement with, is to add an absolutely hard time limit after three weeks. That is, in the proposed procedure, the discussion time will be two weeks initially (unless the Debian Project Leader chooses to reduce it, which they can do by up to one week), and it will be extended if more options are added to the ballot; but after three weeks, no matter where the discussion stands, the discussion period ends and Russ' proposed procedure forces us to go to a vote, unless all proposers of ballot options agree to withdraw their option.

I believe this is a big mistake. I think any procedure we come up with should allow for the possibility that we may end up with a situation where everyone agrees that extending the discussion time a short time is a good idea, without necessarily resetting the whole discussion time to another two weeks (modulo a decision by the DPL).

At the same time, any procedure we come up with should try to avoid the possibility of process abuse by people who would rather delay a vote ad infinitum than to see it voted upon. A hard time limit certainly does that; but I believe it causes more problems than it solves.

I think instead that it is necessary for any procedure to allow for the discussion time to be extended as long as a strong enough consensus exists that this would be beneficial.

As such, I have proposed an amendment to Russ' proposal (a full version of my proposed constitution can be seen on salsa) that hopefully solves these issues in a novel way: it allows anyone to request an extension to the discussion time, which then needs to be sponsored according to the same rules as a new ballot option. If the time extension is successfully created, those who supported the extension can then also no longer propose any new ones. Additionally, after 4 weeks, the proposed procedure allows anyone to object, so that 4 weeks is probably the practical limit -- although the possibility exists if enough support exists to extend the discussion time (or not enough to end it). The full rules involve slightly more than that (I don't like to put too much formal language in a blog post), but they're not too complicated, I think.

That proposal has received a number of seconds, but after a week it hasn't yet reached the constitutional requirement for the option to be on the ballot.

So, I guess this is a public request for more support to my proposal. If you're a Debian Developer and you agree with me that my proposed procedure is better than the alternative, please step forward and let yourself be heard.

Thanks!

Categories: FLOSS Project Planets

Joachim Breitner: Zero-downtime upgrades of Internet Computer canisters

Sun, 2021-11-28 12:11

TL;DR: Zero-downtime upgrades are possible if you stick to the basic actor model.

Background

DFINITY’s Internet Computer provides a kind of serverless compute platform, where the services are WebAssembly programs called “canisters”. These services run without stopping (or at least that’s what it feels like from the service’s perspective; this is called “orthogonal persistence”), and process one message after another. Messages not only come from the outside (“ingress” calls), but are also exchanged between canisters.

On top of these uni-directional messages, the system provides the concept of “inter-canister calls”, which associates a response message with the outgoing message, and guarantees that a response will come. This RPC-like interface allows canister developers to program in the popular async/await model, where these inter-canister calls look almost like normal function calls, and the subsequent code is suspended until the response comes back.

The problem

This is all very well, until you try to upgrade your canister, i.e. install new code to fix a bug or add a feature. Because if you used the await pattern, there may still be suspended computations waiting for the response. If you swap out the program now, the code of that suspended computation will no longer be present, and the response cannot be handled! Worse, because of an infelicity with the current system’s API, when the response comes back, it may actually corrupt your service’s state.

That is why upgrading a canister requires stopping it first, which means waiting for all outstanding calls to come back. During this time, your canister is not available for new calls (so there is downtime), and worse, the length of the downtime is at the whims of the canisters you called – they could withhold the response ad infinitum, rendering your canister unupgradeable.

Clearly, this is not acceptable for any serious application. In this post, I’ll explore some of the ways to mitigate this problem, and how to create canisters that are safely instantaneously (no downtime) upgradeable.

It’s a spectrum

Some canisters are trivially upgradeable, for others all hope is lost; it depends on what the canister does and how. As an overview, here is the spectrum:

  1. A canister that never performs inter-canister calls can always be upgraded without stopping.
  2. A canister that only does one-way calls, and does them in a particular way (see below), can always be upgraded without stopping.
  3. A canister that performs calls, and where it is acceptable to simply drop outstanding responses, can always be upgraded without stopping, once the System API has been improved and your Canister Development Kit (CDK; Motoko or Rust) has adapted.
  4. A canister that performs calls, but uses explicit continuations to handle responses instead of the await convenience, based on an eventually fixed System API, can be upgraded without stopping, and will even handle responses afterwards.
  5. A canister that uses await to do inter-canister calls cannot be upgraded without stopping.

In this post I will explain 2, which is possible now, in more detail. Variant 3 and 4 only become reality if and when the System API has improved.

One-way calls

A one-way call is a call where you don’t care about the response; neither the replied data, nor possible failure conditions.

Since you don’t care about the response, you can pass an invalid continuation to the system (technical detail: a Wasm table index of -1). Because it is invalid for any (realistic) Wasm module, it will stay invalid even after an upgrade, and the problem of silent corruption mentioned above is avoided. And otherwise it’s fine for this to be invalid: it means the canister “traps” once the response comes back, which is harmless (and possibly even cheaper than a do-nothing computation).

This requires your CDK to support this kind of call. Mostly incidentally, Motoko (and Candid) actually have the concept of a one-way call in their type system, namely shared functions with return type () instead of async ... (Motoko is actually older than the system, and not every prediction about what the system will provide has proven successful). So, pending this PR to be released, Motoko will implement one-way calls in this way. On Rust, you have to use the System API directly or wait for cdk-rs to provide this ability (patches welcome, happy to advise).

You might wonder: How are calls useful if I don’t get to look at the response? Of course, this is a set-back – calls with responses are useful, and await is convenient. And if you have to integrate with an existing service that only provides normal calls, you are out of luck.

But if you get to design the canister and all called canisters together, it may be possible to use only one-way messages. You’d be programming in the plain actor model now, with all its advantages (simple concurrency, easy to upgrade, general robustness).

Consider for example a token ledger canister, not unlike the ICP ledger canister. For the most part, it doesn’t have to do any outgoing calls (and thus be trivially upgradeable). But say we need to add notify functionality, where the ledger canister tells other canisters about a transaction. This is a good example for a one-way call: Maybe the ledger canister doesn’t care if that notification was received? The ICP ledger does care (once it comes back successful, this particular notification cannot be sent again), but maybe your ledger can do it differently: let the other canister confirm the receipt via another one-way call, instead of via the reply; or simply charge for each notification and do not worry about repeated notifications.

Maybe you want to add archiving functionality, where the ledger canister streams its data to an archive canister. There, again, instead of using successful responses to confirm receipt, the archive canister can ping the ledger canister with the latest received index directly.
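The archiving pattern can be sketched as a plain actor-style exchange. This is an analogy in ordinary Python, not Internet Computer code, and all names are made up; the point is only that each call is fire-and-forget, so neither side ever suspends waiting for a response:

```python
class Ledger:
    def __init__(self, log):
        self.log = log
        self.confirmed = -1          # highest index the archive has acked

    def ack(self, index):            # one-way: no return value expected
        self.confirmed = max(self.confirmed, index)

    def stream_to(self, archive):
        for i, entry in enumerate(self.log):
            archive.append(i, entry) # fire-and-forget, no await

class Archive:
    def __init__(self, ledger):
        self.ledger = ledger
        self.entries = []

    def append(self, index, entry):  # one-way: no return value expected
        self.entries.append((index, entry))
        self.ledger.ack(index)       # one-way confirmation back
```

Because neither canister ever holds a suspended continuation waiting on the other, either one can be swapped out (upgraded) between messages without losing state or corrupting an in-flight call.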

Yes, it changes the programming model a bit, and all involved parties have to play together, but the gain (zero-downtime upgrades) is quite valuable, and removes a fair number of other sources of issues.

And in the future?

The above is possible with today’s Internet Computer. If the System API improves the way I hope it will, you have a possible middle ground: You still don’t get to use await and instead have to write your response handler as separate functions, but this way you can call any canister again, and you get the system’s assistance in mapping responses to calls. With this in place, any canister can be rewritten to a form that supports zero-downtime upgrades, without affecting its interface or what the canister can do.

Categories: FLOSS Project Planets

Russ Allbery: Review: Soul Music

Sun, 2021-11-28 00:35

Review: Soul Music, by Terry Pratchett

Series: Discworld #16
Publisher: Harper
Copyright: January 1995
Printing: November 2013
ISBN: 0-06-223741-1
Format: Mass market
Pages: 420

Soul Music is the sixteenth Discworld novel and something of a plot sequel to Reaper Man (although more of a sequel to the earlier Mort). I would not start reading the Discworld books here.

Susan is a student in the Quirm College for Young Ladies with an uncanny habit of turning invisible. Well, not invisible exactly; rather, people tend to forget that she's there, even when they're in the middle of talking to her. It's disconcerting for the teachers, but convenient when one is uninterested in Literature and would rather read a book.

She listened with half an ear to what the rest of the class was doing.

It was a poem about daffodils.

Apparently the poet had liked them very much.

Susan was quite stoic about this. It was a free country. People could like daffodils if they wanted to. They just should not, in Susan's very definite opinion, be allowed to take up more than a page to say so.

She got on with her education. In her opinion, school kept on trying to interfere with it.

Around her, the poet's vision was being taken apart with inexpert tools.

Susan's determinedly practical education is interrupted by the Death of Rats, with the help of a talking raven and Binky the horse, and without a lot of help from Susan, who is decidedly uninterested in being the sort of girl who goes on adventures. Adventures have a different opinion, since Susan's grandfather is Death. And Death has wandered off again.

Meanwhile, the bard Imp y Celyn, after an enormous row with his father, has gone to Ankh-Morpork. This is not going well; among other things, the Guild of Musicians and their monopoly and membership dues came as a surprise. But he does meet a dwarf and a troll in the waiting room of the Guild, and then buys an unusual musical instrument in the sort of mysterious shop that everyone knows has been in that location forever, but which no one has seen before.

I'm not sure there is such a thing as a bad Discworld novel, but there is such a thing as an average Discworld novel. At least for me, Soul Music is one of those. There are some humorous bits, a few good jokes, one great character, and some nice bits of philosophy, but I found the plot forgettable and occasionally annoying. Susan is great. Imp is... not, which is made worse by the fact that the reader is eventually expected to believe Susan cares enough about Imp to drive the plot.

Discworld has always been a mix of parody and Pratchett's own original creation, and I have always liked the original creation substantially more than the parody. Soul Music is a parody of rock music, complete with Cut-Me-Own-Throat Dibbler as an unethical music promoter. The troll Imp meets makes music by beating rocks together, so they decide to call their genre "music with rocks in it." The magical instrument Imp buys has twelve strings and a solid body. Imp y Celyn means "bud of the holly." You know, like Buddy Holly. Get it?

Pratchett's reference density is often on the edge of overwhelming the book, but for some reason the parody references in this one felt unusually forced and obvious to me. I did laugh occasionally, but by the end of the story the rock music plot had worn out its welcome. This is not helped by the ending being a mostly incoherent muddle of another parody (admittedly featuring an excellent motorcycle scene). Unlike Moving Pictures, which is a similar parody of Hollywood, Pratchett didn't seem to have much insightful to say about music. Maybe this will be more your thing if you like constant Blues Brothers references.

Susan, on the other hand, is wonderful, and for me is the reason to read this book. She is a delightfully atypical protagonist, and her interactions with the teachers and other students at the girls' school are thoroughly enjoyable. I would have happily read a whole book about her, and more broadly about Death and his family and new-found curiosity about the world. The Death of Rats was also fun, although more so in combination with the raven to translate. I wish this part of her story had a more coherent ending, but I'm looking forward to seeing her in future books.

Despite my complaints, the parody part of this book wasn't bad. It just wasn't as good as the rest of the book. I wanted a better platform for Susan's introduction than a lot of music and band references. If you really like Pratchett's parodies, your mileage may vary. For me, this book was fun but forgettable.

Followed, in publication order, by Interesting Times. The next Death book is Hogfather.

Rating: 7 out of 10


Russ Allbery: Review: A Psalm for the Wild-Built

Sat, 2021-11-27 00:27

Review: A Psalm for the Wild-Built, by Becky Chambers

Series: Monk & Robot #1 Publisher: Tordotcom Copyright: July 2021 ISBN: 1-250-23622-3 Format: Kindle Pages: 160

At the start of the story, Sibling Dex is a monk in a monastery in Panga's only City. They have spent their entire life there, love the buildings, know the hidden corners of the parks, and find the architecture beautiful. They're also heartily sick of it and desperate for the sound of crickets.

Sometimes, a person reaches a point in their life when it becomes absolutely essential to get the fuck out of the city.

Sibling Dex therefore decides to upend their life and travel the outlying villages doing tea service. And they do. They commission an ox-bike wagon, throw themselves into learning cultivation and herbs, experiment with different teas, and practice. It's a lot to learn, and they don't get it right from the start, but Sibling Dex is the sort of person who puts in the work to do something well. Before long, they have a new life as a traveling tea monk.

It's better than living in the City. But it still isn't enough.

We don't find out much about the moon of Panga in this story. Humans live there and it has a human-friendly biosphere with recognizable species, but it is clearly not Earth. The story does not reveal how humans came to live there. Dex's civilization is quite advanced and appears to be at least partly post-scarcity: people work and have professions, but money is rarely mentioned, poverty doesn't appear to be a problem, and Dex, despite being a monk with no obvious source of income, is able to commission the construction of a wagon home without any difficulty. They follow a religion that has no obvious Earth analogue.

The most fascinating thing about Panga is an event in its history. It previously had an economy based on robot factories, but the robots became sentient. Since this is a Becky Chambers story, the humans' reaction was to ask the robots what they wanted to do and respect their decision. The robots, not very happy about having their whole existence limited to human design, decided to leave, walking off into the wild. Humans respected their agreement, rebuilt their infrastructure without using robots or artificial intelligence, and left the robots alone. Nothing has been heard from them in centuries.

As you might expect, Sibling Dex meets a robot. Its name is Mosscap, and it was selected to check in with humans. Their attempts to understand each other make up much of the story. The rest is Dex's attempt to find what still seems to be missing from life, starting with an attempt to reach a ruined monastery out in the wild.

As with Chambers's other books, A Psalm for the Wild-Built contains a lot of earnest and well-meaning people having thoughtful conversations. Unlike her other books, there is almost no plot apart from those conversations of self-discovery and a profile of Sibling Dex as a character. That plus the earnestness of two naturally introspective characters who want to put their thoughts into words gave this story an oddly didactic tone for me. There are moments that felt like the moral of a Saturday morning cartoon show (I am probably dating myself), although the morals are more sophisticated and conditional. Saying I disliked the tone would be going too far, but it didn't flow as well for me as Chambers's other novels.

I liked the handling of religion, and I loved Sibling Dex's efforts to describe or act on an almost impossible-to-describe sense that their life isn't quite what they want. There are some lovely bits of description, including the abandoned monastery. The role of a tea monk in this imagined society is a neat, if small, bit of world-building: a bit like a counselor and a bit like a priest, but not truly like either because of the different focus on acceptance, listening, and a hot cup of tea. And Dex's interaction with Mosscap over offering and accepting food is a beautiful bit of characterization.

That said, the story as a whole didn't entirely gel for me, partly because of the didactic tone and partly because I didn't find Mosscap or the described culture of the robots as interesting as I was hoping that I would. But I'm still invested enough that I would read the sequel.

A Psalm for the Wild-Built feels like a prelude or character introduction more than a complete story. When we leave the characters, they're just getting started. You know more about the robots (and Sibling Dex) at the end than you did at the beginning, but don't expect much in the way of resolution.

Followed by A Prayer for the Crown-Shy, scheduled for 2022.

Rating: 7 out of 10

