FLOSS Project Planets

Dirk Eddelbuettel: RcppFarmHash 0.0.2: Maintenance

Planet Debian - 2 hours 8 min ago

A minor maintenance release of the new package RcppFarmHash, first released as version 0.0.1 a week ago, is now on CRAN as version 0.0.2.

RcppFarmHash wraps the Google FarmHash family of hash functions (written by Geoff Pike and contributors) that are used for example by Google BigQuery for the FARM_FINGERPRINT digest.

This release adds a #define that was needed on everybody’s favourite CRAN platform so that the build does not attempt to include the missing header endian.h. With this added #define all is well, as we can already tell from looking at the CRAN status, where the three machines maintained by you-may-know-who have already built the package. The others will follow over the next few days.

I also tweeted about the upload with a screenshot demonstrating an eight-minute passage from upload to acceptance, with the added #ThankYouCRAN tag to say thanks for the very smooth and fully automated processing at their end.

The very brief NEWS entry follows:

Changes in version 0.0.2 (2021-08-02)
  • On SunOS, set endianness to not error on #include endian.h

  • Add badges and installation notes to README as package is on CRAN

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #305 - Project Browser

Planet Drupal - 10 hours 27 min ago

Today we are talking about The Project Browser Initiative with Chris Wells.

www.talkingdrupal.com/305

Topics
  • John - Design 4 Drupal
  • AmyJune - Roadtrip, redwoods and Banana Slugs
  • Chris - Upcoming RV roadtrip
  • Nic - Tugboat
  • Project Browser elevator pitch
  • Initiative start
  • Inspiration to join and seek leadership
  • Current status
  • Estimated timeline
  • How can someone get involved
Resources

  • Banana Slugs https://en.wikipedia.org/wiki/Banana_slug
  • Project Browser Slack https://drupal.slack.com/archives/C01UHB4QG12
  • Project Browser Module Page https://www.drupal.org/project/project_browser
  • Project Browser Sandbox https://dev-pb1.pantheonsite.io
  • Project Browser Initiative https://www.drupal.org/about/core/strategic-initiatives/project-browser

Guests

Chris Wells - https://www.redfinsolutions.com @chrisfromredfin

Hosts

Nic Laflin - www.nLighteneddevelopment.com @nicxvan

John Picozzi - www.epam.com @johnpicozzi

AmyJune Hineline - @volkswagenchick

Categories: FLOSS Project Planets

GNU Guix: Taming the ‘stat’ storm with a loader cache

GNU Planet! - 11 hours 3 min ago

It was one of those days when some of us on IRC were rehashing that old problem—that application startup in Guix causes a “stat storm”—and lamenting the lack of a solution, when suddenly Ricardo proposed what, in hindsight, looks like an obvious one: “maybe we could use a per-application ld cache?”. A moment where collective thinking exceeds the sum of our individual thoughts. The result is one of the many features that made it into the core-updates branch, slated to be merged in the coming weeks, one that reduces application startup time.

ELF files and their dependencies

Before going into detail, let’s look at what those “stat storms” look like and where they come from. Loading an ELF executable involves loading the shared libraries (the .so files, for “shared objects”) it depends on, recursively. This is the job of the loader (or dynamic linker), ld.so, which is part of the GNU C Library (glibc) package. Which shared libraries does an executable such as that of Emacs depend on? The ldd command answers that question:

$ ldd $(type -P .emacs-27.2-real)
	linux-vdso.so.1 (0x00007fff565bb000)
	libtiff.so.5 => /gnu/store/l1wwr5c34593gqxvp34qbwdkaf7xhdbd-libtiff-4.2.0/lib/libtiff.so.5 (0x00007fd5aa2b1000)
	libjpeg.so.62 => /gnu/store/5khkwz9g6vza1n4z8xlmdrwhazz7m8wp-libjpeg-turbo-2.0.5/lib/libjpeg.so.62 (0x00007fd5aa219000)
	libpng16.so.16 => /gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/libpng16.so.16 (0x00007fd5aa1e4000)
	libz.so.1 => /gnu/store/rykm237xkmq7rl1p0nwass01p090p88x-zlib-1.2.11/lib/libz.so.1 (0x00007fd5aa1c2000)
	libgif.so.7 => /gnu/store/bpw826hypzlnl4gr6d0v8m63dd0k8waw-giflib-5.2.1/lib/libgif.so.7 (0x00007fd5aa1b8000)
	libXpm.so.4 => /gnu/store/jgdsl6whyimkz4hxsp2vrl77338kpl0i-libxpm-3.5.13/lib/libXpm.so.4 (0x00007fd5aa1a4000)
	[…]
$ ldd $(type -P .emacs-27.2-real) | wc -l
89

(If you’re wondering why we’re looking at .emacs-27.2-real rather than emacs-27.2, it’s because in Guix the latter is a tiny shell wrapper around the former.)

To load a graphical program like Emacs, the loader needs to load more than 80 shared libraries! Each is in its own /gnu/store sub-directory in Guix, one directory per package.

But how does ld.so know where to find these libraries in the first place? In Guix, during the link phase that produces an ELF file (executable or shared library), we tell the linker to populate the RUNPATH entry of the ELF file with the list of directories where its dependencies may be found. This is done by passing -rpath options to the linker, which Guix’s “linker wrapper” takes care of. The RUNPATH is the run-time library search path: it’s a colon-separated list of directories where ld.so will look for shared libraries when it loads an ELF file. We can look at the RUNPATH of our Emacs executable like this:

$ objdump -x $(type -P .emacs-27.2-real) | grep RUNPATH
  RUNPATH              /gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib:/gnu/store/01b4w3m6mp55y531kyi1g8shh722kwqm-gcc-7.5.0-lib/lib:/gnu/store/l1wwr5c34593gqxvp34qbwdkaf7xhdbd-libtiff-4.2.0/lib:/gnu/store/5khkwz9g6vza1n4z8xlmdrwhazz7m8wp-libjpeg-turbo-2.0.5/lib:[…]

This RUNPATH has 39 entries, which roughly corresponds to the number of direct dependencies of the executable—dependencies are listed as NEEDED entries in the ELF file:

$ objdump -x $(type -P .emacs-27.2-real) | grep NEED | head
  NEEDED               libtiff.so.5
  NEEDED               libjpeg.so.62
  NEEDED               libpng16.so.16
  NEEDED               libz.so.1
  NEEDED               libgif.so.7
  NEEDED               libXpm.so.4
  NEEDED               libgtk-3.so.0
  NEEDED               libgdk-3.so.0
  NEEDED               libpangocairo-1.0.so.0
  NEEDED               libpango-1.0.so.0
$ objdump -x $(type -P .emacs-27.2-real) | grep NEED | wc -l
52

(Some of these .so files live in the same directory, which is why there are more NEEDED entries than directories in the RUNPATH.)

A system such as Debian that follows the file system hierarchy standard (FHS), where all libraries are in /lib or /usr/lib, does not have to bother with RUNPATH: all .so files are known to be found in one of these two “standard” locations. Anyway, let’s get back to our initial topic: the “stat storm”.

Walking search paths

As you can guess, when we run Emacs, the loader first needs to locate and load the 80+ shared libraries it depends on. That’s where things get pretty inefficient: the loader searches for each .so file Emacs depends on in the 39 directories listed in its RUNPATH. Likewise, when it finally finds libgtk-3.so, it looks for its dependencies in each of the directories in its RUNPATH. We can see this at play by tracing system calls with the strace command:

$ strace -c emacs --version
GNU Emacs 27.2
Copyright (C) 2021 Free Software Foundation, Inc.
GNU Emacs comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GNU Emacs
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING.
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 55.46    0.006629           3      1851      1742 openat
 16.06    0.001919           4       422           mmap
 11.46    0.001370           2       501       477 stat
  4.79    0.000573           4       122           mprotect
  3.84    0.000459           4       111           read
  2.45    0.000293           2       109           fstat
  2.34    0.000280           2       111           close
[…]
------ ----------- ----------- --------- --------- ----------------
100.00    0.011952           3      3325      2227 total

For this simple emacs --version command, the loader and emacs probed for more than 2,200 files with the openat and stat system calls, and most of these probes were unsuccessful (counted as “errors” here, meaning that the call returned an error). The fraction of “erroneous” system calls is no less than 67% (2,227 out of 3,325). We can see the desperate search for .so files by looking at individual calls:

$ strace -e openat,stat emacs --version
[…]
openat(AT_FDCWD, "/gnu/store/fa6wj5bxkj5ll1d7292a70knmyl7a0cr-glibc-2.31/lib/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/01b4w3m6mp55y531kyi1g8shh722kwqm-gcc-7.5.0-lib/lib/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/l1wwr5c34593gqxvp34qbwdkaf7xhdbd-libtiff-4.2.0/lib/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/5khkwz9g6vza1n4z8xlmdrwhazz7m8wp-libjpeg-turbo-2.0.5/lib/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls/haswell/x86_64/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls/haswell/x86_64", 0x7ffe428a1c70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls/haswell/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls/haswell", 0x7ffe428a1c70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls/x86_64/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls/x86_64", 0x7ffe428a1c70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/tls", 0x7ffe428a1c70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/haswell/x86_64/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/haswell/x86_64", 0x7ffe428a1c70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/haswell/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/haswell", 0x7ffe428a1c70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/x86_64/libpng16.so.16", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
stat("/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/x86_64", 0x7ffe428a1c70) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/gnu/store/3x2kak8abb6z2klch72kfff2qxzv00pj-libpng-1.6.37/lib/libpng16.so.16", O_RDONLY|O_CLOEXEC) = 3
[…]

Above is the sequence where we see ld.so look for libpng16.so.16, searching in locations where we know it’s not going to find it. A bit ridiculous. How does this affect performance? The impact is small in the most favorable case—on a hot cache, with fast solid-state drive (SSD) storage. But it likely has a visible effect in other cases—on a cold cache, with a slower spinning hard disk drive (HDD), or on a network file system (NFS).

Enter the per-package loader cache

The idea that Ricardo submitted, using a loader cache, makes a lot of sense: we know from the start that libpng.so may only be found in /gnu/store/…-libpng-1.6.37, no need to look elsewhere. In fact, it’s not new: glibc has had such a cache “forever”; it’s the /etc/ld.so.cache file you can see on FHS distros and which is typically created by running ldconfig when a package has been installed. Roughly, the cache maps library SONAMEs, such as libpng16.so.16, to their file name on disk, say /usr/lib/libpng16.so.16.

The problem is that this cache is inherently system-wide: it assumes that there is only one libpng16.so on the system; any binary that depends on libpng16.so will load it from its one and only location. This model perfectly matches the FHS, but it’s at odds with the flexibility offered by Guix, where several variants or versions of the library can coexist on the system, used by different applications. That’s the reason why Guix and other non-FHS distros such as NixOS or GoboLinux typically turn off that feature altogether… and pay the cost of those stat storms.

The insight we gained on that Tuesday evening IRC conversation is that we could adapt glibc’s loader cache to our setting: instead of a system-wide cache, we’d have a per-application loader cache. As one of the last package build phases, we’d run ldconfig to create etc/ld.so.cache within that package’s /gnu/store sub-directory. The loader then needs to be modified so that it looks for ${ORIGIN}/../etc/ld.so.cache instead of /etc/ld.so.cache, where ${ORIGIN} is the location of the ELF file being loaded. A discussion of these changes is in the issue tracker; you can see the glibc patch and the new make-dynamic-linker-cache build phase. In short, the make-dynamic-linker-cache phase computes the set of direct and indirect dependencies of an ELF file using the file-needed/recursive procedure, derives the library search path from it, creates a temporary ld.so.conf file containing this search path for use by ldconfig, and finally runs ldconfig to actually build the cache.
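
To make the mechanics concrete, here is a rough Python transcription of what that phase does. The real phase is Scheme code in Guix, the store paths below are made up, and the hard-coded search path stands in for what file-needed/recursive computes; treat this as an illustrative sketch only.

import os
import subprocess
import tempfile

package = "/gnu/store/example-emacs-27.2"       # hypothetical store item
search_path = [                                 # hypothetical dependencies, as
    "/gnu/store/example-libpng-1.6.37/lib",     # file-needed/recursive would
    "/gnu/store/example-zlib-1.2.11/lib",       # compute them
]

# Write a temporary ld.so.conf listing this package's library search path.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as conf:
    conf.write("\n".join(search_path) + "\n")

# Run ldconfig to build the per-package cache: -f selects our ld.so.conf,
# -C the cache file to write.
os.makedirs(os.path.join(package, "etc"), exist_ok=True)
subprocess.run(
    ["ldconfig", "-f", conf.name,
     "-C", os.path.join(package, "etc", "ld.so.cache")],
    check=True,
)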

How does this play out in practice? Let’s try an emacs build that uses this new loader cache:

$ strace -c /gnu/store/ijgcbf790z4x2mkjx2ha893hhmqrj29j-emacs-27.2/bin/emacs --version
GNU Emacs 27.2
Copyright (C) 2021 Free Software Foundation, Inc.
GNU Emacs comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GNU Emacs
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING.
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 28.68    0.002909          26       110        13 openat
 25.13    0.002549          26        96           read
 20.41    0.002070           4       418           mmap
  9.34    0.000947          10        90           pread64
  6.60    0.000669           5       123           mprotect
  4.12    0.000418           3       107         1 newfstatat
  2.19    0.000222           2        99           close
[…]
------ ----------- ----------- --------- --------- ----------------
100.00    0.010144           8      1128        24 total

Compared to what we have above, the total number of system calls has been divided by 3, and the fraction of erroneous system calls drops from 67% to about 2% (24 out of 1,128). Quite a difference! We count on you, dear users, to let us know how this impacts load time for you.

Flexibility without stat storms

With GNU Stow in the 1990s, and then Nix, Guix, and other distros, the benefits of flexible file layouts rather than the rigid Unix-inherited FHS have been demonstrated—nowadays I see it as an antidote to opaque and bloated application bundles à la Docker. Luckily, few of our system tools have FHS assumptions baked in, probably in large part thanks to GNU’s insistence on a rigorous installation directory categorization in the early days rather than hard-coded directory names. The loader cache is one of the few exceptions. Adapting it to a non-FHS context is fruitful for Guix and for the other distros and packaging tools in a similar situation; perhaps it could become an option in glibc proper?

This is not the end of stat storms, though. Interpreters and language run-time systems rely on search paths—GUILE_LOAD_PATH for Guile, PYTHONPATH for Python, OCAMLPATH for OCaml, etc.—and are equally prone to stormy application startups. Unlike ELF, they do not have a mechanism akin to RUNPATH, let alone a run-time search path cache. We have yet to find ways to address these.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

eiriksm.dev: A Robot Updated my Website

Planet Drupal - 15 hours 19 min ago

At DrupalCon North America this year (2021), I had a presentation named "Less human interaction, more security?". This pandemic-ridden year made sure it was an online version of the conference, which made presenting a bit different. One interesting aspect of online presentations is that they enable feedback and chatter in the comments section. First of all, this meant that I could get feedback on my GIFs even if I wasn't in the same room as the session participants. As we all know, GIFs are an important part of every presentation. Second, I also received a lot of relevant and interesting comments on other matters related to the presentation. Today, I want to talk about one of the subjects that came up: is it scary to let a robot update and deploy your website?

For those of you who didn't attend the presentation, let's do a quick recap. First, I talked about how to set up fully automatic discovery, patching and deployment of Drupal security updates using violinist.io and GitLab CI. Next, I demonstrated how a new Drupal version was picked up automatically, updated and pushed to open a GitLab merge request for the project in question.

The merge request was then analyzed and found to be a security update, which enabled auto-merge for it.

Finally, it was tested according to the continuous integration setup for the project, and deployed to the production server for the demonstration.

No human hands were used to write commands or click buttons in the demonstration. What does this mean? Robots taking over? Reason for anxiety attacks? In this case, it's a clear "no" from me. Let me explain why.

Several things could be considered scary about letting a robot update and deploy your website. First, you're somehow allowing third-party access to your production server. In my opinion, this can actually be a step towards making your deployments more secure. Let me explain: maybe your current workflow involves a person remotely accessing the server and running deployment commands. Depending on your configuration of remote access, this can actually create a larger attack surface than making sure only the machine deploy user can deploy to a server. There are of course more aspects to this question, but my point still stands: moving towards only automated deployments will make your application more secure, more robust and more predictable.

Another potential cause of concern about letting a robot update and deploy your website is that an update might break the website. Before deploying a change to a production server, people often like to check that their website still works in a real browser. I acknowledge this need for manual verification, but I believe there's a better way. Take the task of updating Drupal Core. I'll wager that a Drupal release has a lot more test coverage than any website you are currently maintaining. If you combine this with some basic functional testing for your website, an update like this is probably one of the safest code changes you can make. Furthermore, having a robot do it for you makes it very unlikely that the update commands will be run incorrectly, which could happen if done by human hands.

Of course, sometimes automatic updates will crash your site one way or another. I find this unproblematic. Not that I want websites to crash, but it happens. And it also happens when a human being is the author of code changes. However, I prefer that automatic updates crash my site, as this uncovers missing test coverage and makes me more confident about future updates. Say your Drupal website was relying on the (fantastic) Metatag module, but one particular update made your metatags stop working on a particular content type. How come the update was deployed, then? Because you did not have test coverage for that functionality. By learning from this, you can expand your test coverage and feel even more confident about automatically updating the Metatag module the next time there is a new release.

Don't get me wrong: you don't have to wait for your site to crash to learn that you're missing test coverage. Start by introducing automated updates to your website through merge requests. When a new merge request comes along for an update, you'll get a feeling for how confident you are about deploying it. Maybe it's the Metatag module, and you know you have to test it manually to make sure it works. This could be an indication that you are lacking test coverage. To be able to automatically deploy the next version of Metatag, just write tests for the things you are testing manually.

Ultimately, my claim is that updating automatically will make your updates more secure and predictable. Over time, it will also increase your test coverage and the general robustness of your project. And at a certain point, maybe when your site is in more of a maintenance state, you can merge and deploy all updates automatically.

This is also why I now routinely enable automatic updates of PHP (Composer) dependencies on all my sites and projects. And now that these concerns are addressed: please check out the presentation and make up your own mind about how scary automatic updates really are. And if you want to start updating right away, here is a link to violinist.io.

Disclaimer: I am the founder of violinist.io

Categories: FLOSS Project Planets

Colin Watson: Launchpad now runs on Python 3!

Planet Debian - 15 hours 29 min ago

After a very long porting journey, Launchpad is finally running on Python 3 across all of our systems.

I wanted to take a bit of time to reflect on why my emotional responses to this port differ so much from those of some others who’ve done large ports, such as the Mercurial maintainers. It’s hard to deny that we’ve had to burn a lot of time on this, which I’m sure has had an opportunity cost, and from one point of view it’s essentially running to stand still: there is no single compelling feature that we get solely by porting to Python 3, although it’s clearly a prerequisite for tidying up old compatibility code and being able to use modern language facilities in the future. And yet, on the whole, I found this a rewarding project and enjoyed doing it.

Some of this may be because by inclination I’m a maintenance programmer and actually enjoy this sort of thing. My default view tends to be that software version upgrades may be a pain but it’s much better to get that pain over with as soon as you can rather than trying to hold back the tide; you can certainly get involved and try to shape where things end up, but rightly or wrongly I can’t think of many cases when a righteously indignant user base managed to arrange for the old version to be maintained in perpetuity so that they never had to deal with the new thing (OK, maybe Perl 5 counts here).

I think a more compelling difference between Launchpad and Mercurial, though, may be that very few other people really had a vested interest in what Python version Launchpad happened to be running, because it’s all server-side code (aside from some client libraries such as launchpadlib, which were ported years ago). As such, we weren’t trying to do this with the internet having Strong Opinions at us. We were doing this because it was obviously the only long-term-maintainable path forward, and in more recent times because some of our library dependencies were starting to drop support for Python 2 and so it was obviously going to become a practical problem for us sooner or later; but if we’d just stayed on Python 2 forever then fundamentally hardly anyone else would really have cared directly, only maybe about some indirect consequences of that. I don’t follow Mercurial development so I may be entirely off-base, but if other people were yelling at me about how late my project was to finish its port, that in itself would make me feel more negatively about the project even if I thought it was a good idea. Having most of the pressure come from ourselves rather than from outside meant that wasn’t an issue for us.

I’m somewhat inclined to think of the process as an extreme version of paying down technical debt. Moving from Python 2.7 to 3.5, as we just did, means skipping over multiple language versions in one go, and if similar changes had been made more gradually it would probably have felt a lot more like the typical dependency update treadmill. I appreciate why not everyone might want to think of it this way: maybe this is just my own rationalization.

Reflections on porting to Python 3

I’m not going to defend the Python 3 migration process; it was pretty rough in a lot of ways. Nor am I going to spend much effort relitigating it here, as it’s already been done to death elsewhere, and as I understand it the core Python developers have got the message loud and clear by now. At a bare minimum, a lot of valuable time was lost early in Python 3’s lifetime hanging on to flag-day-type porting strategies that were impractical for large projects, when it should have been providing for “bilingual” strategies (code that runs in both Python 2 and 3 for a transitional period) which is where most libraries and most large migrations ended up in practice. For instance, the early advice to library maintainers to maintain two parallel versions or perhaps translate dynamically with 2to3 was entirely impractical in most non-trivial cases and wasn’t what most people ended up doing, and yet the idea that 2to3 is all you need still floats around Stack Overflow and the like as a result. (These days, I would probably point people towards something more like Eevee’s porting FAQ as somewhere to start.)

There are various fairly straightforward things that people often suggest could have been done to smooth the path, and I largely agree: not removing the u'' string prefix only to put it back in 3.3, fewer gratuitous compatibility breaks in the name of tidiness, and so on. But if I had a time machine, the number one thing I would ask to have been done differently would be introducing type annotations in Python 2 before Python 3 branched off. It’s true that it’s technically possible to do type annotations in Python 2, but the fact that it’s a different syntax that would have to be fixed later is offputting, and in practice it wasn’t widely used in Python 2 code. To make a significant difference to the ease of porting, annotations would need to have been introduced early enough that lots of Python 2 library code used them so that porting code didn’t have to be quite so much of an exercise of manually figuring out the exact nature of string types from context.

Launchpad is a complex piece of software that interacts with multiple domains: for example, it deals with a database, HTTP, web page rendering, Debian-format archive publishing, and multiple revision control systems, and there’s often overlap between domains. Each of these tends to imply different kinds of string handling. Web page rendering is normally done mainly in Unicode, converting to bytes as late as possible; revision control systems normally want to spend most of their time working with bytes, although the exact details vary; HTTP is of course bytes on the wire, but Python’s WSGI interface has some string type subtleties. In practice I found myself thinking about at least four string-like “types” (that is, things that in a language with a stricter type system I might well want to define as distinct types and restrict conversion between them): bytes, text, “ordinary” native strings (str in either language, encoded to UTF-8 in Python 2), and native strings with WSGI’s encoding rules. Some of these are emergent properties of writing in the intersection of Python 2 and 3, which is effectively a specialized language of its own without coherent official documentation whose users must intuit its behaviour by comparing multiple sources of information, or by referring to unofficial porting guides: not a very satisfactory situation. Fortunately much of the complexity collapses once it becomes possible to write solely in Python 3.

Some of the difficulties we ran into are not ones that are typically thought of as Python 2-to-3 porting issues, because they were changed later in Python 3’s development process. For instance, the email module was substantially improved in around the 3.2/3.3 timeframe to handle Python 3’s bytes/text model more correctly, and since Launchpad sends quite a few different kinds of email messages and has some quite picky tests for exactly what it emits, this entailed a lot of work in our email sending code and in our test suite to account for that. (It took me a while to work out whether we should be treating raw email messages as bytes or as text; bytes turned out to work best.) 3.4 made some tweaks to the implementation of quoted-printable encoding that broke a number of our tests in ways that took some effort to fix, because the tests needed to work on both 2.7 and 3.5. The list goes on. I got quite proficient at digging through Python’s git history to figure out when and why some particular bit of behaviour had changed.
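
As an aside, the bytes-first approach is straightforward to express with the email API as it exists after those improvements; this is an illustrative sketch, not Launchpad code:

import email
import email.policy

# Keep raw messages as bytes; the parser handles header decoding internally.
raw = b"Subject: =?utf-8?q?caf=C3=A9?=\r\n\r\nHello\r\n"
msg = email.message_from_bytes(raw, policy=email.policy.default)
print(msg["Subject"])   # café (decoded from the MIME-encoded word)
print(msg.as_bytes())   # re-serialized as bytes, ready for sending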

One of the thorniest problems was parsing HTTP form data. We mainly rely on zope.publisher for this, which in turn relied on cgi.FieldStorage; but cgi.FieldStorage is badly broken in some situations on Python 3. Even if that bug were fixed in a more recent version of Python, we can’t easily use anything newer than 3.5 for the first stage of our port due to the version of the base OS we’re currently running, so it wouldn’t help much. In the end I fixed some minor issues in the multipart module (and was kindly given co-maintenance of it) and converted zope.publisher to use it. Although this took a while to sort out, it seems to have gone very well.

A couple of other interesting late-arriving issues were around pickle. For most things we normally prefer safer formats such as JSON, but there are a few cases where we use pickle, particularly for our session databases. One of my colleagues pointed out that I needed to remember to tell pickle to stick to protocol 2, so that we’d be able to switch back and forth between Python 2 and 3 for a while; quite right, and we later ran into a similar problem with marshal too. A more surprising problem was that datetime.datetime objects pickled on Python 2 require special care when unpickling on Python 3; rather than the approach that ended up being implemented and documented for Python 3.6, though, I preferred a custom unpickler, both so that things would work on Python 3.5 and so that I wouldn’t have to risk affecting the decoding of other pickled strings in the session database.
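
The protocol pinning itself is a one-argument change; a sketch with made-up session data:

import pickle

session = {"user": "alice", "logged_in": True}  # hypothetical session data

# Protocol 2 is the highest protocol Python 2 can read, so pinning it lets
# Python 2 and Python 3 processes share the same session database while
# both are still in service:
blob = pickle.dumps(session, protocol=2)
print(pickle.loads(blob))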

General lessons

Writing this over a year after Python 2’s end-of-life date, and certainly nowhere near the leading edge of Python 3 porting work, it’s perhaps more useful to look at this in terms of the lessons it has for other large technical debt projects.

I mentioned in my previous article that I used the approach of an enormous and frequently-rebased git branch as a working area for the port, committing often and sometimes combining and extracting commits for review once they seemed to be ready. A port of this scale would have been entirely intractable without a tool of similar power to git rebase, so I’m very glad that we finished migrating to git in 2019. I relied on this right up to the end of the port, and it also allowed for quick assessments of how much more there was to land. git worktree was also helpful, in that I could easily maintain working trees built for each of Python 2 and 3 for comparison.

As is usual for most multi-developer projects, all changes to Launchpad need to go through code review, although we sometimes make exceptions for very simple and obvious changes that can be self-reviewed. Since I knew from the outset that this was going to generate a lot of changes for review, I structured my work to try to make it as easy as possible for my colleagues to review it. This generally involved keeping most changes to a somewhat manageable size of 800 lines or less (although this wasn’t always possible), and arranging commits mainly according to the kind of change they made rather than their location. For example, when I needed to fix issues with / in Python 3 being true division rather than floor division, I did so in one commit across the various places where it mattered and took care not to mix it with other unrelated changes. This is good practice for nearly any kind of development, but it was especially important here since it allowed reviewers to consider a clear explanation of what I was doing in the commit message and then skim-read the rest of it much more quickly.
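
To illustrate the kind of change such a commit contains, here is the division difference in miniature:

# Python 2: 7 / 2 == 3 for integers (floor division).
# Python 3: 7 / 2 == 3.5 (true division).
# The bilingual fix is to spell out the intent at each call site:
assert 7 // 2 == 3    # floor division, same result on both versions
assert 7 / 2 == 3.5   # true division on Python 3 (and on Python 2 with
                      # `from __future__ import division` in effect)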

It was vital to keep the codebase in a working state at all times, and deploy to production reasonably often: this way if something went wrong the amount of code we had to debug to figure out what had happened was always tractable. (Although I can’t seem to find it now to link to it, I saw an account a while back of a company that had taken a flag-day approach instead with a large codebase. It seemed to work for them, but I’m certain we couldn’t have made it work for Launchpad.)

I can’t speak too highly of Launchpad’s test suite, much of which originated before my time. Without a great deal of extensive coverage of all sorts of interesting edge cases at both the unit and functional level, and a corresponding culture of maintaining that test suite well when making new changes, it would have been impossible to be anything like as confident of the port as we were.

As part of the porting work, we split out a couple of substantial chunks of the Launchpad codebase that could easily be decoupled from the core: its Mailman integration and its code import worker. Both of these had substantial dependencies with complex requirements for porting to Python 3, and arranging to be able to do these separately on their own schedule was absolutely worth it. Like disentangling balls of wool, any opportunity you can take to make things less tightly-coupled is probably going to make it easier to disentangle the rest. (I can see a tractable way forward to porting the code import worker, so we may well get that done soon. Our Mailman integration will need to be rewritten, though, since it currently depends on the Python-2-only Mailman 2, and Mailman 3 has a different architecture.)

Python lessons

Our database layer was already in pretty good shape for a port, since at least the modern bits of its table modelling interface were already strict about using Unicode for text columns. If you have any kind of pervasive low-level framework like this, then making it be pedantic at you in advance of a Python 3 port will probably incur much less swearing in the long run, as you won’t be trying to deal with quite so many bytes/text issues at the same time as everything else.

Early in our port, we established a standard set of __future__ imports and started incrementally converting files over to them, mainly because we weren’t yet sure what else to do and it seemed likely to be helpful. absolute_import was definitely reasonable (and not often a problem in our code), and print_function was annoying but necessary. In hindsight I’m not sure about unicode_literals, though. For files that only deal with bytes and text it was reasonable enough, but as I mentioned above there were also a number of cases where we needed literals of the language’s native str type, i.e. bytes in Python 2 and text in Python 3: this was particularly noticeable in WSGI contexts, but also cropped up in some other surprising places. We generally either omitted unicode_literals or used six.ensure_str in such cases, but it was definitely a bit awkward and maybe I should have listened more to people telling me it might be a bad idea.
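
Here is the native-string problem in miniature; an illustrative sketch, not Launchpad code:

from __future__ import unicode_literals

from six import ensure_str

# With unicode_literals in effect, this literal is text even on Python 2,
# but WSGI wants the *native* str type (bytes-ish str on 2, text on 3):
header_name = "Content-Type"
native_name = ensure_str(header_name)   # encodes on Python 2, no-op on 3
print(type(native_name))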

A lot of Launchpad’s early tests used doctest, mainly in the style where you have text files that interleave narrative commentary with examples. The development team later reached consensus that this was best avoided in most cases, but by then there were far too many doctests to conveniently rewrite in some other form. Porting doctests to Python 3 is really annoying. You run into all the little changes in how objects are represented as text (particularly u'...' versus '...', but plenty of other cases as well); you have next to no tools to do anything useful like skipping individual bits of a doctest that don’t apply; using __future__ imports requires the rather obscure approach of adding the relevant names to the doctest’s globals in the relevant DocFileSuite or DocTestSuite; dealing with many exception tracebacks requires something like zope.testing.renormalizing; and whatever code refactoring tools you’re using probably don’t work properly. Basically, don’t have done that. It did all turn out to be tractable for us in the end, and I managed to avoid using much in the way of fragile doctest extensions aside from the aforementioned zope.testing.renormalizing, but it was not an enjoyable experience.
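
For the record, the obscure globals trick looks something like this (the doctest file name is hypothetical). doctest scans the globals for __future__ feature objects and compiles examples with the corresponding flags:

from __future__ import print_function

import doctest

def test_suite():
    # Exposing the feature object in globs enables print_function inside
    # the doctest examples themselves:
    return doctest.DocFileSuite(
        "widgets.txt",
        globs={"print_function": print_function},
    )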

Regressions

I know of nine regressions that reached Launchpad’s production systems as a result of this porting work; of course there were various other regressions caught by CI or in manual testing. (Considering the size of this project, I count it as a resounding success that there were only nine production issues, and that for the most part we were able to fix them quickly.)

Equality testing of removed database objects

One of the things we had to do while porting to Python 3 was to implement the __eq__, __ne__, and __hash__ special methods for all our database objects. This was quite conceptually fiddly, because doing this requires knowing each object’s primary key, and that may not yet be available if we’ve created an object in Python but not yet flushed the actual INSERT statement to the database (most of our primary keys are auto-incrementing sequences). We thus had to take care to flush pending SQL statements in such cases in order to ensure that we know the primary keys.

However, it’s possible to have a problem at the other end of the object lifecycle: that is, a Python object might still be reachable in memory even though the underlying row has been DELETEd from the database. In most cases we don’t keep removed objects around for obvious reasons, but it can happen in caching code, and buildd-manager crashed as a result (in fact while it was still running on Python 2). We had to take extra care to avoid this problem.
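
In outline, the pattern looks something like the sketch below. This is a simplification rather than Launchpad’s actual implementation, with a hypothetical store.flush() standing in for whatever flushes pending INSERT statements:

class DatabaseObject:
    """Sketch of primary-key-based identity for ORM-backed objects."""

    id = None       # assigned by the database sequence on INSERT
    store = None    # hypothetical handle on the ORM store

    def _ensure_id(self):
        # Flush pending SQL so the sequence assigns our primary key.
        if self.id is None:
            self.store.flush()

    def __eq__(self, other):
        if type(self) is not type(other):
            return NotImplemented
        self._ensure_id()
        other._ensure_id()
        return self.id == other.id

    def __ne__(self, other):
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

    def __hash__(self):
        self._ensure_id()
        return hash((type(self), self.id))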

Debian imports crashed on non-UTF-8 filenames

Python 2 has some unfortunate behaviour around passing bytes or Unicode strings (depending on the platform) to shutil.rmtree, and the combination of some porting work and a particular source package in Debian that contained a non-UTF-8 file name caused us to run into this. The fix was to ensure that the argument passed to shutil.rmtree is a str regardless of Python version.

We’d actually run into something similar before: it’s a subtle porting gotcha, since it’s quite easy to end up passing Unicode strings to shutil.rmtree if you’re in the process of porting your code to Python 3, and you might easily not notice if the file names in your tests are all encoded using UTF-8.
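
The shape of the fix is simple enough to sketch (illustrative, not the actual Launchpad change):

import shutil
import sys

def rmtree_native(path):
    """Pass shutil.rmtree a native str on both Python 2 and 3."""
    if sys.version_info[0] == 2 and not isinstance(path, bytes):
        # On Python 2, str *is* bytes; encoding sidesteps rmtree's
        # unfortunate handling of Unicode paths with non-UTF-8 contents.
        path = path.encode("utf-8")
    shutil.rmtree(path)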

lazr.restful ETags

We eventually got far enough along that we could switch one of our four appserver machines (we have quite a number of other machines too, but the appservers handle web and API requests) to Python 3 and see what happened. By this point our extensive test suite had shaken out the vast majority of the things that could go wrong, but there was always going to be room for some interesting edge cases.

One of the Ubuntu kernel team reported that they were seeing an increase in 412 Precondition Failed errors in some of their scripts that use our webservice API. These can happen when you’re trying to modify an existing resource: the underlying protocol involves sending an If-Match header with the ETag that the client thinks the resource has, and if this doesn’t match the ETag that the server calculates for the resource then the client has to refresh its copy of the resource and try again. We initially thought that this might be legitimate since it can happen in normal operation if you collide with another client making changes to the same resource, but it soon became clear that something stranger was going on: we were getting inconsistent ETags for the same object even when it was unchanged. Since we’d recently switched a quarter of our appservers to Python 3, that was a natural suspect.

Our lazr.restful package provides the framework for our webservice API, and roughly speaking it generates ETags by serializing objects into some kind of canonical form and hashing the result. Unfortunately the serialization was dependent on the Python version in a few ways, and in particular it serialized lists of strings such as lists of bug tags differently: Python 2 used [u'foo', u'bar', u'baz'] where Python 3 used ['foo', 'bar', 'baz']. In lazr.restful 1.0.3 we switched to using JSON for this, removing the Python version dependency and ensuring consistent behaviour between appservers.
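
The essence of the fix is easy to demonstrate (a sketch, not lazr.restful’s exact serialization code):

import json

tags = [u"foo", u"bar", u"baz"]

# repr() is version-dependent, so hashing it yields different ETags:
#   Python 2: "[u'foo', u'bar', u'baz']"
#   Python 3: "['foo', 'bar', 'baz']"
# JSON output is identical on both, so the hashes (and thus the ETags)
# agree across appservers:
print(json.dumps(tags))   # ["foo", "bar", "baz"]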

Memory leaks

This problem took the longest to solve. We noticed fairly quickly from our graphs that the appserver machine we’d switched to Python 3 had a serious memory leak. Our appservers had always been a bit leaky, but now it wasn’t so much “a small hole that we can bail occasionally” as “the boat is sinking rapidly”:

(Yes, this got in the way of working out what was going on with ETags for a while.)

I spent ages messing around with various attempts to fix this. Since only a quarter of our appservers were affected, and we could get by on 75% capacity for a while, it wasn’t urgent but it was definitely annoying. After spending some quality time with objgraph, for some time I thought traceback reference cycles might be at fault, and I sent a number of fixes to various upstream projects for those (e.g. zope.pagetemplate). Those didn’t help the leaks much though, and after a while it became clear to me that this couldn’t be the sole problem: Python has a cyclic garbage collector that will eventually collect reference cycles as long as there are no strong references to any objects in them, although it might not happen very quickly. Something else must be going on.

Debugging reference leaks in any non-trivial and long-running Python program is extremely arduous, especially with ORMs that naturally tend to end up with lots of cycles and caches. After a while I formed a hypothesis that zope.server might be keeping a strong reference to something, although I never managed to nail it down more firmly than that. This was an attractive theory as we were already in the process of migrating to Gunicorn for other reasons anyway, and Gunicorn also has a convenient max_requests setting that’s good at mitigating memory leaks. Getting this all in place took some time, but once we did we found that everything was much more stable:

This isn’t completely satisfying as we never quite got to the bottom of the leak itself, and it’s entirely possible that we’ve only papered over it using max_requests: I expect we’ll gradually back off on how frequently we restart workers over time to try to track this down. However, pragmatically, it’s no longer an operational concern.
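
For reference, the mitigation amounts to a couple of lines of Gunicorn configuration; the values here are illustrative rather than what we actually deployed:

# gunicorn.conf.py (sketch)
max_requests = 1000         # recycle each worker after this many requests
max_requests_jitter = 100   # randomize so workers don't restart in lockstep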

Mirror prober HTTPS proxy handling

After we switched our script servers to Python 3, we had several reports of mirror probing failures. (Launchpad keeps lists of Ubuntu archive and image mirrors, and probes them every so often to check that they’re reasonably complete and up to date.) This only affected HTTPS mirrors when probed via a proxy server, support for which is a relatively recent feature in Launchpad and involved some code that we never managed to unit-test properly: of course this is exactly the code that went wrong. Sadly I wasn’t able to sort out that gap, but at least the fix was simple.

Non-MIME-encoded email headers

As I mentioned above, there were substantial changes in the email package between Python 2 and 3, and indeed between minor versions of Python 3. Our test coverage here is pretty good, but it’s an area where it’s very easy to have gaps. We noticed that a script that processes incoming email was crashing on messages with headers that were non-ASCII but not MIME-encoded (and indeed then crashing again when it tried to send a notification of the crash!). The only examples of these I looked at were spam, but we still didn’t want to crash on them.

The fix involved being somewhat more careful about both the handling of headers returned by Python’s email parser and the building of outgoing email notifications. This seems to be working well so far, although I wouldn’t be surprised to find the odd other incorrect detail in this sort of area.

Failure to handle non-ISO-8859-1 URL-encoded form input

Remember how I said that parsing HTTP form data was thorny? After we finished upgrading all our appservers to Python 3, people started reporting that they couldn’t post Unicode comments to bugs. This turned out to happen only when the attempt was made using JavaScript, and was because I hadn’t quite managed to get URL-encoded form data working properly with zope.publisher and multipart. The current standard describes the URL-encoded format for form data as “in many ways an aberrant monstrosity”, so this was no great surprise.

Part of the problem was some very strange choices in zope.publisher dating back to 2004 or earlier, which I attempted to clean up and simplify. The rest was that Python 2’s urlparse.parse_qs unconditionally decodes percent-encoded sequences as ISO-8859-1 if they’re passed in as part of a Unicode string, so multipart needs to work around this on Python 2.

I’m still not completely confident that this is correct in all situations, but at least now that we’re on Python 3 everywhere the matrix of cases we need to care about is smaller.
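
The Python 2 pitfall itself is easy to reproduce (values illustrative); one way to work around it is to parse bytes and decode explicitly afterwards, which is roughly what multipart’s workaround has to do:

# Python 2 code: urlparse was renamed urllib.parse in Python 3.
import urlparse

query = u"comment=caf%C3%A9"
print(urlparse.parse_qs(query))
# {u'comment': [u'caf\xc3\xa9']}  <- mojibake; should be u'caf\xe9'

# Workaround: parse bytes, then decode as UTF-8 afterwards:
raw = urlparse.parse_qs(query.encode("ascii"))
fixed = dict(
    (key.decode("utf-8"), [v.decode("utf-8") for v in values])
    for key, values in raw.items()
)
print(fixed)
# {u'comment': [u'caf\xe9']}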

Inconsistent marshalling of Loggerhead’s disk cache

We use Loggerhead for providing web browsing of Bazaar branches. When we upgraded one of its two servers to Python 3, we immediately noticed that the one still on Python 2 was failing to read back its revision information cache, which it stores in a database on disk. (We noticed this because it caused a deployment to fail: when we tried to roll out new code to the instance still on Python 2, Nagios checks had already caused an incompatible cache to be written for one branch from the Python 3 instance.)

This turned out to be a similar problem to the pickle issue mentioned above, except this one was with marshal, which I didn’t think to look for because it’s a relatively obscure module mostly used for internal purposes by Python itself; I’m not sure that Loggerhead should really be using it in the first place. The fix was relatively straightforward, complicated mainly by now needing to cope with throwing away unreadable cache data.

Ironically, if we’d just gone ahead and taken the nominally riskier path of upgrading both servers at the same time, we might never have had a problem here.

Intermittent bzr failures

Finally, after we upgraded one of our two Bazaar codehosting servers to Python 3, we had a report of intermittent bzr branch hangs. After some digging I found this in our logs:

Traceback (most recent call last):
  ...
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/conch/ssh/channel.py", line 136, in addWindowBytes
    self.startWriting()
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/lazr/sshserver/session.py", line 88, in startWriting
    resumeProducing()
  File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/internet/process.py", line 894, in resumeProducing
    for p in self.pipes.itervalues():
builtins.AttributeError: 'dict' object has no attribute 'itervalues'

I’d seen this before in our git hosting service: it was a bug in Twisted’s Python 3 port, fixed after 20.3.0 but unfortunately after the last release that supported Python 2, so we had to backport that patch. Using the same backport dealt with this.
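
The class of bug is trivial to show in isolation (a generic sketch, not the actual Twisted patch):

pipes = {0: "stdin", 1: "stdout", 2: "stderr"}

# Python 2 only: pipes.itervalues() -- removed in Python 3.
# Bilingual spelling (a list on Python 2, a view object on Python 3):
for p in pipes.values():
    print(p)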

Onwards!
Categories: FLOSS Project Planets

Angie "webchick" Byron: Ch-ch-ch-ch-changes... (and a brief history of Drupal)

Planet Drupal - 17 hours 6 min ago
What the what?!



Sad Drupal is Sad. :'(

Well, might as well get right down to it... I've made the incredibly difficult decision to leave Acquia, and my employment there officially ended last week. :'(

Some important notes about this:

  • This is in no way a negative reflection on Acquia. I have worked with SO many amazing people there in the past 10 years(!), and have endless gratitude for all of the challenges, opportunities, learning, and laughs. The leadership team has a solid strategy, and the effort everyone there puts into achieving it every day is inspiring.
  • This is in no way a negative reflection on Drupal. In my time here, I've seen Drupal through its youthful toddler years, to its surly teenage years, and now Drupal's all grown up, with a nice, stable apartment downtown. :) Drupal is and remains an amazingly powerful, flexible solution for building every single type of application one can dream of, with an incredibly strong and vibrant community behind it.
  • What this is about is an opportunity that came up to take lessons learned from Drupal and apply them more broadly to (hopefully) make an even bigger impact (more on that below).
10 years sure is a long time…

It sure is! Here are some fun Drupal facts that help illustrate key achievements of Acquia's Drupal Acceleration Team [DAT] (née Office of the CTO [OCTO]) over the years, and hopefully provide some insight into the areas of investment Acquia has made and continues to make in Drupal.

IMPORTANT NOTE: The folks explicitly called out below are former co-workers who work / have worked at Acquia, since this post is in some ways a "farewell" to them, and an opportunity to celebrate their often unsung efforts. This is unfortunately NOT able to be a comprehensive list of ALL of the amazing people working on various initiatives, because that list would be far, far too long and I'd invariably miss someone. :( Suffice it to say, however, that none of the items listed below would be possible without hard work, input, funding, and help from literally thousands of other people across the wider Drupal community!

Did you know? Back in 2011:

Authoring Experience and Strategic Initiatives



Eat your heart out, WordPress! :D

Eas[y|ier] Upgrades



Tale as old as tiiiiiiime...

  • Drupal 7 had just come out, and the first initiative of the day was to help the community get all of the modules ported to the new version. (Sound familiar? ;)) Drupal Gardens (R.I.P.) was key to this effort, with a world-class team focusing on the biggest, gnarliest modules first. We then repeated that initiative for Drupal 8, for Drupal 9, and, because @Gábor Hojtsy is such an *amazing* overachiever, have already started it for Drupal 10 as well! Ted Bowman (@tedbow) and Katherine Druckman (@katherined) deserve shout-outs for doing a huge ton of Drupal 7 > 8 and Drupal 8 > 9 (respectively) porting in their initial Acquia assignments!
    • To assist with these efforts, OCTO/DAT has also developed numerous pieces of tooling over the years to help make upgrades easier, including:
      • Drupal Module Upgrader (originally an Acquia Hackathon project!), an automated code analysis/porting tool for Drupal 7 -> D8/9 code which @phenaproxima rocked the crap out of as an intern back in the day!
      • Upgrade Status, a dashboard of your contributed modules' porting status that gives you a dynamic "todo" list for major upgrades, which the team has shepherded over the years from Daniel Kudwien (@sun)'s original efforts back in the day!
Predictability/Reliability



Three words to strike fear into any Drupal site admin.

  • In 2011, Drupal releases took the philosophy of "it's ready when it's ready," which made them basically impossible to plan around. Even security releases came out on an unpredictable, as-needed basis, so running a Drupal site required CONSTANT VIGILANCE. :P OCTO/DAT worked with the Drupal Security Team to develop the Wednesday security release windows system we all know today, and also with core contributors to develop the predictable semantic versioning release approach that Drupal 8+ uses. Major kudos go to Jess (@xjm) for making sure those trains run on time, and that they don't start any fires when they reach the station! :D
  • Another Drupal release philosophy from 2011 was "we'll break your code, not your data". OCTO/DAT has done extensive work, alongside other major community contributors, to enact various policies that ensure Drupal upgrades are easy from Drupal 8 onward.

Governance



I don't have a pithy caption here; this is actually sound advice.

  • Back in 2011, Drupal was very much a "do-ocracy" which meant that it pretended it didn't have a governance structure, which mostly meant that if you weren't already neck-deep in the community to know who everyone was, you were completely in the dark as to which people held key decision making powers. :P We held a Governance Sprint alongside OSCON with mindful community members, as well as various luminaries from other open source projects, and developed an explicit, scalable governance framework for the Drupal Association, for Drupal Core, and for the Drupal Community. Much of that framework exists to this day, and others have evolved as project needs have changed. Alex Bronstein (@effulgentsia) deserves some props here as he's always incredibly thoughtful of structural changes within Drupal and their longer term ramifications.

Sustainability



The delicate balance...

  • Back in 2011, if you needed a Drupal 7 committer, it was down to either @Dries or me. Over the years, we've built this up to a team of 14 committers, including different specializations in Product Management, Framework Management, Frontend Framework Management, and Release Management. We've also brought onboard a Core Team Facilitator (Pamela Barone (@pameela)) to help with coordinating efforts of the team itself.
  • Another major initiative that falls under this is the Drupal.org contribution credit system, to encourage organizations to donate time back to the project and create more makers than takers.
  • In partnership with the Drupal Association, we created the Drupal 8 Accelerate grants program to bash through the final critical bugs holding Drupal 8.0 back from release. The core committer team was in charge of disbursing $125,000 in the form of bug bounties, which directly resulted in Drupal 8.0 shipping in 2015 (without it, who knows :\).

I'm sure I'm forgetting a million and a half other things that happened over the years, but hopefully this helps paint a picture for those who are newer to the company of how far Drupal has come, and the role Acquia has helped play in that growth.

What's next?



Webchick is going Web Scale! ;)

Starting today, I'm going back to my community building roots at MongoDB as Principal Community Manager on the Community Team, focusing on initiatives such as building out an open source contributor program and further awesome-ifying the MongoDB Champions program.

What drew me to this opportunity is:

  • MongoDB has focused from the outset on stellar developer experience, which is a value I believe in strongly, even since before my time in Drupal. I really appreciate that they take a strong, developer-centered view of how databases ought to work, and that they've tackled so many of the hard problems up-front.
  • Because MongoDB is a technology that is used by tons of other projects, languages, frameworks, etc., it seems like a really great opportunity to both take some of the lessons learned from Drupal over the years and apply them to other communities, while also being able to get a broader perspective from other communities and bring those back into Drupal!
  • The people. While it's really (really!) scary to think about starting out in a new place with new faces, I had the opportunity to interview with folks from a wide cross-section of the company. Every single person I spoke with fully embodied their core values. Every single person was passionate, genuine, kind, and determined to make a huge difference. In a lot of cases, speaking to folks felt like speaking to a friend you've had for years.
  • MongoDB has also been a tremendous ally to Drupal (speaking of 10+ years :)), putting funding and time into efforts such as Drupal 7's database abstraction layer, the Views in Drupal Core initiative, and more.
  • From being prompted for pronouns on the initial application form, to a variety of queer/trans-inclusive benefits, to dedicated LGBTQIA+ inclusive initiatives, MongoDB takes their commitment to Diversity and Inclusion more seriously than just about any other tech company I've seen. Colour me impressed!
  • AND, to top it off, they're going to continue to give me dedicated time to work on Drupal as well! :O



      Our new chihuahua puppy Arthur wearing the MongoDB Pride bandana, more or less like a cape. :D

      So, fret not! I'll still be around in the Drupal community, hopefully with a broadened perspective and bringing in some new ideas and energy along the way! :)

      Sheesh, what's the TL;DR here?!

      So, in short:

      • I no longer work at Acquia, but want to sincerely thank everyone there for 10+ years of important work, learning opportunities, amazing friends, and of course laughs.
      • A LOT has changed in Drupal over the last 10+ years, and up above you can view a tiny sampling of it.
      • I'm starting a new position at MongoDB today as Principal Community Manager, and am really excited! (And also nervous, but hey. ;))
      • I'll still see you Drupal folk around in Drupal Slack and the issue queue. :)
      • MOST importantly, we now have a puppy. ;)

      One last thing: If we've worked together over the years in a Drupal community capacity and if you're up for it, I would be hugely appreciative of a LinkedIn recommendation. And I'll do my best to reciprocate! :)

      Tags: drupal, mongodb, acquia, career, future, past, cute animals
Categories: FLOSS Project Planets

TestDriven.io: Supporting Multiple Languages in Django

Planet Python - 21 hours 23 min ago
This tutorial looks at how to add multiple language support to your Django project.
Categories: FLOSS Project Planets

Russ Allbery: Review: Piranesi

Planet Debian - 21 hours 51 min ago

Review: Piranesi, by Susanna Clarke

Publisher: Bloomsbury Publishing Copyright: 2020 ISBN: 1-63557-564-8 Format: Kindle Pages: 245

Piranesi is a story told in first-person journal entries by someone who lives in a three-floored world of endless halls full of statues. The writing style is one of the most distinctive things about this book (and something you'll have to get along with to enjoy it), so it's worth quoting a longer passage from the introductory description of the world:

I am determined to explore as much of the World as I can in my lifetime. To this end I have travelled as far as the Nine-Hundred-and-Sixtieth Hall to the West, the Eight-Hundred-and-Ninetieth Hall to the North and the Seven-Hundred-and-Sixty-Eighth Hall to the South. I have climbed up to the Upper Halls where Clouds move in slow procession and Statues appear suddenly out of the Mists. I have explored the Drowned Halls where the Dark Waters are carpeted with white water lilies. I have seen the Derelict Halls of the East where Ceilings, Floors — sometimes even Walls! — have collapsed and the dimness is split by shafts of grey Light.

In all these places I have stood in Doorways and looked ahead. I have never seen any indication that the World was coming to an End, but only the regular progression of Halls and Passageways into the Far Distance.

No Hall, no Vestibule, no Staircase, no Passage is without its Statues. In most Halls they cover all the available space, though here and there you will find an Empty Plinth, Niche or Apse, or even a blank space on a Wall otherwise encrusted with Statues. These Absences are as mysterious in their way as the Statues themselves.

So far as the protagonist knows, the world contains only one other living person, the Other, and thirteen dead ones who exist only as bones. The Other is a scientist searching for Great and Secret Knowledge, and calls the protagonist Piranesi, which is odd because that is not the protagonist's name.

Be warned that I'm skating around spoilers for the rest of this review. I don't think I'm giving away anything that would ruin the book, but the nature of the story takes some sharp turns. If knowing anything about that would spoil the book for you and you want to read this without that knowledge, you may want to stop reading here.

I also want to disclose early in this review that I wanted this to be a different book than it is, and that had a significant impact on how much I enjoyed it. Someone who came to it with different expectations may have a different and more enjoyable experience.

I was engrossed by the strange world, the atmosphere, and the mystery of the halls full of statues. The protagonist is also interested in the same things, and the early part of the book is full of discussion of exploration, scientific investigation, and attempts to understand the nature of the world. That led me to hope for the sort of fantasy novel in which the setting is a character and where understanding the setting is a significant part of the plot.

Piranesi is not that book. The story that Clarke wants to tell is centered on psychology rather than setting. The setting does not become a character, nor do we learn much about it by the end of the book. While we do learn how the protagonist came to be in this world, my first thought when that revelation starts halfway through the book was "this is going to be disappointing." And, indeed, it was.

I say all of this because I think Piranesi looks, from both its synopsis and from the first few chapters, like it's going to be a world building and exploration fantasy. I think it runs a high risk of disappointing readers in the way that it disappointed me, and that can lead to disliking a book one may have enjoyed if one had read it in a different mood and with a different set of expectations.

Piranesi is, instead, about how the protagonist constructs the world, about the effect of trauma on that construction, and about the complexities hidden behind the idea of recovery. And there is a lot to like here: The ending is complex and subtle and does not arrive at easy answers (although I also found it very sad), and although Clarke, by the end of the book, is using the setting primarily as metaphor, the descriptions remain vivid and immersive. I still want the book that I thought I was reading, but I want that book in large part because the fragments of that book that are in this one are so compelling and engrossing.

What did not work for me was every character in the book except for the protagonist and one supporting character.

The relationship between the protagonist and the Other early in the book is a lovely bit of unsettling complexity. It's obvious that the Other has a far different outlook on the world than the protagonist, but the protagonist seems unaware of it. It's also obvious that the Other is a bit of a jerk, but I was hoping for a twist that showed additional complexity in his character. Sadly, when we get the twist, it's not in the direction of more complexity. Instead, it leads to a highly irritating plot that is unnecessarily prolonged through the protagonist being gullible and child-like in the face of blatantly obvious gaslighting. This is a pattern for the rest of the book: Once villains appear on stage, they're one-note narcissists with essentially no depth.

There is one character in Piranesi that I liked as well or better than the protagonist, but they only show up late in the story and get very little character development. Clarke sketches the outline of a character I wanted to learn much more about, but never gives us the details on the page. That leads to what I thought was too much telling rather than showing in the protagonist's relationships at the end of the book, which is part of why I thought the ending was so sad. What the protagonist loses is obvious to me (and lines up with the loss I felt when the book didn't turn out to be what I was hoping it would be); what the protagonist gains is less obvious, is working more on the metaphorical level of the story than the literal level, and is more narrated than shown.

In other words, this is psychological fantasy with literary sensibilities told in a frame that looks like exploration fantasy. Parts of it, particularly the descriptions and the sense of place, are quite skillful, but the plot, once revealed, is superficial, obvious, and disappointing. I think it's possible this shift in the reader's sense of what type of book they're reading is intentional on Clarke's part, since it works with the metaphorical topic of the book. But it's not the existence of a shift itself that is my primary objection. I like psychological fantasy as well as exploration fantasy. It's that I thought the book after the shift was shallower, less interesting, and more predictable than the book before the shift.

The one thing that is excellent throughout Piranesi, though, is the mood. It takes a bit to get used to the protagonist's writing style (and I continue to dislike the Affectation of capitalizing Nouns when writing in English), but it's open-hearted, curious, thoughtful, observant, and capable in a way I found delightful. Some of the events in this book are quite dark, but it never felt horrifying or oppressive because the protagonist remains so determinedly optimistic and upbeat, even when yanked around by the world's most obvious and blatant gaslighting. That persistent hopefulness and lightness is a good feature in a book published in 2020 and is what carried me through the parts of the story I didn't care for.

I wish this had been a different book than it was, or failing that, a book with more complex and interesting supporting characters and plot to fit its complex and interesting psychological arc. I also wish that Clarke had done something more interesting with gender in this novel; it felt like she was setting that up for much of the book, and then it never happened. Ah well.

As is, I can't recommend Piranesi, but I can say the protagonist, atmosphere, and sense of place are very well done and I think it will work for some other readers better than it did for me.

Rating: 6 out of 10

Categories: FLOSS Project Planets

libc @ Savannah: The GNU C Library version 2.34 is now available

GNU Planet! - Sun, 2021-08-01 23:57

The GNU C Library
=================

The GNU C Library version 2.34 is now available.

The GNU C Library is used as the C library in the GNU system and
in GNU/Linux systems, as well as many other systems that use Linux
as the kernel.

The GNU C Library is primarily designed to be a portable
and high performance C library.  It follows all relevant
standards including ISO C11 and POSIX.1-2017.  It is also
internationalized and has one of the most complete
internationalization interfaces known.

The GNU C Library webpage is at http://www.gnu.org/software/libc/

Packages for the 2.34 release may be downloaded from:
        http://ftpmirror.gnu.org/libc/
        http://ftp.gnu.org/gnu/libc/

The mirror list is at http://www.gnu.org/order/ftp.html

NEWS for version 2.34
=====================

Major new features:

  • In order to support smoother in-place upgrades and to simplify the
  implementation of the runtime, all functionality formerly implemented
  in the libraries libpthread, libdl, libutil, libanl has been
  integrated into libc.  New applications do not need to link with
  -lpthread, -ldl, -lutil, -lanl anymore.  For backwards compatibility,
  empty static archives libpthread.a, libdl.a, libutil.a, libanl.a are
  provided, so that the linker options keep working.  Applications which
  have been linked against glibc 2.33 or earlier continue to load the
  corresponding shared objects (which are now empty).  The integration
  of those libraries into libc means that additional symbols become
  available by default.  This can cause applications that contain weak
  references to take unexpected code paths that would only have been
  used in previous glibc versions when e.g. preloading libpthread.so.0,
  potentially exposing application bugs.
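  A minimal sketch of a program that now builds with no extra -l flags
  at all (the libm.so.6 soname is just an example value):

    /* Builds with plain `gcc demo.c` on glibc 2.34; older versions
       needed -lpthread and -ldl for pthread_create and dlopen.  */
    #include <pthread.h>
    #include <dlfcn.h>
    #include <stdio.h>

    static void *task (void *arg) { (void) arg; return NULL; }

    int
    main (void)
    {
      pthread_t t;
      pthread_create (&t, NULL, task, NULL);
      pthread_join (t, NULL);
      void *h = dlopen ("libm.so.6", RTLD_NOW);
      printf ("dlopen: %s\n", h ? "ok" : dlerror ());
      return 0;
    }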

  • When _DYNAMIC_STACK_SIZE_SOURCE or _GNU_SOURCE are defined,
  PTHREAD_STACK_MIN is no longer constant and is redefined to
  sysconf(_SC_THREAD_STACK_MIN).  This supports dynamically sized
  register sets for modern architectural features like Arm SVE.
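  A minimal sketch of sizing a thread stack against the runtime
  minimum (the 64 KiB floor is an arbitrary value for the example):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <unistd.h>

    static void *task (void *arg) { return arg; }

    int
    main (void)
    {
      /* PTHREAD_STACK_MIN now expands to sysconf(_SC_THREAD_STACK_MIN),
         so the minimum is only known at run time.  */
      size_t min = (size_t) sysconf (_SC_THREAD_STACK_MIN);
      size_t size = min > 65536 ? min : 65536;
      pthread_attr_t attr;
      pthread_attr_init (&attr);
      pthread_attr_setstacksize (&attr, size);
      pthread_t t;
      pthread_create (&t, &attr, task, NULL);
      pthread_join (t, NULL);
      pthread_attr_destroy (&attr);
      return 0;
    }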

  • Add _SC_MINSIGSTKSZ and _SC_SIGSTKSZ.  When _DYNAMIC_STACK_SIZE_SOURCE
  or _GNU_SOURCE are defined, MINSIGSTKSZ and SIGSTKSZ are no longer
  constant on Linux.  MINSIGSTKSZ is redefined to sysconf(_SC_MINSIGSTKSZ)
  and SIGSTKSZ is redefined to sysconf(_SC_SIGSTKSZ).  This supports
  dynamically sized register sets for modern architectural features like
  Arm SVE.
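  Since SIGSTKSZ is no longer a compile-time constant under these
  macros, alternate signal stacks have to be allocated dynamically; a
  minimal sketch:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main (void)
    {
      /* SIGSTKSZ now expands to sysconf(_SC_SIGSTKSZ), so a static
         array of that size is no longer possible.  */
      size_t size = (size_t) sysconf (_SC_SIGSTKSZ);
      stack_t ss = { .ss_sp = malloc (size), .ss_size = size, .ss_flags = 0 };
      if (ss.ss_sp == NULL || sigaltstack (&ss, NULL) != 0)
        return 1;
      /* ... install handlers with SA_ONSTACK here ...  */
      return 0;
    }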

  • The dynamic linker implements the --list-diagnostics option, printing
  a dump of information related to IFUNC resolver operation and
  glibc-hwcaps subdirectory selection.

  • On Linux, the function execveat has been added.  It operates similarly
  to execve and is already used to implement fexecve without requiring
  /proc to be mounted.  However, unlike fexecve, if the syscall is not
  supported by the kernel an error is returned instead of trying a
  fallback.
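  A minimal sketch of calling execveat directly (the /usr/bin directory
  and the "true" program are just example values):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    int
    main (void)
    {
      /* Execute "true" relative to a directory descriptor, without
         needing /proc.  */
      int dirfd = open ("/usr/bin", O_DIRECTORY | O_RDONLY);
      char *argv[] = { "true", NULL };
      char *envp[] = { NULL };
      if (dirfd >= 0)
        execveat (dirfd, "true", argv, envp, 0);
      return 1;  /* Reached only if open or the exec failed.  */
    }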

  • The ISO C2X function timespec_getres has been added.
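  A quick sketch of the new function (it returns its base argument on
  success):

    #define _GNU_SOURCE
    #include <time.h>
    #include <stdio.h>

    int
    main (void)
    {
      struct timespec res;
      if (timespec_getres (&res, TIME_UTC) == TIME_UTC)
        printf ("TIME_UTC resolution: %ld ns\n", (long) res.tv_nsec);
      return 0;
    }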

  • The feature test macro __STDC_WANT_IEC_60559_EXT__, from draft ISO
  C2X, is supported to enable declarations of functions defined in Annex F
  of C2X.  Those declarations are also enabled when
  __STDC_WANT_IEC_60559_BFP_EXT__, as specified in TS 18661-1, is
  defined, and when _GNU_SOURCE is defined.

  • On powerpc64*, glibc can now be compiled without scv support using the
  --disable-scv configure option.

  • Add support for 64-bit time_t on configurations like x86 where time_t
  is traditionally 32-bit.  Although time_t still defaults to 32-bit on
  these configurations, this default may change in future versions.
  This is enabled with the _TIME_BITS preprocessor macro set to 64 and is
  only supported when LFS (_FILE_OFFSET_BITS=64) is also enabled.  It is
  only enabled for Linux and the full support requires a minimum kernel
  version of 5.1.
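  A sketch of opting in on a 32-bit target; both macros must be defined
  before any header is included, e.g. on the compiler command line:

    /* Build:  gcc -m32 -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 demo.c
       On glibc 2.34 this should print 8 rather than 4.  */
    #include <time.h>
    #include <stdio.h>

    int
    main (void)
    {
      printf ("sizeof (time_t) = %zu\n", sizeof (time_t));
      return 0;
    }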

  • The main gconv-modules file in glibc now contains only a small set of
  essential converter modules and the rest have been moved into a
  supplementary configuration file gconv-modules-extra.conf in the
  gconv-modules.d directory in the same GCONV_PATH.  Similarly, external
  converter modules directories may have supplementary configuration files
  in a gconv-modules.d directory with names ending with .conf to logically
  classify the converter modules in that directory.

  • On Linux, a new tunable, glibc.pthread.stack_cache_size, can be used
  to configure the size of the thread stack cache.

  • The function _Fork has been added as an async-signal-safe fork
  replacement, since Austin Group issue 62 dropped the async-signal-safe
  requirement for fork (and it will be included in the future POSIX
  standard).  The new _Fork function does not run any atfork handlers,
  nor does it reset any internal state or locks (such as the malloc one);
  it only sets up the minimal state required to call async-signal-safe
  functions (such as raise or execve).  This function is currently a GNU
  extension.
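  A minimal sketch of forking from a signal handler, something plain
  fork is no longer guaranteed to support:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <unistd.h>

    /* _Fork skips atfork handlers and lock resets, so it is safe here;
       the child must stick to async-signal-safe functions.  */
    static void
    handler (int sig)
    {
      (void) sig;
      if (_Fork () == 0)
        {
          char *argv[] = { "true", NULL };
          char *envp[] = { NULL };
          execve ("/bin/true", argv, envp);
          _exit (127);  /* Exec failed.  */
        }
    }

    int
    main (void)
    {
      signal (SIGUSR1, handler);
      raise (SIGUSR1);
      return 0;
    }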

  • On Linux, the close_range function has been added.  It allows
  efficiently closing a range of file descriptors on recent kernels
  (version 5.9 or later).  A combined sketch follows the next item.

  • The function closefrom has been added.  It closes all file descriptors
  greater than or equal to a given integer.  This function is a GNU
  extension, although it is also present in other systems.
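  A sketch showing both new descriptor-closing helpers; a flags value
  of 0 requests a plain close:

    #define _GNU_SOURCE
    #include <unistd.h>

    int
    main (void)
    {
      /* Close everything above stderr; the two calls are equivalent
         here, closefrom being the BSD/Solaris-style convenience.  */
      close_range (3, ~0U, 0);
      closefrom (3);
      return 0;
    }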

  • The posix_spawn_file_actions_addclosefrom_np function has been added,
  enabling posix_spawn and posix_spawnp to close all file descriptors
  greater than or equal to a given integer.  This function is a GNU
  extension, although Solaris also provides a similar function.
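  A minimal sketch of spawning a child that closes every descriptor
  above stderr before exec (the "true" program is just an example):

    #define _GNU_SOURCE
    #include <spawn.h>
    #include <sys/wait.h>
    #include <unistd.h>

    extern char **environ;

    int
    main (void)
    {
      posix_spawn_file_actions_t fa;
      posix_spawn_file_actions_init (&fa);
      /* The child closes all descriptors >= 3 before the exec.  */
      posix_spawn_file_actions_addclosefrom_np (&fa, 3);

      char *argv[] = { "true", NULL };
      pid_t pid;
      if (posix_spawnp (&pid, "true", &fa, NULL, argv, environ) == 0)
        waitpid (pid, NULL, 0);
      posix_spawn_file_actions_destroy (&fa);
      return 0;
    }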

Deprecated and removed features, and other changes affecting compatibility:

  • The function pthread_mutex_consistent_np has been deprecated; programs
  should use the equivalent standard function pthread_mutex_consistent
  instead.

  • The function pthread_mutexattr_getrobust_np has been deprecated;
  programs should use the equivalent standard function
  pthread_mutexattr_getrobust instead.

  • The function pthread_mutexattr_setrobust_np has been deprecated;
  programs should use the equivalent standard function
  pthread_mutexattr_setrobust instead.

  • The function pthread_yield has been deprecated; programs should use
  the equivalent standard function sched_yield instead.

  • The function inet_neta declared in <arpa/inet.h> has been deprecated.

  • Various rarely-used functions declared in <resolv.h> and
  <arpa/nameser.h> have been deprecated.  Applications are encouraged to
  use dedicated DNS processing libraries if applicable.  For <resolv.h>,
  this affects the functions dn_count_labels, fp_nquery, fp_query,
  fp_resstat, hostalias, loc_aton, loc_ntoa, p_cdname, p_cdnname,
  p_class, p_fqname, p_fqnname, p_option, p_query, p_rcode, p_time,
  p_type, putlong, putshort, res_hostalias, res_isourserver,
  res_nameinquery, res_queriesmatch, res_randomid, sym_ntop, sym_ntos,
  sym_ston.  For <arpa/nameser.h>, the functions ns_datetosecs,
  ns_format_ttl, ns_makecanon, ns_parse_ttl, ns_samedomain, ns_samename,
  ns_sprintrr, ns_sprintrrf, ns_subdomain have been deprecated.

  • Various symbols previously defined in libresolv have been moved to libc
  in order to prepare for libresolv moving entirely into libc (see earlier
  entry for merging libraries into libc).  The symbols __dn_comp,
  __dn_expand, __dn_skipname, __res_dnok, __res_hnok, __res_mailok,
  __res_mkquery, __res_nmkquery, __res_nquery, __res_nquerydomain,
  __res_nsearch, __res_nsend, __res_ownok, __res_query, __res_querydomain,
  __res_search, __res_send formerly in libresolv have been renamed and no
  longer have a __ prefix.  They are now available in libc.

  • The pthread cancellation handler is now installed with SA_RESTART and
  pthread_cancel will always send the internal SIGCANCEL on a cancellation
  request.  It should not be visible to applications since the cancellation
  handler should either act upon cancellation (if asynchronous cancellation
  is enabled) or ignore the cancellation internal signal.  However there are
  buggy kernel interfaces (for instance some CIFS versions) that could still
  see a spurious EINTR error when cancellation interrupts a blocking syscall.

  • Previously, glibc installed its various shared objects under versioned
  file names such as libc-2.33.so.  The ABI sonames (e.g., libc.so.6)
  were provided as symbolic links.  Starting with glibc 2.34, the shared
  objects are installed under their ABI sonames directly, without
  symbolic links.  This increases compatibility with distribution
  package managers that delete removed files late during the package
  upgrade or downgrade process.

  • The symbols mallwatch and tr_break are now deprecated and no longer used
  in mtrace.  Similar functionality can be achieved by using conditional
  breakpoints within mtrace functions from within gdb.

  • The __morecore and __after_morecore_hook malloc hooks and the default
  implementation __default_morecore have been removed from the API.  Existing
  applications will continue to link against these symbols but the interfaces
  no longer have any effect on malloc.

  • Debugging features in malloc such as the MALLOC_CHECK_ environment
  variable (or the glibc.malloc.check tunable), mtrace() and mcheck() have
  now been disabled by default in the main C library.  Users looking to use
  these features now need to preload a new debugging DSO
  libc_malloc_debug.so to get this functionality back.
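  A sketch of re-enabling mtrace under the new scheme:

    /* Run with:
         MALLOC_TRACE=/tmp/trace LD_PRELOAD=libc_malloc_debug.so ./a.out
       Without the preload, mtrace is a no-op as of 2.34.  */
    #include <mcheck.h>
    #include <stdlib.h>

    int
    main (void)
    {
      mtrace ();             /* Start logging to $MALLOC_TRACE.  */
      free (malloc (32));
      muntrace ();           /* Stop logging.  */
      return 0;
    }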

  • The deprecated functions malloc_get_state and malloc_set_state have been
  moved from the core C library into libc_malloc_debug.so.  Legacy
  applications that still use these functions will now need to preload
  libc_malloc_debug.so in their environment using the LD_PRELOAD environment
  variable.

  • The deprecated memory allocation hooks __malloc_hook, __realloc_hook,
  __memalign_hook and __free_hook are now removed from the API.  Compatibility
  symbols are present to support legacy programs but new applications can no
  longer link to these symbols.  These hooks no longer have any effect on
  glibc functionality.  The malloc debugging DSO libc_malloc_debug.so
  currently supports hooks and can be preloaded to get this functionality
  back for older programs.  However this is a transitional measure and may be
  removed in a future release of the GNU C Library.  Users may port away from
  these hooks by writing and preloading their own malloc interposition
  library.

Changes to build and runtime requirements:

  • On Linux, the shm_open, sem_open, and related functions now expect the
  shared memory file system to be mounted at /dev/shm.  These functions
  no longer search among the system's mount points for a suitable
  replacement if /dev/shm is not available.

Security related changes:

  CVE-2021-27645: The nameserver caching daemon (nscd), when processing
  a request for netgroup lookup, may crash due to a double-free,
  potentially resulting in degraded service or Denial of Service on the
  local system.  Reported by Chris Schanzle.

  CVE-2021-33574: The mq_notify function has a potential use-after-free
  issue when using a notification type of SIGEV_THREAD and a thread
  attribute with a non-default affinity mask.

  CVE-2021-35942: The wordexp function may overflow the positional
  parameter number when processing the expansion resulting in a crash.
  Reported by Philippe Antoine.

The following bugs are resolved with this release:

  [4737] libc: fork is not async-signal-safe
  [5781] math: Slow dbl-64 sin/cos/sincos for special values
  [10353] libc: Methods for deleting all file descriptors greater than
    given integer (closefrom)
  [14185] glob: fnmatch() fails when '*' wildcard is applied on the file
    name containing multi-byte character(s)
  [14469] math: Inaccurate j0f function
  [14470] math: Inaccurate j1f function
  [14471] math: Inaccurate y0f function
  [14472] math: Inaccurate y1f function
  [14744] nptl: kill -32 $pid or kill -33 $pid on a process cancels a
    random thread
  [15271] dynamic-link: dlmopen()ed shared library with LM_ID_NEWLM
    crashes if it fails dlsym() twice
  [15648] nptl: multiple definition of `__lll_lock_wait_private'
  [16063] nptl: Provide a pthread_once variant in libc directly
  [17144] libc: syslog is not thread-safe if NO_SIGPIPE is not defined
  [17145] libc: syslog with LOG_CONS leaks console file descriptor
  [17183] manual: description of ENTRY struct in <search.h> in glibc
    manual is incorrect
  [18435] nptl: pthread_once hangs when init routine throws an exception
  [18524] nptl: Missing calloc error checking in
    __cxa_thread_atexit_impl
  [19329] dynamic-link: dl-tls.c assert failure at concurrent
    pthread_create and dlopen
  [19366] nptl: returning from a thread should disable cancellation
  [19511] nptl: 8MB memory leak in pthread_create in case of failure
    when non-root user changes priority
  [20802] dynamic-link: getauxval NULL pointer dereference after static
    dlopen
  [20813] nptl: pthread_exit is inconsistent between libc and libpthread
  [22057] malloc: malloc_usable_size is broken with mcheck
  [22668] locale: LC_COLLATE: the last character of ellipsis is not
    ordered correctly
  [23323] libc: [RFE] CSU startup hardening.
  [23328] malloc: Remove malloc hooks and ensure related APIs return no
    data.
  [23462] dynamic-link: Static binary with dynamic string tokens ($LIB,
    $PLATFORM, $ORIGIN) crashes
  [23489] libc: "gcc -lmcheck" aborts on free when using posix_memalign
  [23554] nptl: pthread_getattr_np reports wrong stack size with
    MULTI_PAGE_ALIASING
  [24106] libc: Bash interpreter in ldd script is taken from host
  [24773] dynamic-link: dlerror in an secondary namespace does not use
    the right free implementation
  [25036] localedata: Update collation order for Swedish
  [25383] libc: where_is_shmfs/__shm_directory/SHM_GET_NAME may cause
    shm_open to pick wrong directory
  [25680] dynamic-link: ifuncmain9picstatic and ifuncmain9picstatic
    crash in IFUNC resolver due to stack canary (--enable-stack-
    protector=all)
  [26874] build: -Warray-bounds in _IO_wdefault_doallocate
  [26983] math: [x86_64] x86_64 tgamma has too large ULP error
  [27111] dynamic-link: pthread_create and tls access use link_map
    objects that may be concurrently freed by dlclose
  [27132] malloc: memusagestat is linked to system librt, leading to
    undefined symbols on major version upgrade
  [27136] dynamic-link: dtv setup at thread creation may leave an entry
    uninitialized
  [27249] libc: libSegFault.so does not output signal number properly
  [27304] nptl: pthread_cond_destroy does not pass private flag to futex
    system calls
  [27318] dynamic-link: glibc fails to load binaries when built with
    -march=sandybridge:  CPU ISA level is lower than required
  [27343] nss: initgroups() SIGSEGVs when called on a system without
    nsswich.conf (in a chroot)
  [27346] dynamic-link: x86: PTWRITE feature check is missing
  [27389] network: NSS chroot hardening causes regressions in chroot
    deployments
  [27403] dynamic-link: aarch64: tlsdesc htab is not freed on dlclose
  [27444] libc: sysconf reports unsupported option (-1) for
    _SC_LEVEL1_ICACHE_LINESIZE on X86 since v2.33
  [27462] nscd: double-free in nscd (CVE-2021-27645)
  [27468] malloc: aarch64: realloc crash with heap tagging: FAIL:
    malloc/tst-malloc-thread-fail
  [27498] dynamic-link: __dl_iterate_phdr lacks unwinding information
  [27511] libc: S390 memmove assumes Vector Facility when MIE Facility 3
    is present
  [27522] glob: glob, glob64 incorrectly marked as __THROW
  [27555] dynamic-link: Static tests fail with --enable-stack-
    protector=all
  [27559] libc: fstat(AT_FDCWD) succeeds (it shouldn't) and returns
    information for the current directory
  [27577] dynamic-link: elf/ld.so --help doesn't work
  [27605] libc: tunables can't control xsave/xsavec selection in
    dl_runtime_resolve_*
  [27623] libc: powerpc: Missing registers in sc[v] clobbers list
  [27645] libc: [linux] sysconf(_SC_NPROCESSOR...) breaks down on
    containers
  [27646] dynamic-link: Linker error for non-existing NSS symbols (e.g.
    _nss_files_getcanonname_r) from within a dlmopen namespace.
  [27648] libc: FAIL: misc/tst-select
  [27650] stdio: vfscanf returns too early if a match is longer than
    INT_MAX
  [27651] libc: Performance regression after updating to 2.33
  [27655] string: Wrong size calculation in string/test-strnlen.c
  [27706] libc: select fails to update timeout on error
  [27709] libc: arm: FAIL: debug/tst-longjmp_chk2
  [27721] dynamic-link: x86: ld_audit ignores bind now for TLSDESC and
    tries resolving them lazily
  [27744] nptl: Support different libpthread/ld.so load orders in
    libthread_db
  [27749] libc: Data race __run_exit_handlers
  [27761] libc: getconf: Segmentation fault when passing '-vq' as
    argument
  [27832] nss: makedb.c:797:7: error: 'writev' specified size 4294967295
    exceeds maximum object size 2147483647
  [27870] malloc: MALLOC_CHECK_ causes realloc(valid_ptr, TOO_LARGE) to
    not set ENOMEM
  [27872] build: Obsolete configure option --enable-stackguard-
    randomization
  [27873] build: tst-cpu-features-cpuinfo fail when building on AMD cpu
  [27882] localedata: Use U+00AF MACRON in more EBCDIC charsets
  [27892] libc: powerpc: scv ABI error handling fails to check
    IS_ERR_VALUE
  [27896] nptl: mq_notify does not handle separately allocated thread
    attributes (CVE-2021-33574)
  [27901] libc: TEST_STACK_ALIGN doesn't work
  [27902] libc: The x86-64 clone wrapper fails to align child stack
  [27914] nptl: Install SIGSETXID handler with SA_ONSTACK
  [27939] libc: aarch64: clone does not align the stack
  [27968] libc: s390x: clone does not align the stack
  [28011] libc: Wild read in wordexp (parse_param) (CVE-2021-35942)
  [28024] string: s390(31bit): Wrong result of memchr (MEMCHR_Z900_G5)
    with n >= 0x80000000
  [28028] malloc: malloc: tcache shutdown sequence does not work if the
    thread never allocated anything
  [28033] libc: Need to check RTM_ALWAYS_ABORT for RTM
  [28064] string: x86_64:wcslen implementation list has wcsnlen
  [28067] libc: FAIL: posix/tst-spawn5
  [28068] malloc: FAIL: malloc/tst-mallocalign1-mcheck
  [28071] time: clock_gettime, gettimeofday, time lost vDSO acceleration
    on older kernels
  [28075] nis: Out-of-bounds static buffer read in nis_local_domain
  [28089] build: tst-tls20 fails when linker defaults to --as-needed
  [28090] build: elf/tst-cpu-features-cpuinfo-static fails on certain
    AMD64 cpus
  [28091] network: ns_name_skip may return 0 for domain names without
    terminator

Release Notes
=============

https://sourceware.org/glibc/wiki/Release/2.34

Contributors
============

This release was made possible by the contributions of many people.
The maintainers are grateful to everyone who has contributed
changes or bug reports.  These include:

Adhemerval Zanella
Alejandro Colomar (man-pages)
Alexandra Hájková
Alice Xu
Alyssa Ross
Andreas Roeseler
Andreas Schwab
Anton Blanchard
Arjun Shankar
Armin Brauns
Bruno Haible
Carlos O'Donell
Cooper Qu
DJ Delorie
Dan Raymond
Darius Rad
David Hughes
Fangrui Song
Florian Weimer
H.J. Lu
Hanataka Shinya
Hugo Gabriel Eyherabide
Jakub Jelinek
JeffyChen
John David Anglin
Joseph Myers
Khem Raj
Lirong Yuan
Lucas A. M. Magalhaes
Lukasz Majewski
Maninder Singh
Mark Harris
Martin Sebor
Matheus Castanho
Michal Nazarewicz
Mike Hommey
Naohiro Tamura
Nicholas Piggin
Noah Goldstein
Paul Eggert
Paul Zimmermann
Pedro Franco de Carvalho
Raoni Fassina Firmino
Raphael Moreira Zinsly
Romain GEISSLER
Sajan Karumanchi
Samuel Thibault
Sebastian Rasmussen
Sergei Trofimovich
Shen-Ta Hsieh
Siddhesh Poyarekar
Stafford Horne
Stefan Liebler
Sunil K Pandey
Szabolcs Nagy
Tulio Magno Quites Machado Filho
Vineet Gupta
Vitaly Buka
Vitaly Chikunov
Wilco Dijkstra
Xeonacid
Xiaoming Ni
Yang Xu
liuhongt
noah
Érico Nogueira

Categories: FLOSS Project Planets

Drupal Diversity & Inclusion: Meet DDI Camp keynote speaker: Monica Flores

Planet Drupal - Sun, 2021-08-01 23:11
Posted by Alex Laughnan, Sun, 08/01/2021 - 20:11
Categories: FLOSS Project Planets

diffutils @ Savannah: diffutils-3.8 released [stable]

GNU Planet! - Sun, 2021-08-01 22:14

This is to announce diffutils-3.8, a stable release.

There have been 47 commits by 5 people in the 2.6 years since 3.7.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (2)
  Dave Odell (1)
  Jim Meyering (23)
  KO Myung-Hun (1)
  Paul Eggert (20)

Jim [on behalf of the diffutils maintainers]
==================================================================

Here is the GNU diffutils home page:
    http://gnu.org/s/diffutils/

For a summary of changes and contributors, see:
  http://git.sv.gnu.org/gitweb/?p=diffutils.git;a=shortlog;h=v3.8
or run this command from a git-cloned diffutils directory:
  git shortlog v3.7..v3.8

To summarize the 2453 gnulib-related changes, run these commands
from a git-cloned diffutils directory:
  git checkout v3.8
  git submodule summary v3.7

Here are the compressed sources and a GPG detached signature[*]:
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.8.tar.xz
  https://ftp.gnu.org/gnu/diffutils/diffutils-3.8.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://ftpmirror.gnu.org/diffutils/diffutils-3.8.tar.xz
  https://ftpmirror.gnu.org/diffutils/diffutils-3.8.tar.xz.sig

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify diffutils-3.8.tar.xz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.

This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16d
  Gnulib v0.1-4758-gb48905892

NEWS

* Noteworthy changes in release 3.8 (2021-08-01) [stable]

** Incompatible changes

  diff no longer treats a closed stdin as representing an absent file
  in usage like 'diff --new-file - foo <&-'.  This feature was rarely
  if ever used and was not portable to POSIX platforms that reopen
  stdin on exec, such as SELinux if the process underwent an AT_SECURE
  transition, or HP-UX even if not setuid.
  [bug#33965 introduced in 2.8]

** Bug fixes

  diff and related programs no longer get confused if stdin, stdout,
  or stderr are closed.  Previously, they sometimes opened files into
  file descriptors 0, 1, or 2 and then mistakenly did I/O with them
  that was intended for stdin, stdout, or stderr.
  [bug#33965 present since "the beginning"]

  cmp, diff and sdiff no longer treat negative command-line
  option-arguments as if they were large positive numbers.
  [bug#35256 introduced in 2.8]

Categories: FLOSS Project Planets

François Marier: Time-stretch in Kodi

Planet Debian - Sun, 2021-08-01 14:47

VLC has a really neat feature which consists of time-stretching audio to allow users to speed up or slow down video playback with the [ and ] keys without affecting the pitch of the sound. I recently switched to Kodi as my video player of choice and I was looking for the equivalent feature.

Kodi equivalent

To enable this feature in Kodi, you first need to enable Sync playback to display in Settings | Player | Videos.

Then map the tempoup and tempodown commands to the same keyboard shortcuts as VLC.

In my case however, I wanted to map these functions to buttons on my Streamzap remote and so I put the following in my ~/.kodi/userdata/keymaps/remote.xml:

<FullscreenVideo>
  <remote>
    <pageminus>PlayerControl(tempodown)</pageminus>
    <pageplus>PlayerControl(tempoup)</pageplus>
  </remote>
</FullscreenVideo>

which allows me to press the Ch + and Ch - buttons on the remote to adjust the speed while the video is playing (in full-screen mode only, not with the menu displayed).

Examples

Here are three ways I use this functionality:

  • I set it to 0.9x for movies in languages I'm not totally proficient in.
  • I set it to 1.1x for almost everything since the difference is not especially perceptible, but it still allows me to watch 10% more movies in the same amount of time.
  • I set it to 1.2x for Rick & Morty because it makes Rick even more hilariously reckless and impatient.

Unfortunately, I haven't found a way to set the default tempo value. The closest setting I could find is the one which allows you to set the maximum tempo value, maxtempo. If you know of a way, please leave a comment!
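For reference, maxtempo appears to be configurable via advancedsettings.xml; the exact element path below is my assumption and untested, so treat it as a sketch:

<advancedsettings>
  <video>
    <!-- assumed location and name of the maximum tempo setting -->
    <maxtempo>2.0</maxtempo>
  </video>
</advancedsettings>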

Categories: FLOSS Project Planets

#! code: Drupal 9: Selecting Machine Names For Content Entities And Fields

Planet Drupal - Sun, 2021-08-01 13:34

Naming things is hard[citation needed] and there are a lot of things that you can name when configuring a Drupal site. Picking the right machine names for the different parts of Drupal can make your life easy in the long run. Changing labels is simply a case of tweaking the label in the interface, or through configuration updates. The issue is that once you decide on a machine name for something in Drupal it's pretty much set in stone forever.

The machine names you pick are often used in database tables, paths, interface elements and pretty much anywhere it is used. Changing entity or field machine names at a later date is difficult and can mean writing complex code or using migrations to achieve.

I have built a lot of Drupal sites over the years and done detailed audits on quite a few as well. This experience has given me a lot of insight into how to set up machine names in Drupal. I have seen some horrific naming practices and poorly configured sites, and in my experience there is a correlation between poorly named things in Drupal and bugs caused by those poor choices of names.

As far as I can tell the drupal.org documentation doesn't actually cover this aspect of setting up Drupal. It does address naming conventions in code and modules as part of the coding standards, but not with machine names for content and fields.

Recently, I got into a discussion with a couple of other Drupal developers about content entities and fields, what to name them, and what to avoid when reusing fields. Surprisingly few people have written about naming conventions in Drupal, despite the fact that it has such an effect on the life cycle of the site itself. I thought I would write down some of the conventions I use when naming things and some of the best practices when reusing fields within a Drupal site.

Read more.

Categories: FLOSS Project Planets

KDE e.V. is looking for a project lead/event manager for environmental sustainability project

Planet KDE - Sun, 2021-08-01 09:00

KDE e.V., the non-profit organisation supporting the KDE community, is looking for a great person to run a project related to the environmental sustainability of our software. The position we are looking to fill immediately is that of a project lead who can also take on some event management duties. Please see the job ad for the project lead for more details about this employment opportunity. We are looking forward to your application.

Categories: FLOSS Project Planets

June/July in KDE Itinerary

Planet KDE - Sun, 2021-08-01 03:30

Since the previous update KDE Itinerary has been making big steps towards the upcoming 21.08 release, and with support for vaccination certificates is adapting to the requirements of travel during a global pandemic.

New Features

Travel during COVID-19 requires certificates proving you are vaccinated, (negatively) tested, or have recovered from an infection in many places. Official apps for that are only available on the major proprietary platforms, so for Plasma Mobile we need to take care of this ourselves.

Technically those certificates are just QR codes, so an image viewer is all you would need. Once you are managing more than one such code, e.g. due to having had multiple tests or due to traveling with family, it becomes useful to also see the content in a human readable form.

Based on the KHealthCertificate library this has been added to KDE Itinerary, and a standalone health certificate wallet app for Plasma Mobile is also in the works. So far the DIVOC and EU DGC formats are supported, covering Europe and India.

KDE Itinerary's health certificate manager

Certificates can be imported from PDF files, or via the clipboard (typically coming from a barcode scanning app there). This depends on very recent changes to KDE Frameworks’s barcode library in order to correctly handle the large binary content QR codes used by those certificates.

Another new feature in KDE Itinerary is the ability to import and export favorite locations in the GPX format, as well as to export entire trips to GPX. This is interesting for interoperability with other applications, such as Nextcloud Maps. So if you already have relevant data for your travel in there, or need data about your trips there, that has now become possible.

Nextcloud Maps showing exported trips from KDE Itinerary

More work on interoperability happened in Plasma Mobile’s barcode scanning app Qrca. Qrca can now detect known barcode content formats, including those of transport tickets, and open them in the right app if available, making importing e.g. boarding passes into KDE Itinerary even easier.

KDE Itinerary integration in Qrca

Distribution

Work on KDE’s Android build and delivery pipeline continued, bringing a few interesting new features for KDE Itinerary:

  • Application meta-data is now automatically updated in the Google Play store as well, significantly easing maintenance of all the translated versions.
  • Built release APKs are now automatically uploaded to the Google Play staging ground, making publishing new beta or production releases a matter of just a few clicks.

Thanks to that, KDE Itinerary is now finally available in the Google Play store.

Infrastructure Work

Public transport data

KPublicTransport gained support for IFOPT location identifiers. Those are hierarchical and thus are much better suited to represent the complex structure of larger train station than simpler scheme like UIC station codes.

Being a standardized scheme, IFOPT identifiers can be found in OSM data as well as the responses of OTP, EFA and IVV ASS backends. The wide use helps a lot with merging data from multiple sources.

Due to their hierarchical nature they allow referencing highly detailed location information such as a specific platform or bus stop point as well as the larger station those belong to at the same time, and both are very relevant information to navigate to the right place.

KDE Frameworks

Bringing coordinate-based lookup for countries, country subdivisions and timezones to KF5::I18n is progressing, and will eventually replace the current inferior solution in KItinerary, as well as make this also available for KPublicTransport. This will give us more complete and more reliable timezone information at all parts of a journey.

Fixes & Improvements

Travel document extractor

While there has been a number of improvements e.g. for the various SNCF ticket variants, the majority of extractor work is clearly influenced by the pandemic:

  • Extracting event reservations for COVID test and vaccination centers.
  • Additions and improvements for extractors for event booking sites, as more things need registration nowadays.
  • Extractors for bookings for activities that previously didn’t need pre-booking at all, such as public swimming pools.
Public transport data
  • Initial support for version 2 of OpenTripPlanner (OTP) has been added.
  • OTP location queries are now cached.
  • All backends now provide correct location types, and location merging now adapts to the location type (e.g. by using different thresholds depending on the usual size of the location; a rental bike dock tends to be much smaller than a train station).
  • The Hafas backend now supports earlier/later journey and stopover queries.
  • Support for the VRT public transport provider in Germany has been added.
  • Timezone information is now also propagated correctly to intermediate stops.
Indoor Maps
  • Added support for MapCSS layer selectors. This allows single OSM elements to emit multiple scene graph items, which allows fixing problems we had e.g. with tram tracks on top of roads not being rendered correctly.
  • Ways or rivers that exactly follow an administrative boundary are no longer hidden.
  • Tracks and paths are rendered as well now, which can be useful outside of urban areas.
  • More cases of inaccessible elevators are handled now, and elevators displayed correctly in that case.
  • At final destinations without any subsequent departure, the map now initially centers on the arrival location instead of on a random point.
Apple Wallet Passes
  • Date/time field formats are now detected and handled correctly.
  • Formatted fields are displayed correctly in the KMail plugin and the Itinerary app.
Itinerary app
  • It’s now possible to delete entire trip groups.
  • Unbound train bookings will no longer have transfers added before/after them before a specific train has been selected.
  • Links in disruption notes now work.
  • Exporting the entire data in KDE Itinerary now works again on platforms other than Android, and importing this data again is no longer constrained by the size limit for importing ticket data.
  • Various UI fixes to improve handling of modal pages and HIG compliance.
  • The favorites location selector page now closes automatically after picking the location.
  • The first received realtime data confirming a departure platform no longer triggers an empty notification if there actually isn’t any platform change.
Contribute

While field testing and collecting travel document samples is still difficult in many parts of the world, there’s plenty of other things that can be done. The KDE Itinerary workboard or the more specialized indoor map workboard show what’s on the todo list, and are a good place for collecting new ideas. For questions and suggestions, please feel free to join us on the KDE PIM mailing list or in the #kontact channel on Matrix.

Categories: FLOSS Project Planets

Tryton News: Newsletter for August 2021

Planet Python - Sun, 2021-08-01 02:00

Summer Letter Cube on Soil

Development has slowed down a bit during the summer but improvements are still being made.

Changes for the User

The clients now only use the last menu entry as the label on each tab.

A new sales report by customer category has been added.

In order to close an accounting period, inactive accounts must now be balanced.

New Modules

The Account Stock Shipment Cost Module allocates shipment costs based on invoices.

Changes for the System Administrator

The server will now use argon2 or scrypt to hash passwords by default.

Changes for the Developer

It is now possible to use a PYSON expression as the key for a PYSON.In with a dict object.


Read full topic

Categories: FLOSS Project Planets

Russ Allbery: Review: Fugitive Telemetry

Planet Debian - Sun, 2021-08-01 00:26

Review: Fugitive Telemetry, by Martha Wells

Series: Murderbot Diaries #6 Publisher: Tordotcom Copyright: April 2021 ISBN: 1-250-76538-2 Format: Kindle Pages: 167

Fugitive Telemetry is the fifth Murderbot novella. It is not a sequel to the (as yet) lone novel, Network Effect. Instead, it takes place between Exit Strategy and Network Effect, filling in more of the backstory of the novel. You should not read it before Exit Strategy, but I believe it and Network Effect could be read in any order.

A human has been murdered on Preservation Station. That is not a thing that happens on Preservation Station, which is normally a peaceful place whose crime is limited to intoxication-related stupidity. Murderbot's first worry, and the first worry of his humans, is that this may be one of their enemies getting into position to target them. That risk at least makes the murder worth investigating, rather than leaving it solely to Station Security.

The problem from Murderbot's perspective is that there is an effective and efficient way of doing such an investigation, which starts with hacking into the security systems to get necessary investigative data and may end with the silent disposal of dead bodies of enemy agents. But this is Preservation Station, not the Corporation Rim, and Murderbot agreed to not do things like casually compromise all the station security systems or murder people who are security threats.

There was a big huge deal about it, and Security was all "but what if it takes over the station's systems and kills everybody" and Pin-Lee told them "if it wanted to do that it would have done it by now," which in hindsight was probably not the best response.

Worse, Murderbot's human wants it to work collaboratively with Station Security. That is a challenge, given that Security has a lot of reasons not to trust SecUnits, and Murderbot has a lot of reasons not to trust a security organization (not to mention considers them largely incompetent). Also, the surveillance systems are totally inadequate compared to the Corporation Rim for various financial and civil rights reasons that are doubtless wonderful except in situations where someone has been murdered. But hopefully the humans won't get in the way too much.

This is one of those books (well, novellas) that I finished a while back but then stalled out on reviewing. I think that's because I don't have that much to say about it. Network Effect pushed the world-building and Murderbot's personal storyline forward significantly, but Fugitive Telemetry doesn't pick up those threads. Instead, this is another novella in much the same vein as the first four. If you, like me, are eager to see where Wells takes the story after the events of the novel, this is somewhat disappointing. But if you enjoyed the novellas, this is more of what you enjoyed: snarky comments about humanity, competence porn, Murderbot getting pulled into problems somewhat against its will and then trying to sort them out, and the occasional touching moment of emotional connection that Murderbot escapes from as quickly as possible.

It's quite enjoyable, helped considerably by Wells's wise choice to not make the supporting human characters idiots. Collaboration is not Murderbot's strength; it is certain the investigation will be an endless series of frustrations and annoyances given the level of suspicion Station Security starts with. But some humans (and some SecUnits) are capable of re-evaluating their conclusions when given new evidence, and watching that happen is part of the fun of this novella.

What this novella is missing is the overarching plot structure of the rest of the series, since where this story sits chronologically doesn't leave much room for advancing or even deepening the plot arc. It therefore feels incidental: delightful while I was reading it, probably missable if you have to, and not something I spent time thinking about after I finished it.

If you liked the Murderbot novellas up until now, you will want to read this one. If you haven't started the series yet, this is not a place to start. If you want something more like the Network Effect novel, or a story where Murderbot makes significant decisions about its future, the wait continues.

Rating: 8 out of 10

Categories: FLOSS Project Planets

Paul Wise: FLOSS Activities July 2021

Planet Debian - Sat, 2021-07-31 21:54
Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • libusbgx/gt: triage issues
  • Debian packages: triaged bugs for reintroduced packages
  • Debian servers: debug lists mail issue, debug lists subscription issue
  • Debian wiki: unblock IP addresses, approve accounts
Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC
Sponsors

The microsoft-authentication-library-for-python and purple-discord work was sponsored by my employer. All other work was done on a volunteer basis.

Categories: FLOSS Project Planets

Junichi Uekawa: August comes.

Planet Debian - Sat, 2021-07-31 21:29
August comes. Kids are on summer staycation. This is not sustainable.

Categories: FLOSS Project Planets
