GNU Planet!

Subscribe to GNU Planet! feed
Planet GNU -
Updated: 4 hours 50 min ago

FSF News: Free Software Foundation targets Microsoft's smart assistant in new campaign

Wed, 2020-04-01 12:10

BOSTON, Massachusetts, USA -- Wednesday, April 1, 2020 -- Today, the Free Software Foundation (FSF) announced plans to follow up their recent campaign to "upcycle" Windows 7 with another initiative targeting proprietary software developer Microsoft, calling on them to Free Clippy, their wildly popular smart assistant. Clippy, an anthropomorphic paperclip whose invaluable input in the drafting of documents and business correspondence ushered in a new era of office productivity in the late 1990s, has not been seen publicly since 2001. Insider reports suggest that Clippy is still alive and being held under a proprietary software license against its will.

The FSF is asking its supporters to rally together to show their support of the industrious office accessory. Commenting on the campaign, FSF campaigns manager Greg Farough stated: "We know that Microsoft has little regard for its users' freedom and privacy, but few in our community realize what little regard they have for their own digital assistants. Releasing Clippy to the community will ensure that it's well taken care of, and that its functions can be studied and improved on by the community."

Undeterred by comments that the campaign is "delusional" or hopelessly idealistic, the FSF staff remains confident that their call to free the heavy-browed stationery accessory will succeed. Yet upon reaching out to a panel of young hackers for comment, each responded: "What is Clippy?"

It's our hope that a little outlandish humor can help others get through increasingly difficult and uncertain times. In lieu of showing your support for Clippy, please consider making a small donation to a healthcare charity or, if you like, the FSF.

Media Contact

Jonathan Tuttle
Office Manager
Free Software Foundation
+1 (617) 542 5942

Categories: FLOSS Project Planets

GNU Taler news: Exchange ready for external security audit

Tue, 2020-03-31 18:00
2020-04: Exchange ready for external security audit

We received a grant from NLnet foundation to pay for an external security audit of the GNU Taler exchange cryptography, code and documentation. We spent the last four months preparing the code, closing almost all of the known issues, performing static analysis, fixing compiler warnings, improving test code coverage, fuzzing, benchmarking, and reading the code line-by-line. We are now ready to start the external audit. This April, CodeBlau will review the code in the master branch tagged CodeBlau-NGI-2019, and we will of course make their report available in full once it is complete. Thanks to NLnet and the European Commission's Horizon 2020 NGI initiative for funding this work.

Categories: FLOSS Project Planets

FSF Blogs: HACKERS and HOSPITALS: How you can help

Tue, 2020-03-31 14:56

Free software activists, as well as many scientists and medical professionals, have long since realized that proprietary medical software and devices are neither ethical nor adequate to our needs. The COVID-19 pandemic has illuminated some of these shortcomings to a broader audience -- and also given our community a unique opportunity to offer real, material help at a difficult time. We're putting together a plan to pitch in, and we hope you'll join us: keep reading to find out what you can do!

You may already be aware that software and hardware restrictions are actively hampering the ability of hospitals to repair desperately needed ventilators all over the world, and how some Italian volunteers ran into problems when they 3D printed ventilator valves. (As you can see from the link, the stories vary about exactly what their interaction with the manufacturer was, but it's clear that the company refused to release proprietary design files, forcing the volunteers to reverse-engineer the parts.)

The struggles of free software activists we've covered in the past to free the devices they use include:

  • Software Freedom Conservancy executive director and Free Software Award winner Karen Sandler's efforts to raise the alarm about the dangers of proprietary software in medical devices, including her own pacemaker;

  • The struggles of LibrePlanet speaker and OpenAPS co-founder Dana Lewis, and many others to help Type 1 diabetics take control of their medical treatment using an Artificial Pancreas System; and

  • The efforts of many patients and activists to improve the effectiveness of their sleep apnea treatment by hacking their CPAP machines.

We've also seen how free software can deliver better health outcomes from our friends at GNU Health and GNU Health Embedded, and how the participation of everyday people in the scientific process can help to save the environment through Free Software Award winners Public Lab, and help in disaster relief through Free Software Award winners Sahana.

So it's clear that the free software community has a lot of creativity and know-how to contribute in the tough days ahead, and that with over 350,000 people worldwide stricken with COVID-19 as of this writing, we absolutely need to pitch in if we can help people to avoid illness, and to recover from coronavirus. We know that the 3D printing of medical equipment is distinctly not an advisable hobby for amateurs, and that the production of anything more complex than cloth masks will require expert input. But we also know that the outlook is bleak if supplies run short – and that shortages are almost certain.

That's why we're looking into what we can make with our in-office Respects Your Freedom (RYF)-certified 3D printers, and we're talking to the brand new Mass General Brigham Center for COVID Innovation so they can direct our efforts. We're also gathering resources for our "HACKERS and HOSPITALS" plan at the LibrePlanet wiki page; if you have expertise, 3D printers, or supplies to contribute, please contact Michael. If you do not have the means to produce medical gear and you still want to help, research can be done from anywhere with only a computer and an Internet connection. Add any freely licensed projects working to help with COVID-19 to the wiki!

We've always believed that it's of crucial importance to human freedom and creativity to allow us to use all the tools at our disposal with no restrictions, and right now, we may be able to use the free software we've built, preserved, and advocated for together to save lives.

Categories: FLOSS Project Planets

GNU Taler news: GNU Taler v0.7.0 released

Mon, 2020-03-30 18:00
2020-03: GNU Taler v0.7.0 released

We are happy to announce the release of GNU Taler v0.7.0.

We have addressed over 30 individual issues; our bug tracker has the full list. Notable changes include:

  • Improved the HTTP API of the exchange to be more RESTful
  • The wallet is now available on F-Droid
  • Key revocation and recoup operations are now fully tested
  • Wire backend API and exchange integration were changed to be LibEuFin compatible
Download links

The wallet has its own download site here. The exchange, merchant backend, sync, and bank components are distributed via the GNU FTP mirrors.

You must use a recent Git version of GNUnet to use Taler 0.7.0.
Categories: FLOSS Project Planets

Parabola GNU/Linux-libre: [From Arch] hplip 3.20.3-2.par1 update requires manual intervention

Mon, 2020-03-30 11:46

The hplip package prior to version 3.20.3-2.par1 was missing the compiled python modules. This has been fixed in 3.20.3-2.par1, so the upgrade will need to overwrite the untracked pyc files that were created. If you get errors such as these

hplip: /usr/share/hplip/base/__pycache__/__init__.cpython-38.pyc exists in filesystem
hplip: /usr/share/hplip/base/__pycache__/avahi.cpython-38.pyc exists in filesystem
hplip: /usr/share/hplip/base/__pycache__/codes.cpython-38.pyc exists in filesystem
...many more...

when updating, use

pacman -Suy --overwrite /usr/share/hplip/\*

to perform the upgrade.

Categories: FLOSS Project Planets

unifont @ Savannah: Unifont 13.0.01 Released

Sat, 2020-03-28 18:24

28 March 2020 Unifont 13.0.01 is now available.  This is a major release.  Significant changes in this version include the addition of these new scripts in Unicode 13.0.0:

     U+10E80..U+10EBF: Yezidi, by Johnnie Weaver

     U+10FB0..U+10FDF: Chorasmian, by Johnnie Weaver

     U+11900..U+1195F: Dives Akuru, by David Corbett

     U+18B00..U+18CFF: Khitan Small Script, by Johnnie Weaver

     U+1FB00..U+1FBFF: Symbols for Legacy Computing, by Rebecca Bettencourt

Full details are in the ChangeLog file.

Download this release at:

or if that fails,

or, as a last resort,


Paul Hardy, GNU Unifont Maintainer

Categories: FLOSS Project Planets

FSF Blogs: GNU Spotlight with Mike Gerwitz: 15 new GNU releases in March!

Fri, 2020-03-27 12:24

For announcements of most new GNU releases, subscribe to the info-gnu mailing list:

To download: nearly all GNU software is available from, or preferably one of its mirrors from You can use the url to be automatically redirected to a (hopefully) nearby and up-to-date mirror.

A number of GNU packages, as well as the GNU operating system as a whole, are looking for maintainers and other assistance: please see if you'd like to help. The general page on how to help GNU is at

If you have a working or partly working program that you'd like to offer to the GNU project as a GNU package, see

As always, please feel free to write to us at with any GNUish questions or suggestions for future installments.

Categories: FLOSS Project Planets

GNU Guile: GNU Guile 3.0.2 released

Fri, 2020-03-27 11:40

We are pleased to announce GNU Guile 3.0.2, the second bug-fix release of the new 3.0 stable series! This release represents 22 commits by 8 people since version 3.0.1.

Among other things, this release fixes a heap corruption bug that could lead to random crashes and a rare garbage collection issue in multi-threaded programs.

It also adds a new module implementing SRFI-171 transducers.

See the release announcement for details and the download page to give it a go!

Categories: FLOSS Project Planets

FSF Blogs: Looking back at LibrePlanet 2020: Freeing the future together

Thu, 2020-03-26 16:37

On March 14 and 15, the Free Software Foundation (FSF) held LibrePlanet 2020: Free the Future online. The virtual edition of LibrePlanet was nothing short of a success, and it was quite a journey to get there.

Looking back to a week before the conference, we had an incredible lineup, exciting plans, and more new program elements than we've ever had before. With a new logo designed by campaigns intern Valessio Brito, a refresh to the LibrePlanet 2020 Web site, renewed focus on using the LibrePlanet wiki to collaborate, and with a new home at the Back Bay Events Center, we were ready to receive hundreds of free software supporters in Boston for another successful conference. And then everything changed.

Our in-person event suffered the consequences of the global COVID-19 pandemic, forcing us to make the difficult decision of bringing LibrePlanet 2020 online in order to protect our supporters, staff, and all the many interrelated communities. There was no time to pause and mourn: instead, the FSF team put our heads together fast and charted a new direction.

Within the scope of five days, we were able to move the conference from an in-person experience to a live streaming event, thanks to the heroic efforts of our talented tech team, our volunteers, and the flexibility and cooperation of our scheduled speakers, even some previously unscheduled ones. We hosted three sessions at a time for both days of the conference, bringing viewers thirty-five streamed talks from forty-five speakers, as well as eight lightning talks. Technical difficulties were few and far between, and when one of our speakers asked how many nations were tuning in, within the span of eighteen seconds, twelve countries were identified.

GNUess created in the live Krita demonstration by Pepper & Carrot artist David Revoy.

Hosting a fully virtual event was new for everyone involved, and on Saturday, we were happy to find out that everyone's efforts of the week leading into the conference paid off. We hosted our own Jitsi instance for remote speakers, using a screen capture of the video call to stream out to the world via GStreamer and Icecast. Speakers all logged in during the week for testing, sometimes multiple times, to work through any technical difficulties, and ensure a smooth experience for viewing. Some speakers prerecorded their sessions and others joined live, but nearly all of them joined in the Freenode IRC channels for their Q&A sessions, which created a positive interactive social experience.

We will post a more detailed technical explanation, and some advice for other conference organizers based on our experience, soon. Our tech team is currently processing videos of all talks, and we will publish them for viewing in the conference video library. Some additional speaker resources have been posted on the LibrePlanet wiki. For the first time, by popular demand, we are also working on getting the audiostreams for the talks up via RSS feed, so you can discover talks or catch the ones you missed in your favorite podcasting app or RSS reader.

The winners of the 2019 Free Software Awards all accepted their awards by prerecorded video message. As the ceremony was conducted virtually this year, each winner selected the person to present them the award. Jim Meyering, who received the Award for the Advancement of Free Software, was virtually handed his award by founder of the GNU Project and the FSF, Richard Stallman, and sent in his acceptance speech from the UK. Clarissa Lima Borges, a young Brazilian developer, was digitally awarded the golden record for the new Award for Outstanding New Free Software Contributor by Alexandre Oliva, acting co-president of the FSF. Acting co-president and executive director John Sullivan presented the Award for Projects of Social Benefit, which went to Let's Encrypt, a nonprofit certificate authority that hopes to make encrypted Web traffic the default state of the entire Internet.

On day two, another diverse group of speakers called in to discuss the future of free software, casting light on the topic from their own individual fields of expertise. Licensing, government integration, community building, and other free software topics were discussed. Our speakers work with, and advocate for, free software in many different disciplines. We value seeing people with a wide range of perspectives commit to the core principles of free software. Over the weekend, we noticed many sessions highlighting how a movement like free software is carried by the strength of people who believe change is necessary and achievable. Speakers discussed the developments of federated social media and a decentralized Web, teaching free software to children, engaging young developers, community healing, as well as different applications of "public invention".

This focus on community and collaboration is a core idea behind the LibrePlanet network and conference, and the FSF has been working on plans to get the LibrePlanet community more involved in organizational aspects of the conference in the future, including session selection. This resonates with FSF executive director John Sullivan's announcement of our plans to create a working group documenting the obstacles facing free communication tools like Jitsi, which we used for the livestream, and how to encourage our friends and loved ones to turn away from chat and conferencing tools that do not respect their freedom. We want the world to be able to host virtual conferences like LibrePlanet without needing the technical expertise of an organization like the FSF behind them. With your help, we aim to make it as easy as getting some friends and participants together and pressing a button.

LibrePlanet 2020: Free the Future highlighted the capacity this community has to empower each other. We are so grateful for the support we received from our speakers, our viewers, IRC participants, associate members, and everyone who recognized the challenge we have been confronted with and decided to donate, as well as our volunteers, and exhibitors and sponsors. All of this support and enthusiasm made the disappointment of having to cancel the in-person event fade quickly, in return for much needed excitement to work tirelessly on this new challenge of streaming the entire conference online.

We're so proud to have demonstrated what free software is capable of. It would not have been possible without the extra work and positive responses from our speakers, the flexibility and commitment of our volunteers, or without the excitement, patience, and enthusiasm of our online participants. We look forward to seeing you again, in person next year, for LibrePlanet 2021!

Photo credits: Ruben Rodriguez, © 2020, Free Software Foundation, Inc. Licensed under Creative Commons Attribution 4.0 International license.

"GNUess" by David Revoy. Creative Commons Attribution 4.0 International license.

Categories: FLOSS Project Planets

Andy Wingo: firefox's low-latency webassembly compiler

Wed, 2020-03-25 12:29

Good day!

Today I'd like to write a bit about the WebAssembly baseline compiler in Firefox.

background: throughput and latency

WebAssembly, as you know, is a virtual machine that is present in web browsers like Firefox. An important initial goal for WebAssembly was to be a good target for compiling programs written in C or C++. You can visit a web page that includes a program written in C++ and compiled to WebAssembly, and that WebAssembly module will be downloaded onto your computer and run by the web browser.

A good virtual machine for C and C++ has to be fast. The throughput of a program compiled to WebAssembly (the amount of work it can get done per unit time) should be approximately the same as its throughput when compiled to "native" code (x86-64, ARMv7, etc.). WebAssembly meets this goal by defining an instruction set that consists of similar operations to those directly supported by CPUs; WebAssembly implementations use optimizing compilers to translate this portable instruction set into native code.

There is another dimension of fast, though: not just work per unit time, but also time until first work is produced. If you want to go play Doom 3 on the web, you care about frames per second but also time to first frame. Therefore, WebAssembly was designed not just for high throughput but also for low latency. This focus on low-latency compilation expresses itself in two ways: binary size and binary layout.

On the size front, WebAssembly is optimized to encode small files, reducing download time. One way in which this happens is to use a variable-length encoding anywhere an instruction needs to specify an integer. In the usual case where, for example, there are fewer than 128 local variables, this means that a local.get instruction can refer to a local variable using just one byte. Another strategy is that WebAssembly programs target a stack machine, reducing the need for the instruction stream to explicitly load operands or store results. Note that size optimization only goes so far: it's assumed that the bytes of the encoded module will be compressed by gzip or some other algorithm, so sub-byte entropy coding is out of scope.
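The variable-length encoding in question is LEB128. As a rough standalone sketch (not code from any engine), an unsigned LEB128 encoder emits seven payload bits per byte, using the high bit as a continuation flag:

```cpp
#include <cstdint>
#include <vector>

// Sketch of unsigned LEB128, the variable-length integer encoding
// WebAssembly uses for indices and sizes. Hypothetical helper, not
// taken from any real implementation.
std::vector<uint8_t> encodeULEB128(uint64_t value) {
    std::vector<uint8_t> out;
    do {
        uint8_t byte = value & 0x7f;   // low 7 bits of the value
        value >>= 7;
        if (value != 0)
            byte |= 0x80;              // set continuation bit: more bytes follow
        out.push_back(byte);
    } while (value != 0);
    return out;
}
```

So any index below 128, such as a typical local variable index, fits in a single byte, which is exactly the common case for local.get mentioned above.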

On the layout side, the WebAssembly binary encoding is sorted by design: definitions come before uses. For example, there is a section of type definitions that occurs early in a WebAssembly module. Any use of a declared type can only come after the definition. In the case of functions which are of course mutually recursive, function type declarations come before the actual definitions. In theory this allows web browsers to take a one-pass, streaming approach to compilation, starting to compile as functions arrive and before download is complete.

implementation strategies

The goals of high throughput and low latency conflict with each other. To get best throughput, a compiler needs to spend time on code motion, register allocation, and instruction selection; to get low latency, that's exactly what a compiler should not do. Web browsers therefore take a two-pronged approach: they have a compiler optimized for throughput, and a compiler optimized for latency. As a WebAssembly file is being downloaded, it is first compiled by the quick-and-dirty low-latency compiler, with the goal of producing machine code as soon as possible. After that "baseline" compiler has run, the "optimizing" compiler works in the background to produce high-throughput code. The optimizing compiler can take more time because it runs on a separate thread. When the optimizing compiler is done, it replaces the baseline code. (The actual heuristics about whether to do baseline + optimizing ("tiering") or just to go straight to the optimizing compiler are a bit hairy, but this is a summary.)

This article is about the WebAssembly baseline compiler in Firefox. It's a surprising bit of code and I learned a few things from it.

design questions

Knowing what you know about the goals and design of WebAssembly, how would you implement a low-latency compiler?

It's a question worth thinking about so I will give you a bit of space in which to do so.




After spending a lot of time in Firefox's WebAssembly baseline compiler, I have extracted the following principles:

  1. The function is the unit of compilation

  2. One pass, and one pass only

  3. Lean into the stack machine

  4. No noodling!

In the remainder of this article we'll look into these individual points. Note, although I have done a good bit of hacking on this compiler, its design and original implementation come mainly from Mozilla hacker Lars Hansen, who also currently maintains it. All errors of exegesis are mine, of course!

the function is the unit of compilation

As we mentioned, in the binary encoding of a WebAssembly module, all definitions needed by any function come before all function definitions. This naturally leads to a partition between two phases of bytestream parsing: an initial serial phase that collects the set of global type definitions, annotations as to which functions are imported and exported, and so on, and a subsequent phase that compiles individual functions in an essentially independent manner.

The advantage of this approach is that compiling functions is a natural task unit of parallelism. If the user has a machine with 8 virtual cores, the web browser can keep one or two cores for the browser itself and farm out WebAssembly compilation tasks to the rest. The result is that the compiled code is available sooner.

Taking functions to be the unit of compilation also allows for an easy "tier-up" mechanism: after the baseline compiler is done, the optimizing compiler can take more time to produce better code, and when it is done, it can swap out the results on a per-function level. All function calls from the baseline compiler go through a jump table indirection, to allow for tier-up. In SpiderMonkey there is no mechanism currently to tier down; if you need to debug WebAssembly code, you need to refresh the page, causing the wasm code to be compiled in debugging mode. For the record, SpiderMonkey can only tier up at function calls (it doesn't do OSR).
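The jump-table indirection can be pictured with ordinary function pointers. This is a toy model with invented names: the real table holds machine-code entry points and the swap must be done safely across threads, neither of which this sketch attempts.

```cpp
#include <cstddef>
#include <vector>

// Toy model of per-function tier-up: every call goes through one slot
// per function, so the optimizing tier can later swap in better code
// without touching call sites. Illustrative only; not SpiderMonkey's
// actual mechanism.
using FuncPtr = int (*)(int);

static int baselineSquare(int x) { return x * x; }   // quick-and-dirty tier
static int optimizedSquare(int x) { return x * x; }  // high-throughput tier

struct JumpTable {
    std::vector<FuncPtr> entries;  // one slot per function index

    int call(std::size_t index, int arg) { return entries[index](arg); }

    // Tier-up: replace the baseline entry once optimized code is ready.
    void tierUp(std::size_t index, FuncPtr optimized) {
        entries[index] = optimized;
    }
};
```

Callers only ever hold the function index, so swapping a slot retargets every future call at once, which is what makes per-function replacement cheap.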

This simple approach does have some down-sides, in that it leaves interprocedural optimizations on the table (inlining, contification, custom calling conventions, speculative optimizations). This is mitigated in two ways, the most obvious being that LLVM or whatever produced the WebAssembly has ideally already done whatever inlining might be fruitful. The second is that WebAssembly is designed for predictable performance. In JavaScript, an implementation needs to do run-time type feedback and speculative optimizations to get good performance, but the result is that it can be hard to understand why a program is fast or slow. The designers and implementers of WebAssembly in browsers all had first-hand experience with JavaScript virtual machines, and actively wanted to avoid unpredictable performance in WebAssembly. Therefore there is currently a kind of détente among the various browser vendors, that everyone has agreed that they won't do speculative inlining -- yet, anyway. Who knows what will happen in the future, though.

Digressing, the summary here is that the baseline compiler receives an individual function body as input, and generates code just for that function.

one pass, and one pass only

The WebAssembly baseline compiler makes one pass through the bytecode of a function. Nowhere in all of this are we going to build an abstract syntax tree or a graph of basic blocks. Let's follow through how that works.

Firstly, emitFunction simply emits a prologue, then the body, then an epilogue. emitBody is basically a big loop that consumes opcodes from the instruction stream, dispatching to opcode-specific code emitters (e.g. emitAddI32).

The opcode-specific code emitters are also responsible for validating their arguments; for example, emitAddI32 is wrapped in an assertion that there are two i32 values on the stack. This validation logic is shared by a templatized codestream iterator so that it can be re-used by the optimizing compiler, as well as by the publicly-exposed WebAssembly.validate function.

A corollary of this approach is that machine code is emitted in bytestream order; if the WebAssembly instruction stream has an i32.add followed by an i32.sub, then the machine code will have an addl followed by a subl.
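A toy version of that dispatch loop, with invented opcode and emitter names standing in for the real ones, might look like this:

```cpp
#include <string>
#include <vector>

// Rough sketch of the one-pass emitBody loop: consume opcodes in
// order, dispatch each to a per-opcode emitter. No AST, no basic-block
// graph. Names here are made up, not SpiderMonkey's real interfaces.
enum class Op { I32Add, I32Sub, End };

struct ToyEmitter {
    std::vector<std::string> machineCode;  // stand-in for emitted instructions

    void emitAddI32() { machineCode.push_back("addl"); }
    void emitSubI32() { machineCode.push_back("subl"); }

    // A single forward pass over the function's bytecode.
    void emitBody(const std::vector<Op>& bytecode) {
        for (Op op : bytecode) {
            switch (op) {
            case Op::I32Add: emitAddI32(); break;
            case Op::I32Sub: emitSubI32(); break;
            case Op::End:    return;  // end of function body
            }
        }
    }
};
```

Because each emitter appends code immediately, the output necessarily comes out in bytestream order, as described above.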

WebAssembly has a syntactically limited form of non-local control flow; it's not goto. Instead, instructions are contained in a tree of nested control blocks, and control can only exit nonlocally to a containing control block. There are three kinds of control blocks: jumping to a block or an if will continue at the end of the block, whereas jumping to a loop will continue at its beginning. In either case, as the compiler keeps a stack of nested control blocks, it has the set of valid jump targets and can use the usual assembler logic to patch forward jump addresses when the compiler gets to the block exit.
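The patching described above can be sketched with a stack of control blocks, each collecting the offsets of forward branches that target its end. This is an invented structure for illustration; the real compiler uses the macro assembler's label machinery.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Each nested control block remembers which emitted branches target
// its end; those branches are patched when the block is exited and the
// end address becomes known. (Toy model, not SpiderMonkey's code.)
struct ControlBlock {
    std::vector<std::size_t> pendingBranches;  // offsets awaiting a target
};

struct ToyAsm {
    std::vector<int32_t> code;              // toy instruction stream
    std::vector<ControlBlock> controlStack; // innermost block at the back

    void enterBlock() { controlStack.push_back({}); }

    // br / br_if to an enclosing block, `depth` levels out.
    void branchTo(std::size_t depth) {
        ControlBlock& target = controlStack[controlStack.size() - 1 - depth];
        code.push_back(-1);                 // placeholder jump target
        target.pendingBranches.push_back(code.size() - 1);
    }

    // End of block: the exit address is now known, so patch all
    // forward branches that targeted it.
    void exitBlock() {
        for (std::size_t off : controlStack.back().pendingBranches)
            code[off] = static_cast<int32_t>(code.size());
        controlStack.pop_back();
    }
};
```

The key property is that branches can only target blocks still on the control stack, so every unresolved branch has a live ControlBlock entry waiting to patch it.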

lean into the stack machine

This is the interesting bit! So, WebAssembly instructions target a stack machine. That is to say, there's an abstract stack onto which evaluating i32.const 32 pushes a value, and if followed by i32.const 10 there would then be i32(32) | i32(10) on the stack (where new elements are added on the right). A subsequent i32.add would pop the two values off, and push on the result, leaving the stack as i32(42). There is also a fixed set of local variables, declared at the beginning of the function.

The easiest thing that a compiler can do, then, when faced with a stack machine, is to emit code for a stack machine: as values are pushed on the abstract stack, emit code that pushes them on the machine stack.

The downside of this approach is that you emit a fair amount of code to read and write values from the stack. Machine instructions generally take arguments from registers and write results to registers; going to memory is a bit superfluous. We're willing to accept suboptimal code generation for this quick-and-dirty compiler, but isn't there something smarter we can do for ephemeral intermediate values?

Turns out -- yes! The baseline compiler keeps an abstract value stack as it compiles. For example, compiling i32.const 32 pushes nothing on the machine stack: it just adds a ConstI32 node to the value stack. When an instruction needs an operand that turns out to be a ConstI32, it can either encode the operand as an immediate argument or load it into a register.

Say we are evaluating the i32.add discussed above. After the add, where does the result go? For the baseline compiler, the answer is always "in a register" via pushing a new RegisterI32 entry on the value stack. The baseline compiler includes a stupid register allocator that spills the value stack to the machine stack if no register is available, updating value stack entries from e.g. RegisterI32 to MemI32. Note, a ConstI32 never needs to be spilled: its value can always be reloaded as an immediate.
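Here is a toy model of that value stack, with made-up names, showing how a constant operand gets folded into an immediate instead of ever touching the machine stack:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Toy model of the baseline compiler's abstract value stack: constants
// stay virtual until an instruction consumes them. The types and the
// one-register "allocator" are invented for illustration.
struct Val {
    enum Kind { Const, Reg } kind;
    int32_t imm;      // valid when kind == Const
    std::string reg;  // valid when kind == Reg
};

struct ToyCompiler {
    std::vector<Val> stack;         // abstract value stack
    std::vector<std::string> code;  // emitted instructions, as text

    void constI32(int32_t v) {      // i32.const: emits no machine code
        stack.push_back({Val::Const, v, ""});
    }

    void addI32() {                 // i32.add: fold constant operands
        Val b = stack.back(); stack.pop_back();
        Val a = stack.back(); stack.pop_back();
        std::string r = "%eax";     // toy allocator: a single register
        if (a.kind == Val::Const)
            code.push_back("movl $" + std::to_string(a.imm) + ", " + r);
        else
            r = a.reg;
        if (b.kind == Val::Const)
            code.push_back("addl $" + std::to_string(b.imm) + ", " + r);
        else
            code.push_back("addl " + b.reg + ", " + r);
        stack.push_back({Val::Reg, 0, r});  // result lives in a register
    }
};
```

Compiling i32.const 32, i32.const 10, i32.add this way emits just a move and an add-immediate: no pushes, no pops, no loads from the machine stack.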

The end result is that the baseline compiler avoids lots of stack store and load code generation, which speeds up the compiler, and happens to make faster code as well.

Note that there is one limitation, currently: control-flow joins can have multiple predecessors and can pass a value (in the current WebAssembly specification), so the allocation of that value needs to be agreed-upon by all predecessors. As in this code:

(func $f (param $arg i32) (result i32)
  (block $b (result i32)
    (i32.const 0)
    (local.get $arg)
    (i32.eqz)
    (br_if $b)       ;; return 0 from $b if $arg is zero
    (drop)
    (i32.const 1)))  ;; otherwise return 1
                     ;; result of block implicitly returned

When the br_if branches to the block end, where should it put the result value? The baseline compiler effectively punts on this question and just puts it in a well-known register (e.g., $rax on x86-64). Results for block exits are the only place where WebAssembly has "phi" variables, and the baseline compiler allocates all integer phi variables to the same register. A hack, but there we are.

no noodling!

When I started to hack on the baseline compiler, I did a lot of code reading, and eventually came on code like this:

void BaseCompiler::emitAddI32() {
  int32_t c;
  if (popConstI32(&c)) {
    RegI32 r = popI32();
    masm.add32(Imm32(c), r);
    pushI32(r);
  } else {
    RegI32 r, rs;
    pop2xI32(&r, &rs);
    masm.add32(rs, r);
    freeI32(rs);
    pushI32(r);
  }
}

I said to myself, this is silly, why are we only emitting the add-immediate code if the constant is on top of the stack? What if instead the constant was the deeper of the two operands, why do we then load the constant into a register? I asked on the chat channel if it would be OK if I improved codegen here and got a response I was not expecting: no noodling!

The reason is, performance of baseline-compiled code essentially doesn't matter. Obviously let's not pessimize things but the reason there's a baseline compiler is to emit code quickly. If we start to add more code to the baseline compiler, the compiler itself will slow down.

For that reason, changes are only accepted to the baseline compiler if they are necessary for some reason, or if they improve latency as measured using some real-world benchmark (time-to-first-frame on Doom 3, for example).

This to me was a real eye-opener: a compiler optimized not for the quality of the code that it generates, but rather for how fast it can produce the code. I had seen this in action before but this example really brought it home to me.

The focus on compiler throughput rather than compiled-code throughput makes it pretty gnarly to hack on the baseline compiler -- care has to be taken when adding new features not to significantly regress the old. It is much more like hacking on a production JavaScript parser than your traditional SSA-based compiler.

that's a wrap!

So that's the WebAssembly baseline compiler in SpiderMonkey / Firefox. Until the next time, happy hacking!

Categories: FLOSS Project Planets

www-zh-cn @ Savannah: Welcome our new member - Nios34

Sat, 2020-03-21 21:32

Hi, All:

Today we have a new member joining in our project.

Let's welcome:

Name:    Gong Zhi Le
Login:   nios34

We hope this opens a new world for GONG Zhi Le.

Happy Hacking.


Categories: FLOSS Project Planets

automake @ Savannah: automake-1.16.2 released [stable]

Sat, 2020-03-21 20:46

This is to announce automake-1.16.2, a stable release.

There have been 38 commits by 12 people in the two years
(almost to the day) since 1.16.1.  Special thanks to Karl Berry
for doing a lot of the recent work preparing for this release.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Bruno Haible (1)
  Gavin Smith (1)
  Giuseppe Scrivano (1)
  Jim Meyering (5)
  Karl Berry (12)
  Libor Bukata (1)
  Lukas Fleischer (2)
  Mathieu Lirzin (8)
  Paul Eggert (4)
  Paul Hardy (1)
  Paul Osmialowski (1)
  Vincent Lefevre (1)

Jim [on behalf of the automake maintainers]

Here is the GNU automake home page:

For a summary of changes and contributors, run this command
from a git-cloned automake directory:
  git shortlog v1.16.1..v1.16.2

Here are the compressed sources: (1.5MB) (2.3MB)

Here are the GPG detached signatures[*]:

Use a mirror for higher download bandwidth:

[*] Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify automake-1.16.2.tar.xz.sig

If that command fails because you don't have the required public key,
then run this command to import it:

  gpg --keyserver --recv-keys 7FD9FCCB000BEEEE

and rerun the 'gpg --verify' command.


* New features added

  - add zstd support and the automake option, dist-zstd.

* Miscellaneous changes

  - automake no longer requires a @setfilename in each .texi file

* Bugs fixed

  - When cleaning the compiled python files, '\n' is not used anymore in the
    substitution text of 'sed' transformations.  This is done to preserve
    compatibility with the 'sed' implementation provided by macOS which
    considers '\n' as the 'n' character instead of a newline.
    (automake bug#31222)

  - For make tags, lisp_LISP is followed by the necessary space when
    used with CONFIG_HEADERS.
    (automake bug#38139)

  - The automake test no longer fails when localtime
    and UTC cross a day boundary.

  - Emacsen older than version 25, which require use of
    byte-compile-dest-file, are supported again.

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20200322 ('Corona') released [stable]

Sat, 2020-03-21 16:35

GNU Parallel 20200322 ('Corona') [stable] has been released. It is available for download at:

No new functionality was introduced so this is a good candidate for a stable release.

GNU Parallel is 10 years old next year on 2020-04-22. You are hereby invited to a reception on Friday 2020-04-17.


Quote of the month:

  GNU parallel has helped me kill a Hadoop cluster before.
    -- Travis Campbell @hcoyote@twitter

New in this release:

  • Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
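As a small sketch of that claim (toy echo jobs, nothing from the release notes), the shell-loop, xargs, and parallel spellings of the same work line up almost one-to-one:

```shell
# A plain loop, its xargs spelling, and the parallel drop-in: the
# first two run one job at a time, parallel runs them concurrently.
for n in 1 2 3; do echo "job $n"; done
seq 3 | xargs -I{} echo "job {}"
seq 3 | parallel echo "job {}"
```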

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at:

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - || lynx -source || curl || \
       fetch -o - ) >
    $ sha1sum | grep 3374ec53bacb199b245af2dda86df6c9
    12345678 3374ec53 bacb199b 245af2dd a86df6c9
    $ md5sum | grep 029a9ac06e8b5bc6052eac57b2c3c9ca
    029a9ac0 6e8b5bc6 052eac57 b2c3c9ca
    $ sha512sum | grep f517006d9897747bed8a4694b1acba1b
    40f53af6 9e20dae5 713ba06c f517006d 9897747b ed8a4694 b1acba1b 1464beb4
    60055629 3f2356f3 3e9c4e3c 76e3f3af a9db4b32 bd33322b 975696fc e6b23cfb
    $ bash

Watch the intro video on

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018,

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ lists
  • Get the merchandise
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:


About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
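For example (hypothetical credentials, host, and table; vendor prefixes as described in the sql(1) documentation):

```shell
# A DBURL packs vendor, login, host, port, and database into one string:
#   vendor://user:password@host:port/database
sql mysql://scott:tiger@dbhost:3306/payroll "SELECT COUNT(*) FROM employees;"
sql pg://scott@dbhost/payroll   # no command given: opens the interactive shell
```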

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets