GNU Planet!

Planet GNU - https://planet.gnu.org/

FSF News: US Federal employees and retirees: Contribute conveniently through the Combined Federal Campaign

Mon, 2023-12-04 17:06
BOSTON, Massachusetts, USA -- Monday, December 4, 2023 -- The Free Software Foundation, today, highlighted its participation as a charity in the 2023 Combined Federal Campaign, which is focused on human rights this week.
Categories: FLOSS Project Planets

GNU Taler news: Launch of the TALER NGI PILOT

Sat, 2023-12-02 06:05
We are excited to announce the creation of an EU-funded consortium with the central objective to launch GNU Taler as a privacy-preserving payment system across Europe.
Categories: FLOSS Project Planets

FSF Events: Free Software Directory meeting on IRC: Friday, December 08, starting at 12:00 EST (17:00 UTC)

Fri, 2023-12-01 12:13
Join the FSF and friends on Friday, December 08, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.
Categories: FLOSS Project Planets

Gary Benson: GitLab YAML Docker Registry client

Fri, 2023-12-01 09:13

Have you written a Docker Registry API client in GitLab CI/CD YAML? I have.

  # Delete candidate image from CI repository.
  clean-image:
    stage: .post
    except:
      - main
    variables:
      AUTH_API: "$CI_SERVER_URL/jwt/auth"
      SCOPE: "repository:$CI_PROJECT_PATH"
      REGISTRY_API: "https://$CI_REGISTRY/v2/$CI_PROJECT_PATH"
    before_script:
      - >
        which jq >/dev/null
        || (sudo apt-get update && sudo apt-get -y install jq)
    script:
      - echo "Deleting $CANDIDATE_IMAGE"
      - >
        TOKEN=$(curl -s
        -u "$CI_REGISTRY_USER:$CI_REGISTRY_PASSWORD"
        "$AUTH_API?service=container_registry&scope=$SCOPE:delete,pull"
        | jq -r .token)
      - >
        DIGEST=$(curl -s -I
        -H "Authorization: Bearer $TOKEN"
        -H "Accept: application/vnd.docker.distribution.manifest.v2+json"
        "$REGISTRY_API/manifests/$CI_COMMIT_SHORT_SHA"
        | tr -d "\r"
        | grep -i "^docker-content-digest: "
        | sed "s/^[^:]*: *//")
      - >
        curl -s -X DELETE
        -H "Authorization: Bearer $TOKEN"
        "$REGISTRY_API/manifests/"$(echo $DIGEST | sed "s/:/%3A/g")
Categories: FLOSS Project Planets

FSF News: EmacsConf joins Free Software Foundation fiscal sponsorship program

Thu, 2023-11-30 17:31
BOSTON, Massachusetts, USA -- Thursday, November 30, 2023 -- The Free Software Foundation (FSF) announced today that EmacsConf will join the Working Together for Free Software Fund. The one and only conference dedicated to the joy of Emacs is joining just before their event on December 2 and 3, 2023.
Categories: FLOSS Project Planets

GNU Taler news: GNU Taler v0.9.3 released

Wed, 2023-11-29 18:00
We are happy to announce the release of GNU Taler v0.9.3.
Categories: FLOSS Project Planets

FSF News: Worldwide community of activists protest OverDrive and others forcing DRM upon libraries

Tue, 2023-11-28 16:22
BOSTON, Massachusetts, USA -- Tuesday, November 28, 2023 -- The Free Software Foundation (FSF) has announced its Defective by Design campaign's 17th annual International Day Against DRM (IDAD). It will protest Digital Restrictions Management (DRM) technology's hold over public libraries around the world, exemplified by corporations like OverDrive and Follett Destiny. IDAD will take place digitally and worldwide on December 8, 2023.
Categories: FLOSS Project Planets

FSF Events: Free Software Directory meeting on IRC: Friday, December 01, starting at 12:00 EST (17:00 UTC)

Tue, 2023-11-28 14:10
Join the FSF and friends on Friday, December 01, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.
Categories: FLOSS Project Planets

Greg Casamento: Objective-C end of life?? Not a chance...

Mon, 2023-11-27 23:41
Recently, I saw this article from JetBrains regarding ObjC's "end of life".

The TIOBE index seems to disagree. It’s also important to remember that JetBrains recently had to take down their AppCode application (which sucked) since it didn’t sell.
JetBrains is the creator of the Kotlin language, so they have a vested interest in their Android customers. I would take their “index” with a grain of salt, to say the least.

While it is certain that Apple won’t be investing in anything beyond ObjC 2.0, it is foolhardy to think that ObjC is going away anytime soon, since there is an enormous installed base of stable code, not the least of which is Foundation and AppKit themselves. Also consider CocoaPods.

So, no, not worried about it. Also… look at Java and COBOL. For years people have declared the end of both languages. Java is still popular, though not in vogue, and COBOL, while not one of the “cool kids,” has literally billions of lines of code being maintained and new code being written every year. This article (admittedly biased, as it is by the CTO of Micro Focus) gives some reasons why….

Here is the article about COBOL...

Plus… Apple already has a mechanism for automatically allowing ObjC and Swift to work together. Take a look at the frameworks in Xcode and you’ll notice some files called *.apinotes. These are YAML files that are used by the compiler to allow easy integration into Swift projects. So, essentially, if Apple writes an ObjC version of a framework, they get the Swift version for absolutely free (minus the cost of writing the YAML file). If they write a Swift-only version, they don’t get that benefit.

So, yeah, in conclusion… Yes, ObjC is NOT on the rise, but reports of its demise have been greatly exaggerated! ;)

PS. That being said, Apple dumping ObjC might spell a boom for us as all of the people who have installed codebases would suddenly need support for it either on macOS (on which we don’t currently work) or on other platforms. Something to think about…

PPS. All of the above being said, I admit I wouldn’t be terribly shocked to hear from Apple that “we have dropped support for the legacy ObjC language to provide you with the best support for our new Swift language to make it the ‘greatest developer experience in the world’” or some grotesque BS like that. Lol

GC
Categories: FLOSS Project Planets

FSF Events: FSF Free Software Community Meetup on December 15, 2023

Mon, 2023-11-27 14:26
We are inviting you to the first ever FSF Free Software Community Meetup on Friday, December 15, 2023, from 18:45 to 21:00 (6:45 PM to 9:00 PM) EST.
Categories: FLOSS Project Planets

GNU Guix: Write package definitions in a breeze

Fri, 2023-11-24 08:05

More than 28,000 packages are available in Guix today, not counting third-party channels. That’s a lot—the 5th largest GNU/Linux distro! But it’s nothing if the one package you care about is missing. So even you, dear reader, may one day find yourself defining a package for your beloved deployment tool. This post introduces a new tool poised to significantly lower the barrier to writing new packages.

Introducing Guix Packager

Defining packages for Guix is not all that hard but, as always, it’s much harder the first time you do it, especially when starting from a blank page and/or not being familiar with the programming environment of Guix. Guix Packager is a new web user interface to get you started—try it! It arrived just in time as an aid to the packaging tutorial given last week at the Workshop on Reproducible Software Environments.

The interface aims to be intuitive: fill in forms on the left and it produces a correct, ready-to-use package definition on the right. Importantly, it helps you avoid pitfalls that trip up many newcomers:

  • When you add a dependency in one of the “Inputs” fields, it adds the right variable name in the generated code and imports the right package module.
  • Likewise, you can choose a license and be sure the license field will refer to the right variable representing that license.
  • You can turn tests on and off, and add configure flags. These translate to a valid arguments field of your package, letting you discover the likes of keyword arguments and G-expressions without having to first dive into the manual.
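
To make that concrete, here is a rough sketch of the kind of definition the interface produces, using GNU Hello as a stand-in. This example and its module name are illustrative rather than generated output, and the hash is a placeholder for the value reported by guix download:

  ;; Illustrative sketch of a generated package definition.
  (define-module (my-channel packages hello)
    #:use-module (guix packages)
    #:use-module (guix download)
    #:use-module (guix build-system gnu)
    #:use-module ((guix licenses) #:prefix license:))

  (define-public my-hello
    (package
      (name "hello")
      (version "2.12.1")
      (source (origin
                (method url-fetch)
                (uri (string-append "mirror://gnu/hello/hello-"
                                    version ".tar.gz"))
                (sha256
                 ;; Placeholder: paste the base32 hash printed by `guix download`.
                 (base32 "0000000000000000000000000000000000000000000000000000"))))
      (build-system gnu-build-system)
      (synopsis "Example greeter package")
      (description "Illustrative package definition for GNU Hello.")
      (home-page "https://www.gnu.org/software/hello/")
      (license license:gpl3+)))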

Pretty cool, no?

Implementation

All the credit for this tool goes to co-worker and intrepid hacker Philippe Virouleau. A unique combination of paren aversion and web development superpowers—unique in the Guix community—led Philippe to develop the whole thing in no time (says Ludovic!).

The purpose was to provide a single view for editing a package recipe, so the application is a single-page application (SPA) written using the UI library Philippe is most comfortable with: React, with MaterialUI for styling the components. It's built with TypeScript, and the library part actually defines all the types needed to manipulate Guix packages and their components (such as build systems or package sources). One of the more challenging parts was providing fast and helpful “search as you type” results over the 28k+ packages. It required a combination of MaterialUI's virtualized inputs and caching the package data in the browser's local storage when possible (packaging metadata itself is fetched from https://guix.gnu.org/packages.json, a generic representation of the current package set).

While the feature set provides a great starting point, there are still a few things that may be worth implementing. For instance, only the GNU and CMake build systems are supported so far; it would make sense to include a few others (Python-related ones might be good candidates).

Running a local (development) version of the application can happen on top of Guix, since—obviously—it's been developed with the node version packaged in Guix, using the quite standard package.json for JavaScript dependencies installed through npm. Contributions welcome!

Lowering the barrier to entry

This neat tool complements a set of steps we’ve taken over time to make packaging in Guix approachable. Indeed, while package definitions are actually code written in the Scheme language, the package “language” was designed from the get-go to be fully declarative—think JSON with parens instead of curly braces and semicolons. More recently we simplified the way package inputs are specified with an eye on making package definitions less intimidating.
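
For the curious, that inputs simplification looks roughly like this (a schematic field-level comparison, not output from the tool):

  ;; Old style: label + variable, quasiquote and unquote required.
  (inputs `(("pkg-config" ,pkg-config)
            ("guile" ,guile-3.0)))

  ;; Simplified style: just list the package variables.
  (inputs (list pkg-config guile-3.0))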

The guix import command also exists to simplify packaging: it can generate a package definition for anything available in other package repositories such as PyPI, CRAN, Crates.io, and so forth. If your preference goes to curly braces rather than parens, it can also convert a JSON package description to Scheme code. Once you have your first .scm file, guix build prints hints for common errors such as missing module imports (those #:use-module stanzas). We also put effort into providing reference documentation, a video tutorial, and a tutorial for more complex packages.

Do share your experience with us and until then, happy packaging!

Acknowledgments

Thanks to Felix Lechner and Timothy Sample for providing feedback on an earlier draft of this post.

About GNU Guix

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.

Categories: FLOSS Project Planets

Andy Wingo: tree-shaking, the horticulturally misguided algorithm

Fri, 2023-11-24 06:41

Let's talk about tree-shaking!

looking up from the trough

But first, I need to talk about WebAssembly's dirty secret: despite the hype, WebAssembly has had limited success on the web.

There is Photoshop, which does appear to be a real success. 5 years ago there was Figma, though they don't talk much about Wasm these days. There are quite a number of little NPM libraries that use Wasm under the hood, usually compiled from C++ or Rust. I think Blazor probably gets used for a few in-house corporate apps, though I could be fooled by their marketing.

You might also recall the hyped demos of 3D first-person-shooter games built on the Unreal engine, but that was 5 years and one major release of Unreal ago: the current Unreal 5 does not support targeting WebAssembly.

Don't get me wrong, I think WebAssembly is great. It is having fine success in off-the-web environments, and I think it is going to be a key and growing part of the Web platform. I suspect, though, that we are only just now getting past the trough of disillusionment.

It's worth reflecting a bit on the nature of web Wasm's successes and failures. Taking Photoshop as an example, I think we can say that Wasm does very well at bringing large C++ programs to the web. I know that it took quite some work, but I understand the end result to be essentially the same source code, just compiled for a different target.

Similarly for the JavaScript module case, Wasm finds success in getting legacy C++ code to the web, and as a way to write new web-targeting Rust code. These are often tasks that JavaScript doesn't do very well at, or which need a shared implementation between client and server deployments.

On the other hand, WebAssembly has not been a Web success for DOM-heavy apps. Nobody is talking about rewriting wordpress.com in Wasm, for example. Why is that? It may sound like a silly question to you: Wasm just isn't good at that stuff. But why? If you dig down a bit, I think it's that the programming models are just too different: the Web's primary programming model is JavaScript, a language with dynamic typing and managed memory, whereas WebAssembly 1.0 was about static typing and linear memory. Getting to the DOM from Wasm was a hassle that was overcome only by the most ardent of the true Wasm faithful.

Relatedly, Wasm has also not really been a success for languages that aren't, like, C or Rust. I am guessing that wordpress.com isn't written mostly in C++. One of the sticking points for this class of language is that C#, for example, will want to ship with a garbage collector, and that it is annoying to have to do this. Check my article from March this year for more details.

Happily, this restriction is going away, as all browsers are going to ship support for reference types and garbage collection within the next months; Chrome and Firefox already ship Wasm GC, and Safari shouldn't be far behind thanks to the efforts from my colleague Asumu Takikawa. This is an extraordinarily exciting development that I think will kick off a whole 'nother Gartner hype cycle, as more languages start to update their toolchains to support WebAssembly.

if you don't like my peaches

Which brings us to the meat of today's note: web Wasm will win where compilers create compact code. If your language's compiler toolchain can manage to produce useful Wasm in a file that is less than a handful of over-the-wire kilobytes, you can win. If your compiler can't do that yet, you will have to instead rely on hype and captured audiences for adoption, which at best results in an unstable equilibrium until you figure out what's next.

In the JavaScript world, managing bloat and deliverable size is a huge industry. Bundlers like esbuild are a ubiquitous part of the toolchain, compiling down a set of JS modules to a single file that should include only those functions and data types that are used in a program, and additionally applying domain-specific size-squishing strategies such as minification (making monikers more minuscule).

Let's focus on tree-shaking. The visual metaphor is that you write a bunch of code, and you only need some of it for any given page. So you imagine a tree whose, um, branches are the modules that you use, and whose leaves are the individual definitions in the modules, and you then violently shake the tree, probably killing it and also annoying any nesting birds. The only thing that's left still attached is what is actually needed.

This isn't how trees work: holding the trunk doesn't give you information as to which branches are somehow necessary for the tree's mission. It also primes your mind to look for the wrong fixed point, removing unneeded code instead of keeping only the necessary code.

But, tree-shaking is an evocative name, and so despite its horticultural and algorithmic inaccuracies, we will stick to it.

The thing is that maximal tree-shaking for languages with a thicker run-time has not been a huge priority. Consider Go: according to the golang wiki, the most trivial program compiled to WebAssembly from Go is 2 megabytes, and adding imports can make this go to 10 megabytes or more. Or look at Pyodide, the Python WebAssembly port: the REPL example downloads about 20 megabytes of data. These are fine sizes for technology demos or, in the limit, very rich applications, but they aren't winners for web development.

shake a different tree

To be fair, the built-in Wasm support for Go and the Pyodide port of Python both derive from the upstream toolchains, where producing small binaries is nice but not necessary: on a server, who cares how big the app is? And indeed when targeting smaller devices, we tend to see alternate implementations of the toolchain, for example MicroPython or TinyGo. TinyGo has a Wasm back-end that can apparently go down to less than a kilobyte, even!

These alternate toolchains often come with some restrictions or peculiarities, and although we can consider this to be an evil of sorts, it is to be expected that the target platform exhibits some co-design feedback on the language. In particular, running in the sea of the DOM is sufficiently weird that a Wasm-targeting Python program will necessarily be different than a "native" Python program. Still, I think as toolchain authors we aim to provide the same language, albeit possibly with a different implementation of the standard library. I am sure that the ClojureScript developers would prefer to remove their page documenting the differences with Clojure if they could, and perhaps if Wasm becomes a viable target for ClojureScript, they will.

on the algorithm

To recap: now that it supports GC, Wasm could be a winner for web development in Python and other languages. You would need a different toolchain and an effective tree-shaking algorithm, so that user experience does not degrade. So let's talk about tree shaking!

I work on the Hoot Scheme compiler, which targets Wasm with GC. We manage to get down to 70 kB or so right now, in the minimal "main" compilation unit, and are aiming for lower; auxiliary compilation units that import run-time facilities (the current exception handler and so on) from the main module can be sub-kilobyte. Getting here has been tricky though, and I think it would be even trickier for Python.

Some background: like Whiffle, the Hoot compiler prepends a prelude onto user code. Tree-shaking happens in a number of places.

Generally speaking, procedure definitions (functions / closures) are the easy part: you just include only those functions that are referenced by the code. In a language like Scheme, this gets you a long way.

However, there are three immediate challenges. One is that the evaluation model for the definitions in the prelude is letrec*: the scope is recursive but ordered. Binding values can call or refer to previously defined values, or capture values defined later. If evaluating the value of a binding requires referring to a value only defined later, then that's an error. Again, for procedures this is trivially OK, but as soon as you have non-procedure definitions, sometimes the compiler won't be able to prove this nice "only refers to earlier bindings" property. In that case the fixing letrec (reloaded) algorithm will end up residualizing bindings that are set!, and of the tree-shaking passes above, only the delicate DCE pass can remove them.
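
Here is a tiny sketch of the shape of the issue (mine, not Hoot's actual prelude): the procedure binding is trivially shakeable, while the computed, non-procedure binding is the kind that may get residualized as a set! whenever the compiler cannot prove the ordering property, leaving DCE to clean it up if it turns out to be unused.

  ;; Sketch of a letrec*-style prelude scope.
  (letrec* ((make-table    (lambda (n) (make-vector n 0)))  ; procedure: easy to shake
            (default-table (make-table 8))                   ; non-procedure: computed at bind time
            (table-ref     (lambda (k) (vector-ref default-table k))))
    (table-ref 3))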

Worse, some of those non-procedure definitions are record types, which have vtables that define how to print a record, how to check if a value is an instance of this record, and so on. These vtable callbacks can end up keeping a lot more code alive even if they are never used. We'll get back to this later.

Similarly, say you print a string via display. Well now not only are you bringing in the whole buffered I/O facility, but you are also calling a highly polymorphic function: display can print anything. There's a case for bitvectors, so you pull in code for bitvectors. There's a case for pairs, so you pull in that code too. And so on.

One solution is to instead call write-string, which only writes strings and not general data. You'll still get the generic buffered I/O facility (ports), though, even if your program only uses one kind of port.

This brings me to my next point, which is that optimal tree-shaking is a flow analysis problem. Consider display: if we know that a program will never have bitvectors, then any code in display that works on bitvectors is dead and we can fold the branches that guard it. But to know this, we have to know what kind of arguments display is called with, and for that we need higher-level flow analysis.
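
Here is a toy sketch (not Hoot's actual printer) of the kind of dispatch that flow analysis could prune:

  ;; A generic printer dispatches on the run-time type of its argument.
  (define (show x port)
    (cond
     ((string? x) (write-string x port))        ; only needs string output
     ((pair? x)   (show (car x) port)
                  (show (cdr x) port))
     ((vector? x) (show (vector->list x) port))
     (else        (write x port))))             ; fallback drags in the generic writer

  ;; If flow analysis proves show only ever sees strings, every branch but
  ;; the first is dead and can be shaken out.
  (show "hello, tree-shaker\n" (current-output-port))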

The problem is exacerbated for Python in a few ways. One, because object-oriented dispatch is higher-order programming. How do you know what foo.bar actually means? Depends on foo, which means you have to thread around representations of what foo might be everywhere and to everywhere's caller and everywhere's caller's caller and so on.

Secondly, lookup in Python is generally more dynamic than in Scheme: you have __getattr__ methods (is that it?; been a while since I've done Python) everywhere and users might indeed use them. Maybe this is not so bad in practice and flow analysis can exclude this kind of dynamic lookup.

Finally, and perhaps relatedly, the object of tree-shaking in Python is a mess of modules, rather than a big term with lexical bindings. This is like JavaScript, but without the established ecosystem of tree-shaking bundlers; Python has its work cut out for some years to go.

in short

With GC, Wasm makes it thinkable to do DOM programming in languages other than JavaScript. It will only be feasible for mass use, though, if the resulting Wasm modules are small, and that means significant investment on each language's toolchain. Often this will take the form of alternate toolchains that incorporate experimental tree-shaking algorithms, and whose alternate standard libraries facilitate the tree-shaker.

Welp, I'm off to lunch. Happy wassembling, comrades!

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20231122 ('Grindavík') released

Thu, 2023-11-23 17:50

GNU Parallel 20231122 ('Grindavík') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  Got around to using GNU parallel for the first time from a suggestion by @jdwasmuth ... now I'm wishing I started using this years ago
    -- Stefan Gavriliuc @GavriliucStefan@twitter

New in this release:

  • -a file1 -a +file2 will link file2 to file1 similar to ::::+
  • --bar shows total time when all jobs are done.
  • Bug fixes and man page updates.


News about GNU Parallel:


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
    12345678 883c667e 01eed62f 975ad28b 6d50e22a
    $ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
    cc21b4c9 43fd03e9 3ae1ae49 e28573c0
    $ sha512sum install.sh | grep ec113b49a54e705f86d51e784ebced224fdff3f52
    79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
    fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
    $ bash install.sh

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/Identi.ca/Google+/Twitter/Facebook/LinkedIn/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets

gnuastro @ Savannah: Gnuastro development job at CEFCA/Spain for ESA's ARRAKIHS mission

Wed, 2023-11-22 18:23

A (scientific) software developer position has just opened up at CEFCA for the development of Gnuastro for the data reduction pipeline of the European Space Agency's (ESA) newly approved ARRAKIHS mission (to be launched in 2030), as well as for other data from our Astronomical Observatory of Javalambre (OAJ):

https://www.cefca.es/cefca_en/reference_0119
(For 2 years, deadline: January 15th 2024)

ARRAKIHS is expected to last until ~2035 and we will be applying for future grants to keep the core pipeline team until the end of the project.

The job will be based in Teruel/Spain, which is a beautiful city (recognized as a UNESCO World Heritage site for its "Mudejar" architecture). Teruel is just 1.5 hours from Valencia by car, and with a population of 35,000 people, everything is nicely within reach and you will not waste hours every day in traffic or long commutes as in large cities! Our observatory (OAJ) is also just 1.5 hours away by car (we have one of the darkest skies with the fewest cloudy nights in continental Europe)!

As the ARRAKIHS pipeline engineer, the successful applicant will also be visiting other ARRAKIHS consortium members: IFCA/Santander, ESAC/Madrid; UCM/Madrid, IAA/Granada, EPFL/Switzerland, Univ. Lund/Sweden, Univ. Innsbruck/Austria.

The job will involve major developments in Gnuastro for the missing features or things that can be improved for Low Surface Brightness optimized reduction pipelines and high-level science from it (Gnuastro's MakeCatalog for example).

Once tested in the ARRAKIHS/OAJ pipelines, all those features will be brought into the core of Gnuastro for everyone to use in any pipeline! This is thus a major development in Gnuastro's history!

Anyone with a B.Sc. degree or higher can apply! So please share this announcement with anyone you think may be interested. People with an M.Sc. or PhD are also welcome to apply; it is a "scientific"/research software engineer position after all, and we expect to publish many papers on the algorithms/tools that we develop.

If you can't wait to get your hands dirty and want to improve your profile for the application, there is a nice checklist in our Google Summer of Code guidelines to help you get started and fix a bug or two before the deadline to include in your application (you have almost two months from now):

https://savannah.gnu.org/support/?110827#comment0

Please don't hesitate to direct any questions to the contact person in the main announcement above; we'd be happy to help clarify any doubts.

Categories: FLOSS Project Planets

FSF Blogs: FSF Giving Guide: Tech changes, freedom doesn't

Wed, 2023-11-22 16:32
This year's FSF Giving Guide is here: Make freedom your gift!
Categories: FLOSS Project Planets

FSF Events: Free Software Directory meeting on IRC: Friday, November 24, starting at 12:00 EST (17:00 UTC)

Tue, 2023-11-21 14:02
Join the FSF and friends on Friday, November 24, from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.
Categories: FLOSS Project Planets

GNUnet News: RFC 9498: The GNU Name System

Mon, 2023-11-20 18:00
RFC 9498: The GNU Name System

We are happy to announce that our GNU Name System (GNS) specification is now published as RFC 9498.

GNS addresses long-standing security and privacy issues in the ubiquitous Domain Name System (DNS). Previous attempts to secure DNS (DNSSEC) fail to address critical security issues such as end-to-end security, query privacy, censorship, and centralization of root zone governance. After 40 years of patching, it is time for a new beginning.

The GNU Name System is our contribution towards a decentralized and censorship-resistant domain name resolution system that provides a privacy-enhancing alternative to the Domain Name System (DNS).

As part of our work on RFC 9498, we have also contributed to the specification of the .alt top-level domain to be used by alternative name resolution systems and have established the GANA registry for ".alt".

GNS is implemented according to RFC 9498 in GNUnet 0.20.0. It is also implemented as part of GNUnet-Go.

We thank all reviewers for their comments. In particular, we thank D. J. Bernstein, S. Bortzmeyer, A. Farrel, E. Lear, and R. Salz for their insightful and detailed technical reviews. We thank J. Yao and J. Klensin for the internationalization reviews. We thank Dr. J. Appelbaum for suggesting the name "GNU Name System" and Dr. Richard Stallman for approving its use. We thank T. Lange and M. Wachs for their earlier contributions to the design and implementation of GNS. We thank NLnet and NGI DISCOVERY for funding work on the GNU Name System.

The work does not stop here: we encourage further implementations of RFC 9498 so we can learn more, both in terms of technical documentation and actual deployment experience. Further, we are currently working on the specification of the R5N DHT and BFT Set Reconciliation, which are underlying building blocks of GNS in GNUnet and are not covered by RFC 9498.

Categories: FLOSS Project Planets

gettext @ Savannah: GNU gettext 0.22.4 released

Sun, 2023-11-19 16:32

Download from https://ftp.gnu.org/pub/gnu/gettext/gettext-0.22.4.tar.gz

This is a bug-fix release.

New in this release:

  • Bug fixes:
    • AM_GNU_GETTEXT now recognizes a statically built libintl on macOS and AIX.
    • Build fixes on AIX.
Categories: FLOSS Project Planets

Andy Wingo: a whiff of whiffle

Thu, 2023-11-16 16:11

A couple nights ago I wrote about a superfluous Scheme implementation and promised to move on from sheepishly justifying my egregious behavior in my next note, and finally mention some results from this experiment. Well, no: I am back on my bullshit. Tonight I write about a couple of implementation details that discerning readers may find of interest: value representation, the tail call issue, and the standard library.

what is a value?

As a Lisp, Scheme is one of the early "dynamically typed" languages. These days when you say "type", people immediately think propositions as types, mechanized proof of program properties, and so on. But "type" has another denotation which is all about values and almost not at all about terms: one might say that vector-ref has a type, but it's not part of a proof; it's just that if you try to vector-ref a pair instead of a vector, you get a run-time error. You can imagine values as being associated with type tags: annotations that can be inspected at run-time for, for example, the sort of error that vector-ref will throw if you call it on a pair.

Scheme systems usually have a finite set of type tags: there are fixnums, booleans, strings, pairs, symbols, and such, and they all have their own tag. Even a Scheme system that provides facilities for defining new disjoint types (define-record-type et al) will implement these via a secondary type tag layer: for example, all record instances have the same primary tag, and you have to retrieve their record type descriptor to discriminate instances of different record types.
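
For instance, in an R7RS-style system (a generic sketch, not any particular implementation's internals):

  ;; define-record-type comes from R7RS (scheme base) / SRFI 9.
  (define-record-type <point>
    (make-point x y)
    point?
    (x point-x)
    (y point-y))

  (point? (make-point 1 2))   ; => #t: checks the record type descriptor
  (point? (cons 1 2))         ; => #f: a pair carries a different primary tag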

Anyway. In Whiffle there are immediate types and heap types. All values have a low-bit tag which is zero for heap objects and nonzero for immediates. For heap objects, the first word of the heap object has tagging in the low byte as well. The 3-bit heap tag for pairs is chosen so that pairs can just be two words, with no header word. There is another 3-bit heap tag for forwarded objects, which is used by the GC when evacuating a value. Other objects put their heap tags in the low 8 bits of the first word. Additionally there is a "busy" tag word value, used to prevent races when evacuating from multiple threads.

Finally, for generational collection of objects that can be "large" -- the definition of large depends on the collector implementation, and is not nicely documented, but is more than, like, 256 bytes -- anyway these objects might need to have space for a "remembered" bit in the objects themselves. This is not the case for pairs but is the case for, say, vectors: even though they are prolly smol, they might not be, and they need space for a remembered bit in the header.

tail calls

When I started Whiffle, I thought, let's just compile each Scheme function to a C function. Since all functions have the same type, clang and gcc will have no problem turning any tail call into a proper tail call.

This intuition was right and wrong: at optimization level -O2, this works great. We don't even do any kind of loop recognition / contification: loop iterations are tail calls and all is fine. (Not the most optimal implementation technique, but the assumption is that for our test cases, GC costs will dominate.)
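
As a small illustration (my sketch, not Whiffle's generated output), a Scheme loop is just a self tail call, which is emitted as return loop(...) in the C output and which gcc and clang turn into a jump at -O2:

  ;; Counting loop expressed as a tail call: no stack growth once the
  ;; C compiler performs tail-call optimization on the generated code.
  (define (count-down n)
    (let loop ((i n))
      (if (zero? i)
          'done
          (loop (- i 1)))))

  (count-down 1000000)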

However, when something goes wrong, I will need to debug the program to see what's up, and so you might think to compile at -O0 or -Og. In that case, somehow gcc does not compile to tail calls. One time while debugging a program I was flummoxed at a segfault during the call instruction; turns out it was just stack overflow, and the call was trying to write the return address into an unmapped page. For clang, I could use the musttail attribute; perhaps I should, to allow myself to debug properly.

Not being able to debug at -O0 with gcc is annoying. I feel like if GNU were an actual thing, we would have had the equivalent of a musttail attribute 20 years ago already. But it's not, and we still don't.

stdlib

So Whiffle makes C, and that C uses some primitives defined as inline functions. Whiffle actually lexically embeds user Scheme code with a prelude, having exposed a set of primitives to that prelude and to user code. The assumption is that the compiler will open-code all primitives, so that the conceit of providing a primitive from the Guile compilation host to the Whiffle guest magically works out, and that any reference to a free variable is an error. This works well enough, and it's similar to what we currently do in Hoot as well.

This is a quick and dirty strategy but it does let us grow the language to something worth using. I think I'll come back to this local maximum later if I manage to write about what Hoot does with modules.

coda

So, that's Whiffle: the Guile compiler front-end for Scheme, applied to an expression that prepends a user's program with a prelude, in a lexical context of a limited set of primitives, compiling to very simple C, in which tail calls are just return f(...), relying on the C compiler to inline and optimize and all that.

Perhaps next up: some results on using Whiffle to test Whippet. Until then, good night!

Categories: FLOSS Project Planets
