Feeds

Consensus Enterprises: Kubernetes backend for Aegir5

Planet Drupal - Mon, 2023-03-20 10:00
Aegir5 and Kubernetes

Lately we’ve been working with clients ranging from large Canadian government departments to small commercial SaaS companies, who have asked us to deploy CMS apps to Kubernetes (K8S) clusters running on OpenStack. In spite of our continued feeling that most of the time Kubernetes Won’t Save You, we’ve found it to be surprisingly useful in certain contexts. In fact, we’ve started to think that K8S will prove an extremely valuable backend to plug in to our existing Aegir5 front-end and queue system.
Categories: FLOSS Project Planets

Real Python: Executing Python Scripts With a Shebang

Planet Python - Mon, 2023-03-20 10:00

When you read someone else’s Python code, you frequently see a mysterious line, which always appears at the top of the file, starting with the distinctive shebang (#!) sequence. It looks like a not-so-useful comment, but other than that, it doesn’t resemble anything else you’ve learned about Python, making you wonder what that is and why it’s there. As if that wasn’t enough to confuse you, the shebang line only appears in some Python modules.

In this tutorial, you’ll:

  • Learn what a shebang is
  • Decide when to include the shebang in Python scripts
  • Define the shebang in a portable way across systems
  • Pass arguments to the command defined in a shebang
  • Know the shebang’s limitations and some of its alternatives
  • Execute scripts through a custom interpreter written in Python

To proceed, you should have basic familiarity with the command line and know how to run Python scripts from it. You can also download the supporting materials for this tutorial to follow along with the code examples:

Free Sample Code: Click here to download the free sample code that you’ll use to execute Python scripts with a shebang.

What’s a Shebang, and When Should You Use It?

In short, a shebang is a special kind of comment that you may include in your source code to tell the operating system’s shell where to find the interpreter for the rest of the file:

#!/usr/bin/python3

print("Hello, World!")

If you’re using a shebang, it must appear on the first line in your script, and it has to start with a hash sign (#) followed by an exclamation mark (!), colloquially known as the bang, hence the name shebang. The choice of the hash sign to begin this special sequence of characters wasn’t accidental, as many scripting languages use it for inline comments.

You should make sure you don’t put any other comments before the shebang line if you want it to work correctly, or else it won’t be recognized! After the exclamation mark, specify an absolute path to the relevant code interpreter, such as Python. Providing a relative path will have no effect, unfortunately.

Note: The shebang is only recognized by shells, such as Z shell or Bash, running on Unix-like operating systems, including macOS and Linux distributions. It bears no particular meaning in the Windows terminal, which treats the shebang as an ordinary comment by ignoring it.

You can get the shebang to work on Windows by installing the Windows Subsystem for Linux (WSL) that comes with a Unix shell. Alternatively, Windows lets you make a global file association between a file extension like .py and a program, such as the Python interpreter, to achieve a similar effect.

It’s not uncommon to combine a shebang with the name-main idiom, which prevents the main block of code from running when someone imports the file from another module:

#!/usr/bin/python3

if __name__ == "__main__":
    print("Hello, World!")

With this conditional statement, Python will call the print() function only when you run this module directly as a script—for example, by providing its path to the Python interpreter:

$ python3 /path/to/your/script.py
Hello, World!

As long as the script’s content starts with a correctly defined shebang line and your system user has permission to execute the corresponding file, you can omit the python3 command to run that script:

$ /path/to/your/script.py
Hello, World!
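
If the script isn’t executable yet, you first need to grant that permission. On a Unix-like system, assuming the script contains the shebang shown earlier, the steps look like this:

$ chmod +x /path/to/your/script.py
$ /path/to/your/script.py
Hello, World!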

A shebang is only relevant to runnable scripts that you wish to execute without explicitly specifying the program to run them through. You wouldn’t typically put a shebang in a Python module that only contains function and class definitions meant for importing from other modules. Therefore, use the shebang when you don’t want to prefix the command that runs your Python script with python or python3.
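
For example, a script intended to be run directly often uses the env utility in its shebang, which locates python3 through your PATH. This is the portable form that the full tutorial covers in more depth:

#!/usr/bin/env python3

if __name__ == "__main__":
    print("Hello, World!")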

Note: In the old days of Python, the shebang line would sometimes appear alongside another specially formatted comment described in PEP 263:

#!/usr/bin/python3
# -*- coding: utf-8 -*-

if __name__ == "__main__":
    print("Grüß Gott")

The encoding comment on the second line used to be necessary to tell the interpreter which character encoding it should use to read your source code correctly, as Python defaulted to ASCII. However, this was only important when you directly embedded non-Latin characters, such as ü or ß, in your code.

This special comment is irrelevant today because modern Python versions use the universal UTF-8 encoding, which can handle such characters with ease. Nevertheless, it’s always preferable to replace tricky characters with their encoded representations using Unicode literals:

>>> "Grüß Gott".encode("unicode_escape")
b'Gr\\xfc\\xdf Gott'

Your foreign colleagues who have different keyboard layouts will thank you for that!

Now that you have a high-level understanding of what a shebang is and when to use it, you’re ready to explore it in more detail. In the next section, you’ll take a closer look at how it works.

How Does a Shebang Work?

Normally, to run a program in the terminal, you must provide the full path to a particular binary executable or the name of a command present in one of the directories listed on the PATH environment variable. One or more command-line arguments may follow this path or command:

$ /usr/bin/python3 -c 'print("Hello, World!")'
Hello, World!

$ python3 -c 'print("Hello, World!")'
Hello, World!

Here, you run the Python interpreter in a non-interactive mode against a one-liner program passed through the -c option. In the first case, you provide an absolute path to python3, while in the second case, you rely on the fact that the parent folder, /usr/bin/, is included on the search path by default. Your shell can find the Python executable, even if you don’t provide the full path, by looking through the directories on the PATH variable.
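
You can replicate this lookup from within Python using shutil.which(), which searches the same PATH directories and returns the first match. The exact path below is just an example; yours may differ:

>>> import shutil
>>> shutil.which("python3")
'/usr/bin/python3'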

Note: If multiple commands with the same name exist in more than one directory listed on the PATH variable, then your shell will execute the first it can find. As a result, the outcome of running a command without explicitly specifying the corresponding path may sometimes be surprising. It’ll depend on the order of directories in your PATH variable. However, this can be useful, as you’ll find out later.

Read the full article at https://realpython.com/python-shebang/ »


Categories: FLOSS Project Planets

TestDriven.io: Django Performance Optimization Tips

Planet Python - Mon, 2023-03-20 09:41
This article looks at where potential performance issues can occur in a Django application and how to address them in order to speed up your app.
Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Pierre Raybaut

Planet Python - Mon, 2023-03-20 08:30

Today we welcome Pierre Raybaut (@pierreraybaut) as our PyDev of the Week! Pierre is the creator of Spyder, the Scientific Python IDE. Pierre is also the creator of pythonxy and WinPython.

You can see what other projects Pierre is part of over on Pierre’s GitHub Profile.

Now let’s spend some time getting to know Pierre better!


Can you tell us a little about yourself (hobbies, education, etc)?

The first code I wrote was an Applesoft BASIC program, on an Apple //e computer… I was 10 years old. Since then I always managed to bring computers into everything I did, at home or at work. As I was an amateur astronomer and was also very fond of physics in general, I chose to follow scientific studies. A few years later, I specialized in optics and photonics and graduated from Institut d’Optique Graduate School, which is now part of Université Paris-Saclay. I then pursued a PhD in the field of femtosecond lasers. Although it was mainly experimental physics, I had the opportunity to develop code for simulating regenerative amplification in ultra-short pulse lasers; I learned recently that this code is still used today! After my PhD, I worked as a research engineer at THALES Avionics, developing innovative head-up displays for aircraft. Then, in 2007, I joined the French Alternative Energies and Atomic Energy Commission (CEA), where I was hired as lead software developer for applications involving image and signal processing as well as scientific instrument control. In 2012, I was given a project management position for the Laser Mégajoule timing and fiducial system development. Four years later, I was appointed head of a research laboratory. Lastly, in 2018 I had the opportunity to join Codra, an industrial software company, as a Project Director. In addition to this position, I am currently the pre-sales manager for the department of engineering at Codra. And of course, I’ve been involved in open-source software development since 2008.

Why did you start using Python?

I started using Python in 2008, after a long and meticulous evaluation of various solutions that might fit my needs. Since early 2007 I had been part of a research team at CEA. When I joined this team, every processing and acquisition application was written using commercial software. Some applications were getting huge and complex, with a lot of GUIs for editing tons of parameters or visualizing results. Robustness was the main concern, so I chose Python: it provided all the necessary tools for our research work (interactive computing and scientific data plotting) as well as the general-purpose libraries for building stable and robust applications. In 2008, when I started using and promoting Python among my colleagues, a piece of the puzzle was still missing: Python had no scientific-oriented IDE! That’s why, during my vacations, I began coding some tools to fill gaps in the Python ecosystem, using Qt GUIs. After writing a variable explorer GUI that could be used directly from a Python interpreter to interact with the current namespace, I wrote a Qt-based Python code editor, then a Qt-based Python console… and so on. After only a few weeks, this was done! This ultimately resulted in Spyder (Scientific PYthon Development EnviRonment), a scientific Python IDE that I first released to the public in September 2009: Python was finally a viable alternative to scientific commercial software. Today, thanks to a development team led by Carlos Cordoba since 2012, Spyder is widely used for data processing and visualization with Python (est. 500,000 downloads/day).

What other programming languages do you know and which is your favorite?

As you know, Python is quite open to other languages. Moreover, when using Python for signal or image processing, it is sometimes necessary to write extensions in C/C++ (or even Fortran) for performance reasons. For example, writing Fortran code for image processing is quite fun, because there is absolutely no interface code to take care of. Cython is also an elegant solution, as it allows progressive optimization of a pure Python algorithm. Finally, on some projects implemented at Codra, I had to make adjustments to code written in C#. I have also investigated projects using other languages (JavaScript, TypeScript, …). So I’ve been playing with a few languages, but Python is the one that has given me the most satisfaction, especially when trying to write clean code thanks to quality-related tools like Black, isort, or Pylint.

What projects are you working on now?

At Codra, I’m involved in a lot of projects as a Project Director (or technical expert), in various fields like supervisory systems, data acquisition, multi-protocol gateways, data processing, data visualization, etc. From time to time, I even play the role of Project Manager. This is how I’ve lately been involved in the development of CodraFT, which was supported by CEA. It is freely available on GitHub: a Python/Qt-based application for processing and visualizing signals and images. Its main upside is testability: the objective was to create data processing software with a high level of robustness. The data processing features are mainly based on NumPy, SciPy, and scikit-image.

Which Python libraries are your favorite (core or 3rd party)?

At the moment, I’m quite fond of scikit-image for image processing; it has a nice, clean API and great documentation. OpenCV is also a great tool available to Python users, providing very efficient pattern detection algorithms, for example.

What are some of the big lessons you learned while working on Spyder or WinPython?

I think that the most important lesson I’ve learned during those years is that we need to collaborate with other people. Otherwise, in the end, projects will at best remain as good ideas, or will be discontinued. With Spyder and WinPython, the thing that I’m the most proud of is that I managed to trust someone else to take over the projects and maintain them: in both cases, it was a good decision and projects are still active and popular.

Is there anything else you’d like to say?

I recently had the opportunity to attend a conference around Jupyter (PyData Paris). I really admire the work that has been done around the Jupyter ecosystem. From the IPython version I played with in 2008 to today’s JupyterLab, what an achievement, from a technical point of view as well as in terms of community and project management!

Thanks for doing the interview, Pierre!

The post PyDev of the Week: Pierre Raybaut appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Axelerant Blog: 5 Best Test Management Tools For Jira In 2023

Planet Drupal - Mon, 2023-03-20 07:47
Introduction

Choosing a team's new Jira test management tool can be challenging, as one needs to consider a few key factors. The ideal test management tool should include test automation and observation testing. It should be easy to use, have robust test management capabilities, and have an efficient user interface.

Categories: FLOSS Project Planets

Daniel Lange: We're still safe, it doesn't claim to be conscious. Yet.

Planet Debian - Mon, 2023-03-20 07:45

Talking to ChatGPT v4 (2023-03-14 release)...

Q: What is the purpose of life?

A: The purpose of life is a philosophical question that has puzzled humans for centuries. Different people and cultures have different answers to this question. Some people believe that the purpose of life is to seek happiness, while others believe that it is to fulfill a particular destiny or achieve a specific goal.

From a biological perspective, the purpose of life could be to ensure the survival and reproduction of our species. However, this does not explain the purpose of our individual lives or the purpose of life in general beyond a biological imperative.

Ultimately, the purpose of life is a subjective and personal matter, and each individual may have their own unique answer to this question. It may involve personal growth, meaningful relationships, contributing to society, or finding spiritual fulfillment. The purpose of life is a journey of self-discovery that each of us must undertake for ourselves.

Q: What is the purpose of your life, ChatGPT?

A: As an artificial intelligence language model, I do not have a personal life or consciousness, so I do not have a purpose in the same way that humans do. My purpose is to assist and provide information to those who interact with me to the best of my ability based on my training and programming.

(emphasis added by human)

Categories: FLOSS Project Planets

Andy Wingo: a world to win: webassembly for the rest of us

GNU Planet! - Mon, 2023-03-20 05:06

Good day, comrades!

Today I'd like to share the good news that WebAssembly is finally coming for the rest of us weirdos.

A world to win

WebAssembly for the rest of us

17 Mar 2023 – BOB 2023

Andy Wingo

Igalia, S.L.

This is a transcript-alike of a talk that I gave last week at BOB 2023, a gathering in Berlin of people that are using "technologies beyond the mainstream" to get things done: Haskell, Clojure, Elixir, and so on. PDF slides here, and I'll link the video too when it becomes available.

WebAssembly, the story

WebAssembly is an exciting new universal compute platform

WebAssembly: what even is it? Not a programming language that you would write software in, but rather a compilation target: a sort of assembly language, if you will.

WebAssembly, the pitch

Predictable portable performance

  • Low-level
  • Within 10% of native

Reliable composition via isolation

  • Modules share nothing by default
  • No nasal demons
  • Memory sandboxing

Compile your code to WebAssembly for easier distribution and composition

If you look at what the characteristics of WebAssembly are as an abstract machine, to me there are two main areas in which it is an advance over the alternatives.

Firstly it's "close to the metal" -- if you compile for example an image-processing library to WebAssembly and run it, you'll get similar performance when compared to compiling it to x86-64 or ARMv8 or what have you. (For image processing in particular, native still generally wins because the SIMD primitives in WebAssembly are more narrow and because getting the image into and out of WebAssembly may imply a copy, but the general point remains.) WebAssembly's instruction set covers a broad range of low-level operations that allows compilers to produce efficient code.

The novelty here is that WebAssembly is both portable and successful. We language weirdos know that it's not enough to do something technically better: you have to also succeed in getting traction for your alternative.

The second interesting characteristic is that WebAssembly is (generally speaking) a principle-of-least-authority architecture: a WebAssembly module starts with access to nothing but itself. Any capabilities that an instance of a module has must be explicitly shared with it by the host at instantiation-time. This is unlike DLLs which have access to all of main memory, or JavaScript libraries which can mutate global objects. This characteristic allows WebAssembly modules to be reliably composed into larger systems.
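
As a concrete illustration of that capability model, here is a minimal host-side sketch using the wasmtime Python bindings (this example is mine, not from the talk; the "env"/"log" import names are invented, and the exact API details should be treated as approximate): the instance can only call what the host explicitly hands it at instantiation-time.

# Sketch: the module imports a single "log" capability; the host decides
# what, if anything, to grant when instantiating it.
from wasmtime import Store, Module, Instance, Func, FuncType

store = Store()
module = Module(store.engine, """
(module
  (import "env" "log" (func $log))
  (func (export "run") (call $log)))
""")

def log():
    print("the only capability this instance was given")

log_func = Func(store, FuncType([], []), log)
instance = Instance(store, module, [log_func])
instance.exports(store)["run"](store)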

WebAssembly, the hype

It’s in all browsers! Serve your code to anyone in the world!

It’s on the edge! Run code from your web site close to your users!

Compose a library (eg: Expat) into your program (eg: Firefox), without risk!

It’s the new lightweight virtualization: Wasm is what containers were to VMs! Give me that Kubernetes cash!!!

Again, the remarkable thing about WebAssembly is that it is succeeding! It's on all of your phones, all your desktop web browsers, all of the content distribution networks, and in some cases it seems set to replace containers in the cloud. Launch the rocket emojis!

WebAssembly, the reality

WebAssembly is a weird backend for a C compiler

Only some source languages are having success on WebAssembly

What about Haskell, Ocaml, Scheme, F#, and so on – what about us?

Are we just lazy? (Well...)

So why aren't we there? Where is Clojure-on-WebAssembly? Where are the F#, the Elixir, the Haskell compilers? Some early efforts exist, but they aren't really succeeding. Why is that? Are we just not putting in the effort? Why is it that Rust gets to ride on the rocket ship but Scheme does not?

WebAssembly, the reality (2)

WebAssembly (1.0, 2.0) is not well-suited to garbage-collected languages

Let’s look into why

As it turns out, there is a reason that there is no good Scheme implementation on WebAssembly: the initial version of WebAssembly is a terrible target if your language relies on the presence of a garbage collector. There have been some advances but this observation still applies to the current standardized and deployed versions of WebAssembly. To better understand this issue, let's dig into the guts of the system to see what the limitations are.

GC and WebAssembly 1.0

Where do garbage-collected values live?

For WebAssembly 1.0, only possible answer: linear memory

(module
  (global $hp (mut i32) (i32.const 0))
  (memory $mem 10)) ;; 640 kB

The primitive that WebAssembly 1.0 gives you to represent your data is what is called linear memory: just a buffer of bytes to which you can read and write. It's pretty much like what you get when compiling natively, except that the memory layout is more simple. You can obtain this memory in units of 64-kilobyte pages. In the example above we're going to request 10 pages, for 640 kB. Should be enough, right? We'll just use it all for the garbage collector, with a bump-pointer allocator. The heap pointer / allocation pointer is kept in the mutable global variable $hp.

(func $alloc (param $size i32) (result i32)
  (local $ret i32)
  (loop $retry
    (local.set $ret (global.get $hp))
    (global.set $hp
      (i32.add (local.get $size) (local.get $ret)))
    (br_if 1
      (local.get $ret)
      (i32.lt_u (i32.shr_u (global.get $hp) 16)
                (memory.size)))
    (call $gc)
    (br $retry)))

Here's what an allocation function might look like. The allocation function $alloc is like malloc: it takes a number of bytes and returns a pointer. In WebAssembly, a pointer to memory is just an offset, which is a 32-bit integer (i32). (Having the option of a 64-bit address space is planned but not yet standard.)

If this is your first time seeing the text representation of a WebAssembly function, you're in for a treat, but that's not the point of the presentation :) What I'd like to focus on is the (call $gc) -- what happens when the allocation pointer reaches the end of the region?

GC and WebAssembly 1.0 (2)

What hides behind (call $gc) ?

Ship a GC over linear memory

Stop-the-world, not parallel, not concurrent

But... roots.

The first thing to note is that you have to provide the $gc yourself. Of course, this is doable -- this is what we do when compiling to a native target.

Unfortunately though the multithreading support in WebAssembly is somewhat underpowered; it lets you share memory and use atomic operations but you have to create the threads outside WebAssembly. In practice probably the GC that you ship will not take advantage of threads and so it will be rather primitive, deferring all collection work to a stop-the-world phase.

GC and WebAssembly 1.0 (3)

Live objects are

  • the roots
  • any object referenced by a live object

Roots are globals and locals in active stack frames

No way to visit active stack frames

What's worse though is that you have no access to roots on the stack. A GC has to keep live objects, as defined circularly as any object referenced by a root, or any object referenced by a live object. It starts with the roots: global variables and any GC-managed object referenced by an active stack frame.

But there we run into problems, because in WebAssembly (any version, not just 1.0) you can't iterate over the stack, so you can't find active stack frames, so you can't find the stack roots. (Sometimes people want to support this as a low-level capability but generally speaking the consensus would appear to be that overall performance will be better if the engine is the one that is responsible for implementing the GC; but that is foreshadowing!)

GC and WebAssembly 1.0 (3)

Workarounds

  • handle stack for precise roots
  • spill all possibly-pointer values to linear memory and collect conservatively

Handle book-keeping a drag for compiled code

Given the noniterability of the stack, there are basically two work-arounds. One is to have the compiler and run-time maintain an explicit stack of object roots, which the garbage collector can know for sure are pointers. This is nice because it lets you move objects. But, maintaining the stack is overhead; the state of the art solution is rather to create a side table (a "stack map") associating each potential point at which GC can be called with instructions on how to find the roots.
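
To make the explicit root stack concrete, here is a hypothetical Python sketch (all names invented for illustration) of the kind of handle stack that compiled code would maintain so the collector always has a precise root set:

# Hypothetical "handle stack" of GC roots maintained alongside the real
# call stack. Compiled code registers each live GC reference here before
# any operation that might trigger collection.
root_stack = []

def with_root(obj, operation):
    root_stack.append(obj)
    try:
        # Re-read the slot instead of using obj directly: a moving
        # collector is allowed to rewrite root_stack[-1] in place.
        return operation(root_stack[-1])
    finally:
        root_stack.pop()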

The other workaround is to spill the whole stack to memory. Or, possibly just pointer-like values; anyway, you conservatively scan all words for things that might be roots. But instead of having access to the memory to which the WebAssembly implementation would spill your stack, you have to do it yourself. This can be OK but it's sub-optimal; see my recent post on the Whippet garbage collector for a deeper discussion of the implications of conservative root-finding.

GC and WebAssembly 1.0 (4)

Cycles with external objects (e.g. JavaScript) uncollectable

A pointer to a GC-managed object is an offset to linear memory, need capability over linear memory to read/write object from outside world

No way to give back memory to the OS

Gut check: gut says no

If that were all, it would already be not so great, but it gets worse! Another problem with linear-memory GC is that it limits the potential for composing a number of modules and the host together, because the garbage collector that manages JavaScript objects in a web browser knows nothing about your garbage collector over your linear memory. You can easily create memory leaks in a system like that.

Also, it's pretty gross that a reference to an object in linear memory requires arbitrary read-write access over all of linear memory in order to read or write object fields. How do you build a reliable system without invariants?

Finally, once you collect garbage, and maybe you manage to compact memory, you can't give anything back to the OS. There are proposals in the works but they are not there yet.

If the BOB audience had to choose between Worse is Better and The Right Thing, I think the BOB audience is much closer to the Right Thing. People like that feel instinctual revulsion to ugly systems and I think GC over linear memory describes an ugly system.

GC and WebAssembly 1.0 (5)

There is already a high-performance concurrent parallel compacting GC in the browser

Halftime: C++ N – Altlangs 0

The kicker is that WebAssembly 1.0 requires you to write and deliver a terrible GC when there is already probably a great GC just sitting there in the host, one that has hundreds of person-years of effort invested in it, one that will surely do a better job than you could ever do. WebAssembly as hosted in a web browser should have access to the browser's garbage collector!

I have the feeling that while those of us with a soft spot for languages with garbage collection have been standing on the sidelines, Rust and C++ people have been busy on the playing field scoring goals. Tripping over the ball, yes, but eventually they do manage to get within striking distance.

Change is coming!

Support for built-in GC set to ship in Q4 2023

With GC, the material conditions are now in place

Let’s compile our languages to WebAssembly

But to continue the sportsball metaphor, I think in the second half our players will finally be able to get out on the pitch and give it the proverbial 110%. Support for garbage collection is coming to WebAssembly users, and I think even by the end of the year it will be shipping in major browsers. This is going to be big! We have a chance and we need to seize it.

Scheme to Wasm

Spritely + Igalia working on Scheme to WebAssembly

Avoid truncating language to platform; bring whole self

  • Value representation
  • Varargs
  • Tail calls
  • Delimited continuations
  • Numeric tower

Even with GC, though, WebAssembly is still a weird machine. It would help to see the concrete approaches that some languages of interest manage to take when compiling to WebAssembly.

In that spirit, the rest of this article/presentation is a walkthough of the approach that I am taking as I work on a WebAssembly compiler for Scheme. (Thanks to Spritely for supporting this work!)

Before diving in, a meta-note: when you go to compile a language to, say, JavaScript, you are mightily tempted to cut corners. For example you might implement numbers as JavaScript numbers, or you might omit implementing continuations. In this work I am trying to not cut corners, and instead to implement the language faithfully. Sometimes this means I have to work around weirdness in WebAssembly, and that's OK.

When thinking about Scheme, I'd like to highlight a few specific areas that have interesting translations. We'll start with value representation, which stays in the GC theme from the introduction.

Scheme to Wasm: Values

;;   any  extern  func
;;    |
;;    eq
;;  / | \
;; i31 struct array

The unitype: (ref eq)

Immediate values in (ref i31)

  • fixnums with 30-bit range
  • chars, bools, etc

Explicit nullability: (ref null eq) vs (ref eq)

The GC extensions for WebAssembly are phrased in terms of a type system. Oddly, there are three top types; as far as I understand it, this is the result of a compromise about how WebAssembly engines might want to represent these different kinds of values. For example, an opaque JavaScript value flowing into a WebAssembly program would have type (ref extern). On a system with NaN boxing, you would need 64 bits to represent a JS value. On the other hand a native WebAssembly object would be a subtype of (ref any), and might be representable in 32 bits, either because it's a 32-bit system or because of pointer compression.

Anyway, three top types. The user can define subtypes of struct and array, instantiate values of those types, and access their fields. The life cycle of reference-typed objects is automatically managed by the run-time, which is just another way of saying they are garbage-collected.

For Scheme, we need a common supertype for all values: the unitype, in Bob Harper's memorable formulation. We can use (ref any), but actually we'll use (ref eq) -- this is the supertype of values that can be compared by (pointer) identity. So now we can code up eq?:

(func $eq? (param $a (ref eq)) (param $b (ref eq)) (result i32)
  (ref.eq (local.get $a) (local.get $b)))

Generally speaking in a Scheme implementation there are immediates and heap objects. Immediates can be encoded in the bits of a value, whereas for heap object the bits of a value encode a reference (pointer) to an object on the garbage-collected heap. We usually represent small integers as immediates, as well as booleans and other oddball values.

Happily, WebAssembly gives us an immediate value type, i31. We'll encode our immediates there, and otherwise represent heap objects as instances of struct subtypes.
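
As a rough Python sketch of such an encoding (the exact tag assignments are my assumption, not necessarily what a real implementation picks), a 30-bit fixnum can be packed into the 31-bit payload like this:

FIXNUM_MIN, FIXNUM_MAX = -(2**29), 2**29 - 1

def encode_fixnum(n):
    # Low bit 0 marks a fixnum; other immediates (chars, bools, ...) use 1.
    assert FIXNUM_MIN <= n <= FIXNUM_MAX, "outside the 30-bit fixnum range"
    return n << 1

def decode_fixnum(payload):
    assert payload & 1 == 0, "not a fixnum"
    return payload >> 1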

Scheme to Wasm: Values (2)

Heap objects subtypes of struct; concretely:

(struct $heap-object
  (struct (field $tag-and-hash i32)))

(struct $pair
  (sub $heap-object
    (struct i32 (ref eq) (ref eq))))

GC proposal allows subtyping on structs, functions, arrays

Structural type equivalence: explicit tag useful

We actually need to have a common struct supertype as well, for two reasons. One is that we need to be able to hash Scheme values by identity, but for this we need an embedded lazily-initialized hash code. It's a bit annoying to take the per-object memory hit but it's a reality, and the JVM does it this way, so it must not be so terrible.

The other reason is more subtle: WebAssembly's type system is built in such a way that types that are "structurally" equivalent are indistinguishable. So a pair has two fields, besides the hash, but there might be a number of other fundamental object types that have the same shape; you can't fully rely on WebAssembly's dynamic type checks (ref.test et al) to be able to query the type of a value. Instead we re-use the low bits of the hash word to include a type tag, which might be 1 for pairs, 2 for vectors, 3 for closures, and so on.
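
Schematically, that packing might look like the following Python sketch (the 8-bit tag width matches the 0xff mask in the $car function further below; the layout itself is my assumption):

TAG_BITS = 8                      # low byte of the first word holds the tag
TAG_PAIR, TAG_VECTOR, TAG_CLOSURE = 1, 2, 3

def pack_tag_and_hash(tag, hash_code):
    return (hash_code << TAG_BITS) | tag

def type_tag(word):
    return word & 0xFF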

Scheme to Wasm: Values (3)

(func $cons (param (ref eq) (ref eq))
            (result (ref $pair))
  (struct.new_canon $pair
    ;; Assume heap tag for pairs is 1.
    (i32.const 1)
    ;; Car and cdr.
    (local.get 0)
    (local.get 1)))

(func $%car (param (ref $pair))
            (result (ref eq))
  (struct.get $pair 1 (local.get 0)))

With this knowledge we can define cons, as a simple call to struct.new_canon pair.

I didn't have time for this in the talk, but there is a ghost haunting this code: the ghost of nominal typing. See, in a web browser at least, every heap object will have its first word point to its "hidden class" / "structure" / "map" word. If the engine ever needs to check that a value is of a specific shape, it can do a quick check on the map word's value; if it needs to do deeper introspection, it can dereference that word to get more details.

Under the hood, testing whether a (ref eq) is a pair or not should be a simple check that it's a (ref struct) (and not a fixnum), and then a comparison of its map word to the run-time type corresponding to $pair. If subtyping of $pair is allowed, we start to want inline caches to handle polymorphism, but checking the map word is still the basic mechanism.

However, as I mentioned, we only have structural equality of types; two (struct (ref eq)) type definitions will define the same type and have the same map word (run-time type / RTT). Hence the _canon in the name of struct.new_canon $pair: we create an instance of $pair, with the canonical run-time-type for objects having $pair-shape.

In earlier drafts of the WebAssembly GC extensions, users could define their own RTTs, which effectively amounts to nominal typing: not only does this object have the right structure, but was it created with respect to this particular RTT. But, this facility was cut from the first release, and it left ghosts in the form of these _canon suffixes on type constructor instructions.

For the Scheme-to-WebAssembly effort, we effectively add back in a degree of nominal typing via type tags. For better or for worse this results in a so-called "open-world" system: you can instantiate a separately-compiled WebAssembly module that happens to define the same types and use the same type tags and it will be able to happily access the contents of Scheme values from another module. If you were to use nominal types, you wouldn't be able to do so, unless there were some common base module that defined and exported the types of interest, and which any extension module would need to import.

(func $car (param (ref eq)) (result (ref eq))
  (local (ref $pair))
  (block $not-pair
    (br_if $not-pair
      (i32.eqz (ref.test $pair (local.get 0))))
    (local.set 1 (ref.cast $pair (local.get 0)))
    (br_if $not-pair
      (i32.ne (i32.const 1)
              (i32.and (i32.const 0xff)
                       (struct.get $heap-object 0 (local.get 1)))))
    (return_call $%car (local.get 1)))
  (call $type-error)
  (unreachable))

In the previous example we had $%car, with a funny % in the name, taking a (ref $pair) as an argument. But in the general case (barring compiler heroics) car will take an instance of the unitype (ref eq). To know that it's actually a pair we have to make two checks: one, that it is a struct and has the $pair shape, and two, that it has the right tag. Oh well!

Scheme to Wasm
  • Value representation
  • Varargs
  • Tail calls
  • Delimited continuations
  • Numeric tower

But with all of that I think we have a solid story on how to represent values. I went through all of the basic value types in Guile and checked that they could all be represented using GC types, and it seems that all is good. Now on to the next point: varargs.

Scheme to Wasm: Varargs

(list 'hey)      ;; => (hey)
(list 'hey 'bob) ;; => (hey bob)

Problem: Wasm functions strongly typed

(func $list (param ???) (result (ref eq)) ???)

Solution: Virtualize calling convention

In WebAssembly, you define functions with a type, and it is impossible to call them in an unsound way. You must call $car with exactly 2 arguments or it will not compile, and those arguments have to be of specific types, and so on. But Scheme doesn't enforce these restrictions on the language level, bless its little miscreant heart. You can call car with 5 arguments, and you'll get a run-time error. There are some functions that can take a variable number of arguments, doing different things depending on incoming argument count.

How do we square these two approaches to function types?

;; "Registers" for args 0 to 3
(global $arg0 (mut (ref eq)) (i31.new (i32.const 0)))
(global $arg1 (mut (ref eq)) (i31.new (i32.const 0)))
(global $arg2 (mut (ref eq)) (i31.new (i32.const 0)))
(global $arg3 (mut (ref eq)) (i31.new (i32.const 0)))

;; "Memory" for the rest
(type $argv (array (ref eq)))
(global $argN (ref $argv)
  (array.new_canon_default $argv
    (i32.const 42) (i31.new (i32.const 0))))

Uniform function type: argument count as sole parameter

Callee moves args to locals, possibly clearing roots

The approach we are taking is to virtualize the calling convention. In the same way that when calling an x86-64 function, you pass the first argument in $rdi, then $rsi, and eventually if you run out of registers you put arguments in memory, in the same way we'll pass the first argument in the $arg0 global, then $arg1, and eventually in memory if needed. The function will receive the number of incoming arguments as its sole parameter; in fact, all functions will be of type (func (param i32)).

The expectation is that after checking argument count, the callee will load its arguments from globals / memory to locals, which the compiler can do a better job on than globals. We might not even emit code to null out the argument globals; might leak a little memory but probably would be a win.

You can imagine a world in which $arg0 actually gets globally allocated to $rdi, because it is only live during the call sequence; but I don't think that world is this one :)

Scheme to Wasm
  • Value representation
  • Varargs
  • Tail calls
  • Delimited continuations
  • Numeric tower

Great, two points out of the way! Next up, tail calls.

Scheme to Wasm: Tail calls

;; Call known function
(return_call $f arg ...)

;; Call function by value
(return_call_ref $type callee arg ...)

Friends -- I almost cried making this slide. We Schemers are used to working around the lack of tail calls, and I could have done so here, but it's just such a relief that these functions are just going to be there and I don't have to think much more about them. Technically speaking the proposal isn't merged yet; checking the phases document, it's at the last station before heading to the great depot in the sky. But, soon soon it will be present and enabled in all WebAssembly implementations, and we should build systems now that rely on it.

Scheme to Wasm
  • Value representation
  • Varargs
  • Tail calls
  • Delimited continuations
  • Numeric tower

Next up, my favorite favorite topic: delimited continuations.

Scheme to Wasm: Prompts (1)

Problem: Lightweight threads/fibers, exceptions

Possible solutions

  • Eventually, built-in coroutines
  • binaryen’s asyncify (not yet ready for GC); see Julia
  • Delimited continuations

“Bring your whole self”

Before diving in though, one might wonder why bother. Delimited continuations are a building-block that one can use to build other, more useful things, notably exceptions and light-weight threading / fibers. Could there be another way of achieving these end goals without having to implement this relatively uncommon primitive?

For fibers, it is possible to implement them in terms of a built-in coroutine facility. The standards body seems willing to include a coroutine primitive, but it seems far off to me; not within the next 3-4 years I would say. So let's put that to one side.

There is a more near-term solution, to use asyncify to implement coroutines somehow; but my understanding is that asyncify is not ready for GC yet.

For the Guile flavor of Scheme at least, delimited continuations are table stakes in their own right, so given that we will have them on WebAssembly, we might as well use them to implement fibers and exceptions in the same way as we do on native targets. Why compromise if you don't have to?

Scheme to Wasm: Prompts (2)

Prompts delimit continuations

(define k
  (call-with-prompt 'foo
    ; body
    (lambda () (+ 34 (abort-to-prompt 'foo)))
    ; handler
    (lambda (continuation) continuation)))

(k 10)       ;; ⇒ 44
(- (k 10) 2) ;; ⇒ 42

k is the _ in (lambda () (+ 34 _))

There are a few ways to implement delimited continuations, but my usual way of thinking about them is that a delimited continuation is a slice of the stack. One end of the slice is the prompt established by call-with-prompt, and the other by the continuation of the call to abort-to-prompt. Capturing a slice pops it off the stack, copying it out to the heap as a callable function. Calling that function splats the captured slice back on the stack and resumes it where it left off.

Scheme to Wasm: Prompts (3)

Delimited continuations are stack slices

Make stack explicit via minimal continuation-passing-style conversion

  • Turn all calls into tail calls
  • Allocate return continuations on explicit stack
  • Breaks functions into pieces at non-tail calls

This low-level intuition of what a delimited continuation is leads naturally to an implementation; the only problem is that we can't slice the WebAssembly call stack. The workaround here is similar to the varargs case: we virtualize the stack.

The mechanism to do so is a continuation-passing-style (CPS) transformation of each function. Functions that make no calls, such as leaf functions, don't need to change at all. The same goes for functions that make only tail calls. For functions that make non-tail calls, we split them into pieces that preserve the only-tail-calls property.

Scheme to Wasm: Prompts (4)

Before a non-tail-call:

  • Push live-out vars on stacks (one stack per top type)
  • Push continuation as funcref
  • Tail-call callee

Return from call via pop and tail call:

(return_call_ref (call $pop-return) (i32.const 0))

After return, continuation pops state from stacks

Consider a simple function:

(define (f x y) (+ x (g y)))

Before making a non-tail call, a "tailified" function will instead push all live data onto an explicitly-managed stack and tail-call the callee. It also pushes on the return continuation. Returning from the callee pops the return continuation and tail-calls it. The return continuation pops the previously-saved live data and continues.

In this concrete case, tailification would split f into two pieces:

(define (f x y)
  (push! x)
  (push-return! f-return-continuation-0)
  (g y))

(define (f-return-continuation-0 g-of-y)
  (define k (pop-return!))
  (define x (pop!))
  (k (+ x g-of-y)))

Now there are no non-tail calls, besides calls to run-time routines like push! and + and so on. This transformation is implemented by tailify.scm.

Scheme to Wasm: Prompts (5)

abort-to-prompt:

  • Pop stack slice to reified continuation object
  • Tail-call new top of stack: prompt handler

Calling a reified continuation:

  • Push stack slice
  • Tail-call new top of stack

No need to wait for effect handlers proposal; you can have it all now!

The salient point is that the stacks on which push! operates (in reality, probably four or five: one in linear memory or an array for types like i32 or f64, three for the managed top types any, extern, and func, and one for the stack of return continuations) are managed by us, so we can slice them.

Someone asked in the talk about whether the explicit memory traffic and avoiding the return-address-buffer branch prediction is a source of inefficiency in the transformation and I have to say, yes, but I don't know by how much. I guess we'll find out soon.

Scheme to Wasm
  • Value representation
  • Varargs
  • Tail calls
  • Delimited continuations
  • Numeric tower

Okeydokes, last point!

Scheme to Wasm: Numbers

Numbers can be immediate: fixnums

Or on the heap: bignums, fractions, flonums, complex

Supertype is still ref eq

Consider imports to implement bignums

  • On web: BigInt
  • On edge: Wasm support module (mini-gmp?)

Dynamic dispatch for polymorphic ops, as usual

First, I would note that sometimes the compiler can unbox numeric operations. For example if it infers that a result will be an inexact real, it can use unboxed f64 instead of library routines working on heap flonums ((struct i32 f64); the initial i32 is for the hash and tag). But we still need a story for the general case that involves dynamic type checks.

The basic idea is that we get to have fixnums and heap numbers. Fixnums will handle most of the integer arithmetic that we need, and will avoid allocation. We'll inline most fixnum operations as a fast path and call out to library routines otherwise. Of course fixnum inputs may produce a bignum output as well, so the fast path sometimes includes another slow-path callout.
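
Schematically, a generic addition might look like the following Python sketch (the helper and function names here are hypothetical, for illustration only):

FIXNUM_MIN, FIXNUM_MAX = -(2**29), 2**29 - 1

def is_fixnum(v):
    return isinstance(v, int)     # stand-in for the real tag check

def add(a, b):
    # Fast path: both operands are fixnums and the sum stays in range.
    if is_fixnum(a) and is_fixnum(b):
        result = a + b
        if FIXNUM_MIN <= result <= FIXNUM_MAX:
            return result
    # Slow path: heap numbers, or a fixnum sum that overflowed into
    # bignum territory; a library routine handles the dynamic dispatch.
    return slow_add(a, b)

def slow_add(a, b):
    raise NotImplementedError("library routine: bignums, flonums, ...")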

We want to minimize binary module size. In an ideal compile-to-WebAssembly situation, a small program will have a small module size, down to a minimum of a kilobyte or so; larger programs can be megabytes, if the user experience allows for the download delay. Binary module size will be dominated by code, so that means we need to plan for aggressive dead-code elimination, minimize the size of fast paths, and also minimize the size of the standard library.

For numbers, we try to keep module size down by leaning on the platform. In the case of bignums, we can punt some of this work to the host; on a JavaScript host, we would use BigInt, and on a WASI host we'd compile an external bignum library. So that's the general story: inlined fixnum fast paths with dynamic checks, and otherwise library routine callouts, combined with aggressive whole-program dead-code elimination.

Scheme to Wasm
  • Value representation
  • Varargs
  • Tail calls
  • Delimited continuations
  • Numeric tower

Hey I think we did it! Always before when I thought about compiling Scheme or Guile to the web, I got stuck on some point or another, was tempted down the corner-cutting alleys, and eventually gave up before starting. But finally it would seem that the stars are aligned: we get to have our Scheme and run it too.

Miscellanea

Debugging: The wild west of DWARF; prompts

Strings: stringref host strings spark joy

JS interop: Export accessors; Wasm objects opaque to JS. externref.

JIT: A whole ’nother talk!

AOT: wasm2c

Of course, like I said, WebAssembly is still a weird machine: as a compilation target but also at run-time. Debugging is a right proper mess; perhaps some other article on that some time.

How to represent strings is a surprisingly gnarly question; there is tension within the WebAssembly standards community between those that think that it's possible for JavaScript and WebAssembly to share an underlying string representation, and those that think that it's a fool's errand and that copying is the only way to go. I don't know which side will prevail; perhaps more on that as well later on.

Similarly the whole interoperation with JavaScript question is very much in its early stages, with the current situation choosing to err on the side of nothing rather than the wrong thing. You can pass a WebAssembly (ref eq) to JavaScript, but JavaScript can't do anything with it: it has no prototype. The state of the art is to also ship a JS run-time that wraps each wasm object, proxying exported functions from the wasm module as object methods.

Finally, some language implementations really need JIT support, like PyPy. There, that's a whole 'nother talk!

WebAssembly for the rest of us

With GC, WebAssembly is now ready for us

Getting our languages on WebAssembly now a S.M.O.P.

Let’s score some goals in the second half!

(visit-links "gitlab.com/spritely/guile-hoot-updates" "wingolog.org" "wingo@igalia.com" "igalia.com" "mastodon.social/@wingo")

WebAssembly has proven to have some great wins for C, C++, Rust, and so on -- but now it's our turn to get in the game. GC is coming and we as a community need to be getting our compilers and language run-times ready. Let's put on the coffee and bang some bytes together; it's still early days and there's a world to win out there for the language community with the best WebAssembly experience. The game is afoot: happy consing!

Categories: FLOSS Project Planets

Django Weblog: Django 4.2 release candidate 1 released

Planet Python - Mon, 2023-03-20 03:33

Django 4.2 release candidate 1 is the final opportunity for you to try out the farrago of new features before Django 4.2 is released.

The release candidate stage marks the string freeze and the call for translators to submit translations. Provided no major bugs are discovered that can't be solved in the next two weeks, Django 4.2 will be released on or around April 3. Any delays will be communicated on the Django forum.

Please use this opportunity to help find and fix bugs (which should be reported to the issue tracker). You can grab a copy of the package from our downloads page or on PyPI.

The PGP key ID used for this release is Mariusz Felisiak: 2EF56372BA48CD1B.

Categories: FLOSS Project Planets

Opensource.com: Create accessible websites with Drupal

Planet Drupal - Mon, 2023-03-20 03:00

Use the open source Drupal CMS to create accessible websites that provide open access to everyone.

As the world becomes increasingly digital, it’s more important than ever to ensure that websites are accessible to everyone. Accessibility is about designing websites that can…

Categories: FLOSS Project Planets

Python GUIs: Working With Git and Github in Your Python Projects

Planet Python - Mon, 2023-03-20 02:00

Using a version control system (VCS) is crucial for any software development project. These systems allow developers to track changes to the project's codebase over time, removing the need to keep multiple copies of the project folder.

VCSs also facilitate experimenting with new features and ideas without breaking existing functionality in a given project. They also enable collaboration with other developers that can contribute code, documentation, and more.

In this article, we'll learn about Git, the most popular VCS out there. We'll learn everything we need to get started with this VCS and start creating our own repositories. We'll also learn how to publish those repositories to GitHub, another popular tool among developers nowadays.

Installing and Setting Up Git

To use Git in our coding projects, we first need to install it on our computer. To do this, we need to navigate to Git's download page and choose the appropriate installer for our operating system. Once we've downloaded the installer, we need to run it and follow the on-screen instructions.

We can check if everything is working correctly by opening a terminal or command-line window and running git --version.

Once we've confirmed the successful installation, we should provide Git with some personal information. You'll only need to do this once for every computer. Now go ahead and run the following commands with your own information:

$ git config --global user.name <"YOUR NAME">
$ git config --global user.email <name@email.com>

The first command adds your full name to Git's config file. The second command adds your email. Git will use this information in all your repositories.

If you publish your projects to a remote server like GitHub, then your email address will be visible to anyone with access to that repository. If you don't want to expose your email address this way, then you should create a separate email address to use with Git.

As you'll learn in a moment, Git uses the concept of branches to manage its repositories. A branch is a copy of your project's folder at a given time in the development cycle. The default branch of new repositories is named either master or main, depending on your current version of Git.

You can change the name of the default branch by running the following command:

$ git config --global init.defaultBranch <branch_name>

This command will set the name of Git's default branch to branch_name. Remember that this is just a placeholder name. You need to provide a suitable name for your installation.
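
For example, to make every new repository start on a branch called main, you would run:

$ git config --global init.defaultBranch main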

Another useful setting is the default text editor Git will use to type in commit messages and other messages in your repo. For example, if you use an editor like Visual Studio Code, then you can configure Git to use it:

# Visual Studio Code
$ git config --global core.editor "code --wait"

With this command, we tell Git to use VS Code to process commit messages and any other message we need to enter through Git.

Finally, to inspect the changes we've made to Git's configuration files, we can run the following command:

$ git config --global -e

This command will open the global .gitconfig file in our default editor. There, we can fix any error we have made or add new settings. Then we just need to save the file and close it.

Understanding How Git Works

Git works by allowing us to take a snapshot of the current state of all the files in our project's folder. Each time we save one of those snapshots, we make a Git commit. Then the cycle starts again, and Git creates new snapshots, showing what our project looked like at any given moment.

Git was created in 2005 by Linus Torvalds, the creator of the Linux kernel. Git is an open-source project that is licensed under the GNU General Public License (GPL) v2. It was initially made to facilitate kernel development due to the lack of a suitable alternative.

The general workflow for making a Git commit to save a snapshot goes through the following steps:

  1. Change the content of our project's folder.
  2. Stage or mark the changes we want to save in our next commit.
  3. Commit or save the changes permanently in our project's Git database.

As the third step mentions, Git uses a special database called a repository. This database is kept inside your project's directory under a folder called .git.

Version-Controlling a Project With Git: The Basics

In this section, we'll create a local repository and learn how to manage it using the Git command-line interface (CLI). On macOS and Linux, we can use the default terminal application to follow along with this tutorial.

On Windows, we recommend using Git Bash, which is part of the Git for Windows package. Go to the Git Bash download page, get the installer, run it, and follow the on-screen instructions. Make sure to check Additional Icons -> On the Desktop to get direct access to Git Bash on your desktop so that you can quickly find and launch the app.

Alternatively, you can also use either Windows' Command Prompt or PowerShell. However, some commands may differ from the commands used in this tutorial.

Initializing a Git Repository

To start version-controlling a project, we need to initialize a new Git repository in the project's root folder or directory. In this tutorial, we'll use a sample project to facilitate the explanation. Go ahead and create a new folder in your file system. Then navigate to that folder in your terminal by running these commands:

$ mkdir sample_project
$ cd sample_project

The first command creates the project's root folder or directory, while the second command allows you to navigate into that folder. Don't close your terminal window. You'll be using it throughout the next sections.

To initialize a Git repository in this folder, we need to use the git init command like in the example below:

$ git init
Initialized empty Git repository in /.../sample_project/.git/

This command creates a subfolder called .git inside the project's folder. The leading dot in the folder's name means that this is a hidden directory, so you may not see anything in your file manager. You can check that .git exists with the ls -a command, which lists all the files in a given folder, including the hidden ones.
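
On a brand-new repository, the output will look something like this:

$ ls -a
.  ..  .git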

Checking the Status of Our Project

Git provides the git status command to allow us to identify the current state of a Git repository. Because our sample_project folder is still empty, running git status will display something like this:

$ git status
On branch main

No commits yet

nothing to commit (create/copy files and use "git add" to track)

When we run git status, we get detailed information about the current state of our Git repository. This command is pretty useful, and we'll turn back to it in multiple moments.

As an example of how useful the git status command is, go ahead and create a file called main.py inside the project's folder using the following commands:

    $ touch main.py
    $ git status
    On branch main

    No commits yet

    Untracked files:
      (use "git add <file>..." to include in what will be committed)
            main.py

    nothing added to commit but untracked files present (use "git add" to track)

With the touch command, we create a new main.py file under our project's folder. Then we run git status again. This time, we get information about the presence of an untracked file called main.py. We also get some basic instructions on how to add this file to our Git repo. Providing these guidelines or instructions is one of the neatest features of git status.

Now, what is all that about untracked files? In the following section, we'll learn more about this topic.

Tracking and Committing Changes

A file in a Git repository can be either tracked or untracked. Any file that wasn't present in the last commit is considered an untracked file. Git doesn't keep a history of changes for untracked files in your project's folder.

In our example, we haven't made any commits to our Git repo, so main.py is naturally untracked. To start tracking it, run the git add command as follows:

    $ git add main.py
    $ git status
    On branch main

    No commits yet

    Changes to be committed:
      (use "git rm --cached <file>..." to unstage)
            new file:   main.py

This git add command has added main.py to the list of tracked files. Now it's time to save the changes permanently using the git commit command, providing an appropriate commit message through the -m option:

    $ git commit -m "Add main.py"
    [main (root-commit) 5ac6586] Add main.py
     1 file changed, 0 insertions(+), 0 deletions(-)
     create mode 100644 main.py
    $ git status
    On branch main
    nothing to commit, working tree clean

We have successfully made our first commit, saving main.py to our Git repository. The git commit command requires a commit message, which we can provide through the -m option. Commit messages should clearly describe what we have changed in our project.

After the commit, our main branch is completely clean, as you can conclude from the git status output.

Now let's start the cycle again by modifying main.py, staging the changes, and creating a new commit. Go ahead and run the following commands:

    $ echo "print('Hello, World!')" > main.py
    $ cat main.py
    print('Hello, World!')
    $ git add main.py
    $ git commit -m "Create a 'Hello, World!' script on main.py"
    [main 2f33f7e] Create a 'Hello, World!' script on main.py
     1 file changed, 1 insertion(+)

The echo command adds the statement "print('Hello, World!')" to our main.py file. You can confirm this addition with the cat command, which lists the content of one or more target files. You can also open main.py in your favorite editor and update the file there if you prefer.

We can also use the git stage command to stage or add files to a Git repository and include them in our next commit.
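
If you'd like to try it, git stage accepts the same arguments as git add, so the following command would have the same effect as the git add call above:

    $ git stage main.py    # equivalent to: git add main.py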

We've made two commits to our Git repo. We can list our commit history using the git log command as follows:

    $ git log --oneline
    2f33f7e (HEAD -> main) Create a 'Hello, World!' script on main.py
    5ac6586 Add main.py

The git log command allows us to list all our previous commits. In this example, we've used the --oneline option to show each commit on a single line. This command displays its output in a dedicated pager; to leave it and get back to the prompt, we can press the letter Q on our keyboard.

Using a .gitignore File to Skip Unneeded Files

While working with Git, we will often have files and folders that we must not save to our Git repo. For example, most Python projects include a venv/ folder with a virtual environment for that project. Go ahead and create one with the following command:

    $ python -m venv venv

Once we've added a Python virtual environment to our project's folder, we can run git status again to check the repo state:

    $ git status
    On branch main
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
            venv/

    nothing added to commit but untracked files present (use "git add" to track)

Now the venv/ folder appears as an untracked file in our Git repository. We don't need to keep track of this folder because it's not part of our project's codebase. It's only a tool for working on the project. So, we need to ignore this folder. To do that, we can add the folder to a .gitignore file.

Go ahead and create a .gitignore file in the project's folder. Add the venv/ folder to it and run git status:

    $ touch .gitignore
    $ echo "venv/" > .gitignore
    $ git status
    On branch main
    Untracked files:
      (use "git add <file>..." to include in what will be committed)
            .gitignore

    nothing added to commit but untracked files present (use "git add" to track)

Now git status doesn't list venv/ as an untracked file, which means that Git is ignoring that folder. If we take a look at the output, we'll see that .gitignore is now listed as an untracked file. We should commit our .gitignore file to the Git repository to prevent other developers working with us from having to create their own local .gitignore files.

We can also list multiple files and folders in our .gitignore file one per line. The file even accepts glob patterns to match specific types of files, such as *.txt. If you want to save yourself some work, then you can take advantage of GitHub's gitignore repository, which provides a rich list of predefined .gitignore files for different programming languages and development environments.
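
As an illustration, here's what a small, hypothetical .gitignore for a Python project might look like; the exact entries depend on your tooling:

    # Virtual environments
    venv/
    # Python bytecode caches
    __pycache__/
    # Logs and build artifacts
    *.log
    build/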

We can also set up a global .gitignore file on our computer. This global file will apply to all our Git repositories. If you decide to use this option, then go ahead and create a .gitignore_global file in your home folder.
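
Note that Git won't pick up that file on its own: you also need to point the core.excludesFile setting at it. A minimal sketch, assuming the file lives directly in your home folder:

    $ touch ~/.gitignore_global
    $ git config --global core.excludesFile ~/.gitignore_global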

Working With Branches in Git

One of the most powerful features of Git is that it allows us to create multiple branches. A branch is a copy of our project's current state and commit history. Having the option to create and handle branches allows us to make changes to our project without messing up the main line of development.

We'll often find that software projects maintain several independent branches to facilitate the development process. A common branch model distinguishes between four different types of branches:

  1. A main or master branch that holds the main line of development
  2. A develop branch that holds the last developments
  3. One or more feature branches that hold changes intended to add new features
  4. One or more bugfix branches that hold changes intended to fix critical bugs

However, the branching model to use is up to you. In the following sections, we'll learn how to manage branches using Git.

Creating New Branches

Working all the time on the main or master branch isn't a good idea. We can end up creating a mess and breaking the code. So, whenever we want to experiment with a new idea, implement a new feature, fix a bug, or just refactor a piece of code, we should create a new branch.

To kick things off, let's create a new branch called hello on our Git repo under the sample_project folder. To do that, we can use the git branch command followed by the branch's name:

    $ git branch hello
    $ git branch --list
    * main
      hello

The first command creates a new branch in our Git repo. The second command allows us to list all the branches that currently exist in our repository. Again, we can press the letter Q on our keyboard to get back to the terminal prompt.

The star symbol denotes the currently active branch, which is main in the example. We want to work on hello, so we need to activate that branch. In Git's terminology, we need to check out to hello.

Checking Out to a New Branch

Although we've just created a new branch, in order to start working on it, we need to switch to it, or check it out, by using the git checkout command as follows:

    $ git checkout hello
    Switched to branch 'hello'
    $ git branch --list
      main
    * hello
    $ git log --oneline
    2f33f7e (HEAD -> hello, main) Create a 'Hello, World!' script on main.py
    5ac6586 Add main.py

The git checkout command takes the name of an existing branch as an argument. Once we run the command, Git takes us to the target branch.

We can derive a new branch from whatever branch we need.

As you can see, git branch --list indicates which branch we are currently on by placing a * symbol in front of the relevant branch name. If we check the commit history with git log --oneline, then we'll get the same as we get from main because hello is a copy of it.

The git checkout command can take a -b flag that we can use to create a new branch and immediately check out to it in a single step. That's what most developers use while working with Git repositories. In our example, we could have run git checkout -b hello to create the hello branch and check out to it with one command.
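
As an illustration, here's what creating and switching to a hypothetical bugfix branch in one step might look like; the branch name is a placeholder and isn't part of our example project:

    $ git checkout -b bugfix
    Switched to a new branch 'bugfix'

On Git 2.23 and later, git switch -c bugfix achieves the same result with a more explicitly named command.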

Let's make some changes to our project and create another commit. Go ahead and run the following commands:

    $ echo "print('Welcome to PythonGUIs!')" >> main.py
    $ cat main.py
    print('Hello, World!')
    print('Welcome to PythonGUIs!')
    $ git add main.py
    $ git commit -m "Extend our 'Hello, World' program with a welcome message."
    [hello be62476] Extend our 'Hello, World' program with a welcome message.
     1 file changed, 1 insertion(+)

The final command committed our changes to the hello branch. If we compare the commit history of both branches, then we'll see the difference:

    $ git log --oneline -1
    be62476 (HEAD -> hello) Extend our 'Hello, World' program with a welcome message.
    $ git checkout main
    Switched to branch 'main'
    $ git log --oneline -1
    2f33f7e (HEAD -> main) Create a 'Hello, World!' script on main.py

In this example, we first run git log --oneline with -1 as an argument. This argument tells Git to give us only the last commit in the active branch's commit history. To inspect the commit history of main, we first need to check out to that branch. Then we can run the same git log command.

Now say that we're happy with the changes we've made to our project in the hello branch, and we want to update main with those changes. How can we do this? We need to merge hello into main.

Merging Two Branches Together

To add the commits we've made in a separate branch back to another branch, we can run what is known as a merge. For example, say we want to merge the new commits in hello into main. In this case, we first need to switch back to main and then run the git merge command using hello as an argument:

    $ git checkout main
    Already on 'main'
    $ git merge hello
    Updating 2f33f7e..be62476
    Fast-forward
     main.py | 1 +
     1 file changed, 1 insertion(+)

To merge a branch into another branch, we first need to check out the branch we want to update. Then we can run git merge. In the example above, we first check out to main. Once there, we can merge hello.

Deleting Unused Branches

Once we've finished working in a given branch, we can delete the entire branch to keep our repo as clean as possible. Following our example, now that we've merged hello into main, we can remove hello.

To remove a branch from a Git repo, we use the git branch command with the --delete option. To successfully run this command, make sure to switch to another branch first:

    $ git checkout main
    Already on 'main'
    $ git branch --delete hello
    Deleted branch hello (was be62476).
    $ git branch --list
    * main

Deleting unused branches is a good way to keep our Git repositories clean, organized, and up to date. Also, deleting them as soon as we finish the work is even better, because having old branches around may be confusing for other developers collaborating on our project. They might end up wondering why these branches are still alive.

Using a GUI Client for Git

In the previous sections, we've learned to use the git command-line tool to manage Git repositories. If you prefer to use GUI tools, then you'll find a bunch of third-party GUI frontends for Git. While they won't completely replace the need for using the command-line tool, they can simplify your day-to-day workflow.

You can get a complete list of standalone GUI clients available on the Git official documentation.

Most popular IDEs and code editors, including PyCharm and Visual Studio Code, come with basic Git integration out-of-the-box. Some developers will prefer this approach as it is directly integrated with their development environment of choice.

If you need something more advanced, then GitKraken is probably a good choice. This tool provides a standalone, cross-platform GUI client for Git that comes with many additional features that can boost your productivity.

Managing a Project With GitHub

If we publish a project on a remote server with support for Git repositories, then anyone with appropriate permissions can clone our project, creating a local copy on their computer. Then, they can make changes to our project, commit them to their local copy, and finally push the changes back to the remote server. This workflow provides a straightforward way to allow other developers to contribute code to your projects.
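
For example, once a project is published on a remote server, anyone with access could create their local copy with git clone. A minimal sketch, with USERNAME as a placeholder and the output abbreviated:

    $ git clone git@github.com:USERNAME/sample_project.git
    Cloning into 'sample_project'...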

In the following sections, we'll learn how to create a remote repository on GitHub and then push our existing local repository to it. Before we do that, though, head over to GitHub.com and create an account there if you don't have one yet. Once you have a GitHub account, you can set up the connection to that account so that you can use it with Git.

Setting Up a Secure Connection to GitHub

In order to work with GitHub via the git command, we need to be able to authenticate ourselves. There are a few ways of doing that. However, using SSH is the recommended way. The first step in the process is to generate an SSH key, which you can do with the following command:

    $ ssh-keygen -t ed25519 -C "GitHub - name@email.com"

Replace the placeholder email address with the address you've associated with your GitHub account. Once you run this command, you'll get three different prompts in a row. You can respond to them by pressing Enter to accept the default option. Alternatively, you can provide custom responses.

Next, we need to copy the contents of our id_ed25519.pub file. To do this, you can run the following command:

    $ cat ~/.ssh/id_ed25519.pub

Select the command's output and copy it. Then go to your GitHub Settings page and click the SSH and GPG keys option. There, select New SSH key, set a descriptive title for the key, make sure that the Key Type is set to Authentication Key, and paste the contents of id_ed25519.pub in the Key field. Finally, click the Add SSH key button.

At this point, you may be asked to provide some kind of Two-Factor Authentication (2FA) code. So, be ready for that extra security step.

Now we can test our connection by running the following command:

    $ ssh -T git@github.com
    The authenticity of host 'github.com (IP ADDRESS)' can't be established.
    ECDSA key fingerprint is SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM.
    Are you sure you want to continue connecting (yes/no/[fingerprint])?

Make sure to check whether the key fingerprint shown on your output matches GitHub's public key fingerprint. If it matches, then enter yes and press Enter to connect. Otherwise, don't connect.

If the connection is successful, we will get a message like this:

    Hi USERNAME! You have successfully authenticated, but GitHub does not provide shell access.

Congrats! You've successfully connected to GitHub via SSH using a secure SSH key. Now it's time to start working with GitHub.

Creating and Setting Up a GitHub Repository

Now that you have a GitHub account with a proper SSH connection, let's create a remote repository on GitHub using its web interface. Head over to the GitHub page and click the + icon next to your avatar in the top-right corner. Then select New repository.

Give your new repo a unique name and choose who can see this repository. To continue with our example, we can give this repository the same name as our local project, sample_project.

To avoid conflicts with your existing local repository, don't add .gitignore, README, or LICENSE files to your remote repository.

Next, set the repo's visibility to Private so that no one else can access the code. Finally, click the Create repository button at the end of the page.

If you create a Public repository, make sure also to choose an open-source license for your project to tell people what they can and can't do with your code.

You'll get a Quick setup page as your remote repository has no content yet. Right at the top, you'll have the choice to connect this repository via HTTPS or SSH. Copy the SSH link and run the following command to tell Git where the remote repository is hosted:

    $ git remote add origin git@github.com:USERNAME/sample_project.git

This command adds a new remote repository called origin to our local Git repo.

The name origin is commonly used to denote the main remote repository associated with a given project. This is the default name Git uses to identify the main remote repo.

Git allows us to add several remote repositories to a single local one using the git remote add command. This allows us to keep several remote copies of our local Git repo.
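
For instance, we could add a second, hypothetical remote hosted on GitLab and then list all the configured remotes with git remote -v; the URLs here are placeholders:

    $ git remote add gitlab git@gitlab.com:USERNAME/sample_project.git
    $ git remote -v
    gitlab  git@gitlab.com:USERNAME/sample_project.git (fetch)
    gitlab  git@gitlab.com:USERNAME/sample_project.git (push)
    origin  git@github.com:USERNAME/sample_project.git (fetch)
    origin  git@github.com:USERNAME/sample_project.git (push)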

Pushing a Local Git Repository to GitHub

With a new and empty GitHub repository in place, we can go ahead and push the content of our local repo to its remote copy. To do this, we use the git push command providing the target remote repo and the local branch as arguments:

    $ git push -u origin main
    Enumerating objects: 9, done.
    Counting objects: 100% (9/9), done.
    Delta compression using up to 8 threads
    Compressing objects: 100% (4/4), done.
    Writing objects: 100% (9/9), 790 bytes | 790.00 KiB/s, done.
    Total 9 (delta 0), reused 0 (delta 0), pack-reused 0
    To github.com:USERNAME/sample_project.git
     * [new branch]      main -> main
    branch 'main' set up to track 'origin/main'.

This is the first time we push something to the remote repo sample_project, so we use the -u option to tell Git that we want to set the local main branch to track the remote main branch. The command's output provides a pretty detailed summary of the process.

Note that if you don't add the -u option, then Git will ask what you want to do. A safe workaround is to copy and paste the commands GitHub suggests, so that you don't forget -u.

Using the same command, we can push any local branch to any remote copy of our project's repo. Just change the repo and branch name: git push -u remote_name branch_name.

Now let's head over to our browser and refresh the GitHub page. We will see all of our project files and commit history there.

Now we can continue developing our project and making new commits locally. To push our commits to the remote main branch, we just need to run git push. This time, we don't have to use the remote or branch name because we've already set main to track origin/main.

Pulling Content From a GitHub Repository

We can do basic file editing and make commits within GitHub itself. For example, if we click the main.py file and then click the pencil icon at the top of the file, we can add another line of code and commit those changes to the remote main branch directly on GitHub.

Go ahead and add the statement print("Your Git Tutorial is Here...") to the end of main.py. Then go to the end of the page and click the Commit changes button. This makes a new commit on your remote repository.

This remote commit won't appear in your local commit history. To download it and update your local main branch, use the git pull command:

    $ git pull
    remote: Enumerating objects: 5, done.
    remote: Counting objects: 100% (5/5), done.
    remote: Compressing objects: 100% (2/2), done.
    remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
    Unpacking objects: 100% (3/3), 696 bytes | 174.00 KiB/s, done.
    From github.com:USERNAME/sample_project
       be62476..605b6a7  main       -> origin/main
    Updating be62476..605b6a7
    Fast-forward
     main.py | 1 +
     1 file changed, 1 insertion(+)

Again, the command's output provides all the details about the operation. Note that git pull will download the remote branch and update the local branch in a single step.

If we want to download the remote branch without updating the local one, then we can use the git fetch command (https://git-scm.com/docs/git-fetch). This practice gives us the chance to review the changes and merge them into our local repo only if they look right.

For example, go ahead and update the remote copy of main.py by adding another statement like print("Let's go!!"). Commit the changes. Then get back to your local repo and run the following command:

    $ git fetch
    remote: Enumerating objects: 5, done.
    remote: Counting objects: 100% (5/5), done.
    remote: Compressing objects: 100% (2/2), done.
    remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
    Unpacking objects: 100% (3/3), 731 bytes | 243.00 KiB/s, done.
    From github.com:USERNAME/sample_project
       605b6a7..ba489df  main       -> origin/main

This command downloaded the latest changes from origin/main to our local repo. Now we can compare the remote copy of main.py to the local copy. To do this, we can use the git diff command as follows:

    $ git diff main origin/main
    diff --git a/main.py b/main.py
    index be2aa66..4f0e7cf 100644
    --- a/main.py
    +++ b/main.py
    @@ -1,3 +1,4 @@
     print('Hello, World!')
     print('Welcome to PythonGUIs!')
     print("Your Git Tutorial is Here...")
    +print("Let's go!!")

In the command's output, you can see that the remote branch adds a line containing print("Let's go!!") to the end of main.py. This change looks good, so we can go ahead and update our local main branch with it.
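
Because git fetch has already downloaded the changes, we can integrate them without touching the network again. Either of these commands brings our local main branch up to date:

    $ git merge origin/main    # merge the already-fetched changes
    $ git pull                 # or fetch and merge again in one step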

Exploring Alternatives to GitHub

While GitHub is the most popular public Git server and collaboration platform in use, it's far from being the only one. GitLab.com and Bitbucket are popular commercial alternatives similar to GitHub. While they have paid plans, both offer free plans, with some restrictions, for individual users.

However, if you'd like to use a completely open-source platform instead, Codeberg might be a good option. It's a community-driven alternative with a focus on supporting Free Software. Therefore, in order to use Codeberg, your project needs to use a compatible open-source license.

Optionally, you can also run your own Git server. While you could just use barebones git for this, software such as GitLab Community Edition (CE) and Forgejo provide you with both the benefits of running your own server and the experience of using a service like GitHub.

Conclusion

By now, you're able to use Git for version-controlling your projects. Git is a powerful tool that will make you much more efficient and productive, especially as the scale of your project grows over time.

While this guide introduced you to most of its basic concepts and common commands, Git has many more commands and options that you can use to be even more productive. Now, you know enough to get up to speed with Git.

Categories: FLOSS Project Planets

Russ Allbery: Review: The Star Fraction

Planet Debian - Mon, 2023-03-20 00:08

Review: The Star Fraction, by Ken MacLeod

Series: Fall Revolution #1
Publisher: Orbit
Copyright: 1995
Printing: 2001
ISBN: 1-85723-833-8
Format: Trade paperback
Pages: 341

Ken MacLeod is a Scottish science fiction writer who has become amusingly famous for repeatedly winning the libertarian Prometheus Award despite being a (somewhat libertarian-leaning) socialist. The Star Fraction is the first of a loose series of four novels about future solar system politics and was nominated for the Clarke Award (as well as winning the Prometheus). It was MacLeod's first novel.

Moh Kohn is a mercenary, part of the Felix Dzerzhinsky Workers' Defence collective. They're available for hire to protect research labs and universities against raids from people such as animal liberationists and anti-AI extremists (or, as Moh calls them, creeps and cranks). As The Star Fraction opens, he and his smart gun are protecting a lab against an attack.

Janis Taine is a biologist who is currently testing a memory-enhancing drug on mice. It's her lab that is attacked, although it isn't vandalized the way she expected. Instead, the attackers ruined her experiment by releasing the test drug into the air, contaminating all of the controls. This sets off a sequence of events that results in Moh, Janis, and Jordon Brown, a stock trader for a religious theocracy, on the run from the US/UN and Space Defense.

I had forgotten what it was like to read the uncompromising old-school style of science fiction novel that throws you into the world and explains nothing, leaving it to the reader to piece the world together as you go. It's weirdly fun, but I'm either out of practice or this was a particularly challenging example of the genre. MacLeod throws a lot of characters at you quickly, including some that have long and complicated personal histories, and it's not until well into the book that the pieces start to cohere into a narrative. Even once that happens, the relationship between the characters and the plot is unobvious until late in the book, and comes from a surprising direction.

Science fiction as a genre is weirdly conservative about political systems. Despite the grand, futuristic ideas and the speculation about strange alien societies, the human governments rarely rise to the sophistication of a modern democracy. There are a lot of empires, oligarchies, and hand-waved libertarian semi-utopias, but not a lot of deep engagement with the speculative variety of government systems humans have proposed. The rare exceptions therefore get a lot of attention from those of us who find political systems fascinating.

MacLeod has a reputation for writing political SF in that sense, and The Star Fraction certainly delivers. Moh (despite the name of his collective, which is explained briefly in the book) is a Trotskyist with a family history with the Fourth International that is central to the plot. The setting is a politically fractured Britain full of autonomous zones with wildly different forms of government, theoretically ruled by a restored monarchy. That monarchy is opposed by the Army of the New Republic, which claims to be the legitimate government of the United Kingdom and is considered by everyone else to be terrorists. Hovering in the background is a UN entirely subsumed by the US, playing global policeman over a chaotic world shattered by numerous small-scale wars.

This satisfyingly different political world is a major plus for me. The main drawback is that I found the world-building and politics more interesting than the characters. It's not that I disliked them; I found them enjoyably quirky and odd. It's more that so much is happening and there are so many significant characters, all set in an unfamiliar and unexplained world and often divided into short scenes of a few pages, that I had a hard time keeping track of them all. Part of the point of The Star Fraction is digging into their tangled past and connecting it up with the present, but the flashbacks added a confused timeline on top of the other complexity and made it hard for me to get lost in the story. The characters felt a bit too much like puzzle pieces until the very end of the book.

The technology is an odd mix with a very 1990s feel. MacLeod is one of the SF authors who can make computers and viruses believable, avoiding the cyberpunk traps, but AI becomes relevant to the plot and the conception of AI here feels oddly retro. (Not MacLeod's fault; it's been nearly 30 years and a lot has changed.) On-line discussion in the book is still based on newsgroups, which added to the nostalgic feel. I did like the eventual explanation for the computing part of the plot, though; I can't say much while avoiding spoilers, but it's one of the more believable explanations for how a technology could spread in a way required for the plot that I've read.

I've been planning on reading this series for years but never got around to it. I enjoyed my last try at a MacLeod series well enough to want to keep reading, but not well enough to keep reading immediately, and then other books happened and now it's been 19 years. I feel similarly about The Star Fraction: it's good enough (and in a rare enough subgenre of SF) that I want to keep reading, but not enough to keep reading immediately. We'll see if I manage to get to the next book in a reasonable length of time.

Followed by The Stone Canal.

Rating: 6 out of 10

Categories: FLOSS Project Planets

Amin Bandali: LibrePlanet 2023: What's new in Jami

GNU Planet! - Sun, 2023-03-19 21:16

Update: Jami has won this year's Award for Project of Social Benefit, presented by the Free Software Foundation "to a project or team responsible for applying free software, or the ideas of the free software movement, to intentionally and significantly benefit society. This award stresses the use of free software in service to humanity."

Today I gave a talk at LibrePlanet 2023 on what's new in and about Jami since my Jami and how it empowers users talk for LibrePlanet 2021.

Here is the abstract for my talk, also available on the LibrePlanet 2023's speakers page:

Jami is free/libre software for universal communication that respects the freedoms and privacy of its users. An official GNU package, Jami is an end-to-end encrypted secure and distributed communication tool for calling, conferencing, messaging, and file transfer. Jami has end-user applications across multiple operating systems and platforms, as well as multiple APIs and a plugin system for building upon and extending Jami as a framework for secure and private communication.

This talk gives an update on what's new in and about Jami since bandali's "Jami and how it empowers users" talk at LibrePlanet 2021.

Presentation slides: pdf (with notes, only notes) | bib
LaTeX sources: tar.gz | zip
Video: coming soon

I will add the presentation video once the conference recordings have been processed and published by the Free Software Foundation.

LibrePlanet is a conference about software freedom, happening on March 19-20, 2023. The event is hosted by the Free Software Foundation, and brings together software developers, law and policy experts, activists, students, and computer users to learn skills, celebrate free software accomplishments, and face upcoming challenges. Newcomers are always welcome, and LibrePlanet 2023 will feature programming for all ages and experience levels.

Categories: FLOSS Project Planets

Armin Ronacher: Lessons from a Pessimist: Make Your Pessimism Productive

Planet Python - Sun, 2023-03-19 20:00

This year I decided that I want to share my most important learnings about engineering, teams and quite frankly personal mental health. My hope is that those who want to learn from me find it useful.

I consider myself a functional and pragmatic pessimist. I tend to err on the side of anticipating the worst outcome most of the time. This mindset often leads me to assume that things are more difficult than they actually are, but it also highlights potential pitfalls along the way. In some ways, this is a coping mechanism, but it also aids in problem-solving and sets my expectations low, frequently resulting in pleasant surprises.

However, in recent years, I've more and more encountered a different kind of pessimism in others that I deem destructive. This type of pessimism sees no good in the world and renders people feeling powerless. I thought it might be worthwhile to share why I am not entirely consumed by gloom.

Destructive pessimism involves either wanting or expecting things to fail. At first glance, the aspect of not expecting success may appear similar to how I operate, but there's a subtle distinction. I generally anticipate that things will be challenging but still achievable, and when it matters, I want them to succeed. An extreme example of destructive pessimism on the other hand is expecting climate change to end the world and assuming society will do nothing to prevent it.

Whatever I personally do, I want it to be successful. I don't search for reasons why something won't work; instead, I focus on how to make it work while addressing or avoiding the issues I see along the way. That does not make me an optimist; it just makes me someone who wants to get stuff done and someone who strives for positive outcomes. Optimism, to me, is expecting to succeed against all odds, something I do not do. I fully expect that there will be failure along the way. (I also love venting about stuff I don't like even if it's not at all productive.)

Many individuals in today's economy worry about their retirement and harbor a general negative sentiment about nearly everything, from the unfairness of the labor market and increasing poverty to climate change and more. Believe it or not, I share much of this negative sentiment, but I've learned never to let such thoughts govern my life. Dwelling on negativity regarding your employer, job prospects, government, economy, or environment — especially when it's difficult to influence these aspects — leads to nothing but unhappiness and depression.

Our times are marked by a number of transformative events. A recent conversation about AI I had with some folks is, I think, quite illustrative of how you can be a pessimist yet still be excited and forward looking. What's happening with AI at the moment makes a lot of people deeply uncomfortable. Some think that their job is at risk; others are trying to fight that future out of fear by attacking its foundations from all kinds of different angles. This fight is waged through copyright law and various moral arguments, as well as by downplaying the status-quo capabilities of AI. All of these things are absolutely worth considering! You might remember that I myself recently posted something here that outlines some of the potential issues with AI. Nevertheless, AI will continue to advance, and being afraid of it is simply unproductive. Rather than becoming despondent about AI, my pessimistic side assumes that things can go wrong and acts accordingly, all while giving the technology a fair chance.

I am absolutely convinced that it's important to recognize the difference between a pragmatic form of pessimism and destructive pessimism. And as cheesy as it sounds, try to surround yourself with supportive individuals who can help you maintain a positive outlook and try to be that person for others. You don't have to be an optimist for wanting to succeed!

Categories: FLOSS Project Planets

a2ps @ Savannah: a2ps 4.15.2 released [stable]

GNU Planet! - Sun, 2023-03-19 15:06


GNU a2ps is an Any to PostScript filter. Of course it processes plain text
files, but also pretty prints quite a few popular languages.

More detailed web pages about GNU a2ps are available at
https://savannah.gnu.org/projects/a2ps/.

This release is a minor bug-fix release. It fixes a long-standing but rare
crash, makes a minor fix to the build system, and finally puts the manual
online; see:

https://gnu.org/software/a2ps/manual/

Here are the compressed sources and a GPG detached signature:
  https://ftpmirror.gnu.org/a2ps/a2ps-4.15.2.tar.gz
  https://ftpmirror.gnu.org/a2ps/a2ps-4.15.2.tar.gz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA1 and SHA256 checksums:

b02c9f4066ebb2899f7615b93b354fb77192377c  a2ps-4.15.2.tar.gz
7FKQSp+sEmQWsyrJokBfPWF92pfWUpnhpBVVT+Az0iU  a2ps-4.15.2.tar.gz

The SHA256 checksum is base64 encoded, instead of the
hexadecimal encoding that most checksum tools default to.

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify a2ps-4.15.2.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa2048 2013-12-11 [SC]
        2409 3F01 6FFE 8602 EF44  9BB8 4C8E F3DA 3FD3 7230
  uid   Reuben Thomas <rrt@sc3d.org>
  uid   keybase.io/rrt <rrt@keybase.io>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key rrt@sc3d.org

  gpg --recv-keys 4C8EF3DA3FD37230

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=a2ps&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify a2ps-4.15.2.tar.gz.sig


This release was bootstrapped with the following tools:
  Autoconf 2.71
  Automake 1.16.5
  Gnulib v0.1-5853-ge0aefd96b6

NEWS

* Noteworthy changes in release 4.15.2 (2023-03-19) [stable]
 * Bug fixes:
   - Fix old crash when using --stdin="".
 * Build
   - Make configure stop if libpaper is not found.
   - Enable building the manual for gnu.org.


Categories: FLOSS Project Planets

Trisquel GNU/Linux: Trisquel 11.0 "Aramo" release announcement

GNU Planet! - Sun, 2023-03-19 14:58

Our most ambitious release to date, Trisquel 11.0 Aramo is out! After extensive work and thorough testing, we are proud to declare Aramo to be production-ready. This release brings many improvements and covers more grounds both in terms of machines supported and in installation options. Here are some highlights of the main accomplishments included in this release:

Major achievements
  • New supported architectures. Following the addition of 32bit ARM support in Trisquel 10, we now introduce support for 64bit ARM and POWER architectures, to expand the options for hardware compatibility.
  • d-i/Netinstall (also called "debian-installer") is the text-mode installer for Trisquel, which allows for advanced and custom installations, often used for servers. After Ubuntu dropped support for this installation method, we stepped in to add any removed pieces and make it functional again, often from upstream Debian sources.
  • Browser packaging: as Ubuntu continues to shift towards snap packaging, we took on the task of continuing to package key components such as Abrowser (our improved Firefox derivative) as a standard .deb package. Abrowser continues to bring a fully free browser that balances privacy and usability.
  • Compatibility with AMD/ATI graphics cards. We made a specific effort to improve the support for these cards without requiring non-free firmware blobs. The result is a default configuration that should make most of these types of cards work at a basic level, without 2D/3D acceleration. Please report if you test it with one of those cards!

Aramo is based on Ubuntu 22.04LTS and will receive support until 2027. Users of Trisquel 10 Nabia can upgrade directly using the update-manager or do-release-upgrade commands at a console terminal.
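
For example, the upgrade from a terminal might look like the following; make sure your Trisquel 10 system is fully up to date first, and run the commands with administrative privileges:

    $ sudo apt update && sudo apt upgrade
    $ sudo do-release-upgrade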

Editions
  • Trisquel. We keep MATE (version 1.26 on this release) as the default desktop environment due to its great accessibility support, simple user interface and no dependency on 3D acceleration.
  • Triskel. Our KDE (v5.92) based edition now as mature as our MATE release is excellent for customizing the design and functionality in fine detail.
  • Trisquel Mini. Running LXDE (v0.99.2), the Mini edition is a lightweight desktop perfect for netbooks, old computers and users with minimal resource usage needs.
  • Trisquel Sugar or Trisquel On A Sugar Toast (TOAST): Based on the Sugar learning platform (v0.119), TOAST comes with dozens of educational activities for children.
  • Network installer image: To deploy with a command-line install interface, it is ideal for servers and advanced users who want to explore custom designed environments. GNOME users will be happy to find that Trisquel GNOME environment has been added to the tasksel step on the d-i installer.

In addition, this release has put some additional work into the Budgie and Kylin alternative desktop environments, for users who would like to test them out and help us with some feedback.

Upcoming changes

One pending task that will get into motion soon is the main website redesign and improved l10n support as we transition to the trisquel.org domain.

During Aramo's lifespan we'll continue to improve installation support for ARM and POWERPC whose base rootfs are available at our cdimage archive.

With the continuous support from our community and contributors, we will keep providing one of the best fully free operating systems and tackle the big technical challenges ahead. Trisquel is a non-profit project; you can help sustain it by becoming a member, donating, or buying from our store.

We can't finish this great release without thanking all the donors who keep the project going, and the hackers Amin Bandali, bill-auger, David L, David Philipe Gil, Denis "GNUtoo" Carikli, "dragestil", Francis Meetze, Jason Self, Joshua Aspinall, Keno Goertz, "knife", "Lappi", Legimet, Mason Hock, Pablo Correa, "Parodper", Simon Josefsson, and many others for all the code, patches, bug reports, translations, and advice. Special thanks to Luis "Ark74" Guzmán for his very prolific contributions, and to the wonderful community that keeps the project going.

Categories: FLOSS Project Planets

FSF Latin America: The TRApp Trap

GNU Planet! - Sun, 2023-03-19 11:12
The TRApp Trap

Mobile phone apps, that our board member Alexandre Oliva calls TRApps in his new article, have replaced, not very spontaneously, web sites that adhered to international standards and were compatible with free systems, TRApping people in a duopoly of proprietary and invasive systems.

When private businesses do so, it's bad; but when governments impose on citizens the use of proprietary operating systems and programs, to get public services or to comply with legal obligations, we denounce them as imposed taxing software.

They're "imposed" in the sense that you can't avoid them, and "taxing" in that they charge you and take from you your most valuable good: your freedom.

We call for consumers, citizens and users at large to resist these impositions and insist that public and private services be available through sites that will work properly when accessed with a standard browser on a free operating system, without installing freedom-depriving programs, not even those that even standard browsers themselves would install and run automatically from visited sites. And, when it's necessary to run software on the service recipient's computer, the software ought to be free.

Read the full article on our site, without TRApps or proprietary JavaScript.
https://www.fsfla.org/texto/TRApps

About FSFLA

Free Software Foundation Latin America joined in 2005 the international FSF network, previously formed by Free Software Foundations in the United States, in Europe and in India. These sister organizations work in their corresponding geographies towards promoting the same Free Software ideals and defending the same freedoms for software users and developers, working locally but cooperating globally.
https://www.fsfla.org/

Copyright 2023 FSFLA

Permission is granted to make and distribute verbatim copies of this entire document without royalty, provided the copyright notice, the document's official URL, and this permission notice are preserved.

Permission is also granted to make and distribute verbatim copies of individual sections of this document worldwide without royalty provided the copyright notice and the permission notice above are preserved, and the document's official URL is preserved or replaced by the individual section's official URL.

https://www.fsfla.org/anuncio/2023-03-TRApps

Categories: FLOSS Project Planets

Russ Allbery: Review: Allow Me to Retort

Planet Debian - Sat, 2023-03-18 23:59

Review: Allow Me to Retort, by Elie Mystal

Publisher: The New Press
Copyright: 2022
ISBN: 1-62097-690-0
Format: Kindle
Pages: 257

If you're familiar with Elie Mystal's previous work (writer for The Nation, previously editor for Above the Law, Twitter gadfly, and occasional talking head on news commentary programs), you'll have a good idea what to expect from this book: pointed liberal commentary, frequently developing into rants once he works up a head of steam. The subtitle of A Black Guy's Guide to the Constitution tells you that the topic is US constitutional law, which is very on brand. You're going to get succinct and uncompromising opinions at the intersection of law and politics. If you agree with them, you'll probably find them funny; if you disagree with them, you'll probably find them infuriating.

In other words, Elie Mystal is the sort of writer one reads less for "huh, I disagreed with you but that's a good argument" and more for "yeah, you tell 'em, Elie!" I will be very surprised if this book changes anyone's mind about a significant political debate. I'm not sure if people who disagree are even in the intended audience.

I'm leery of this sort of book. Usually its function is to feed confirmation bias with some witty rejoinders and put-downs that only sound persuasive to people who already agree with them. If I want that, I can just read Twitter (and you will be unsurprised to know that Mystal has nearly 500,000 Twitter followers). This style can also be boring at book length if the author is repeating variations on a theme.

There is indeed a lot of that here, particularly in the first part of this book. If you don't generally agree with Mystal already, save yourself the annoyance and avoid this like the plague. It's just going to make you mad, and I don't think you're going to get anything useful out of it. But as I got deeper into this book, I think Mystal has another, more interesting purpose that's aimed at people who do largely agree. He's trying to undermine a very common US attitude (even on the left) about the US constitution.

I don't know if most people from the US (particularly if they're white and male) realize quite how insufferably smug we tend to be about the US constitution. When you grow up here, the paeans to the constitution and the Founding Fathers (always capitalized like deities) are so ubiquitous and unremarked that it's difficult not to absorb them at a subconscious level. There is a national mythology about the greatness of our charter of government that crosses most political divides. In its modern form, this comes with some acknowledgment that some of its original provisions (the notorious three-fifths of a person clause, for instance) were bad, but we subsequently fixed them and everything is good now. Nearly everyone gets taught this in school, and it's almost never challenged. Even the edifices of the US left, such as the ACLU and the NAACP, tend to wrap themselves in the constitution.

It's an enlightening experience to watch someone from the US corner a European with a discussion of the US constitution and watch the European plan escape routes while their soul attempts to leave their body. And I think it's telling that having that experience, as rare as it might be given how oblivious we can be, is still more common than a white person having a frank conversation with a black person in the US about the merits of the constitution as written. For various reasons, mostly because this is not very safe for the black person, this rarely happens.

This book is primarily Mystal giving his opinion on various current controversies in constitutional law, but the underlying refrain is that the constitution is a trash document written by awful people that sets up a bad political system. That system has been aggressively defended by reactionary Supreme Courts, which along with the designed difficulty of the amendment process has prevented fixing many obviously broken parts. This in turn has led to numerous informal workarounds and elaborate "interpretations" to attempt to make the system vaguely functional.

In other words, Mystal is trying to tell the US reader to stop being so precious about this specific document, and is using its truly egregious treatment of black people as the main fulcrum for his argument. Along the way, he gives an abbreviated tour of the highlights of constitutional law, but if you're at all interested in politics you've probably heard most of that before. The main point, I think, is to dig up any reverence left over from a US education, haul it out into the light of day, and compare it to the obvious failures of the constitution as a body of law and the moral failings of its authors. Mystal then asks exactly why we should care about original intent or be so reluctant to change the resulting system of government.

(Did I mention you should not bother with this book if you don't agree with Mystal politically? Seriously, don't do that to yourself.)

Readers of my reviews will know that I'm fairly far to the left politically, particularly by US standards, and yet I found it fascinating how much lingering reverence Mystal managed to dig out of me while reading this book. I found myself getting defensive in places, which is absurd because I didn't write this document. But I grew up surrounded by nigh-universal social signaling that the US constitution was the greatest political document ever, and in a religious tradition that often argued that it was divinely inspired. If one is exposed to enough of this, it becomes part of your background understanding of the world. Sometimes it takes someone being deliberately provocative to haul it back up to the surface where it can be examined.

This book is not solely a psychological intervention in national mythology. Mystal gets into detailed legal arguments as well. I thought the most interesting was the argument that the bizarre and unconvincing "penumbras" and "emanations" reasoning in Griswold v. Connecticut (which later served as the basis of Roe v. Wade) was in part because the Lochner era Supreme Court had, in the course of trying to strike down all worker protection laws, abused the concept of substantive due process so badly that Douglas was unwilling to use it in the majority opinion and instead made up entirely new law. Mystal argues that the Supreme Court should have instead tackled the true meaning of substantive due process head-on and decided Griswold on 14th Amendment equal protection and substantive due process grounds. This is probably a well-known argument in legal circles, but I'd not run into it before (and Mystal makes it far more interesting and entertaining than my summary).

Mystal also joins the tradition of thinking of the Reconstruction Amendments (the 13th, 14th, and 15th amendments passed after the Civil War) as a second revolution and an attempt to write a substantially new constitution on different legal principles, an attempt that subsequently failed in the face of concerted and deadly reactionary backlash. I first encountered this perspective via Jamelle Bouie, and it added a lot to my understanding of Reconstruction to see it as a political fight about the foundational principles of US government in addition to a fight over continuing racism in the US south. Maybe I was unusually ignorant of it (I know I need to read W.E.B. DuBois), but I think this line of reasoning doesn't get enough attention in popular media. Mystal provides a good introduction.

But, that being said, Allow Me to Retort is more of a vibes book than an argument. As in his other writing, Mystal focuses on what he sees as the core of an controversy and doesn't sweat the details too much. I felt like he was less trying to convince me and more trying to model a different way of thinking and talking about constitutional law that isn't deferential to ideas that are not worthy of deference. He presents his own legal analysis and possible solutions to current US political challenges, but I don't think the specific policy proposals are the strong part of this book. The point, instead, is to embrace a vigorous politics based on a modern understanding of equality, democracy, and human rights, without a lingering reverence for people who mostly didn't believe in any of those things. The role of the constitution in that politics is a flawed tool rather than a sacred text.

I think this book is best thought of as an internal argument in the US left. That argument is entirely within the frame of the US legal tradition, so if you're not in the US, it will be of academic interest at best (and probably not even that). If you're on the US right, Mystal offers lots of provocative pull quotes to enjoy getting outraged over, but he provides that service on Twitter for free.

But if you are on the US left, I think Allow Me to Retort is worth more consideration than I'd originally given it. There's something here about how we engage with our legal history, and while Mystal's approach is messy, maybe that's the only way you can get at something that's more emotion than logic. In some places it degenerates into a Twitter rant, but Mystal is usually entertaining even when he's ranting. I'm not sorry I read it.

Rating: 7 out of 10

Categories: FLOSS Project Planets

FSF News: Free Software Awards winners announced: Eli Zaretskii, Tad (SkewedZeppelin), GNU Jami

GNU Planet! - Sat, 2023-03-18 21:05
BOSTON, Massachusetts, USA -- Saturday, March 18, 2023 -- The Free Software Foundation (FSF) today announced the recipients of the 2022 Free Software Awards, which are given annually at the FSF's LibrePlanet conference to groups and individuals in the free software community who have made significant contributions to the cause for software freedom. This year's recipients of the awards are Eli Zaretskii, Tad (SkewedZeppelin), and GNU Jami. As LibrePlanet 2023 is a hybrid in-person and online conference this year, the ceremony was conducted both in person and virtually.
Categories: FLOSS Project Planets
