Feeds

Freexian Collaborators: Monthly report about Debian Long Term Support, October 2024 (by Roberto C. Sánchez)

Planet Debian - Mon, 2024-11-11 19:00

Like each month, have a look at the work funded by Freexian’s Debian LTS offering.

Debian LTS contributors

In October, 20 contributors have been paid to work on Debian LTS; their reports are available:

  • Abhijith PA did 6.0h (out of 7.0h assigned and 7.0h from previous period), thus carrying over 8.0h to the next month.
  • Adrian Bunk did 15.0h (out of 87.0h assigned and 13.0h from previous period), thus carrying over 85.0h to the next month.
  • Arturo Borrero Gonzalez did 10.0h (out of 10.0h assigned).
  • Bastien Roucariès did 20.0h (out of 20.0h assigned).
  • Ben Hutchings did 4.0h (out of 0.0h assigned and 4.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Daniel Leidert did 29.0h (out of 26.0h assigned and 3.0h from previous period).
  • Emilio Pozuelo Monfort did 60.0h (out of 23.5h assigned and 36.5h from previous period).
  • Guilhem Moulin did 7.5h (out of 19.75h assigned and 0.25h from previous period), thus carrying over 12.5h to the next month.
  • Lee Garrett did 15.25h (out of 0.0h assigned and 60.0h from previous period), thus carrying over 44.75h to the next month.
  • Lucas Kanashiro did 10.0h (out of 10.0h assigned and 10.0h from previous period), thus carrying over 10.0h to the next month.
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 14.5h (out of 6.5h assigned and 17.5h from previous period), thus carrying over 9.5h to the next month.
  • Roberto C. Sánchez did 9.75h (out of 24.0h assigned), thus carrying over 14.25h to the next month.
  • Santiago Ruano Rincón did 23.5h (out of 25.0h assigned), thus carrying over 1.5h to the next month.
  • Sean Whitton did 6.25h (out of 1.0h assigned and 5.25h from previous period).
  • Stefano Rivera did 1.0h (out of 0.0h assigned and 10.0h from previous period), thus carrying over 9.0h to the next month.
  • Sylvain Beucler did 9.5h (out of 16.0h assigned and 44.0h from previous period), thus carrying over 50.5h to the next month.
  • Thorsten Alteholz did 11.0h (out of 11.0h assigned).
  • Tobias Frost did 10.5h (out of 12.0h assigned), thus carrying over 1.5h to the next month.
Evolution of the situation

In October, we have released 35 DLAs.

Some notable updates prepared in October include denial of service vulnerability fixes in nss, regression fixes in apache2, multiple fixes in php7.4, and new upstream releases of firefox-esr, openjdk-17, and openjdk-11.

Additional contributions were made for the stable Debian 12 bookworm release by several LTS contributors. Arturo Borrero Gonzalez prepared a parallel update of nss, Bastien Roucariès prepared a parallel update of apache2, and Santiago Ruano Rincón prepared updates of activemq for both LTS and Debian stable.

LTS contributor Bastien Roucariès undertook a code audit of the cacti package and in the process discovered three new issues in node-dompurify, which were reported upstream and resulted in the assignment of three new CVEs.

As always, the LTS team continues to work towards improving the overall sustainability of the free software base upon which Debian LTS is built. We thank our many committed sponsors for their ongoing support.

Thanks to our sponsors

Sponsors that joined recently are in bold.

Categories: FLOSS Project Planets

Krita Monthly Update - Edition 20

Planet KDE - Mon, 2024-11-11 19:00

Welcome to the @Krita-promo team's October 2024 development and community update.

Development Report

Android-only Krita 5.2.8 Hotfix Release

Krita 5.2.6 was reported to crash on startup on devices running Android 14 or later. This was caused by issues with an SDK update required for release on the Play Store, so a temporary 5.2.7 release reverting it was available from the downloads page only.

However, the issue has now been resolved and 5.2.8 is rolling out on the Play Store. Note that 5.2.8 raises the minimum supported Android version to Android 7.0 (Nougat).

Community Bug Hunt Started

The development team has declared a "Bug Hunt Month" running through November, and needs the community's help to decide what to do with each and every one of the hundreds of open bug reports on the bug tracker. Which reports are valid and need to be fixed? Which ones need more info or are already resolved?

Read the bug hunting guide and join in on the bug hunt thread on the Krita-Artists forum.

Community Report

October 2024 Monthly Art Challenge Results

For the "Buried, Stuck, or otherwise Swallowed" theme, 16 members submitted 18 original artworks. And the winner is… Tomorrow, contest… I’m so finished by @mikuma_poponta!

The November Art Challenge is Open Now

For the November Art Challenge, @mikuma_poponta has chosen "Fluffy" as the theme, with the optional challenge of making it "The Ultimate Fluffy According to Me". See the full brief for more details, and get comfortable with this month's theme.

Featured Artwork

Best of Krita-Artists - September/October 2024

8 images were submitted to the Best of Krita-Artists Nominations thread, which was open from September 14th to October 11th. When the poll closed on October 14th, moderators had to break a four-way tie for the last two spots, resulting in these five wonderful works making their way onto the Krita-Artists featured artwork banner:

Sapphire by @Dehaf

Sci-Fi Spaceship by @NAKIGRAL

Oracular Oriole by @SylviaRitter

Air Port by @agarad

Dancing with butterflies 🦋 by @Kichirou_Okami

Best of Krita-Artists - October/November 2024

Nominations were accepted until November 11th. The poll is now open until November 14th. Don't forget to vote!

Ways to Help Krita

Krita is Free and Open Source Software developed by an international team of sponsored developers and volunteer contributors.

Visit Krita's funding page to see how user donations keep development going, and explore a one-time or monthly contribution. Or check out more ways to Get Involved, from testing, coding, translating, and documentation writing, to just sharing your artwork made with Krita.

The Krita-promo team has put out a call for volunteers; come join us and help keep these monthly updates going.

Notable Changes

Notable changes in Krita's development builds from Oct. 10 - Nov. 12, 2024.

Stable branch (5.2.9-prealpha):
  • Layers: Fix infinite loop when a clone layer is connected to a group with clones, and a filter mask triggers an out-of-bounds update. (Change, by Dmitry Kazakov)
  • General: Fix inability to save a document after saving while the image is busy and then canceling the busy operation. (bug report) (Change, by Dmitry Kazakov)
  • Resources: Fix crash when re-importing a resource after modifying it. (bug report) (Change, by Dmitry Kazakov)
  • Brush Presets: Fix loading embedded resources from .kpp files. (bug report, bug report, bug report) (Change, by Dmitry Kazakov)
  • Brush Tools: Fix the Dynamic Brush Tool to not use the Freehand Brush Tool's smoothing settings, which it doesn't properly support. (bug report) (Change, by Mathias Wein) (Change, by Dmitry Kazakov)
  • Recorder Docker: Prevent interruption of the Text Tool by disabling recording while it is active. (bug report) (Change, by Dmitry Kazakov)
  • File Formats: EXR: Possibly fix saving EXR files with extremely low alpha values. (Change, by Dmitry Kazakov)
  • File Formats: EXR: Try to keep color space profile when saving EXR of incompatible type. (Change, by Dmitry Kazakov)
  • File Formats: EXR: Fix bogus offset when saving EXR with moved layers. (Change, by Dmitry Kazakov)
  • File Formats: JPEG-XL: Fix potential lockup when loading multi-page images. (Change, by Rasyuqa A H)
  • Keyboard Shortcuts: Set the default shortcut for Zoom In to = instead of +. (bug report) (Change, by Halla Rempt)
  • Brush Editor: Make the Saturation and Value brush options' graph and graph labels consistently agree on the range being -100% to 100% with 0% as neutral. (bug report) (Change, by Dmitry Kazakov)
Unstable branch (5.3.0-prealpha):

Bug fixes:

  • Vector Layers: Fix endlessly slow rendering of vector layers with clipping masks. (bug report) (Change, by Dmitry Kazakov)
  • Layers: Fix issue with transform masks on group layers not showing until visibility change, and visibility change of passthrough groups with layer styles causing artifacts. (bug report) (Change, by Dmitry Kazakov)
  • Brush Editor: Fix crash when clearing scratchpad while it's busy rendering a resource-intensive brushstroke. (bug report) (Change, by Dmitry Kazakov)
  • File Formats: EXR: Add GUI option for selecting the default color space for EXR files. (Change, by Dmitry Kazakov)
  • Transform Tool: Liquify: Move the Move/Rotate/Scale/Offset/Undo buttons to their own spot instead of alongside unrelated options, to avoid confusion. (bug report) (Change, by Emmet O'Neill)
  • Move Tool: Fix Force Instant Preview in the Move tool to be off by default. (CCbug report) (Change, by Halla Rempt)
  • Pop-Up Palette: Fix lag in selecting a color in the Pop-Up Palette. (bug report) (Change, by Dmitry Kazakov)
  • Scripting: Fix accessing active node state from the Python scripts. (bug report) (Change, by Dmitry Kazakov)
  • Usability: Remove unnecessary empty space at the bottom of the Transform, Move and Crop tool options. (bug report) (Change, by Dmitry Kazakov)
Nightly Builds

Pre-release versions of Krita are built every day for testing new changes.

Get the latest bugfixes in Stable "Krita Plus" (5.2.9-prealpha): Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64

Or test out the latest Experimental features in "Krita Next" (5.3.0-prealpha); feedback and bug reports are appreciated: Linux - Windows - macOS (unsigned) - Android arm64-v8a - Android arm32-v7a - Android x86_64

Have feedback?

Join the discussion of this post on the Krita-Artists forum!

Categories: FLOSS Project Planets

Python⇒Speed: Using portable SIMD in stable Rust

Planet Python - Mon, 2024-11-11 19:00

In a previous post we saw that you can speed up code significantly on a single core using SIMD: Single Instruction Multiple Data. These specialized CPU instructions allow you to, for example, add 4 values at once with a single instruction, instead of the usual one value at a time. The performance improvement you get compounds with multi-core parallelism: you can benefit from both SIMD and threading at the same time.
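To make the concept concrete, here is a rough sketch in Python (assuming NumPy is installed); for large arrays, NumPy's inner loops use SIMD instructions where the hardware supports them, so the contrast between a scalar loop and a single vectorized operation mirrors the idea described above:

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([10.0, 20.0, 30.0, 40.0], dtype=np.float32)

# Scalar mindset: one addition per element.
scalar_sum = [x + y for x, y in zip(a, b)]

# Data-parallel mindset: one vectorized operation over the whole array,
# processing several values per CPU instruction where SIMD is available.
vector_sum = a + b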

Unfortunately, SIMD instructions are specific both to CPU architecture and CPU model. Thus ARM CPUs as used on modern Macs have different SIMD instructions than x86-64 CPUs. And even if you only care about x86-64, different models support different instructions; the i7-12700K CPU in my current computer doesn’t support AVX-512 SIMD, for example.

One way to deal with this is to write custom versions for each variation of SIMD instructions. Another is to use a portable SIMD library that provides an abstraction layer on top of these various instruction sets.

In the previous post we used the std::simd library. It is built-in to the Rust standard library… but unfortunately it’s currently only available when using an unstable (“nightly”) compiler.

How can you write portable SIMD if you want to use stable Rust? In this article we'll:

  • Introduce the wide crate, which lets you write portable SIMD in stable Rust.
  • Show how it can be used to reimplement the Mandelbrot algorithm we previously implemented with std::simd.
  • Go over the pros and cons of these alternatives.
Read more...
Categories: FLOSS Project Planets

Eli Bendersky: ML in Go with a Python sidecar

Planet Python - Mon, 2024-11-11 17:29

Machine learning models are rapidly becoming more capable; how can we make use of these powerful new tools in our Go applications?

For top-of-the-line commercial LLMs like ChatGPT, Gemini or Claude, the models are exposed as language agnostic REST APIs. We can hand-craft HTTP requests or use client libraries (SDKs) provided by the LLM vendors. If we need more customized solutions, however, some challenges arise. Completely bespoke models are typically trained in Python using tools like TensorFlow, JAX or PyTorch that don't have real non-Python alternatives.

In this post, I will present some approaches for Go developers to use ML models in their applications - with increasing level of customization. The summary up front is that it's pretty easy, and we only have to deal with Python very minimally, if at all - depending on the circumstances.

Internet LLM services

This is the easiest category: multimodal services from Google, OpenAI and others are available as REST APIs with convenient client libraries for most leading languages (including Go), as well as third-party packages that provide abstractions on top (e.g. langchaingo).

Check out the official Go blog titled Building LLM-powered applications in Go that was published earlier this year. I've written about it before on this blog as well: #1, #2, #3 etc.

Go is typically as well supported as other programming languages in this domain; in fact, it's uniquely powerful for such applications because of its network-native nature; quoting from the Go blog post:

Working with LLM services often means sending REST or RPC requests to a network service, waiting for the response, sending new requests to other services based on that and so on. Go excels at all of these, providing great tools for managing concurrency and the complexity of juggling network services.

Since this has been covered extensively, let's move on to the more challenging scenarios.

Locally-running LLMs

There's a plethora of high-quality open models [1] one can choose from to run locally: Gemma, Llama, Mistral and many more. While these models aren't quite as capable as the strongest commercial LLM services, they are often surprisingly good and have clear benefits w.r.t. cost and privacy.

The industry has begun standardizing on some common formats for shipping and sharing these models - e.g. GGUF from llama.cpp, safetensors from Hugging Face or the older ONNX. Additionally, there are a number of excellent OSS tools that let us run such models locally and expose a REST API for an experience that's very similar to the OpenAI or Gemini APIs, including dedicated client libraries.

The best known such tool is probably Ollama; I've written extensively about it in the past: #1, #2, #3.

Ollama lets us customize an LLM through a Modelfile, which includes things like setting model parameters, system prompts etc. If we fine-tuned a model [2], it can also be loaded into Ollama by specifying our own GGUF file.

If you're running in a cloud environment, some vendors already have off-the-shelf solutions like GCP's Cloud Run integration that may be useful.

Ollama isn't the only player in this game, either; recently a new tool emerged with a slightly different approach. Llamafile distributes the entire model as a single binary, which is portable across several OSes and CPU architectures. Like Ollama, it provides REST APIs for the model.
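To make this concrete, here is a hedged sketch of calling Ollama's documented /api/generate endpoint from Python; the model name is an assumption, and a local Ollama server must already be running:

import json
import urllib.request

# Minimal sketch of talking to a locally running Ollama server.
# The payload shape follows Ollama's documented /api/generate API;
# the model name ("gemma") is an assumption and must already be pulled.
payload = {"model": "gemma", "prompt": "Why is the sky blue?", "stream": False}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])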

If such a customized LLM is a suitable solution for your project, consider just running Ollama or Llamafile and using their REST APIs to communicate with the model. If you need higher degrees of customization, read on.

A note about the sidecar pattern

Before we proceed, I want to briefly discuss the sidecar pattern of application deployment. That k8s link talks about containers, but the pattern isn't limited to these. It applies to any software architecture in which functionality is isolated across processes.

Suppose we have an application that requires some library functionality; using Go as an example, we could find an appropriate package, import it and be on our way. Suppose there's no suitable Go package, however. If libraries exist with a C interface, we could alternatively use cgo to import it.

But say there's no C API either, for example if the functionality is only provided by a language without a convenient exported interface. Maybe it's in Lisp, or Perl, or... Python.

A very general solution could be to wrap the code we need in some kind of server interface and run it as a separate process; this kind of process is called a sidecar - it's launched specifically to provide additional functionality for another process. Whichever inter-process communication (IPC) mechanism we use, the benefits of this approach are many - isolation, security, language independence, etc. In today's world of containers and orchestration this approach is becoming increasingly more common; this is why many of the links about sidecars lead to k8s and other containerized solutions.

The Ollama approach outlined in the previous section is one example of using the sidecar pattern. Ollama provides us with LLM functionality but it runs as a server in its own process.

The solutions presented in the rest of this post are more explicit and fully worked-out examples of using the sidecar pattern.

Locally-running LLM with Python and JAX

Suppose none of the existing open LLMs will do for our project, even fine-tuned. At this point we can consider training our own LLM - this is hugely expensive, but perhaps there's no choice. Training usually involves one of the large ML frameworks like TensorFlow, JAX or PyTorch. In this section I'm not going to talk about how to train models; instead, I'll show how to run local inference of an already trained model - in Python with JAX, and use that as a sidecar server for a Go application.

The sample (full code is here) is based on the official Gemma repository, using its sampler library [3]. It comes with a README that explains how to set everything up. This is the relevant code instantiating a Gemma sampler:

# Once initialized, this will hold a sampler_lib.Sampler instance that
# can be used to generate text.
gemma_sampler = None

def initialize_gemma():
    """Initialize Gemma sampler, loading the model into the GPU."""
    model_checkpoint = os.getenv("MODEL_CHECKPOINT")
    model_tokenizer = os.getenv("MODEL_TOKENIZER")

    parameters = params_lib.load_and_format_params(model_checkpoint)
    print("Parameters loaded")

    vocab = spm.SentencePieceProcessor()
    vocab.Load(model_tokenizer)

    transformer_config = transformer_lib.TransformerConfig.from_params(
        parameters,
        cache_size=1024,
    )
    transformer = transformer_lib.Transformer(transformer_config)

    global gemma_sampler
    gemma_sampler = sampler_lib.Sampler(
        transformer=transformer,
        vocab=vocab,
        params=parameters["transformer"],
    )
    print("Sampler ready")

The model weights and tokenizer vocabulary are files downloaded from Kaggle, per the instructions in the Gemma repository README.

So we have LLM inference up and running in Python; how do we use it from Go?

Using a sidecar, of course. Let's whip up a quick web server around this model and expose a trivial REST interface on a local port that Go (or any other tool) can talk to. As an example, I've set up a Flask-based web server around this inference code. The web server is invoked with gunicorn - see the shell script for details.

Excluding the imports, here's the entire application code:

def create_app():
    # Create an app and perform one-time initialization of Gemma.
    app = Flask(__name__)
    with app.app_context():
        initialize_gemma()
    return app

app = create_app()

# Route for simple echoing / smoke test.
@app.route("/echo", methods=["POST"])
def echo():
    prompt = request.json["prompt"]
    return {"echo_prompt": prompt}

# The real route for generating text.
@app.route("/prompt", methods=["POST"])
def prompt():
    prompt = request.json["prompt"]

    # For total_generation_steps, 128 is a default taken from the Gemma
    # sample. It's a tradeoff between speed and quality (higher values mean
    # better quality but slower generation).
    # The user can override this value by passing a "sampling_steps" key in
    # the request JSON.
    sampling_steps = request.json.get("sampling_steps", 128)

    sampled_str = gemma_sampler(
        input_strings=[prompt],
        total_generation_steps=int(sampling_steps),
    ).text
    return {"response": sampled_str}

The server exposes two routes:

  • prompt: a client sends in a textual prompt, the server runs Gemma inference and returns the generated text in a JSON response
  • echo: used for testing and benchmarking
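For illustration, a minimal Python client for the /prompt route might look like this sketch, using the requests library; the host and port are assumptions that depend on how gunicorn is configured (the post's own benchmarking uses a Go client):

import requests

# Send a prompt to the sidecar and print the generated text.
# localhost:8000 is an assumption; match it to your gunicorn config.
resp = requests.post(
    "http://localhost:8000/prompt",
    json={"prompt": "Write a haiku about sidecars", "sampling_steps": 128},
)
resp.raise_for_status()
print(resp.json()["response"])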

Here's how it all looks tied together: the Go application sends a JSON request over a local port to the Python web server, which runs Gemma inference through JAX and returns the generated text.

The important takeaway is that this is just an example. Literally any part of this setup can be changed: one could use a different ML library (maybe PyTorch instead of JAX); one could use a different model (not Gemma, not even an LLM) and one can use a different setup to build a web server around it. There are many options, and each developer will choose what fits their project best.

It's also worth noting that we've written less than 100 lines of Python code in total - much of it piecing together snippets from tutorials. This tiny amount of Python code is sufficient to wrap an HTTP server with a simple REST interface around an LLM running locally through JAX on the GPU. From here on, we're safely back in our application's actual business logic and Go.

Now, a word about performance. One of the concerns developers may have with sidecar-based solutions is the performance overhead of IPC between Python and Go. I've added a simple echo endpoint to measure this effect; take a look at the Go client that exercises it; on my machine the latency of sending a JSON request from Go to the Python server and getting back the echo response is about 0.35 ms on average. Compared to the time it takes Gemma to process a prompt and return a response (typically measured in seconds, or maybe hundreds of milliseconds on very powerful GPUs), this is entirely negligible.

That said, not every custom model you may need to run is a full-fledged LLM. What if your model is small and fast, and the overhead of 0.35 ms becomes significant? Worry not, it can be optimized. This is the topic of the next section.

Locally-running fast image model with Python and TensorFlow

The final sample of this post mixes things up a bit:

  • We'll be using a simple image model (instead of an LLM)
  • We're going to train it ourselves using TensorFlow+Keras (instead of JAX)
  • We'll use a different IPC method between the Python sidecar server and clients (instead of HTTP+REST)

The model is still implemented in Python, and it's still driven as a sidecar server process by a Go client [4]. The idea here is to show the versatility of the sidecar approach, and to demonstrate a lower-latency way to communicate between the processes.

The full code of the sample is here. It trains a simple CNN (convolutional neural network) to classify images from the CIFAR-10 dataset.

The neural net setup with TensorFlow and Keras was taken from an official tutorial. Here's the entire network definition:

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation="relu"))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation="relu"))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation="relu"))
model.add(layers.Dense(10))

CIFAR-10 images are 32x32 pixels, each pixel being 3 values for red, green and blue. In the original dataset, these values are bytes in the inclusive range 0-255 representing color intensity. This should explain the (32, 32, 3) shape appearing in the code. The full code for training the model is in the train.py file in the sample; it runs for a bit and saves the serialized model along with the trained weights into a local file.

The next component is an "image server": it loads the trained model+weights file from disk and runs inference on images passed into it, returning the label the model thinks is most likely for each.

The server doesn't use HTTP and REST, however. It creates a Unix domain socket and uses a simple length-prefix encoding protocol to communicate:

Each packet starts with a 4-byte field that specifies the length of the rest of the contents. The type is a single byte, and the body can be anything [5]. In the sample image server, two commands are currently supported:

  • 0 means "echo" - the server will respond with the same packet back to the client. The contents of the packet body are immaterial.
  • 1 means "classify" - the packet body is interpreted as a 32x32 RGB image, encoded as the red channel for each pixel in the first 1024 bytes (32x32, row major), then green in the next 1024 bytes and finally blue in the last 1024 bytes. Here the server will run the image through the model, and reply with the label the model thinks describes the image.
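To make the framing concrete, here is a hedged Python sketch of the protocol described above; the helper names, the little-endian byte order, and the socket path are my assumptions rather than details taken from the sample:

import socket
import struct

def send_packet(sock: socket.socket, msg_type: int, body: bytes) -> None:
    # The 4-byte length covers everything after the length field itself:
    # the 1-byte type plus the body. Little-endian ("<I") is an assumption;
    # client and server just have to agree on byte order.
    sock.sendall(struct.pack("<I", 1 + len(body)) + bytes([msg_type]) + body)

def read_exactly(sock: socket.socket, n: int) -> bytes:
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-packet")
        data += chunk
    return data

def read_packet(sock: socket.socket) -> tuple[int, bytes]:
    (length,) = struct.unpack("<I", read_exactly(sock, 4))
    payload = read_exactly(sock, length)
    return payload[0], payload[1:]

# Usage: send a "classify" packet (type 1) whose body is the 3072-byte
# planar RGB image (1024 bytes red, then green, then blue).
# sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
# sock.connect("/tmp/image.sock")  # the socket path is an assumption
# send_packet(sock, 1, image_bytes)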

The sample also includes a simple Go client that can take a PNG file from disk, encode it in the required format and send it over the domain socket to the server, recording the response.

The client can also be used to benchmark the latency of a roundtrip message exchange. It's easier to just show the code instead of explaining what it does:

func runBenchmark(c net.Conn, numIters int) {
    // Create a []byte with 3072 bytes.
    body := make([]byte, 3072)
    for i := range body {
        body[i] = byte(i % 256)
    }

    t1 := time.Now()
    for range numIters {
        sendPacket(c, messageTypeEcho, body)
        cmd, resp := readPacket(c)
        if cmd != 0 || len(resp) != len(body) {
            log.Fatal("bad response")
        }
    }
    elapsed := time.Since(t1)

    fmt.Printf("Num packets: %d, Elapsed time: %s\n", numIters, elapsed)
    fmt.Printf("Average time per request: %d ns\n", elapsed.Nanoseconds()/int64(numIters))
}

In my testing, the average latency of a roundtrip is about 10 μs (that's microseconds). Considering the size of the message and it being Python on the other end, this is roughly in line with my earlier benchmarking of Unix domain socket latency in Go.

How long does a single image inference take with this model? In my measurements, about 3 ms. Recall that the communication latency for the HTTP+REST approach was 0.35 ms; while this is only 12% of the image inference time, it's close enough to be potentially worrying. On a beefy server-class GPU the time can be much shorter [6].

With the custom protocol over domain sockets, the latency - being 10 μs - seems quite negligible no matter what you end up running on your GPU.

Code

The full code for the samples in this post is on GitHub.

[1] To be pedantic, these models are not entirely open: their inference architecture is open-source and their weights are available, but the details of their training remain proprietary.

[2] The details of fine-tuning models are beyond the scope of this post, but there are plenty of resources about this online.

[3] "Sampling" in LLMs means roughly "inference". A trained model is fed an input prompt and then "sampled" to produce its output.

[4] In my samples, the Python server and Go client simply run in different terminals and talk to each other. How service management is structured is very project-specific. We could envision an approach wherein the Go application launches the Python server to run in the background and communicates with it. Increasingly likely these days, however, would be a container-based setup, where each program is its own container and an orchestration solution launches and manages these containers.

[5] You may be wondering why I'm implementing a custom protocol here instead of using something established. In real life, I'd definitely recommend using something like gRPC. However, for the sake of this sample I wanted something that would be (1) simple without additional libraries and (2) very fast. FWIW, I don't think the latency numbers would be very much different for gRPC. Check out my earlier post about RPC over Unix domain sockets in Go.

[6] On the other hand, the model I'm running here is really small. It's fair to say realistic models you'll use in your application will be much larger and hence slower.
Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #475 - Workspaces

Planet Drupal - Mon, 2024-11-11 15:00

Today we are talking about Workspaces, What They are, and How They Work with guest Scott Weston. We’ll also cover Workspaces Extra as our module of the week.

For show notes visit: https://www.talkingDrupal.com/475

Topics
  • What are Workspaces in Drupal
  • What's a common use cases for Workspaces
  • Are Workspaces stable
  • Do Workspaces help with content versioning
  • What does the module ecosystem look like for Workspaces
  • Inspiration
  • Workspaces best practices
  • Any interesting ways it is being used
  • Is there a way to access workspace content in twig
  • Navigation integration
  • Workspaces and workflows
  • What aspects of a Workspace are limited to live
  • If someone wants to get involved or get started
Resources

Guests

Scott Weston - scott-weston

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Joshua "Josh" Mitchell - joshuami.com joshuami

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Do you want to extend the capabilities of the Workspaces system in Drupal core? There’s a module for that.
  • Module name/project name: Workspaces Extra (WSE)
  • Brief history
    • How old: created in Apr 2021 by Andrei Mateescu (amateescu) of tag1, who has also contributed to Workspaces in core, among many other things
    • Versions available: 2.0.0-alpha3 which works with Drupal 10.3 or 11
  • Maintainership
    • Actively maintained, latest release is less than a week old
    • Security coverage: technically yes, but not really until it has a stable release
    • Test coverage
    • Number of open issues: 20 open issues, 3 of which are bugs against the current branch, though one has already been fixed
  • Usage stats:
    • 89 sites
  • Module features and usage
    • One of the big features in Drupal 10.3 was that Workspaces is now officially stable. That said, not everything works the way some site builders will want it to. That's where a contrib solution like Workspaces Extra can help to fill in the gaps
    • It provides new options like letting you roll back changes from a published workspace, move content between workspaces, discard changes in a workspace, squash content revisions when a workspace is published, and more
    • Workspaces Extra, or WSE, also includes a number of submodules to add even more capabilities. For example, they can allow your workspace to stage an allowlist of configuration changes, deploy workspace content using an import/export system, stage menu changes, and more. For workflow, there's an option to generate a shareable workspace preview link for external users, and a scheduler to publish your workspace at a specific day and time
    • I will add that the first time I played with workspaces I ran into an issue where I couldn’t create media entities within a workspace. I don’t know for sure that this hasn’t been fixed in core, but the core issue about it is still listed as “Needs work”. That said, the last comment on that issue (link in the show notes) lists WSE as something that helps, so if you encounter the same issue with Workspaces, WSE is worth a try
Categories: FLOSS Project Planets

Of Color and Software

Planet KDE - Mon, 2024-11-11 14:34

It’s been a minute!

We have been hard at work making sure our design system keeps moving forward. For the past weeks, we have made significant progress in the space of color creation and icons.

There is also an easter egg in the form of PenPot. Read the rest!

As previously mentioned, we restructured our color palettes to have set color variations at various levels. We will combine those colors into tokens that will be named something like this:

pd.sys.color.red50

Meaning:

  • PD: Plasma Design
  • SYS: System token (We also have reference tokens and component tokens, .ref. and .com. respectively)
  • Color: Token type
  • red50: color name + color value

Note that as we follow Material design guidelines for these colors, we have a collection of 100 different color shades for a given color. Depending on the needs of the system or changes in design, we might decide not to use red50 because we want more intensity; in that case we would choose red49, red48, and so on.

The color variable name would change accordingly. This setup would allow designers and developers to understand the kind of token they are working with, and it would give both developer and designer the same shared language.

In Figma and PenPot, designers have the ability to name tokens however they like. I opted for keeping token names as we are recommending them for the Plasma system. That way there is good consistency.

This week, we consolidated these colors and added them to the list of tokens in Figma and PenPot. However, there is still more to be done in the form of documentation for our Plasma developer team. We are still working through it, making sure the requests we hand to development are accurate.

Additionally, this week we had the pleasure to meet with Pablo Ruiz, CEO at PenPot. Mike, one of our team members met Pablo recently and spoke of our Plasma Next project. This led to a meeting to discuss the needs that our team currently has for developing a design system.

The team at PenPot is excited to partner with our KDE team and the Plasma Next initiative. They have generously offered a few resources to help.

This couldn’t come at a better time as very recently we have been hitting gaps in our team knowledge when it comes to developing design systems. This process is a first for our desktop system and we want to get it right. With the help of the PenPot team and the changes they are making to the application, this should be easier.

As such, we also decided to request prioritization for some of our tickets that would allow us to set up and migrate our Figma assets into PenPot and eventually, share these with the community at large.

Today, we are not close to releasing a full design system for others to use, but we are making good progress. Stay tuned!

We also moved into the process of editing 16px icons. Given that we already have new icons in the 24px collection that we can leverage, we cut the design time in half or more. We don’t have to brainstorm new icons, we mostly just have to edit the 24px icon and adapt it to a 16px version. This work just barely started but we are making good progress.

One area that is still up in the air is our colorful icons. Given we edited the monochrome icons, this calls for editing colorful icons as well. We have received many suggestions on what kind of colorful style we should follow. I would like to extend that invitation.

If you have seen or created amazing colorful icons and would like to suggest that style for us at Plasma, send us a comment!

That’s it for this week. Good progress so far!

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Planet Debian - Mon, 2024-11-11 12:47

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows using std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a documentation double-entry.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)
  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)

  • An erroneous duplicate log_level documentation entry was removed (Contantinos Giachalis in #20)

  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)

  • Partially revert / simplify src/formatter.cpp accommodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Gunnar Wolf: Why academics under-share research data - A social relational theory

Planet Debian - Mon, 2024-11-11 09:53
This post is a review, written for Computing Reviews, of "Why academics under-share research data - A social relational theory", an article published in the Journal of the Association for Information Science and Technology.

As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand it, with varying intensity: sometimes they demand that research be published in an OA venue, sometimes a mandate will only "prefer" it. Lately, some journals and funding bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid 1990s, it was natural for me to join the OA movement and to celebrate when various universities adopt such mandates.

Now, what happens after a university or funding body adopts such a mandate? Many individual academics cheer, as it is the "right thing to do." However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or "feet dragging" of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren't more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores a hypothesis based on Karl Marx's productive worker theory and Pierre Bourdieu's ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: driven not only by their own sharing ethos but also by private industry's push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.

The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

Categories: FLOSS Project Planets

Real Python: Python News Roundup: November 2024

Planet Python - Mon, 2024-11-11 09:00

The latest Python developments all point to the same thing—Python is currently thriving. The recent GitHub Octoverse 2024 report has revealed that Python is now the most used language on GitHub. Also, last month saw the release of Python 3.13, which is already laying the groundwork for some exciting future improvements.

While Python core developers have been busy exploring the language’s features as they tinker with upcoming enhancements, it’s good to know that working on Python’s source code isn’t the only way you can contribute to Python’s future. Another way to shape the focus of upcoming releases is to join the Python Developers Survey 2024.

And with the end of the year in sight, you may want to venture a look at next year’s calendar and mark some dates, such as the PyCon US conference in May or the Python 3.14 release in October 2025.

Now that you know the highlights, it’s time to dive into the most important Python news for November.


Python’s Popularity Shines in GitHub’s Octoverse 2024

The latest Octoverse report for 2024 shows that Python remains one of the most widely used languages on GitHub, securing its place as a core language in open-source and professional development. Python ranked among the top three most-used languages, demonstrating its continued appeal across industries and experience levels.

As GitHub’s annual report illustrates, Python’s popularity is fueled by its solid role in developing machine learning and artificial intelligence frameworks.

Another takeaway from the Octoverse survey is Python’s strong community engagement. Python developers are not only active in contributing code but also in participating in discussions, filing issues, and reviewing pull requests.

Read the full article at https://realpython.com/python-news-november-2024/ »


Categories: FLOSS Project Planets

death and gravity: reader 3.15 released – Retry-After

Planet Python - Mon, 2024-11-11 05:44

Hi there!

I'm happy to announce version 3.15 of reader, a Python feed reader library.

What's new?

Here are the highlights since reader 3.13.

Retry-After

Now that it supports scheduled updates, reader can honor the Retry-After HTTP header sent with 429 Too Many Requests or 503 Service Unavailable responses.

Adding this required an extensive rework of the parser internal API, but I'd say it was worth it, since we're getting quite close to it becoming stable.

Next up in HTTP compliance is to do more on behalf of the user: bump the update interval on repeated throttling, and handle gone and redirected feeds accordingly.
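Independent of reader's internals, the client-side logic is roughly the following sketch: Retry-After can carry either delta-seconds or an HTTP-date (both forms are allowed by the HTTP spec), and the updater should wait at least that long before retrying the feed. The helper name is my own, not part of reader's API.

from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def retry_after_seconds(value: str) -> float:
    """Parse a Retry-After header value: either delta-seconds
    or an HTTP-date (RFC 9110 allows both forms)."""
    try:
        return max(0.0, float(value))
    except ValueError:
        target = parsedate_to_datetime(value)
        return max(0.0, (target - datetime.now(timezone.utc)).total_seconds())

# retry_after_seconds("120") -> 120.0
# retry_after_seconds("Wed, 21 Oct 2026 07:28:00 GMT") -> seconds until then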

Faster tag filters, feed slugs

OR-only tag filters like get_feeds(tags=[['one', 'two']]) now use an index.

This is useful for maintaining a reverse mapping to feeds/entries, like the feed slugs recipe does to add support for user-defined short URLs:

>>> url = 'https://death.andgravity.com/_feed/index.xml'
>>> reader.set_feed_slug(url, 'andgravity')
>>> reader.get_feed_by_slug('andgravity')
Feed(url='https://death.andgravity.com/_feed/index.xml', ...)

(Interested in adopting this recipe as a real plugin? Submit a pull request!)

enclosure_tags improvements

The enclosure_tags plugin fixes ID3 tags for MP3 enclosures like podcasts.

I've changed the implementation to rewrite tags on the fly, instead of downloading the entire file, rewriting tags, and then sending it to the user; this should allow browsers to display accurate download progress.

Some other, smaller improvements:

  • Set genre to Podcast if the feed has any tag containing "podcast".
  • Prefer feed user title to feed title if available.
  • Use feed title as artist, instead of author.
Using the installed feedparser

Because feedparser makes PyPI releases at a lower cadence, reader has been using a vendored version of feedparser's develop branch for some time. It is now possible to opt out of this behavior and make reader use the installed feedparser package.

Python versions

reader 3.14 (released back in July) adds support for Python 3.13.

That's it for now. For more details, see the full changelog.

Want to contribute? Check out the docs and the roadmap.

Learned something new today? Share this with others, it really helps!

What is reader

reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.

reader allows you to:

  • retrieve, store, and manage Atom, RSS, and JSON feeds
  • mark articles as read or important
  • add arbitrary tags/metadata to feeds and articles
  • filter feeds and articles
  • full-text search articles
  • get statistics on feed and user activity
  • write plugins to extend its functionality

...all these with:

  • a stable, clearly documented API
  • excellent test coverage
  • fully typed Python

To find out more, check out the GitHub repo and the docs, or give the tutorial a try.
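For a flavor of the API, here is a minimal quickstart sketch along the lines of the tutorial; the database path and feed URL are placeholders:

from reader import make_reader

# Open (or create) the local database and add a feed.
reader = make_reader("db.sqlite")
reader.add_feed("https://death.andgravity.com/_feed/index.xml")
reader.update_feeds()

# Iterate over unread entries, most recent first.
for entry in reader.get_entries(read=False):
    print(entry.title)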

Why use a feed reader library?

Have you been unhappy with existing feed readers and wanted to make your own, but:

  • never knew where to start?
  • it seemed like too much work?
  • you don't like writing backend code?

Are you already working with feedparser, but:

  • want an easier way to store, filter, sort and search feeds and entries?
  • want to get back type-annotated objects instead of dicts?
  • want to restrict or deny file-system access?
  • want to change the way feeds are retrieved by using Requests?
  • want to also support JSON Feed?
  • want to support custom information sources?

... while still supporting all the feed types feedparser does?

If you answered yes to any of the above, reader can help.

The reader philosophy
  • reader is a library
  • reader is for the long term
  • reader is extensible
  • reader is stable (within reason)
  • reader is simple to use; API matters
  • reader features work well together
  • reader is tested
  • reader is documented
  • reader has minimal dependencies
Why make your own feed reader?

So you can:

  • have full control over your data
  • control what features it has or doesn't have
  • decide how much you pay for it
  • make sure it doesn't get closed while you're still using it
  • really, it's easier than you think

Obviously, this may not be your cup of tea, but if it is, reader can help.

Categories: FLOSS Project Planets

Programiz: Python f-string

Planet Python - Mon, 2024-11-11 04:15
A Python f-string (formatted string literal) allows you to insert variables or expressions directly into a string by placing them inside curly braces {}. In this tutorial, you will learn about the Python f-string.
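For example, here is a quick sketch of what that looks like in practice:

name = "Ada"
scores = [92, 85, 78]

# Any expression can go inside the braces; format specs follow a colon.
print(f"{name} averaged {sum(scores) / len(scores):.1f} points")
# Output: Ada averaged 85.0 points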
Categories: FLOSS Project Planets

Vincent Bernat: Customize Caddy's plugins with Nix

Planet Debian - Mon, 2024-11-11 02:35

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins [1] to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple [2]. The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. Trying to build the following derivation with xcaddy, a tool for building Caddy with plugins, fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.

{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase = ''
    xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
  '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}

Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:

{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}

To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy’s source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.

pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase = ''
    export GOCACHE=$TMPDIR/go-cache
    export GOPATH="$TMPDIR/go"
    XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
      xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
    (cd buildenv* && go mod vendor)
  '';
  installPhase = ''
    mv buildenv* $out
  '';
  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}

With a fixed-output derivation, it is up to us to ensure the output is always the same:

  • we ask xcaddy to not compile the program and keep the source code [3],
  • we pin the version of Caddy we want to build, and
  • we pin the version of each requested plugin.

You can use this derivation to override the src attribute in pkgs.caddy:

pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});

Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-Vh7JP6RK23Y0E5IDJ3zbBCnF3gKPIav05OMI4ALIcZg=";
          };
        };
      });
}
[1] This article uses the term “plugins,” though Caddy documentation also refers to them as “modules” since they are implemented as Go modules.

[2] This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different.

[3] This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail.

Categories: FLOSS Project Planets

Django Weblog: Announcing DjangoCon Europe 2025 in Dublin, Ireland! 🍀

Planet Python - Mon, 2024-11-11 01:01

We're thrilled to announce the much-anticipated return of DjangoCon Europe, set to take place in the vibrant city of Dublin, Ireland, in 2025! DjangoCon Europe has been a cornerstone of the Django community, bringing together developers and enthusiasts from all over Europe and beyond to celebrate and advance the Django web framework.

Save the Dates

Mark your calendars for DjangoCon Europe 2025, which will be held from April 23rd to 27th. The conference will host a balanced mix of insightful talks, hands-on workshops, and ample opportunities for networking and socialising with fellow Django enthusiasts.

Explore Dublin

With its rich history and vibrant tech scene, Dublin is the perfect backdrop for this year's conference. Dublin's thriving tech community and innovative spirit make it an ideal host for DjangoCon Europe. Plus, the city's lively culture, breathtaking architecture, and friendly locals are sure to provide an unforgettable experience.

Call for Proposals

DjangoCon Europe wouldn't be the same without the insightful and diverse talks contributed by our community. We encourage you to consider submitting a proposal to share your knowledge, experiences, and insights with the Django community. Keep an eye out for the Call for Proposals (CFP) announcement. This is your chance to contribute to the conference program and help make DjangoCon Europe 2025 exceptional.

Get Involved

DjangoCon Europe is a community-driven event, and we rely on the active participation and support of our community members. Here are a few ways you can get involved:

  • Attend: Join us in Dublin for a week of learning, networking, and fun.
  • Speak: Share your expertise by submitting a talk proposal when the CFP opens.
  • Sponsor: Support the conference financially and gain visibility in the Django community (email us at sponsors@djangocon.eu)
  • Volunteer: Help us make the conference run smoothly by volunteering your time and skills (https://forms.gle/xmwxssiheMa1oCvPA)

Stay tuned for updates on registration, sponsorship opportunities, and more by following DjangoCon Europe on Twitter and Linkedin.

Stay Informed

To stay up-to-date with the latest DjangoCon Europe 2025 news, visit our website and follow us on Twitter & Linkedin. We will be sharing details about the schedule, speakers, and more in the coming months, so make sure you're on the list!

We can't wait to see you in Dublin for DjangoCon Europe 2025. Get ready for a week of learning, networking, and celebrating all things Django. It's going to be an unforgettable event, and we look forward to sharing this experience with you. Thank you for being a part of our amazing Django community!

See you in Dublin! 🍀

PS: Keep an eye on our social media for special offers we will have during the upcoming holiday season 😉

Categories: FLOSS Project Planets

OpenSense Labs: Drupal 7 End Of Life: Top Reasons You Should Migrate To Drupal 10

Planet Drupal - Sun, 2024-11-10 23:54

Drupal 10 was released in December 2022, and ever since, the community has been encouraging users to migrate from Drupal 7 to 10. As per w3techs.com, as many as 41.2% of all Drupal sites are running on Drupal 7.

Using an outdated version has downsides. Businesses miss out on technological advancements and new features that can speed up and safeguard their digital properties. 

With the release of Drupal 10 and the fact that Drupal 7 end of life is in January 2025, it is crucial to do Drupal 7 to 10 migration soon. 

So, if your existing content management system is running on the Drupal 7 version, we suggest looking into OpenSense Labs' Drupal 7 to 10 migration services for guidance and upgrading to Drupal 10 today. 

Migrate To Drupal 10 

And if you’re still not convinced, let’s look into why enterprises should plan their Drupal 7 to 10 migration now, and not wait until the last moment.  

Why Should You Do Drupal 7 To 10 Migration?  

Drupal 10 brings automated updates and an improved user experience, along with several other feature additions. These components are more secure, user-friendly, and powerful. Let's dive deep into why enterprises must plan their Drupal 7 to 10 migration right away.

1. Drupal 7 Support From The Community 

As an open-source CMS, Drupal depends on support from the Drupal community, which is what keeps its continuous innovation going. With the Drupal community prioritizing and actively focusing on the security of newer versions, when the Drupal 7 end of life comes, the Drupal 7 support from the community will also cease.

This primarily jeopardizes the security of your Drupal 7 website. This also means that contributed modules and themes that are currently used in your Drupal 7 website, will also lose maintenance support. This would bring challenges in website maintenance. 

Also Check Out: 

  1. What to do with your Drupal 7 website?

  2. After Drupal 7 reaches the end-of-life, what’s your plan?

  3. Leading you towards the right upgrade from Drupal 7

  4. Exploring Drupal's Single Directory Components: A Game-Changer for Developers

2. New Features And Upgrades 

Another consequence of not upgrading to Drupal 10 is that certain functionalities may cease to perform as intended. Or there may be better alternatives available. Not only can this cause extra annoyance among website maintainers, but resolving these issues may incur additional expenditures for your company owing to the time and resources required to do so.  

In Drupal 7, while developers had to manually upgrade/update or search for modules from drupal.org, Drupal 10 has simplified this with Automated updates and a Project browser, respectively. A lot of Drupal 7 features are either incorporated out-of-the-box in Drupal 10 or simply removed to maintain ease of use.  

Also, the Drupal 7 ‘Seven’ theme from 2009 gave off an out-of-date impression. Seven was replaced by the new ‘Claro’ theme, which was created in line with the most recent requirements.

And the front-end theme, ‘Olivero,’ was created to fit with features that are well-liked by users, such as the Layout Builder. The Olivero theme will meet WCAG AA accessibility standards. 

The simple finding and installation of modules should empower Drupal newcomers as well as ‘ambitious site builders’. – Dries Buytaert 

3. Technical Dependencies 

Drupal works on currently supported PHP versions. Choosing the recommended PHP versions is ideal for developing a Drupal site, as they offer extended support over time. Drupal 10 requires PHP 8.1 or later, while Drupal 7 runs on PHP 7, which has also reached its end of life.

This creates technical dependency problems that make the platform harder to support.

  • jQuery, jQuery UI, jQuery Forms: Drupal 7 includes old and unsupported versions of these libraries. jQuery's current version is 3.7.1, while Drupal 7 ships 1.4.2; the other libraries face comparable challenges. You can mitigate this somewhat with the jQuery Update module, although the most recent version it provides is 3.5.2.

  • API Support: Drupal 8 and later (as well as many other content management systems) make it simple to provide API access to your content. In the age of ‘publish everywhere’, this is a critical feature. Drupal 7 has some basic API support, but if you want a full-fledged API with write support, you'll have to create it yourself, which adds technical debt and possible vulnerabilities.
  • CKEditor 5 Update From CKEditor 4: With a thorough rebuild and an exciting new feature set, CKEditor 5 gives Drupal 10 a modern, collaborative editor experience. Users of programs like Microsoft Word or Google Docs will find the new CKEditor's interface familiar.
    It also provides standard collaboration tools such as comments, suggested changes, version histories, and other widely accepted editing methods. Additionally, it exports to .docx and .pdf files for straightforward conversion to print formats.
  • Composer 2 And PHP 8 Support: Although Composer 2 was successfully backported to Drupal 8, PHP 8 compatibility was not. PHP 8.1 or later is required for Drupal 10, since PHP 7 reached its end of life in November 2022.

OpenSense Labs, as a Drupal organization, is committed to providing active support. Check out our Drupal 7 to Drupal 10 Migration services today for a long-term and fruitful collaboration. 

Migrate To Drupal 10 Today!

4. Modules That Have Gone Out Of Support 

The Drupal 10 core was updated to eliminate a few modules that are redundant or not frequently used. For uniformity, these modules were moved to the contributed module area:

  • Aggregator: Gathers and presents syndicated material from outside sources (RSS, RDF, and Atom feeds)

  • QuickEdit: In-place content editing

  • HAL: Serializes entities using the Hypertext Application Language

  • Activity Tracker: Lets users keep track of recent content

  • RDF: Enhances websites with metadata so that other systems may comprehend their characteristics

You will have to leave Drupal 7 CMS behind. Eventually, the opportunity cost of continuing to use software that is more than 10 years old is substantial, and once Drupal 7 end of life comes, the risk and expense of an uncovered vulnerability increases rapidly. 

There are several possibilities available to you, and now is the time for you to choose and make plans for one of them. The ideal option will rely on the expertise level of your team, the amount of business logic you have included in Drupal 7 CMS and your projected budget. 

Also Check Out: 

  1. DrupalCon Barcelona: 2024 Wrap-Up From Europe

  2. Drupal 11: Nine Must-Know Features

  3. 7 Quick Steps to Create API Documentation Using Postman

  4. What is Product Engineering Life Cycle?

CMS Drupal 7 vs Drupal 10

As this article aims to highlight the difficulties associated with the continued use of Drupal 7 and to present the most effective solution, below is a comparison of Drupal 7 vs Drupal 10 to help you better understand the benefits of Drupal 10.

Our primary objective is to provide you with a comprehensive understanding of how various popular website features, tasks, and workflows are represented in both Drupal 7 CMS and Drupal 10. 

1. Mobile Design 

Drupal 7 CMS lacks the essential responsive design capabilities needed to develop web pages that adjust their structures to different screen sizes of devices. One can develop websites that are mobile-friendly with Drupal 7 CMS by manually adjusting settings and incorporating extra modules or themes that have been contributed by others. 

In Drupal 10, developers can construct responsive pages with greatly streamlined workflows, minimal manual configuration, and without the necessity for additional modules. The contemporary Drupal core features a powerful framework for managing responsive images and breakpoints, which are essential components of responsive design. 

Recent advancements include innovative features like Views Responsive Grids, which provide intuitive responsiveness options for grids within Drupal Views. The core themes for both the administration and front end in Drupal 10, known as Claro and Olivero, are inherently responsive. 

2. Administrative Interfaces 

Drupal 7 features a conventional administrative dashboard organized with tabs and subtabs. The contributed Overlay module enables extensive menu sections to be displayed in modal windows. The user interface of Drupal 7 seems antiquated in both its design and overall user experience.

Accessibility challenges also exist, including problems with colour contrast and the absence of keyboard navigation options. Accessing the administrative dashboard on mobile devices proves to be challenging due to the lack of optimization for smaller screens. 

Upon accessing the administrative dashboard in Drupal 10, one is greeted by a contemporary and elegant design offered by the core admin theme, Claro. The design features a tidy and organized appearance. The admin interface has become more intuitive and user-friendly due to a more logical arrangement of settings and actions, accompanied by clearer labels throughout. 

Claro has been developed with a focus on responsiveness, enabling your team to perform administrative tasks using mobile devices when necessary. Significant enhancements in accessibility are readily apparent through the noticeable colour contrasts and the use of more legible fonts. 

A consistent approach to focus states and styles facilitates the interaction with forms, buttons, form fields, and other interactive components, ensuring accessibility for users who navigate solely via keyboard. 

3. Content Authoring 

When Drupal 8 was released in 2015, it included a comprehensive text editor by default — CKEditor. Over the years, CKEditor has continually evolved following the latest trends.  

CKEditor 5 has emerged as a significant asset for Drupal 10, introducing contemporary and user-friendly balloon panels for ALT text and links, a specialized toolbar for inline media formatting, straightforward table creation, code blocks, special characters, and a variety of additional features. 

The range of functionalities offered by CKEditor 5 for Drupal 10 is continually expanding, accompanied by supplementary contributed modules for CKEditor 5. We conducted a comparative analysis of CKEditor 4 and CKEditor 5, examining each feature for the benefit of our readers. 

In the case of Drupal 7, it is important to note that it does not come equipped with a WYSIWYG (What-You-See-Is-What-You-Get) editor by default. The content editing form lacks a toolbar that facilitates the addition of links, bold text, italics, headings, bullet points, numbered lists, and other formatting options. 

Acquiring a toolbar necessitates the installation of contributed modules that provide different iterations of the WYSIWYG editor. For many years, one of the most effective solutions has been the installation of the CKEditor 4 contributed module.  

The module is no longer supported, which means that keeping it on your website will necessitate additional paid Drupal 7 support to guarantee its proper functionality. And there is more to consider than merely the loss of support for Drupal 7 modules; the issue runs deeper than that.

CKEditor 4, a third-party application, officially reached end-of-life for its open-source version in June 2023.

4. Creating Page Layouts 

The process of creating layouts in Drupal 7 CMS is mainly facilitated by contributed modules, particularly Panels, in conjunction with several other dependent modules, including Page Manager and Ctools.  

To modify a Drupal 7 CMS layout, it is frequently necessary to possess a certain level of understanding of PHP as well as the ability to configure settings via the administrative interface. In Drupal 10, the Layout Builder feature is integrated into the core, enhancing the intuitiveness and flexibility of layout creation. 

It boasts an intuitive interface that includes drag-and-drop functionality. Customizations can be achieved without the necessity of coding, thereby creating new opportunities for individuals who are not developers. The Layout Builder in Drupal 10 is designed to inherently accommodate responsive web design. 

The development of visually appealing and consistent responsive layouts in Drupal 10 is becoming increasingly engaging due to innovative methods such as Single Directory Components, as well as contributed modules like Bootstrap UI Kit, among others.  

The integration of the Bootstrap framework into Drupal websites enhances their capabilities, streamlines workflows, and increases overall project efficiency. 

5. AI Tool Integration 

Generative AI can be seamlessly incorporated into a Drupal website, transforming it into a centralized hub where users can enhance their workflows utilizing artificial intelligence.  

AI tools are capable of providing responses directly within the Drupal administration interface, producing content, translating text, proposing titles, modifying the tone and voice of written material, creating taxonomy terms, and generating placeholder content complete with images for quality assurance and development teams to evaluate new features, among various other functionalities. 

The variety of AI-related modules and their functionalities is continually expanding. All modules developed for AI integration are specifically designed for Drupal 10; none have been created for Drupal 7. The only way to obtain such functionality for Drupal 7 is to develop a custom module.

This solution comes at a cost, and development is challenging given Drupal 7's limitations in integrating with modern APIs and services.

6. Decoupling Opportunities 

Decoupled architecture is the subject of extensive discussion today. Separating the front end and back end enables developers to use modern JavaScript frameworks designed for creating user interfaces that enhance performance, improve user experience, and boost developer productivity.

Examining the decoupled setup possibilities of Drupal 7 CMS resembles a journey through time. Drupal 7 CMS is a monolithic content management system in which the front end is closely integrated with the backend. The built-in support for REST APIs is limited, and the development of APIs for content sharing necessitates the use of additional modules. 

Drupal 10, in contrast, adopts an API-first methodology. The system incorporates integrated RESTful Web Services and JSON:API, facilitating the development of APIs and the distribution of Drupal content to external applications. While these features were introduced before Drupal 10, efforts continue to enhance Drupal with additional exciting functionalities. 

One of the recent advancements in Drupal is the development of Decoupled Menus, which is designed to facilitate the consumption of Drupal menus by JavaScript frontends. With the introduction of Drupal 10.1, it is now possible to activate a menu Linkset API endpoint with minimal effort, and additional improvements are on the horizon.  
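
As a small illustrative sketch (the domain below is a placeholder, not a real site), content exposed by the core JSON:API module and menus exposed by the Drupal 10.1 Linkset endpoint can both be fetched with plain HTTP from a decoupled frontend:

    # Fetch article content via the core JSON:API module
    curl -H 'Accept: application/vnd.api+json' https://example.com/jsonapi/node/article

    # Fetch the main menu via the Linkset API endpoint (Drupal 10.1+)
    curl https://example.com/system/menu/main/linkset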

CMS Drupal 7 To 10 Migration Checklist 

So, let us now look into the requirements necessary to successfully execute CMS Drupal 7 to Drupal 10 migration. Although every Drupal 7 to Drupal 10 migration project possesses its distinct characteristics, it can typically be divided into the following steps:  

Step 1: Examine Your Drupal 7 Website 

CMS Drupal 7 to Drupal 10 migration represents a significant advancement. Consider this an opportunity to strategize for the future of your site by evaluating its structure, content, functionality, and design.  

Here are a few questions to help you initiate your exploration: 

  1. What are your expectations regarding Drupal 10?

  2. Is the existing structure functioning effectively?

  3. What elements require Drupal 7 to Drupal 10 migration?

  4. Is there a necessity for a redesign?

  5. Does your code require a comprehensive revision?

  6. What is the scale of the task?

Step 2: Verify The Availability Of Modules 

Are you utilizing contributed modules to enhance the capabilities of your Drupal 7 site? 

If so, you will need to verify their compatibility with Drupal 10 or seek an alternative before proceeding with the CMS Drupal 7 to Drupal 10 migration. You may accomplish this by individually reviewing the page of each module on drupal.org, or by utilizing a tool like the Upgrade Status module.  

It is advisable to explore alternative options, even if your current modules are compatible with Drupal 10, as the Drupal community may have developed superior solutions. 
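
A minimal sketch of this compatibility check on a Drupal 7 site, assuming Drush is available (the commands refer to the contributed Upgrade Status project):

    drush dl upgrade_status   # download the Drupal 7 branch of the module
    drush en upgrade_status   # enable it, then review the report under
                              # Administration > Reports > Upgrade status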

Step 3: Develop Your Drupal 10 Website 

You will need to build a completely new website using the most recent version available, which is Drupal 10.3.7 as of this writing. Next, install the modules you chose in the preceding step. Note that the installation procedure in Drupal 10 differs from that of Drupal 7.
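
As a rough sketch, assuming Composer and Drush are available (the project name and database credentials below are placeholders), scaffolding the new codebase typically looks like this:

    composer create-project drupal/recommended-project my-drupal10-site
    cd my-drupal10-site
    composer require drush/drush
    ./vendor/bin/drush site:install --db-url=mysql://user:pass@localhost/drupal10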

Establish your content frameworks by incorporating blocks, content types, media types, web forms, and navigation menus. It is advisable to utilize Layout Builder, a fundamental module introduced in Drupal 8.5, which serves as a replacement for the Panels module. The robust drag-and-drop capabilities of Layout Builder facilitate the creation of visually appealing and adaptable pages with ease. 

Step 4: Revise Your Code 

It is advisable to utilize available contributed modules whenever feasible to minimize the necessity for custom coding. Custom themes must be developed anew from the ground up. Adopt optimal methodologies and contemporary coding standards. It is important to note that Drupal 10 necessitates a minimum of PHP 8.1 and has revised its database requirements. 

Finally, integrate your personalized modules and themes into your Drupal 10 website. 

Step 5: Transfer Your Data 

If the amount of content is limited, it may be feasible to transfer it manually from the previous site to the new one. You may wish to consider automating the process instead. Automated Drupal 7 to Drupal 10 migration can be accomplished by utilizing the Migrate API to transfer content and configurations. 

It is essential to recognize its limitations and to develop a strategy for addressing them effectively. You might need to regenerate views using the views migration module, for instance. Additional useful modules for CMS Drupal 7 to Drupal 10 migration consist of Migrate Plus, Migrate Tools, and Migrate Scanner. 
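
As an illustrative sketch only, an automated migration with the Migrate API usually builds on the contributed Migrate Upgrade, Migrate Plus, and Migrate Tools modules (the legacy database URL and site root below are placeholders):

    composer require drupal/migrate_upgrade drupal/migrate_plus drupal/migrate_tools
    ./vendor/bin/drush en migrate migrate_drupal migrate_upgrade migrate_plus migrate_tools
    ./vendor/bin/drush migrate:upgrade --legacy-db-url=mysql://user:pass@localhost/d7db \
        --legacy-root=https://old-site.example --configure-only
    ./vendor/bin/drush migrate:status        # review the generated migrations
    ./vendor/bin/drush migrate:import --all  # then run them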

Step 6: Test Your Newly Developed Website 

Conduct thorough testing and quality assurance on your new website to guarantee its security, performance, and accessibility on a range of devices. Ensure that all content and data have been accurately migrated. Finally, obtain the necessary approvals from the relevant stakeholders. 

Step 7: Launch! 

Inform your audiences about the forthcoming change. This presents an excellent opportunity to demonstrate and articulate the advantages it offers to them. Adjust the DNS settings of your site to direct them to your Drupal 10 website. Re-establish any previous redirects or custom URLs and monitor your logs for any occurrences of 404 errors or other alerts. 

Ensure that your previous Drupal 7 site is secure and inaccessible to the public. It may be advisable to ultimately establish a static version and a backup for future reference. 

Key Takeaways 
  1. The impending Drupal 7 end of life in January 2025 underscores the importance of planning your Drupal 7 to Drupal 10 migration soon.

  2. Drupal 10 introduces automated updates, enhances user experience, and includes a variety of additional features. 

  3. Because the Drupal community places a strong emphasis on the security of its newer versions, community support for Drupal 7 will also come to an end with Drupal 7's end of life in January 2025.

  4. One additional consequence of failing to upgrade to Drupal 10 is that specific functionalities may no longer operate as expected. 

  5. Drupal 10 requires PHP 8.1 or later, whereas Drupal 7 is based on PHP 7, which has also reached its end of life.

Categories: FLOSS Project Planets

Glyph Lefkowitz: It’s Time For Democrats To Get More Annoying

Planet Python - Sun, 2024-11-10 23:01

Kamala Harris lost. Here we are. So it goes.

Are you sad? Are you scared?

I am very sad. I am very scared.

But, like everyone else in this position, most of all, I want to know what to do next.

A Mission For Progress

I believe that we should set up a missionary organization for progressive and liberal values.

In 2017, Kayla Chadwick wrote the now-classic article, “I Don’t Know How To Explain To You That You Should Care About Other People”. It resonated with millions of people, myself included. It expresses an exasperation with a populace that seems ignorant of economics, history, politics, and indeed unable to read the news. It is understandable to be frustrated with people who are exercising their electoral power callously and irresponsibly.

But I think in 2024, we need to reckon with the fact that we do, in fact, need to explain to a large swathe of the population that they should care about other people.

We had better figure out how to explain it soon.

Shared Values — A Basis for Hope

The first question that arises when we start considering outreach to the conservative-leaning or undecided independent population is, “are these people available to be convinced?”.

To that, I must answer an unqualified “yes”.

I know that some of you are already objecting. For those of us with an understanding of history and the mechanics of bigotry in the United States, it might initially seem like the answer is “no”.

As the Nazis came to power in the 1920s, they were campaigning openly on a platform of antisemitic violence. Everyone knew what the debate was, and it was hard to claim that you didn't; in spite of some breathtakingly cowardly contemporaneous journalism, they weren't fooling anyone.

It feels ridiculous to say this, but Hitler did not have support among Jews.

Yet, after Trump campaigned for a decade on a platform of defaming immigrants, and Mexican immigrants specifically, a large part of what drove his victory is that he enjoyed a shockingly huge surge of support among the Hispanic population. Even some undocumented migrants — the ones most likely to be herded into concentration camps starting in January — are supporting him.

I believe that this is possible because, in order to maintain support of the multi-ethnic working-class coalition that Trump has built, the Republicans must maintain plausible deniability. They have to say “we are not racist”, “we are not xenophobic”. Incredibly, his supporters even say “I don’t hate trans people” with startling regularity.

Most voters must continue to believe that hateful policies with devastating impacts are actually race-neutral, and are simply going to get rid of “bad” people. Even the ones motivated by racial resentment are mostly motivated by factually incorrect beliefs about racialized minorities receiving special treatment and resources which they are not in fact receiving.

They are victims of a disinformation machine. One that has rendered reality incomprehensible.

If you listen to conservative messaging, you can hear them referencing this all the time. Remember when JD Vance made that comment about Democrats calling Diet Mountain Dew racist?

Many publications wrote about this joke “bombing”1, but the kernel of truth within it is this: understanding structural bigotry in the United States is difficult. When we progressives talk about it, people who don’t understand it think that our explanations sound ridiculous and incoherent.

There’s a reason that the real version of critical race theory is a graduate-level philosophy-of-law course, and not a couple of catch phrases.

If, without context, someone says that “municipal zoning laws are racist”, this makes about as much sense as “Diet Mountain Dew is racist” to someone who doesn’t already know what “redlining” is.

Conservatives prey upon this confusion to their benefit. But they prey on this because they must do so. They must do so because, despite everything, hate is not actually popular among the American electorate. Even now, they have to be deceived into it.

The good news is that all we need to do is stop the deception.

Politics Matter

If I have sold you on the idea that a substantial plurality of voters are available to be persuaded, the next question is: can we persuade them? Do we, as progressives, have the resources and means to do so? We did lose, after all, and it might seem like nothing we did had much of an impact.

Let’s analyze that assumption.

Across the country, Trump’s margins increased. However, in the swing states, where Harris spent money on campaigning, his margins increased less than elsewhere. At time of writing, we project that the safe-state margin shift will be 3.55% towards Trump, and the swing-state margin shift will be 1.69%.

This margin was, sadly, too small for a victory, but it does show that the work mattered. Perhaps given more time, or more resources, it would have mattered just a little bit more, and that would have been decisive.

This is to say, in the places where campaign dollars were spent, even against the similar spending of the Trump campaign, we pushed the margin of support 1.86% higher within 107 days. So yes: campaigning matters. Which parts and how much are not straightforward, but it definitely matters.

This is a bit of a nonsensical comparison for a whole host of reasons2, but just for a ballpark figure, if we kept this pressure up continuously during the next 4 years, we could increase support for a democratic candidate by 25%.

We Can Teach, Not Sell

Political junkies tend to overestimate the knowledge of the average voter. Even when we are trying to compensate for it, we tend to vastly overestimate how much the average voter knows about politics and policy. I suspect that you, dear reader, are a political junkie even if you don’t think of yourself as one.

To give you a sense of what I mean, across the country, on Election day and the day after, there was a huge spike in interest for the Google query, “did Joe Biden drop out”.

Consistently over the last decade, democratic policies are more popular than their opponents. Even deep red states, such as Kansas, often vote for policies supported by democrats and opposed by Republicans.

This confusion about policy is not organic; it is not voters’ fault. It is because Republicans constantly lie.

All this ignorance might seem discouraging, but it presents an opportunity: people will not sign up to be persuaded, but people do like being informed. Rather than proselytizing via a hard sales pitch, it should be possible to offer to explain how policy connects to elections. And this is made so much the easier if so many of these folks already generally like our policies.

The Challenge Is Enormous

I’ve listed some reasons for optimism, but that does not mean that this will be easy.

Republicans have a tremendously powerful, decentralized media apparatus that reinforces their culture-war messaging all the time.

After some of the post-election analysis, “The Left Needs Its Own Joe Rogan” is on track to become a cliché within the week.3 While I am deeply sympathetic to that argument, the right-wing media’s success is not organic; it is funded by petrochemical billionaires.

We cannot compete via billionaire financing, and as such, we have to have a way to introduce voters to progressive and liberal media. Which means more voters need social connections to liberals and progressives.

Good Works

The democratic presidential campaign alone spent a billion and a half dollars. And, as shown above, this can be persuasive, but it’s just the persuasion itself.

Better than spending all this money on telling people what good stuff we would do for them if we were in power, we could just show them, by doing good stuff. We should live our values, not just endlessly reiterate them.

A billion dollars is a significant amount of power in its own right.

For historical precedent, consider the Black Panthers’ Free Breakfast For Children program. This program absolutely scared the shit out of the conservative power structure, to the point that Nixon’s FBI literally raided them for giving out free food to children.

Religious missionaries, who are famously annoying, often offset their annoying-ness by doing charitable work in the communities they are trying to reach. A lot of the country that we need to reach are religious people, and nominally both Christians and leftists share a concern for helping those in need, so we should find some cultural common ground there.

We can leverage that overlap in values by partnering with churches. This immediately makes such work culturally legible to many who we most need to reach.

Jobs Jobs Jobs

When I raised this idea with Philip James, he had been mulling over similar ideas for a long time, but with a slightly different tack: free career skills workshops from folks who are obviously “non-traditional” with respect to the average rural voter’s cultural expectations. Recruit trans folks, black folks, women, and non-white immigrants from our tech networks.

Run the trainings over remote video conferencing to make volunteering more accessible. Run those workshops through churches as a distribution network.

There is good evidence that this sort of prolonged contact and direct exposure to outgroups, which helps people see others as human beings, is very effective politically.

However, job skills training is by no means the only benefit we could bring. There are lots of other services we could offer remotely, particularly with the skills that we in the tech community could offer. I offer this as an initial suggestion; if you have more ideas I’d love to hear them. I think the best ideas are ones where folks can opt in, things that feel like bettering oneself rather than receiving charity; nobody likes getting handouts, particularly from the outgroup, but getting help to improve your own skills feels more participatory.

I do think that free breakfast for children, specifically, might be something to start with because people are far more willing to accept gifts to benefit others (particularly their children, or the elderly!) rather than themselves.

Take Credit

Doing good works in the community isn’t enough. We need to do visible good works. Attributable good works.

We don’t want to be assholes about it, but we do want to make sure that these benefits are clearly labeled. We do not want to attach an obligation to any charitable project, but we do want to attach something to indicate where it came from.

I don’t know what that “something” should be. The most important thing is that whatever “something” is appeals to set of partially-overlapping cultures that I am not really a part of — Midwestern, rural, southern, exurban, working class, “red state” — and thus, I would want to hear from people from those cultures about what works best.

But it’s got to be something.

Maybe it’s a little sticker, “brought to you by progressives and liberals. we care about you!”. Maybe it’s a subtle piece of consistent branding or graphic design, like a stylized blue stripe. Maybe we need to avoid the word “democrats”, or even “progressive” or “liberal”, and need some independent brand for such a thing, that is clearly tenuously connected but not directly; like the Coalition of Liberal and Leftist Helpful Neighbors or something.

Famously, when Trump sent everybody a check from the government, he put his name on it. Joe Biden did not do the same thing, and Democrats seem to think it’s a good thing that he didn’t take credit because it “wasn’t about advancing politics”, even though this obviously backfired. Republicans constantly take credit for the benefits of Democratic policies, which is one reason why voters don’t know they’re Democratic policies.

Our broad left-liberal coalition is attempting to improve people’s material conditions. Part of that is, and must be, advancing a political agenda. It’s no good if we provide job trainings and free lunches to a community if that community is just going to be reduced to ruin by economically catastrophic tariffs and mass deportations.

We cannot do this work just for the credit, but getting credit is important.

Let’s You And Me — Yes YOU — Get Started

I think this is a good idea, but I am not the right person to lead it.

For one thing, building this type of organization requires a lot of organizational and leadership skills that are not really my forte. Even the idea of filing the paperwork for a new 501(c)3 right now sounds like rolling Sisyphus’s rock up the hill to me.

For another, we need folks who are connected to this culture, in ways that I am not. I would be happy to be involved — I do have some relevant technical skills to help with infrastructure, and I could always participate in some of the job-training stuff, and I can definitely donate a bit of money to a nonprofit, but I don’t think I can be in charge.

You can definitely help too, and we will need a wide variety of skills to begin with, and it will definitely need money. Maybe you can help me figure out who should be in charge.

This project will be weaker without your support. Thus: I need to hear from you.

You can email me, or, if you’d prefer a more secure channel, feel free to reach out over Signal, where my introduction code is glyph.99 . Please start the message with “good works:” so I can easily identify conversations about this.

If I receive any interest at all, I plan to organize some form of meeting within the next 30 days to figure out concrete next steps.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. If you like what you’ve read here and you’d like to read more things like it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! My aspirations for this support are more in the directions of software development than activism, but needs must, when the devil drives. Thanks especially to Philip James for both refining the idea and helping to edit this post, and to Marley Myrianthopoulos for assistance with the data analysis.

  1. Personally I think that the perception of it “bombing” had to do with the microphones during his speech not picking up much in the way of crowd noise. It sounded to me like there were plenty of claps and laughs at the time. But even if it didn’t land with most of the audience, it definitely resonated for some of them. 

  2. A brief, non-exhaustive list of the most obvious ones:

    • This is a huge amount of money raised during a crisis with an historic level of enthusiasm among democrats. There’s no way to sustain that kind of momentum.
    • There are almost certainly diminishing returns at some point; people harbor conservative (and, specifically, bigoted) beliefs to different degrees, and the first million people will be much easier to convince than the second million, etc.
    • Support share is not fungible; different communities will look different, and some will be saturated much more quickly than others. There is no reason to expect the rate over time to be consistent, nor the rate over geography.

  3. I mostly agree with this take, and in the interest of being the change I want to see in the world, let me just share a brief list of some progressive and liberal sources of media that you might want to have a look at and start paying attention to:

    Please note that not all of these are to my taste and not all of them may be to yours. They are all at different places along the left-liberal coalition spectrum, but find some sources that you enjoy and trust, and build from there. 

Categories: FLOSS Project Planets

Seth Michael Larson: Writing a blog on the internet

Planet Python - Sun, 2024-11-10 19:00

Published 2024-11-11 by Seth Larson

Today is the 5-year anniversary of my first blog post in 2019. Since that time I've written nearly 100 articles for my blog, something that I am quite proud of! Writing has had a huge positive impact on my life and career.

I invite you, dear reader, to start writing about topics you're interested in and sharing those writings on the internet. This article is me putting my finger on the scale by sharing what I would do differently if I were to start over again.

[Chart: number of blog posts published per quarter from 2019 through 2024, annotated with life events (hired at Elastic, maintainer of urllib3, writing about security, hired at the PSF). Blue bars show which posts are my personal favorites.]

Skip the analytics

If I were to go back in time and do one thing differently about my blog, analytics would be the one.

When I first started I used Google Analytics and found myself obsessing over the dashboards after publishing an article. This wasn't healthy: most articles would do fine regardless, so all that time spent watching was wasted. I'm apparently not alone in this experience.

Seeing the relatively small numbers of readers for the first few articles (single-digits...) can discourage people from writing more. Building an audience takes a looong time and plenty of persistence. That means you'll need something else to motivate you to keep at it in the mean-time.

If you insist on having analytics: I recommend GoatCounter. GoatCounter supports a mode that removes the visitor numbers and only shows referrers. The service is free for small websites, but don't forget to support them if you can.

Create what you want!

The world is a weird place, and you can't control what becomes popular. Create what you want to create for the sake of creating and enjoy the ride!

My most popular article by an extremely wide margin is one I didn't expect: "Move or recover your Wordle stats". I created this little utility for me and my friends and didn't expect hundreds of thousands of people to use it until the New York Times shared the URL on Twitter.

My most recent viral article I wrote in ~15 minutes about an unexpected behavior in Python regular expressions that caused a bug in some of my code.

Own your work

Publishing on the internet means deciding where you will publish your work. We've seen far too many platforms either die or become completely user-hostile. To prevent this from happening to your hard work:

  • Create the original work in a format that is transformable (such as Markdown or HTML).
  • Publish that work to publicly accessible URLs that you can share.
  • Share your URLs in many ways, like RSS, email newsletter, social media, or elsewhere.

For easy-to-start publishing platforms, I recommend either GitHub Pages or Bear Blog. If you have the savvy and interest: host content on your own website. There are far too many guides to getting started with this, choose one using a technology that you're interested in.

See also: "Publish on your Own Site, Syndicate Elsewhere".

Let your authenticity shine

Please note that I am a cis white male and have not had to justify my existence or expertise in a space. Unfortunately not all of my friends can be their authentic selves online, but knowing them in real life I certainly wish that the world allowed them to be.

I always enjoy when a blog shows off the author, either through writing style, phrases, personal touches, pictures, jokes, or little pieces of life. Don't be afraid to leave in things you think about when you're writing. I try to strike the right balance between how I might speak about an idea if I were to talk in person and writing for a diverse audience.

Don't put yourself in a box

You don't have to write about the same one or two topics, forever. I am guilty of this, and I am working on writing about more than only open source and security. Recently I have started to write about video-game preservation.

Again, write about what you want to write about. Writing about something new, even if it's only once, can be very refreshing. Don't let vague feelings about what your audience "expects" to get in the way of creative expression.

Don't think you only need to write about "professional" topics or topics that have broad appeal. You can write about anything at all from rocket science to what's happening in your local community.

Start at the end

Start with the conclusion! A reader should be able to know your main ideas without a single page scroll (because almost all readers won't make it past the first few paragraphs). Check out how your draft looks on a phone to confirm this is the case.

After that first page scroll you've already pared down to the more dedicated readers so start giving them details. If you like narrative writing like me, this is a good place to start the actual story.

Keep it short

In terms of writing, you should be able to write the main points and details of an article quickly, assuming you've done your research beforehand.

Once all the main points are there, resist the urge to make an in-progress article "more grand" or "comprehensive". Instead, link out to resources that already exist or plan on writing follow-up articles later. Many smaller articles are more easily consumable for readers and more writeable for you (double win!)

Ship early instead of never

I've wasted so much time trying to "finish" blog posts. Endlessly trying to polish something into being perfect is not worth it, because it increases the chances that the work won't ever be published!

Try to be okay publishing something that isn't perfect, because your idea of "perfect" will change over time. You need to go through the "research-write-edit-publish" cycle to improve, not by endlessly editing one piece.

Hang up when you're done

Don't worry about "conclusions" or "wrapping-up" a blog post at the end. Just stop writing as soon there's no more to say. I promise almost no one reads all the way to the end (except your most loyal readers: remember they like you!)

Speaking of stopping: this is it! Thanks to everyone who has read this blog 💜

Have thoughts or questions? Let's chat over email or social:

sethmichaellarson@gmail.com
@sethmlarson@fosstodon.org

Want more articles like this one? Get notified of new posts by subscribing to the RSS feed or the email newsletter. I won't share your email or send spam, only whatever this is!

Want more content now? This blog's archive has ready-to-read articles. I also curate a list of cool URLs I find on the internet.

Find a typo? This blog is open source, pull requests are appreciated.

Thanks for reading! ♡ This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

Quansight Labs Blog: The Polars vs pandas difference nobody is talking about

Planet Python - Sun, 2024-11-10 19:00
A closer look at non-elementary group-by aggregations
Categories: FLOSS Project Planets

Python Docs Editorial Board: Meeting Minutes: Nov 11, 2024

Planet Python - Sun, 2024-11-10 19:00
Meeting Minutes from Python Docs Editorial Board: November 11, 2024
Categories: FLOSS Project Planets

Dirk Eddelbuettel: inline 0.3.20: Mostly Maintenance

Planet Debian - Sun, 2024-11-10 14:29

A new release of the inline package got to CRAN today marking the first release in three and half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release was tickled by a change in r-devel just this week, and the corresponding ‘please fix or else’ email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now-mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository.
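
To see what such a remapped call looks like in practice, here is a minimal sketch (a toy example of ours, not code from the package) using inline's cfunction() to compile a C snippet written against the Rf_-prefixed API:

    library(inline)

    ## Toy example: a C function, compiled in-line, that doubles a number.
    ## Note the Rf_ prefixes (Rf_asReal, Rf_ScalarReal), which R_NO_REMAP
    ## now mandates by default in r-devel.
    double_it <- cfunction(signature(x = "numeric"),
                           body = "return Rf_ScalarReal(2 * Rf_asReal(x));",
                           language = "C")
    double_it(21)   # returns 42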

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)
  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)

  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository

  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in October 2024

Planet Debian - Sun, 2024-11-10 13:26

Welcome to the October 2024 report from the Reproducible Builds project.

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Beyond bitwise equality for Reproducible Builds?
  2. ‘Two Ways to Trustworthy’ at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list…
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches
Beyond bitwise equality for Reproducible Builds?

Jens Dietrich, Tim White, of Victoria University of Wellington, New Zealand along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia published a paper entitled “Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds”:

The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: “Does build A confirm the integrity of build B?” or “Can build A reveal a compromised build B?”. To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.

A PDF of the paper is freely available.


‘Two Ways to Trustworthy’ at SeaGL 2024

On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant’s talk:

[…] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust.

Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.


Number of cores affected Android compiler output

Fay Stegerman wrote that the cause of the Android toolchain bug from September’s report that she reported to the Android issue tracker has been found and the bug has been fixed.

the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class’s static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.

To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.
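
As a rough illustration of the kind of pattern involved (our own sketch, not Fay's actual example), consider a class whose static initialiser is free of side effects, making a repeated field load redundant:

    class Flags {
        // Side-effect-free static initialiser: it only writes this class's own field.
        static boolean enabled = true;

        static int f() {
            int acc = 0;
            if (Flags.enabled) acc++;  // first field load
            if (Flags.enabled) acc++;  // second load: redundant if the initialiser
                                       // is known to be side-effect free
            return acc;
        }
    }

Per the description above, whether D8 detected the initialiser as side-effect free ended up depending on how the input classes were sharded across CPU cores, so the redundant load was eliminated on some machines but not others, yielding different DEX output.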


On our mailing list…

On our mailing list this month:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:

  • Ignore errors when listing .ar archives (#1085257). []
  • Don’t try and test with systemd-ukify in the Debian stable distribution. []
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). []

In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [][][] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [][]
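
For readers new to the tool, a typical invocation compares two builds of the same artifact and reports any differences (the file names below are placeholders):

    # Compare two .deb builds and write an HTML report of the differences
    diffoscope --html report.html hello_1.0-1_amd64.deb hello_1.0-1_amd64.rebuild.deb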


IzzyOnDroid passed 25% reproducible apps

The IzzyOnDroid project has reached a notable milestone: over 25% of the ~1,200 Android apps provided by their repository (official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.


Distribution work

In Debian this month:

  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild’s unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results []

  • Recently, a news entry was added to snapshot.debian.org’s homepage, describing the recent changes that made the system stable again:

    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it’s time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VM’s at LeaseWeb.

    The entry list a number of specific updates surrounding the API endpoints and rate limiting.

  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month adding to our knowledge about identified issues.

Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage.

Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.


Website updates

There were an enormous number of improvements made to our website this month, including:

  • Alba Herrerias:

    • Improve consistency across distribution-specific guides. []
    • Fix a number of links on the Contribute page. []
  • Chris Lamb:

  • hulkoba

  • James Addison:

    • Huge and significant work on a (as-yet-unmerged) quickstart guide to be linked from the homepage [][][][][]
    • On the homepage, link directly to the Projects subpage. []
    • Relocate “dependency-drift” notes to the Volatile inputs page. []
  • Ninette Adhikari:

    • Add a brand new ‘Success stories’ page that “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds”. [][][][][][]
  • Pol Dellaiera:

    • Update the website’s README page for building the website under NixOS. [][][][][]
    • Add a new academic paper citation. []

Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:

  • Add a basic index.html for rebuilderd. []
  • Update the nginx.conf configuration file for rebuilderd. []
  • Document how to use a rescue system for Infomaniak’s OpenStack cloud. []
  • Update usage info for two particular nodes. []
  • Fix up a version skew check to fix the name of the riscv64 architecture. []
  • Update the rebuilderd-related TODO. []

In addition, Mattia Rizzolo added a new IP address for the inos5 node [] and Vagrant Cascadian brought 4 virt nodes back online [].


Supply-chain security at Open Source Summit EU

The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Finally, If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Categories: FLOSS Project Planets
