Feeds

Family of free and friendly open source software

Planet KDE - Wed, 2024-05-01 20:00

Or FOFAFOSS. Rolls right off your tongue.

Like in many families, there's always a bit of… turmoil and drama in FOSS. Something breaks (on purpose or by accident), people get frustrated… The usual. It's kind of to be expected with very social projects like, say, Linux desktop environments.

People have their own visions and ways of seeing things. They often clash. That's normal. It's quite human. I do like to think that Linux desktop environments especially are like siblings with rivalries.

But we've got to remember that it's not "us vs. them" here. We don't have the resources to fight each other. We need to work together, even with our sometimes incompatible visions. Otherwise things will keep fracturing and get even worse… and nobody gains from that. Well, except the proprietary platforms. :P

In the end we all want to make good software for everyone to use and enjoy. Let's help each other to do that as well as we can.

There's nothing wrong with "Oh, you made that? Well, watch this!" friendly rivalry, however; it keeps us doing what we do best. :)

I just wanted to write this down as some kind of reminder that we've got to work together if we want to succeed. Even when frustrated.

And no, I'm not some high-and-mighty person who can really say this; I have had my own share of frustrations and quips. This post also serves as a reminder for myself.

So let's try to work together as well as we can.

Categories: FLOSS Project Planets

Seth Michael Larson: Isolating risk in the CPython release process

Planet Python - Wed, 2024-05-01 20:00

Published 2024-05-02 by Seth Larson

This critical role would not be possible without funding from the Alpha-Omega project. Massive thank-you to Alpha-Omega for investing in the security of the Python ecosystem!

The first stage of the CPython release process produces source and docs artifacts. In terms of "supply chain integrity", the source artifacts are the most important output of this process. These tarballs are what propagate down into containers, pyenv, and operating system distributions, so reducing the risk that these artifacts are modified in flight is critical.
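To make the stakes concrete, here is a minimal sketch of how a downstream consumer might check a downloaded tarball against the published checksum before building on it; the version number is chosen purely for illustration and the reference digest must come from python.org:

# Download a release tarball (version hypothetical, for illustration only)
curl -LO https://www.python.org/ftp/python/3.12.3/Python-3.12.3.tgz
# Compute the digest and compare it with the checksum published on python.org
sha256sum Python-3.12.3.tgz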

A few weeks ago I wrote that CPython's release process for source and docs artifacts had moved from developers' machines onto GitHub Actions, which provides an isolated build environment.

This already reduces the risk of artifacts being accidentally or maliciously modified during the release process. Before, the build and release process used a single script which built the software from source, built the docs, and then ran tests, all in the same job. This was totally fine on a developer's machine, where no isolation between stages is possible anyway.

[Diagram: build dependencies, docs dependencies, source code, source build, docs build, and testing stages, shown before and after being split into separate jobs]
Before and after splitting up build stages

With GitHub Actions we can isolate each stage from the others and remove the need to install the dependencies for every job into the same stage. This drastically reduces the number of dependencies (each representing a small amount of risk) for the stages that are critical to the supply chain security of CPython, specifically the building of source artifacts.

Above, the left side shows the previous process, which pulls all dependencies into the same job (represented as a gray box); the right side shows the new process, with the source and docs builds and the testing stages split apart.

After this split, the "Build Source" job only needs ~170 dependencies instead of over 800 (mostly for documentation, LaTeX, and PDFs), and all of those dependencies either come with the operating system, and thus can't be reduced further, or are pinned in a lock file.

The testing stage still has access to the source artifacts, but only after they've been uploaded to GitHub Actions artifacts, so it isn't able to modify them.
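The pattern looks roughly like the workflow sketch below. This is not CPython's actual workflow; the job names, scripts, and paths are hypothetical, but the upload-artifact/download-artifact hand-off is the mechanism that keeps the test job from touching what the build job produced:

on: push
jobs:
  build-source:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical build script standing in for the real release tooling
      - run: ./build-source.sh
      - uses: actions/upload-artifact@v4
        with:
          name: source-artifacts
          path: dist/
  test:
    needs: build-source
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: source-artifacts
      # Tests run against a downloaded copy; the stored artifacts
      # can no longer be modified by this job.
      - run: ./run-tests.sh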

I plan to write a separate in-depth article about dependencies, pinning, and related topics; stay tuned for that.

SOSS Community Day 2024 recordings

The recordings for my talk and the OpenSSF tabletop session have been published to YouTube:

Embrace the Differences: Securing open source software ecosystems where they are

In the talk I discuss the technical and also social aspects of why it's sometimes difficult to adopt security changes into an open source ecosystem. Ecosystem-agnostic work (think memory safety, provenance, reproducible builds, vulnerabilities) tends to operate at a much higher level than the individual ecosystems where the work ends up being applied.

OpenSSF Tabletop Session

The tabletop session had many contributors representing all the different aspects of discovering, debugging, disclosing, fixing, and patching a zero-day vulnerability in an open source component that's affecting production systems.

Tabletop Session moderated by Dana Wang

Mentoring for Google Summer of Code

Google Summer of Code 2024 recently published its accepted projects and contributors, and among them was CPython's project for adopting the Hardened Compiler Options Guide for C/C++. I'm mentoring the contributor through the process of contributing to CPython and, hopefully, successfully adopting hardened compiler options.

Other items

That's all for this week! 👋 If you're interested in more you can read last week's report.

Thanks for reading! ♡ Did you find this article helpful and want more content like it? Get notified of new posts by subscribing to the RSS feed or the email newsletter.

This work is licensed under CC BY-SA 4.0

Categories: FLOSS Project Planets

KDE & Google Summer of Code 2024

Planet KDE - Wed, 2024-05-01 20:00

KDE will mentor ten projects in Google Summer of Code (GSoC) this year. GSoC is a program in which contributors new to open source spend between 175 and 350 hours working on an open source project.

Projects KDE Connect

ShellWen Chen will work on updating the SSHD library in the KDE Connect Android app, which will improve the application's security and stability. Albert Vaca Cintora will mentor this project.

Labplot

LabPlot is a Data Visualization and Analysis platform. This summer, Kuntal Bar will work on adding 3D plotting support to cater to the evolving demands of scientific research. Israel Galadima will work on Python wrappers around the LabPlot C++ API. Alexander Semke will mentor both projects.

Arianna

Arianna is KDE's Epub viewer, and Ajay will work on porting the Javascript frontend from epub.js to Foliate.js, the library that powers Foliate. Carl Schwan will mentor this project.

Frameworks

Manuel Alcaraz will work on adding support for Qt for Python to some of the KDE Frameworks, enabling the large Python ecosystem to use them. Carl Schwan will also mentor this project.

Okular

Pratham Gandhi will work on improving Okular's support for Javascript forms. This is particularly important because Javascript-powered forms are used frequently in PDFs provided by local governments, and while Okular already partially supports them, many functions are not implemented. Albert Astals Cid will mentor this project.

Snaps

Soumyadeep Ghosh will work under Scarlett's direction to integrate the Snap ecosystem more closely with KDE. This includes fixing Discover's Snap integration and adding a Snap KCM to change permissions from Plasma System Settings.

Krita

Ken Lo will work under the supervision of Tiar and Emmet O'Neill on improving the pixel art workflow by adding the Pixel Perfect option to smooth out pixel art curves.

KDE Games

João Gouveia will implement the backend for a variant of the Mancala game as well as a solver under the supervision of Benson Muite and Harsh Kumar.

Kdenlive

Chengkun Chen will work on improving support for subtitles in Kdenlive. More specifically, he will add to Kdenlive full support for the Sub Station Alpha v4.00+ format, which contains style information. Jean-Baptiste Mardelle will mentor this project.

Categories: FLOSS Project Planets

Mario Hernandez: Integrating Drupal with Storybook components

Planet Drupal - Wed, 2024-05-01 20:00

Hey you're back! 🙂 In the previous post we talked about how to build a custom Drupal theme using Storybook as the design system. We also built a simple component to demonstrate how Storybook, using custom extensions, can understand Twig. In this post, the focus will be on making Drupal aware of those components by connecting Drupal to Storybook.
If you are following along, we will continue where we left off to take advantage of all the prep work we did in the previous post. Topics we will cover in this post include:

  1. What is Drupal integration
  2. Installing and preparing Drupal for integration
  3. Building components in Storybook
  4. Building a basic front-end workflow
  5. Integrating Drupal with Storybook components
What is Drupal integration?

In the context of Drupal development using the component-driven methodology, Drupal integration means connecting Drupal presenter templates, such as node.html.twig, block.html.twig, paragraph.html.twig, etc., to Storybook by mapping Drupal fields to component fields in Storybook. This in turn allows your Drupal content to be rendered wrapped in the Storybook components.

The advantage of using a design system like Storybook is that you are in full control of the markup when building components; as a result, your website is more semantic, accessible, and easier to maintain.
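At its simplest, that mapping is a Twig include in the presenter template that hands Drupal variables to a component. Here is a minimal sketch using the card component we build below; the full version is developed step by step later in this post:

{# Minimal sketch: map node fields to a card component's fields #}
{% include '@molecules/card/card.twig' with {
  'title': label,
  'teaser': content.body,
} only %}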

Building more components in Storybook

The title component we built in the previous post may not be enough to demonstrate some of the advanced techniques when integrating components. We will build a larger component to put these techniques in practice. The component we will build is called Card and it looks like this:

When building components, I like to take inventory of the different parts that make up the components I'm building. The card image above shows three parts: An image, a title, and teaser text. Each of these parts translates into fields when I am defining the data structure for the component or building the entity in Drupal.

Building the Card component
  • Open the Drupal site in your code editor and navigate to the storybook theme (web/themes/custom/storybook)
  • Create two new directories inside components called 01-atoms and 02-molecules
  • Inside 02-molecules create a new directory called card
  • Inside the card directory add the following four files:
    • card.css: component's styles
    • card.twig: component's markup and logic
    • card.stories.jsx: Storybook's story
    • card.yml: component's demo data
  • Add the following code snippet to card.yml:
---
modifier: ''
image: <img src="https://source.unsplash.com/cHRDevKFDBw/640x360" alt="Palm trees near city buildings" />
title:
  level: 2
  modifier: ''
  text: 'Tours & Experiences'
  url: 'https://mariohernandez.io'
teaser: 'Step inside for a tour. We offer a variety of tours and experiences to explore the building’s architecture, take you backstage, and uncover the best food and drink. Tours are offered in different languages and for different levels of mobility.'
  • Add the following to card.twig to provide the markup and logic for the card:
{{ attach_library('storybook/card') }}

<article class="card{{ modifier ? ' ' ~ modifier }}{{- attributes ? ' ' ~ attributes.class -}}"
  {{- attributes ? attributes|without('class') -}}>
  {% if image %}
    <div class="card__image">
      <figure>
        {{ image }}
      </figure>
    </div>
  {% endif %}
  <div class="card__content">
    {% if title %}
      {% include "@atoms/title/title.twig" with {
        'level': title.level,
        'modifier': title.modifier,
        'text': title.text,
        'url': title.url,
      } only %}
    {% endif %}
    {% if teaser %}
      <p class="card__teaser">{{ teaser }}</p>
    {% endif %}
  </div>
</article>
  • Copy and paste these styles into card.css.

  • Finally, let's create the Storybook card story by adding the following to card.stories.jsx:

import parse from 'html-react-parser';
import card from './card.twig';
import data from './card.yml';
import './card.css';

const component = {
  title: 'Molecules/Card',
};

export const Card = {
  render: (args) => parse(card(args)),
  args: { ...data },
};

export default component;

Let's go over a few things regarding the code above:

  • The data structure in card.yml reflects the data structure and type we will use in Drupal.
    • The image field uses the entire <img> element rather than just using the image src and alt attributes. The reason for this is so when we get to Drupal, we can use Drupal's full image entity. This is a good practice for caching purposes.
  • card.twig reuses the title component we created in the previous post. Rather than build a title from scratch for the Card and repeat the code we already wrote, reusing the existing components keeps us DRY.
  • card.stories.jsx is the Storybook story for the Card. Notice how the code in this file is very similar to the code in title.stories.jsx. Even with complex components, when we port them into Storybook as stories, most of the time the code will look like what you see above, because Storybook is simply parsing whatever is in the .twig and .yml files. There are exceptions where the React code may have extra parameters or logic, which typically happens when we're building story variations. Maybe a topic for a different blog post. 😉
Before we preview the Card, some updates are needed

You may have noticed that in card.twig we used the namespace @atoms when nesting the title component. This namespace does not exist yet, and we need to create it now. In addition, we need to move the title component into the 01-atoms directory:

  • In your code editor or command line (whichever is easier), move the title directory into the 01-atoms directory
  • In your editor, open title.stories.jsx and change the line
    title: 'Components/Title' to title: 'Atoms/Title'. This will display the title component within the Atoms category in Storybook's sidebar.
  • Rather than have you make individual changes to vite.config.js, let's replace/overwrite all its content with the following:
/* eslint-disable */
import { defineConfig } from 'vite'
import yml from '@modyfi/vite-plugin-yaml';
import twig from 'vite-plugin-twig-drupal';
import { join } from 'node:path'

export default defineConfig({
  root: 'src',
  publicDir: 'public',
  build: {
    emptyOutDir: true,
    outDir: '../dist',
    rollupOptions: {
      input: {
        'reset': './src/css/reset.css',
        'styles': './src/css/styles.css',
        'card': './src/components/02-molecules/card/card.css',
      },
      output: {
        assetFileNames: 'css/[name].css',
      },
    },
    sourcemap: true,
  },
  plugins: [
    twig({
      namespaces: {
        atoms: join(__dirname, './src/components/01-atoms'),
        molecules: join(__dirname, './src/components/02-molecules'),
      },
    }),
    // Allows Storybook to read data from YAML files.
    yml(),
  ],
})

Let's go over some of the most noticeable updates inside vite.config.js:

  • We have defined a few things to improve the functionality of our Vite project, starting with using src as our app root directory and public for publicDir. This helps the app understand the project structure in a relative manner.

  • Next, we defined a build task which provides the app with defaults for things like where it should compile code to (i.e. /dist), and rollupOptions for instructing the app which stylesheets to compile and what to call them.

  • As part of the rollupOptions we also defined two stylesheets for global styles (reset.css and styles.css). We'll create these next.

    Important This is as basic as it gets for a build workflow and in no way would I recommend this be your front-end build workflow. When working on bigger projects with more components, it is best to define a more robust and dynamic workflow that provides automation for all the repetitive tasks performed on a typical front-end project.
  • Under the Plugins section, we have defined two new namespaces, @atoms and @molecules, each of which points to a specific path within our components directory. These are the namespaces Storybook understands when nesting components. You can have as many namespaces as needed.

Adding global styles
  • Inside storybook/src, create a new directory called css
  • Inside the css directory, add two new files, reset.css and styles.css
  • Here are the styles for reset.css and styles.css. Please copy them and paste them into each of the stylesheets.
  • Now for Storybook to use reset.css and styles.css, we need to update /.storybook/preview.js by adding these two imports directly after the current imports, around line 4.
import '../dist/css/reset.css';
import '../dist/css/styles.css';

Previewing the Card in Storybook

Remember, you need NodeJS v20 or higher as well as NVM installed on your machine.

  • In your command line, navigate to the storybook directory and run:
nvm install
npm install
npm run build
npm run storybook

A quick note about the commands above:

  • nvm install and npm install are typically only run once in your app. These commands will first install and use the node version specified in .nvmrc, and will then install all the required node packages found in package.json. If you happen to be working on another project that uses a different version of node, when you come back to the Storybook project you will need to run nvm use in order to resume using the right node version.
  • npm run build is usually only run when you have made configuration changes to the project or are introducing new files.
  • npm run storybook is the command you will use all the time when you want to run Storybook.

After Storybook launches, you should see two story categories in Storybook's sidebar, Atoms and Molecules. The title component should be under Atoms and the Card under Molecules. See below:

Installing Drupal and setting up the Storybook theme

We have completed all the prep work in Storybook, and our attention now turns to Drupal. In the previous post, all the work we did was in a standalone project which did not require Drupal to run. In this post, we need a Drupal site to be able to do the integration with Storybook. If you are following along and already have a Drupal 10 site ready, you can skip the first step below.

  1. Build a basic Drupal 10 website (I recommend using DDEV).
  2. Add the storybook theme to your website. If you completed the exercise in the previous post, you can copy the theme you built into your site's /themes/custom/ directory. Otherwise, you can clone the previous post repo into the same location so it becomes your theme. After this, your theme's path should be themes/custom/storybook.
  3. No need to enable the theme just yet, we'll come back to the theme shortly.
  4. Finally, create a new Article post that includes a title, body content and an image. We'll use this article later in the process.
Creating Drupal namespaces and adding Libraries

Earlier we created namespaces for Storybook; now we will do the same, but this time for Drupal. It is best if the namespace names in Storybook and Drupal match, for consistency. In addition, we will create Drupal libraries to allow Drupal to use the CSS we've written.

  • Install and enable the Components module
  • Add the following namespaces at the end of storybook.info.yml (mind your indentation):
components:
  namespaces:
    atoms: src/components/01-atoms
    molecules: src/components/02-molecules
  • Replace all content in storybook.libraries.yml with the following:
global:
  version: VERSION
  css:
    base:
      dist/css/reset.css: {}
      dist/css/styles.css: {}
card:
  css:
    component:
      dist/css/card.css: {}
  • Let's go over the changes to both the storybook.info.yml and storybook.libraries.yml files:

    • Using the Components module we created two namespaces: @atoms and @molecules. Each namespace is associated with a specific path to the corresponding components. This is important because, by default, Drupal only looks for Twig templates inside the /templates directory; without the Components module and the namespaces, it would not know to look for our components' Twig templates inside the components directory.
    • Then we created two Drupal libraries: global and card. The global library includes two CSS stylesheets (reset.css and styles.css), which handle base styles in our theme. The card library includes the styles we wrote for the Card component. If you noticed, when we created the Card component, the first line inside card.twig is a Twig attach_library statement; basically, card.twig expects a Drupal library called card.
Turn Twig debugging on

All the pieces are in place to integrate the Card component so Drupal can use it to render article nodes when viewed in teaser view mode.

  • The first thing we need to do to begin the integration process is determine which Twig template Drupal uses to render article nodes in teaser view mode. One easy way to do this is by turning Twig debugging on. This used to be a complex configuration, but starting with Drupal 10.1 you can do it directly in Drupal's UI:

    • While logged in with admin access, navigate to /admin/config/development/settings on your browser. This will bring up the Development settings page.
    • Check all the boxes on this page and click Save settings. This will enable Twig debugging and disable caching.
    • Now navigate to /admin/config/development/performance so we can turn CSS and JS aggregation off.
    • Under Bandwidth optimization, clear the two boxes for CSS and Javascript aggregation, then click Save configuration.
    • Lastly, click the Clear all caches button. This will ensure any CSS or JS we write will be available without having to clear caches.
  • With Twig debugging on, go to the homepage where the Article we created should be displayed in teaser mode. If you right-click on any part of the article and select Inspect from the context menu, you will see in detail all the templates Drupal is using to render the content on the current page. See the example below.

    Note I am using a new basic Drupal site with Olivero as the default theme. If your homepage does not display Article nodes in teaser view mode, you could create a simple Drupal view to list Article nodes in teaser view mode to follow along.

In the example above, we see a list of templates that start with node...*. These are called template suggestions and are the names Drupal suggests we can assign to our custom templates. The higher a template appears on the list, the more specific it is to the piece of content being rendered. For example, changes made to node.html.twig would affect ALL nodes throughout the site, whereas changes made to node--1--teaser.html.twig will only affect the first node created on the site, and only when it's viewed in teaser view mode.

Notice I marked the template name Drupal is using to render the Article node. We know this is the template because it has an X before the template name.

In addition, I also marked the template path. As you can see the current template is located in core/themes/olivero/templates/content/node--teaser.html.twig.

And finally, I marked examples of attributes Drupal is injecting in the markup. These attributes may not always be useful but it is a good practice to ensure they are available even when we are writing custom markup for our components.

Create a template suggestion

By looking at the path of the template in the code inspector, we can see that the original template being used is located inside the Olivero core theme. The debugging screenshot above shows a pretty extensive list of template suggestions, and based on our requirements, copying the file node--teaser.html.twig makes sense since we are going to be working with a node in teaser view mode.

  • Copy /core/themes/olivero/templates/content/node--teaser.html.twig into your theme's /storybook/templates/content/. Create the directory if it does not exist.
  • Now rename the newly copied template to node--article--teaser.html.twig.
  • Clear Drupal's cache since we are introducing a new Twig template.

As you can see, by renaming the template node--article--teaser (one of the names listed as suggestions), we are indicating that any changes we make to this template will only affect nodes of type Article displayed in teaser view mode. So whenever an Article node is displayed in teaser view mode, it will use the Card component to render it.

The template has a lot of information that may or may not be needed when integrating it with Storybook. If you recall, the Card component we built was made up of three parts: an image, a title, and teaser text. Each of those is a Drupal field, and these are the only fields we care about when integrating. Whenever I copy a template from Drupal core or a module into my theme, I like to keep the comments in the template untouched. This is helpful in case I need to reference any variables or elements of the template.

The actual integration ...Finally
  1. Delete everything from the newly copied template except the comments and the classes array variable
  2. At the bottom of what is left in the template add the following code snippet:
{% set render_content = content|render %}

{% set article_title = {
  'level': 2,
  'modifier': 'card__title',
  'text': label,
  'url': url,
} %}

{% include '@molecules/card/card.twig' with {
  'attributes': attributes.addClass(classes),
  'image': content.field_image,
  'title': article_title,
  'teaser': content.body,
} only %}
  • We set a variable with content|render as its value. The only purpose for this variable is to make Drupal aware of the entire content array for caching purposes. More info here.
  • Next, we set up a variable called article_title, which we structured the same way as the data inside card.yml. Having similar data structures between Drupal and our components provides many advantages during the integration process.
    • Notice how for the text and url properties we are using Drupal-specific variables (label and url). If you look at the comments in node--article--teaser.html.twig, you will see these two variables.
  • We are using a Twig include statement with the @molecules namespace to nest the Card component into the node template. The same way we nested the Title component into the Card.
  • We mapped Drupal's attributes into the component's attributes placeholder so Drupal can inject any attributes such as CSS classes, IDs, Data attributes, etc. into the component.
  • Finally, we mapped the image, title and teaser fields from Drupal to the component's equivalent fields.
  • Save the changes to the template and clear Drupal's cache.
Enable the Storybook theme

Before we forget, let's enable the Storybook theme and also make it the default theme; otherwise, all the work we are doing will not be visible, since we are currently using Olivero as the default theme. Clear caches after this is done.

Previewing the Article node as a Card

Integration is done and we switched our default theme to Storybook. After clearing caches, if you reload the homepage you should be able to see the Article node you wrote, but this time displayed as a card. See below:

  • If you right-click on the article and select Inspect, you will notice the following:
    • Drupal is now using node--article--teaser.html.twig. This is the template we created.
    • The template path is now themes/custom/storybook/src/templates/content/.
    • You will also notice that the article is using the custom markup we wrote for the Card component, which is more semantic and accessible. In addition, the <article> tag is inheriting several other attributes that were provided by Drupal through its Attributes variable. See below:

If your card's image size or aspect ratio does not look like the one in Storybook, this is probably due to the image style being used in the Article Teaser view mode. You can address this by:

  • Going to the Manage display tab of the Article's Teaser view mode (/admin/structure/types/manage/article/display/teaser).
  • Changing the image style of the Image field to one that may work better for your image.
  • Previewing the article again on the homepage to see if it looks better.
In closing

This is only a small example of how to build a simple component in Storybook using Twig and then integrate it with Drupal so content is rendered in a more semantic and accessible manner. There are many more advantages to implementing a system like this. I hope this was helpful and that you see the potential of a component-driven environment using Storybook. Thanks for visiting.

Download the code

For a full copy of the code base, which includes the work in this and the previous post, clone or download the repo and switch to the card branch. The main branch only includes the previous post's code.


Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppInt64 0.0.5 on CRAN: Minor Maintenance

Planet Debian - Wed, 2024-05-01 19:44

The new-ish package RcppInt64 (announced last fall in this post, with three small updates following) arrived on CRAN yesterday as release 0.0.5. RcppInt64 collects some of the previous conversions between 64-bit integer values in R and C++ and regroups them in a single package. It offers two interfaces: a more standard as<>() converter from R values along with its companion wrap() to return values to R, as well as more dedicated 'from' and 'to' functions.
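For readers who have not used the package, usage looks roughly like the following sketch via Rcpp attributes. The helper names below are assumed from the 'from' and 'to' description above, so check the package documentation for the exact API:

// [[Rcpp::depends(RcppInt64)]]
#include <RcppInt64>

// Sketch: receive an integer64 vector from R, increment each element
// as a true int64_t, and hand the result back as integer64.
// [[Rcpp::export]]
Rcpp::NumericVector plusOne(Rcpp::NumericVector vec) {
    std::vector<int64_t> v = Rcpp::fromInteger64(vec);  // assumed 'from' helper
    for (auto& x : v) x += 1;
    return Rcpp::toInteger64(v);                        // assumed 'to' helper
}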

This release addresses a new nag from CRAN, who no longer want us to use the 'non-API' header function SET_S4_OBJECT, so a small change was made.

The brief NEWS entry follows:

Changes in version 0.0.5 (2024-04-30)
  • Minor refactoring of internal code to not rely on SET_S4_OBJECT.

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Kate & Icons

Planet KDE - Wed, 2024-05-01 18:36
How it shall look… #

Linux & BSDs #

Windows #

macOS #

State on Fedora 40 Workstation & XFCE Spin… #

Screenshots taken from the GNOME bugtracker, copied here so as not to stall their GitLab instance.

I think that is rather unpleasant, and e.g. the left icon-only border is just an unusable insult.

Why? The Adwaita Icon Theme no longer follows the FDO icon naming spec #

There was no information that they wanted to break away from the icon naming 'the world' assumes (given there is a spec). And now we have that state for our users there, at least on these spins.

That is not that nice, we did spend a lot of work to get our applications working cross-desktop and even cross-platform and now that…

I feel rather infuriated, finding this before going to sleep, even more after reading the feedback in the GNOME bugtracker and seeing that this was just closed as 'so be it'.

They have now at least added a hint to the README:

Private UI icon set for GNOME core apps.

Ok, I assume that is then all fine.

No, it is not.

Then please don’t install it as FDO icon theme and break all FOSS apps that rely on the naming spec…

If you care about non-'GNOME core apps' working properly by default on distributions like that, please either get them to fix it (hints are given in the linked issue) or get the distributions to install a compliant theme.

We can plan to work around this mess in the future on our side, but that will not un-break the application versions that have already shipped to our users, nor the non-KDE-Frameworks-based software that will just run into the same issues.

Feedback #

You can provide feedback on the matching KDE Social, reddit or Hacker News post.

Categories: FLOSS Project Planets

Bits from Debian: Debian welcomes the 2024 GSOC contributors/students

Planet Debian - Wed, 2024-05-01 17:56

We are very excited to announce that Debian has selected seven contributors to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is the list of the projects, students, and details of the tasks to be performed.

Project: Android SDK Tools in Debian

  • Student: anuragxone

Deliverables of the project: Make the entire Android toolchain, Android Target Platform Framework, and SDK tools available in the Debian archives.

Project: Benchmarking Parallel Performance of Numerical MPI Packages

  • Student: Nikolaos

Deliverables of the project: Deliver an automated method for Debian maintainers to test selected numerical Debian packages for their parallel performance in clusters, in particular to catch performance regressions from updates, and to verify expected performance gains, such as those from Amdahl's and Gustafson's laws, from increased cluster resources.

Project: Debian MobCom

  • Student: Nathan D

Deliverables of the project: Update the outdated mobile packages and recreate aged packages due to new dependencies. Bring in more mobile communication tools by adding about 5 new packages.

Project: Improve support of the Rust coreutils in Debian

  • Student: Sreehari Prasad TM

Deliverables of the project: Make uutils behave more like GNU's coreutils by improving compatibility with the GNU coreutils test suite.

Project: Improve support of the Rust findutils in Debian

  • Student: hanbings

Deliverables of the project: A safer and more performant implementation of the GNU suite's xargs, find, locate and updatedb tools in Rust.

Project: Expanding ROCm support within Debian and derivatives

  • Student: xuantengh

Deliverables of the project: Building, packaging, and uploading missing ROCm software into Debian repositories, starting with simple tools and progressing to high-level applications like PyTorch, with the final deliverables comprising a series of ROCm packages meeting community quality assurance standards.

Project: procps: Development of System Monitoring, Statistics and Information Tools in Rust

  • Student: Krysztal Huang

Deliverables of the project: Improve the usability of the entire Rust-based implementation of the procps utility on Linux.

Congratulations and welcome to all the contributors!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor contributors and outreach tasks.

Join us and help extend Debian! You can follow the contributors' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

Categories: FLOSS Project Planets

Drupal.org blog: Best Drupalcon Portland 2024 sessions to learn Drupal for the first time

Planet Drupal - Wed, 2024-05-01 17:26

I have gone through all the Drupalcon sessions in Portland and selected those that I think are perfect for someone learning Drupal. Here is the result.

Did I miss any that you think it should be highlighted here? Please let me know 😊.

Have fun, learn and meet the community

Trivia night
https://events.drupal.org/portland2024/session/trivia-night
When: Thursday, May 9, 2024 - 18:30 to 21:05
Why: General culture about Drupal

Birds of a Feather
https://events.drupal.org/portland2024/session/birds-feather 
When: Monday, May 6, 2024 - 08:00 to 17:00
Why: Learn and interact with the discussions

Learn how Drupal is used in the real world 

Harvard College: Don't Call it a Redesign
https://events.drupal.org/portland2024/session/harvard-college-dont-call-it-redesign 
When: Thursday, May 9, 2024 - 14:20 to 14:45
Why: Learn about current trends and ongoing work from real agencies in the real world

Creating Nimble Drupal Systems for Government: Transforming MN’s Dept of Health in 6 Months
https://events.drupal.org/portland2024/session/creating-nimble-drupal-systems-government-transforming-mns-dept-health-6 
When: Thursday, May 9, 2024 - 13:10 to 13:45
Why: Learn about current trends and ongoing work from real agencies in the real world

How Los Angeles Department of Water and Power Revolutionized the Web Utility Experience
https://events.drupal.org/portland2024/session/how-los-angeles-department-water-and-power-revolutionized-web-utility 
When: Thursday, May 9, 2024 - 11:30 to 11:45
Why: Learn about current trends and ongoing work from real agencies in the real world

How Drupal Rescued Georgia Tech’s International Students During and Post-Pandemic
https://events.drupal.org/portland2024/session/how-drupal-rescued-georgia-techs-international-students-during-and-post
When: Thursday, May 9, 2024 - 11:30 to 11:45
Why: Learn about current trends and ongoing work from real agencies in the real world

How Acquia and Drupal Power Robust, Modern, and Secure Digital Experiences
https://events.drupal.org/portland2024/session/how-acquia-and-drupal-power-robust-modern-and-secure-digital-experiences
When: Thursday, May 9, 2024 - 11:00 to 11:25
Why: Learn about current trends and ongoing work from real agencies in the real world

From Many to One: Migrating 70+ Disparate Local Government Websites onto a Cohesive Drupal Platform
https://events.drupal.org/portland2024/session/many-one-migrating-70-disparate-local-government-websites-cohesive-drupal 
When: Monday, May 6, 2024 - 13:30 to 14:20
Why: Learn about Drupal’s multisite capabilities

Get trained

TRAINING | Debug Academy
https://events.drupal.org/portland2024/session/training-debug-academy 
When: Thursday, May 9, 2024 - 09:00 to 16:00
Why: Introduction to Building Sites with Drupal

TRAINING | Evolving Web
https://events.drupal.org/portland2024/session/training-evolving-web 
When: Thursday, May 9, 2024 - 09:00 to 16:00
Why: Drupal Theming with SDC and TailwindCSS

First-time contributor workshop
https://events.drupal.org/portland2024/session/first-time-contributor-workshop 
When: Wednesday, May 8, 2024 - 10:30 to 17:00
Why: Learn to give something back while you learn something new

General Contribution
https://events.drupal.org/portland2024/session/general-contribution 
When: Monday, May 6, 2024 - 16:10 to 17:00
Why: It’s not necessarily a place to get trained, but a place where you can start contributing while volunteers help you learn how to do it.

Learn about Drupal capabilities

Access Control Strategies for Enterprise Drupal Websites
https://events.drupal.org/portland2024/session/access-control-strategies-enterprise-drupal-websites 
When: Tuesday, May 7, 2024 - 16:10 to 17:00
Why: Learn how the powerful Drupal access control works

Using Layout Builder: Practical Advice from the Field
https://events.drupal.org/portland2024/session/using-layout-builder-practical-advice-field 
When: Tuesday, May 7, 2024 - 15:00 to 15:50
Why: Learn about the powerful Layout Builder

Protecting your site with Automatic Updates
https://events.drupal.org/portland2024/session/protecting-your-site-automatic-updates 
When: Tuesday, May 7, 2024 - 15:00 to 15:50
Why: Learn to stay secure

Secure, Performant, Scalable and Green: The big wins of a static Drupal website
https://events.drupal.org/portland2024/session/secure-performant-scalable-and-green-big-wins-static-drupal-website 
When: Tuesday, May 7, 2024 - 15:00 to 15:50
Why: Learn to build static websites while leveraging the power of Drupal 

Unleashing the power of ECA: No-code coding for ambitious site builders
https://events.drupal.org/portland2024/session/unleashing-power-eca-no-code-coding-ambitious-site-builders 
When: Tuesday, May 7, 2024 - 13:50 to 14:40
Why: Learn some low code capabilities in Drupal

The Price of Silence: The Hidden Costs of Withholding Feedback on Teams
https://events.drupal.org/portland2024/session/price-silence-hidden-costs-withholding-feedback-teams 
When: Tuesday, May 7, 2024 - 16:10 to 17:00
Why: Because teamwork is the name of the game

Getting started using Personalization
https://events.drupal.org/portland2024/session/getting-started-using-personalization
When: Tuesday, May 7, 2024 - 11:30 to 12:20
Why: I personally believe that personalisation is the next big thing in Drupal and the web

Navigation changes in Drupal’s Admin UI
https://events.drupal.org/portland2024/session/navigation-changes-drupals-admin-ui 
When: Monday, May 6, 2024 - 15:00 to 15:50
Why: Learn about the new navigation interface 

Drupal's next leap: configuration validation — it's here!
https://events.drupal.org/portland2024/session/drupals-next-leap-configuration-validation-its-here 
When: Monday, May 6, 2024 - 15:00 to 15:50
Why: Configuration is a powerful but complex topic in Drupal worth exploring

Lightning Talk: 5 new free things you get from CKEditor 5 Plugin Pack
https://events.drupal.org/portland2024/session/lightening-talk-5-new-free-things-you-get-ckeditor-5-plugin-pack 
When: Monday, May 6, 2024 - 13:05 to 13:15
Why: Learn more about the editor at the core of the Drupal editorial experience

Mastering Consistency: Expanding content across multiple sites and touch points
https://events.drupal.org/portland2024/session/mastering-consistency-expanding-content-across-multiple-sites-and-touch-points 
When: Monday, May 6, 2024 - 09:00 to 09:50
Why: Learn how flexible Drupal is when it comes to content shareability

Learn strategy and where Drupal is heading

Drupal Project Initiatives Keynote
https://events.drupal.org/portland2024/session/drupal-project-initiatives-keynote 
When: Wednesday, May 8, 2024 - 09:00 to 10:00
Why: Learn about Drupal future

Lightning Talk: Is the Redesign Dead?
https://events.drupal.org/portland2024/session/lightening-talk-redesign-dead
When: Monday, May 6, 2024 - 15:55 to 16:05
Why: Learn new trends in development

Drupal.org Update
https://events.drupal.org/portland2024/session/drupalorg-update 
When: Monday, May 6, 2024 - 15:00 to 15:50
Why: The engineering team will give you insights on what’s happening and what’s coming soon

So I logged in, now what? The Dashboard initiative welcomes you
https://events.drupal.org/portland2024/session/so-i-logged-now-what-dashboard-initiative-welcomes-you 
When: Monday, May 6, 2024 - 13:30 to 14:20
Why: Learn how the new interface will welcome you in the near future.

Driesnote
https://events.drupal.org/portland2024/session/driesnote 
When: Monday, May 6, 2024 - 10:45 to 11:45
Why: Do we need to explain why the most important session at Drupalcon will give you insights into the immediate future of Drupal?
 

Categories: FLOSS Project Planets

Mike Herchel's Blog: Polishing Drupal’s Admin UI

Planet Drupal - Wed, 2024-05-01 16:42
Categories: FLOSS Project Planets

Drupal Association blog: Elevate Your Marketing Game at DrupalCon Portland 2024

Planet Drupal - Wed, 2024-05-01 16:06

In the digital age, staying updated with the latest marketing strategies and tools is crucial for every marketing professional. DrupalCon Portland 2024 might not be an event that was on your radar as a marketer, but this year is different. DrupalCon Portland 2024 has worked hard to curate some of the best speakers in the content management space for its new marketing track. This conference is transforming from a developer-focused conference into a full-blown web conference, providing marketers an opportunity to enhance their expertise, network with industry leaders, and gain insights into the latest trends and technologies. Here's what you can gain from attending, along with a sneak peek into some of the key sessions that promise to enrich your marketing prowess.

Comprehensive Learning Opportunities

At DrupalCon Portland 2024, the focus is on actionable insights and strategies that can be applied immediately. Whether you're a content strategist, a digital marketer, or lead a team of developers, the conference offers a diverse range of sessions tailored to meet your interests. These sessions will cover everything from content consistency across multiple platforms to the integration of AI in web and marketing strategies.

Session Spotlight:

AI + Atomic Content: Managing Personalized, Omni-channel Content at Scale: Explore how to manage personalized, omni-channel content at scale, a vital skill in today's customer-centric marketing environment.

Networking with Peers and Industry Leaders

One of the primary benefits of attending DrupalCon is the opportunity to connect with peers and thought leaders from across the globe. These interactions provide a chance to share ideas, challenges, and solutions, fostering a valuable exchange of knowledge that can lead to future collaborations and innovations.

Session Spotlight:

DrupalCon's Next Top Content Model: Delve into advanced content strategy tools that clarify requirements and enhance mutual understanding within your marketing team.

Gaining a Competitive Edge

The marketing track at DrupalCon Portland 2024 is designed to equip you with cutting-edge skills and insights that will help you drive your organization forward and set you apart in the competitive job market. Learn how to leverage Drupal and other technologies to maximize your digital presence and effectiveness.

Session Spotlight:

Transforming Drupal into a MarTech LeadGen Machine: Learn the secrets to turning your website into a lead generation powerhouse, ensuring that your digital efforts translate into tangible results.

Insight into Future Trends

Staying ahead in marketing means anticipating changes and adapting quickly. DrupalCon provides a forward-looking perspective on the future of marketing, particularly how emerging technologies like AI are reshaping the landscape.

Session Spotlight:

Navigating Tomorrow: The Future of Websites in the Age of AI and Content Proliferation: Understand the seismic shifts in website management and content creation driven by AI and the insatiable demand for new content.

Practical Takeaways for Immediate Application

Every session at DrupalCon is crafted to provide practical knowledge and strategies that you can immediately implement in your work. From enhancing your content strategies to integrating sophisticated tech solutions, the takeaways are designed to have an immediate impact on your marketing effectiveness.

Session Spotlight:

The 30-Minute Content Strategist: From Concept to Plan: Equip yourself with a rapid, effective content strategy formulation that you can apply the moment you return to the office.

A Tailored Experience

DrupalCon Portland 2024 offers a uniquely tailored experience, allowing you to customize your itinerary based on your specific interests and professional needs. Whether your focus is on technical SEO, content management, or user experience, the sessions are structured to provide deep dives into each area.

Attending DrupalCon Portland 2024 is more than just an educational experience; it's an investment in your professional future. With sessions designed to bridge the gap between theory and practice and opportunities to connect with industry leaders, the benefits of attending extend well beyond the conference itself. Ready to transform your approach to digital marketing? Join us at DrupalCon Portland 2024 and be part of shaping the future of web and marketing.

Categories: FLOSS Project Planets

Tryton News: Tryton Release 7.2

Planet Python - Wed, 2024-05-01 12:00

We are proud to announce the 7.2 release of Tryton.
This release provides many bug fixes, performance improvements and some fine-tuning. It also adds 5 new modules.
You can give it a try on the demo server, use the Docker image or download it here.
As usual, upgrading from previous series is fully supported, but some manual steps are needed to update from 7.0 to 7.2.

Here is a list of the most noticeable changes:

Changes for the User Clients

You can now request to reset your password from the login dialog. Doing this sends a temporary password to your email address.

The PYSON widgets display the value using operators which are more user-friendly.

Web Client

The binary and image widgets now support drag and drop to set their value.

Desktop Client

On list and tree views, there is now a contextual menu that allows you to copy the contents of a cell or a column.

Accounting

It is now possible to modify the dates of a period even if it contains posted moves, as long as the existing moves stay inside the new period dates. This is useful to correct mistakes or even extend a period.

A warning is now raised when you validate an invoice for which some lines do not have the expected default taxes. This helps to detect mistakes.

When an invoice in another currency is paid, the currency exchange amount is now booked automatically into a configured account.

You can now enter the amount of the transaction in a second currency on statements. This makes it easier to do the reconciliation between the statement and invoices based on a second currency.

Company

Employees are now automatically deactivated once their end date has passed.

It is now possible to use some placeholders, like the company name, phone, website, etc., in the header and footer of company reports.

Marketing

Some reports are now available on marketing scenarios and activities. They calculate and display the open, click and click-through rates.

UTM parameters can be added to marketing emails so you can follow their results.

Product

You can now store the Manufacturer Part Number and brand as a product identifier.

Tryton now supports adding images to product categories.

You can now use non-square images on products. The module resizes the images to fit the requested size but keeps the aspect ratio.

Production

The production number is now only set when the order progresses to waiting. This prevents the supply module from consuming numbers for production requests that are subsequently deleted.

Purchase

It is now possible to remove ignored invoices and stock moves from purchases. This is useful when you have ignored the invoice or shipping exception by mistake and need to correct it.

Sale

It is now possible to remove the ignored invoices and stock moves from sales. This is useful when you have ignored the invoice or shipping exception by mistake and need to correct it.

The product on sale opportunity lines can be omitted; a description and a note can be used instead.

Stock

The drop shipment (like the other shipments) can now be split. This is useful to match exactly how the supplier shipped the products.

The shipment numbers are now only set when the shipment progresses to a waiting state. This prevents consuming sequence numbers for requests that are going to be deleted.

The lot trace now optionally displays the source and destination locations. This can be useful when investigating the traceability of a lot.

Web Shop

It is now possible to limit a web shop by country.

The web shop supports price lists to calculate the sale price and the non-sale price.

New Modules Stock Product Location Place

The Stock Product Location Place Module allows defining the place where each product is stored within each location.

Account SYSCOHADA

The Account SYSCOHADA Module provides templates for the chart of accounts for OHADA countries.

Account Export

The Account Export Module provides the basis to allow accounting moves to be exported to external accounting software.

Account Export WinBooks

The Account Export WinBooks Module adds support to export accounting data to WinBooks.

Web Shop Product Data Feed

The Web Shop Product Data Feed Module exposes web shop products as a data feed for Google Merchant and Meta for business.

Changes for the System Administrator Server

It is now possible to update the database without updating the indexes, or to create the indexes concurrently. These are useful options when updating a busy system.

It is possible to define a timeout for some RPC calls. This helps prevent users from overloading the system with expensive requests.

Changes for the Developer Server

We added send_message methods to simplify sending emails using Python's Message.
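As a rough illustration, the shape of the API is presumably something like the sketch below; the exact import path and signature are assumptions based on these release notes, so check the trytond documentation before relying on them:

from email.message import EmailMessage

# Assumed import location for the new helper; verify against trytond.
from trytond.sendmail import send_message

msg = EmailMessage()
msg['From'] = 'tryton@example.com'
msg['To'] = 'user@example.com'
msg['Subject'] = 'Hello'
msg.set_content('Sent via the new send_message helper.')

send_message(msg)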

A new kind of field, fmany2one, is now available. It is a type of many2one field but stores a field other than the id. It is used mainly in the infrastructure to create foreign keys based on a model or field name.

The read-only relational fields are no longer copied by default. This was a source of various bugs, as developers often forgot to exclude these fields from the copy.

Clients

The clients read the xxx2many fields using dotted notation. This avoids making multiple requests when displaying a form with these fields.

The XML ID of a record is now displayed in the log window.

Script

It is possible to configure the scripting client to skip any warning.

Product

It is now possible to generate barcodes for a product using a different type than the one on the identifier.

Stock

The done buttons have been renamed to do.

Location name fields have been added to stock moves. This is useful to customize the information displayed in reports about the source and destination locations.


Categories: FLOSS Project Planets

Guido Günther: Free Software Activities April 2024

Planet Debian - Wed, 2024-05-01 11:06

A short status update of what happened on my side last month. Maintenance and code review continue to be the top time sinks (in a positive way).

Categories: FLOSS Project Planets

Lullabot: Understanding What Drupal Editors and Authors Need

Planet Drupal - Wed, 2024-05-01 10:58

The Drupal Administration UI initiative, introduced in June 2023, continues to improve the user experience of Drupal core. The initiative was launched with the commitment and goal of improving the Drupal experience for all users using Drupal and reversing the existing negative impression of the Drupal user interface (UI).

Categories: FLOSS Project Planets

Antoine Beaupré: Tor migrates from Gitolite/GitWeb to GitLab

Planet Debian - Wed, 2024-05-01 10:55

Note: I've been awfully silent here for the past ... (checks notes) oh dear, 3 months! But that's not because I've been idle, quite the contrary, I've been very busy but just didn't have time to write about anything. So I've taken it upon myself to write something about my work this week, and published this post on the Tor blog which I copy here for a broader audience. Let me know if you like this or not.

Tor has finally completed a long migration from legacy Git infrastructure (Gitolite and GitWeb) to our self-hosted GitLab server.

Git repository addresses have therefore changed. Many of you probably have made the switch already, but if not, you will need to change:

https://git.torproject.org/

to:

https://gitlab.torproject.org/

in your Git configuration.
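For a typical clone that is a one-liner per repository; the <namespace>/<project> part below is a placeholder for your actual project path:

# Show the current remotes, then point 'origin' at the GitLab server
git remote -v
git remote set-url origin https://gitlab.torproject.org/<namespace>/<project>.git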

The GitWeb front page is now an archived listing of all the repositories before the migration. Inactive Git repositories were archived in the GitLab legacy/gitolite namespace, and the gitweb.torproject.org and git.torproject.org websites now redirect to GitLab.

Best effort was made to reproduce the original gitolite repositories faithfully and also avoid duplicating too much data in the migration. But it's possible that some data present in Gitolite has not migrated to GitLab.

User repositories are particularly at risk, because they were massively migrated, and they were "re-forked" from their upstreams, to avoid wasting disk space. If a user had a project with a matching name it was assumed to have the right data, which might be inaccurate.

The two virtual machines responsible for the legacy service (cupani for git-rw.torproject.org and vineale for git.torproject.org and gitweb.torproject.org) have been shut down. Their disks will remain for 3 months (until the end of July 2024) and their backups for another year after that (until the end of July 2025), after which point all the data from those hosts will be destroyed, with only the GitLab archives remaining.

The rest of this article expands on how this was done and what kind of problems we faced during the migration.

Where is the code?

Normally, nothing should be lost. All repositories in gitolite have been either explicitly migrated by their owners, forcibly migrated by the sysadmin team (TPA), or explicitly destroyed at their owner's request.

An exhaustive rewrite map translates gitolite projects to GitLab projects. Some of those projects actually redirect to their parent in cases of empty repositories that were obvious forks. Destroyed repositories redirect to the GitLab front page.

Because the migration happened progressively, it's technically possible that commits pushed to gitolite were lost after the migration. We took great care to avoid that scenario. First, we adopted a proposal (TPA-RFC-36) in June 2023 to announce the transition. Then, in March 2024, we locked down all repositories from any further changes. Around that time, only a handful of repositories had changes made after the adoption date, and we examined each repository carefully to make sure nothing was lost.

Still, we built a diff of all the changes in the git references that archivists can peruse to check for data loss. It's large (6MiB+) because a lot of repositories were migrated before the mass migration and then kept evolving in GitLab. Many other repositories were re-created in GitLab from their parent to rebuild a fork relationship, which added extra references to those clones.

A note to amateur archivists out there: it's probably too late for one last crawl now. The Git repositories now all redirect to GitLab and are effectively unavailable in their original form.

That said, the GitWeb site was crawled into the Internet Archive in February 2024, so at least some copy of it is available in the Wayback Machine. At that point, however, many developers had already migrated their projects to GitLab, so the copies there were already possibly out of date compared with the repositories in GitLab.

Software Heritage also has a copy of all repositories hosted on Gitolite since June 2023 and has continuously kept mirroring the repositories, where they will hopefully be kept for eternity. There's an issue where the main website can't find the repositories when you search for gitweb.torproject.org; search for git.torproject.org instead.

In any case, if you believe data is missing, please do let us know by opening an issue with TPA.

Why?

This is an old project in the making. The first discussion about migrating from gitolite to GitLab started in 2020 (almost 4 years ago). But going further back, the first GitLab experiment was in 2016, almost a decade ago.

The current GitLab server dates from 2019, replacing Trac for issue tracking in 2020. It was originally supposed to host only mirrors for merge requests and issue trackers but, naturally, one thing led to another and eventually, GitLab had grown a container registry, continuous integration (CI) runners, GitLab Pages, and, of course, hosted most Git repositories.

There were hesitations about moving to GitLab for code hosting. We had discussions about the increased attack surface and ways to mitigate that, but, ultimately, it seems the issues were not that serious and the community embraced GitLab.

TPA actually migrated its most critical repositories out of shared hosting entirely, into specific servers (e.g. the Puppet Git repository is just on the Puppet server now), leveraging Git's decentralized nature and removing an entire attack surface from our infrastructure. Some of those repositories are mirrored back into GitLab, but the authoritative copy is not on GitLab.

In any case, the proposal to migrate from Gitolite to GitLab was effectively just formalizing a fait accompli.

How to migrate from Gitolite / cgit to GitLab

The progressive migration was a challenge. If you intend to migrate between hosting platforms, we strongly recommend making a "flag day" during which you migrate all repositories at once. This ensures a smoother transition and avoids elaborate rewrite rules.

When Gitolite access was shut down, we had repositories on both GitLab and Gitolite, without a clear relationship between the two. A priori, the plan was to import all the remaining Gitolite repositories into the legacy/gitolite namespace, but that seemed wasteful, particularly for large repositories like Tor Browser, which uses nearly a gigabyte of disk space. So we took special care to avoid duplicating repositories.

When the mass migration started, only 71 of the 538 Gitolite repositories were marked as Migrated to GitLab in the gitolite.conf file. So, given that we had hundreds of repositories to migrate, we developed some automation to save time. We already automate similar ad-hoc tasks with Fabric, so we used that framework here as well. (Our normal configuration management tool is Puppet, which is a poor fit here.)

So a relatively large amount of Python code was produced to basically do the following (a condensed sketch in code follows the list):

  1. check if all on-disk repositories are listed in gitolite.conf (and vice versa) and either add missing repositories or delete them from disk if garbage
  2. for each repository in gitolite.conf, if its category is marked Migrated to GitLab, skip; otherwise:
  3. find a matching GitLab project by name, prompt the user for multiple matches
  4. if a match is found, redirect if the repository is non-empty
    • we have GitLab projects that look like the real thing, but are only present to host migrated Trac issues
    • in such cases we cloned the Gitolite project locally and pushed to the existing repository instead
  5. otherwise, a new repository is created in the legacy/gitolite namespace, using the "import" mechanism in GitLab to automatically import the repository from Gitolite, creating redirections and updating gitolite.conf to document the change
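
To make those steps concrete, here is a minimal, hedged Python sketch of the decision loop; every function and class below is a hypothetical stand-in, not the actual fabric-tasks API, which really talks to gitolite.conf, the on-disk repositories and the GitLab API:

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    category: str
    is_empty: bool

def find_gitlab_matches(name: str) -> list[str]:
    # hypothetical: would search GitLab for projects matching `name` (step 3)
    return []

def prompt_choice(candidates: list[str]) -> str | None:
    # hypothetical: would interactively ask the operator to pick one of several matches
    return candidates[0] if candidates else None

def add_redirect(project: Project, target: str) -> None:
    # hypothetical: would record a redirection to an existing GitLab project (step 4)
    print(f"redirect {project.name} -> {target}")

def import_to_legacy(project: Project) -> None:
    # hypothetical: would use GitLab's import mechanism and create redirections (step 5)
    print(f"import {project.name} -> legacy/gitolite/{project.name}")

def migrate(projects: list[Project]) -> None:
    for p in projects:
        if p.category == "Migrated to GitLab":
            continue  # step 2: already migrated, skip
        matches = find_gitlab_matches(p.name)
        target = prompt_choice(matches) if len(matches) > 1 else (matches[0] if matches else None)
        if target and not p.is_empty:
            add_redirect(p, target)  # a usable match exists: redirect rather than duplicate
        else:
            import_to_legacy(p)  # no usable match: import under legacy/gitolite
        p.category = "Migrated to GitLab"  # written back to gitolite.conf in the real code

migrate([Project("user/phw/scramblesuit", "Users' development repositories (Attic)", False)])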

User repositories (those under the user/ directory in Gitolite) were handled specially. First, the existing redirection map was checked to see if a similarly named project was migrated (so that, e.g. user/dgoulet/tor is properly treated as a fork of tpo/core/tor). Then the parent project was forked in GitLab and the Gitolite project force-pushed to the fork. This allows us to show the fork relationship in GitLab and, more importantly, benefit from the "pool" feature in GitLab which deduplicates disk usage between forks.
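
The fork-then-overwrite dance might look roughly like the sketch below, using python-gitlab (which the ProjectProtectedBranch object in the transcript further down suggests was the client in use) plus plain git. The token and paths are placeholders, and the real code also waits for the fork to complete and drops branch protection before the force-push:

import subprocess
import tempfile

import gitlab  # python-gitlab

def fork_and_force_push(parent_path: str, fork_namespace: str, gitolite_url: str) -> None:
    # fork the GitLab parent, then overwrite the fork's refs with the Gitolite
    # repository, keeping the fork relationship (and GitLab's "pool" deduplication)
    gl = gitlab.Gitlab("https://gitlab.torproject.org", private_token="REDACTED")
    parent = gl.projects.get(parent_path)
    fork = parent.forks.create({"namespace_path": fork_namespace})
    fork_project = gl.projects.get(fork.id)  # re-fetch the full project details
    with tempfile.TemporaryDirectory() as tmp:
        # bare-clone the legacy repository, then mirror-push all refs to the fork
        subprocess.run(["git", "clone", "--bare", gitolite_url, tmp], check=True)
        subprocess.run(["git", "push", "--mirror", fork_project.ssh_url_to_repo],
                       cwd=tmp, check=True)

The --mirror push force-updates every ref, which is why branch protection has to be removed first.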

Sometimes, we found no such relationship. Then we simply imported multiple repositories with similar names into the legacy/gitolite namespace, sometimes creating forks between user repositories, on a first-come-first-served basis following the gitolite.conf order.

The code used in this migration is now available publicly. We encourage other groups planning to migrate from Gitolite/GitWeb to GitLab to use (and contribute to) our fabric-tasks repository, even though it does have its fair share of hard-coded assertions.

The main entry point is the gitolite.mass-repos-migration task. A typical migration job looked like:

anarcat@angela:fabric-tasks$ fab -H cupani.torproject.org gitolite.mass-repos-migration
[...]
INFO: skipping project project/help/infra in category Migrated to GitLab
INFO: skipping project project/help/wiki in category Migrated to GitLab
INFO: skipping project project/jenkins/jobs in category Migrated to GitLab
INFO: skipping project project/jenkins/tools in category Migrated to GitLab
INFO: searching for projects matching fastlane
INFO: Successfully connected to https://gitlab.torproject.org
import gitolite project project/tor-browser/fastlane into gitlab legacy/gitolite/project/tor-browser/fastlane with desc 'Tor Browser app store and deployment configuration for Fastlane'? [Y/n]
INFO: importing gitolite project project/tor-browser/fastlane into gitlab legacy/gitolite/project/tor-browser/fastlane with desc 'Tor Browser app store and deployment configuration for Fastlane'
INFO: building a new connect to cupani
INFO: defaulting name to fastlane
INFO: importing project into GitLab
INFO: Successfully connected to https://gitlab.torproject.org
INFO: loading group legacy/gitolite/project/tor-browser
INFO: archiving project
INFO: creating repository fastlane (fastlane) in namespace legacy/gitolite/project/tor-browser from https://git.torproject.org/project/tor-browser/fastlane into https://gitlab.torproject.org/legacy/gitolite/project/tor-browser/fastlane
INFO: migrating Gitolite repository project/tor-browser/fastlane to GitLab project legacy/gitolite/project/tor-browser/fastlane
INFO: uploading 399 bytes to /srv/git.torproject.org/repositories/project/tor-browser/fastlane.git/hooks/pre-receive
INFO: making /srv/git.torproject.org/repositories/project/tor-browser/fastlane.git/hooks/pre-receive executable
INFO: adding entry to rewrite_map /home/anarcat/src/tor/tor-puppet/modules/profile/files/git/gitolite2gitlab.txt
INFO: modifying gitolite.conf to add: "config gitweb.category = Migrated to GitLab"
INFO: rewriting gitolite config /home/anarcat/src/tor/gitolite-admin/conf/gitolite.conf to change project project/tor-browser/fastlane to category Migrated to GitLab
INFO: skipping project project/bridges/bridgedb-admin in category Migrated to GitLab
[...]

In the above, you can see migrated repositories being skipped, then the fastlane project being archived into GitLab. Here is another example, with a later version of the script, processing only user repositories and showing the interactive prompt and a force-push into a fork:

$ fab -H cupani.torproject.org gitolite.mass-repos-migration --include 'user/.*' --exclude '.*tor-?browser.*'
INFO: skipping project user/aagbsn/bridgedb in category Migrated to GitLab
[...]
INFO: skipping project user/phw/atlas in category Migrated to GitLab
INFO: processing project user/phw/obfsproxy (Philipp's obfsproxy repository) in category Users' development repositories (Attic)
INFO: Successfully connected to https://gitlab.torproject.org
INFO: user repository detected, trying to find fork phw/obfsproxy
WARNING: no existing fork found, entering user fork subroutine
INFO: found 6 GitLab projects matching 'obfsproxy' (https://gitweb.torproject.org/user/phw/obfsproxy.git)
0 legacy/gitolite/debian/obfsproxy
1 legacy/gitolite/debian/obfsproxy-legacy
2 legacy/gitolite/user/asn/obfsproxy
3 legacy/gitolite/user/ioerror/obfsproxy
4 tpo/anti-censorship/pluggable-transports/obfsproxy
5 tpo/anti-censorship/pluggable-transports/obfsproxy-legacy
select parent to fork from, or enter to abort: ^G4
INFO: repository is not empty: in-pack: 2104, packs: 1, size-pack: 414
fork project tpo/anti-censorship/pluggable-transports/obfsproxy into legacy/gitolite/user/phw/obfsproxy^G [Y/n]
INFO: loading project tpo/anti-censorship/pluggable-transports/obfsproxy
INFO: forking project user/phw/obfsproxy into namespace legacy/gitolite/user/phw
INFO: waiting for fork to complete...
INFO: fork status: started, sleeping...
INFO: fork finished
INFO: cloning and force pushing from user/phw/obfsproxy to legacy/gitolite/user/phw/obfsproxy
INFO: deleting branch protection: <class 'gitlab.v4.objects.branches.ProjectProtectedBranch'> => {'id': 2723, 'name': 'master', 'push_access_levels': [{'id': 2864, 'access_level': 40, 'access_level_description': 'Maintainers', 'deploy_key_id': None}], 'merge_access_levels': [{'id': 2753, 'access_level': 40, 'access_level_description': 'Maintainers'}], 'allow_force_push': False}
INFO: cloning repository git-rw.torproject.org:/srv/git.torproject.org/repositories/user/phw/obfsproxy.git in /tmp/tmp6orvjggy/user/phw/obfsproxy
Cloning into bare repository '/tmp/tmp6orvjggy/user/phw/obfsproxy'...
INFO: pushing to GitLab: https://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy
remote:
remote: To create a merge request for bug_10887, visit:
remote: https://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy/-/merge_requests/new?merge_request%5Bsource_branch%5D=bug_10887
remote:
[...]
To ssh://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy
 + 2bf9d09...a8e54d5 master -> master (forced update)
 * [new branch] bug_10887 -> bug_10887
[...]
INFO: migrating repo
INFO: migrating Gitolite repository https://gitweb.torproject.org/user/phw/obfsproxy.git to GitLab project https://gitlab.torproject.org/legacy/gitolite/user/phw/obfsproxy
INFO: adding entry to rewrite_map /home/anarcat/src/tor/tor-puppet/modules/profile/files/git/gitolite2gitlab.txt
INFO: modifying gitolite.conf to add: "config gitweb.category = Migrated to GitLab"
INFO: rewriting gitolite config /home/anarcat/src/tor/gitolite-admin/conf/gitolite.conf to change project user/phw/obfsproxy to category Migrated to GitLab
INFO: processing project user/phw/scramblesuit (Philipp's ScrambleSuit repository) in category Users' development repositories (Attic)
INFO: user repository detected, trying to find fork phw/scramblesuit
WARNING: no existing fork found, entering user fork subroutine
WARNING: no matching gitlab project found for user/phw/scramblesuit
INFO: user fork subroutine failed, resuming normal procedure
INFO: searching for projects matching scramblesuit
import gitolite project user/phw/scramblesuit into gitlab legacy/gitolite/user/phw/scramblesuit with desc 'Philipp's ScrambleSuit repository'?^G [Y/n]
INFO: checking if remote repo https://git.torproject.org/user/phw/scramblesuit exists
INFO: importing gitolite project user/phw/scramblesuit into gitlab legacy/gitolite/user/phw/scramblesuit with desc 'Philipp's ScrambleSuit repository'
INFO: importing project into GitLab
INFO: Successfully connected to https://gitlab.torproject.org
INFO: loading group legacy/gitolite/user/phw
INFO: creating repository scramblesuit (scramblesuit) in namespace legacy/gitolite/user/phw from https://git.torproject.org/user/phw/scramblesuit into https://gitlab.torproject.org/legacy/gitolite/user/phw/scramblesuit
INFO: archiving project
INFO: migrating Gitolite repository https://gitweb.torproject.org/user/phw/scramblesuit.git to GitLab project https://gitlab.torproject.org/legacy/gitolite/user/phw/scramblesuit
INFO: adding entry to rewrite_map /home/anarcat/src/tor/tor-puppet/modules/profile/files/git/gitolite2gitlab.txt
INFO: modifying gitolite.conf to add: "config gitweb.category = Migrated to GitLab"
INFO: rewriting gitolite config /home/anarcat/src/tor/gitolite-admin/conf/gitolite.conf to change project user/phw/scramblesuit to category Migrated to GitLab
[...]

Acute eyes will also notice the bell (^G) used as a notification mechanism in this transcript.

A lot of the code is now useless for us, but some of it, like "commit and push" or is-repo-empty, lives on in the git module and, of course, the gitlab module has grown some legs along the way. We've also found fun bugs, like a file descriptor exhaustion in bash, among other oddities. The retirement milestone and issue 41215 have a detailed log of the migration, for those curious.

This was a challenging project, but it feels nice to have it behind us. It gets rid of 2 of the 4 remaining machines running Debian "old-old-stable", which moves us a bit further ahead in our late bullseye upgrades milestone.

Full transparency: we tested GPT-3.5, GPT-4, and other large language models to see if they could answer the question "write a set of rewrite rules to redirect GitWeb to GitLab". This has become a standard LLM test for your faithful writer to figure out how good an LLM is at technical responses. None of them gave an accurate, complete, and functional response, for the record.

The actual rewrite rules as of this writing follow, for humans who actually like working answers provided by expert humans instead of artificial intelligences, which currently seem to be glorified, mansplaining interns.

git.torproject.org rewrite rules

Those rules are relatively simple in that they rewrite a single URL to its equivalent GitLab counterpart in a 1:1 fashion. They rely on the rewrite map mentioned above, of course.

RewriteEngine on

# this RewriteMap connects the gitweb projects to their GitLab
# equivalent
RewriteMap gitolite2gitlab "txt:/etc/apache2/gitolite2gitlab.txt"

# if this becomes a performance bottleneck, convert to a DBM map with:
#
# $ httxt2dbm -i mapfile.txt -o mapfile.map
#
# and:
#
# RewriteMap mapname "dbm:/etc/apache/mapfile.map"
#
# according to reports lavamind found online, we hit such a
# performance bottleneck only around millions of entries, which is not our case

# those two rules can go away once all the projects are
# migrated to GitLab
#
# this matches the request URI so we can check the RewriteMap
# for a match next
#
# WARNING: this won't match URLs without .git in them, which
# *do* work now. one possibility would be to match the request
# URI (without query string!) with:
#
# /git/(.*)(.git)?/(((branches|hooks|info|objects/).*)|git-.*|upload-pack|receive-pack|HEAD|config|description)?.
#
# I haven't been able to figure out the actual structure of
# those URLs, so it's really hard to figure out the boundaries
# of the project name here. I stopped after pouring around the
# http-backend.c code in git
# itself. https://www.git-scm.com/docs/http-protocol is also
# kind of incomplete and unsatisfying.
RewriteCond %{REQUEST_URI} ^/(git/)?(.*).git/.*$
# this makes the RewriteRule match only if there's a match in
# the rewrite map
RewriteCond ${gitolite2gitlab:%2|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(git/)?(.*).git/(.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$2}.git/$3 [R=302,L]

# Fallback everything else to GitLab
RewriteRule (.*) https://gitlab.torproject.org [R=302,L]

gitweb.torproject.org rewrite rules

Those are the vastly more complicated GitWeb to GitLab rewrite rules.

Note that we say "GitWeb", but we were actually running cgit rather than GitWeb, as GitWeb didn't scale for us.

RewriteEngine on

# this RewriteMap connects the gitweb projects to their GitLab
# equivalent
RewriteMap gitolite2gitlab "txt:/etc/apache2/gitolite2gitlab.txt"

# special rule to process targets of the old spec.tpo site and
# bring them to the right redirect on the new spec.tpo site. that
# should turn, for example:
#
# https://gitweb.torproject.org/torspec.git/tree/address-spec.txt
#
# into:
#
# https://spec.torproject.org/address-spec
RewriteRule ^/torspec.git/tree/(.*).txt$ https://spec.torproject.org/$1 [R=302]

# list of endpoints taken from cgit's cmd.c

# those two RewriteCond are necessary because we don't move
# all repositories at once. once the migration is completed,
# they can be removed.
#
# and yes, they are copied all over the place below
#
# create a match for the project name to check if the project
# has been moved to GitLab
RewriteCond %{REQUEST_URI} ^/(.*).git(/.*)?$
# this makes the RewriteRule match only if there's a match in
# the rewrite map
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
# main project page, like summary below
RewriteRule ^/(.*).git/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/ [R=302,L]

# summary
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/summary/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/ [R=302,L]

# about
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/about/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/ [R=302,L]

# commit
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond "%{QUERY_STRING}" "(.*(?:^|&))id=([^&]*)(&.*)?$"
RewriteRule ^/(.*).git/commit/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%2 [R=302,L,QSD]

RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/commit/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD [R=302,L]

# diff, incomplete because can diff arbitrary refs and files in cgit
# but not in GitLab, hard to parse
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/diff/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%1 [R=302,L,QSD]

# patch
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/patch/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%1.patch [R=302,L,QSD]

# rawdiff, incomplete because can show only one file diff, which GitLab cannot
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/rawdiff/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commit/%1.diff [R=302,L,QSD]

# log
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/log/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/%1 [R=302,L,QSD]

RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/log/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD [R=302,L]

RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/log(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD$2 [R=302,L]

# atom
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/atom/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/%1 [R=302,L,QSD]

RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/atom/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/commits/HEAD [R=302,L,QSD]

# refs, incomplete because two pages in GitLab, defaulting to "tags"
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/refs/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tags [R=302,L]

RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/tag/? https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tags/%1 [R=302,L,QSD]

# tree
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} id=([^&]*)
RewriteRule ^/(.*).git/tree(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tree/%1$2 [R=302,L,QSD]

RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/tree(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tree/HEAD$2 [R=302,L]

# /-/tree has no good default in GitLab, revert to HEAD which is a good
# approximation (we can't assume "master" here anymore)
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/tree/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/tree/HEAD [R=302,L]

# plain
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteCond %{QUERY_STRING} h=([^&]*)
RewriteRule ^/(.*).git/plain(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/raw/%1$2 [R=302,L,QSD]

RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/plain(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/raw/HEAD$2 [R=302,L]

# blame: disabled
#RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
#RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
#RewriteCond %{QUERY_STRING} h=([^&]*)
#RewriteRule ^/(.*).git/blame(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/blame/%1$2 [R=302,L,QSD]
# same default as tree above
#RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
#RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
#RewriteRule ^/(.*).git/blame(/?.*)$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/blame/HEAD/$2 [R=302,L]

# stats
RewriteCond %{REQUEST_URI} ^/(.*).git/.*$
RewriteCond ${gitolite2gitlab:%1|NOT_FOUND} !NOT_FOUND
RewriteRule ^/(.*).git/stats/?$ https://gitlab.torproject.org/${gitolite2gitlab:$1}/-/graphs/HEAD [R=302,L]

# still TODO:
# repolist: once migration is complete
#
# cannot be done:
# atom: needs a feed token, user must be logged in
# blob: no direct equivalent
# info: not working on main cgit website?
# ls_cache: not working, irrelevant?
# objects: undocumented?
# snapshot: pattern too hard to match on cgit's side

# special case, we keep a copy of the main index on the archive
RewriteRule ^/?$ https://archive.torproject.org/websites/gitweb.torproject.org.html [R=302,L]
# Fallback: everything else to GitLab
RewriteRule .* https://gitlab.torproject.org [R=302,L]

The reference copy of those is available in our (currently private) Puppet git repository.

Categories: FLOSS Project Planets


Ed Crewe: Software Engineering Hiring and Firing

Planet Python - Wed, 2024-05-01 10:47



The jump in interest rates to their highest level in over 20 years, which hit the US, UK and many other countries in summer 2023, is still impacting the software industry. Rates may be due to drop soon, but for now the rise has choked off investment, upped borrowing costs and led many software companies to make engineers redundant to please the markets.

For the UK, the estimate is that around 8% of software industry jobs have been made redundant. Strangely, though, the overall trend in vacancies for software engineers continues to march upwards: the initial surge after the pandemic dipped last summer but has now recovered.
But if you work in the industry you are bound to have colleagues and friends who have been made redundant, even if you are lucky enough not to have been impacted personally.

Given recent history, I thought it may be worth reflecting on my personal experience of the whole hiring and firing process in the tech industry. It is a UK-centric view, but the companies I have worked for in the last 8 years are US software companies.

I have been fired and hired, and I have conducted technical interviews to hire others, giving me a few different perspectives.

This post is NOT about getting your first Software job

I first got a coding job in the public sector, as a self-taught web developer in the 1990s, before web development was a thing you could get a degree in. So I initially got a job in IT support, volunteered to act up (i.e. no pay increase) and built some websites that were needed, then became a full-time web developer through a portfolio of work, i.e. sites.

Today junior developers may have to prove themselves suitable by artificial measures. I skipped these, so I do not have any professional certifications, or any to recommend. I also don't know how to ace coding algorithm or personality profile assessments.

Once you are 5-10 years into a software career, none of those approaches is used for hiring decisions.

Only large companies are likely to subject you to them, and that is really out of fairness to the juniors who have to go through them, and to screen out dodgy applicants. Screening just needs to be passed; it will not have any input into whether you get the job. Hence acing the coding interview, as promoted by sites such as LeetCode, is not even a thing: there is only passing coding exercise systems in order to start, or switch to, a career as a developer. I would recommend starting an open source project instead, to demonstrate you can actually code.

The majority of small to medium software companies, and of job vacancies, require experience and in effect have no vacancies for the most junior software grades with fewer than 3 years under their belt. So they tend not to use any of these filtering methods. They just want to see proof that you are already a developer, and usually base that on face-to-face interviews and examples of your code you provide them. So, much like how I was originally hired back in the 1990s.

I have only been subject to a LeetCode-style test once, and that was for a generic job application, i.e. hiring numbers of SREs of various seniority for a FAANG.


F I R E D

When you get that unexpected one-to-one Zoom call with your manager appearing in your calendar these days, it is unlikely to be great news 😓
In the majority of cases the firing process, or to be more polite, redundancy, is all about balancing the finances of the whole company or institution. As such it is very unlikely to be about you.

Of course people are also fired as individuals for various reasons: actually not being any good at their job, failing to get along with their manager, being a bad culture fit, jobs turning out not to be what was advertised, or expressing political views. Unlike the UK education sector, where I once worked and where 50% of staff are union members, US software companies have less than 1% union membership, so they don't tend to respond well to dissent.

Mostly this happens via failing probation, at around 15%, then maybe another 5% annually for disciplinary or performance improvement failure.

If you want to try getting individually fired then go the overemployed route. Get two or three jobs at once and test how long it takes before the company notices that the 110% you were giving is now only 40%, and fires you. The rule of thumb is: the larger the company, the longer it takes!

But this post's focus isn't individual firing, it's organizational hiring and firing.

Firing Reasons

  1. A company may be doing badly in a slow, long-term way, so it has to chop staff as part of a restructure and downsize to attempt to fix that.
  2. Alternatively the company could be doing really well. So it gets the attention of a big investment company and is bought up and merged with its rival. To remove overlap and justify the merger, both companies lose 20% of staff.
  3. Maybe it needs to pivot towards a new area (currently likely to be AI) and so chops 20% of its staff so it can hire 15% experienced, and pricey, AI developers.
  4. Or it may just have had a one-off external event that hit it financially. So to balance the earnings for that year and keep its share price good, it chops a bunch of staff. It will rehire next year, when it suits the balance sheet. This is the case in which I was made redundant, along with 5% of staff; it was a big company, so that was a few thousand people globally.
  5. Finally it may be an industry-wide phenomenon, as it is with the current redundancies in the software industry. A world clampdown on easy cheap loans means investment-driven industries such as tech are no longer awash with spare cash. Cutbacks look good right now, and keep the share price high.
    Hence redundancies that have nothing to do with the industry itself or its future prospects.

That is mirrored in who is fired. Companies do not keep a log book of gold stars and black marks against each employee. They do not use organization-triggered rounds of redundancies to select individuals to fire, and they certainly do not have the capability to accurately determine the best employees and fire only the worst. You will be fired based on which part of the organization you are in, how much it is valued in the current strategy, and how much you cost versus others who could do your job. If you are currently between teams, or in a new team or role which has yet to establish itself, when the music stops, like musical chairs, bad luck: you are out.

The only personal element may be that a whole team seen as underperforming or difficult to manage gets axed, no matter that it contains a star performer. Decisions may also be geographic. Let's axe the Greek office, and save by withdrawing engineering from a country. That is again how I was made redundant; the rest of my team was in Greece.
Alternatively it may be: fire under 20 staff from each country, to avoid the more burdensome regulation around bulk layoffs.

The organization could also create an insecure, downturn atmosphere to encourage staff to leave, because it's a lot cheaper for people to leave than for the company to pay out redundancy settlements.

Redundancy keeps the average employees 😐

As a result, in response to significant redundancies an organisation will tend to lose more of its best employees, since they are the most able to move, the most likely to get a big pay rise if they move, and the least likely to want to stick around if they see negative organisational change. The software industry has very high staff turnover, at almost 20%, outweighing any nominal idea of removing less efficient staff.

If a company handles things well it may only lose a representative range of staff from best to worst. But a bulk redundancy process is likely to lead to the biggest loss among the top talent, get rid of slightly more of the bottom dwellers, and so result in maximising the mediocre!

In summary the answer to 'Why me?' in group redundancies is "because you were there" ... and you didn't have a personal friendship with the CEO 😉. Of course that is why new CEOs are often brought in to restructure - the first step of which is to take an axe to the current C-suite. 

Some of the best software engineers I have worked with have been made redundant at some point in their career. Group redundancies are not about you or how well you do your job. But taking it personally and challenging the messenger with "why me?", as demonstrated by recent viral videos, is an understandable emotional response to rejection, and to the misguided belief that work aims to be some form of meritocracy, in the way college might.

LIFO and FIFO

LIFO rather than FIFO is the norm in firing. New hires are less likely to have established themselves as essential to the company, and have fewer personal connections within it. More importantly, many countries' redundancy legislation doesn't kick in until over 2 years of employment, and the longer you have been employed the more the company will have to pay to terminate you.

This means a new hire who has uprooted for their new tech job will be the most likely to find themselves losing that job when bulk redundancies hit.
But FIFO has its place too: next would be older engineers. Some companies don't hire hands-on engineers much over the age of 40 anyway. But staff near retirement have at most only a few years left to contribute, and may cost more for the same grade. So encouraging early retirement can be part of the bulk redundancy process.

Prejudicial Firing

Whilst redundancy is all about costs and not about your personal performance, that is not to say companies that pass the redundancy choices down to junior managers may not end up firing disproportionate numbers of workers who are not from the same background as their manager, i.e. white USA males, ideally younger than the manager. But prejudice is not personal either; that is pretty much what defines it as prejudice, a pre-judgement of people based on physical characteristics rather than their ability at the job. People are also least likely to fire staff that they have the most in common with, resulting in prejudicial firing.
Unfortunately it seems many companies with a good diversity policy for hiring may not have an adequate one for firing. Again this results in losing more of the higher performing staff.

I have heard of a case where someone got a new manager who, on joining, was told to cut from his team, so he fired everyone outside the USA. The worker was so keen to stay at their current employer that they went over the head of their manager to senior management and asked for their redundancy to be reversed. Since they had been at the company many years and personally knew senior management, this worked.
Alternatively a more purely cost-based restructure may hire all developers from cheaper countries and fire most of those in the US, as happened with Google's Python team recently.

Fight for your job?

The company may set up a pool process for bulk redundancies if numbers are high enough per country, where you can fight for a place on the lifeboat of remaining positions. 

In either case I would recommend that you don't waste time on a company that doesn't value you. If you do stay, you risk dealing with the bulk redundancy aftermath, which will be present unless the redundancies were for a pivot (3) or a one-off event (4): an increased workload, pay freezes, no bonus, needing to overwork to justify being kept on, plus a negative work atmosphere.

In one case where I stayed after the department I was in was axed, I had to reapply for a new job which had moved to a different division. The work was less worthwhile, and at the time the employment of in-house software developers as a whole was questioned as unnecessary for the organisation. I outstayed my welcome for 18 months of legacy commercial software support before getting the message and quitting.

Lesson learnt: if you must ask to stay in your company, via senior management, a pool or reapplication, make sure you look around and apply for other jobs outside of it at the same time.

Staying also means you miss out on a minimum of a couple of months' tax-free pay as a settlement.

On the other hand, if the redundancy round is for a more minor pivot, and you are happy in the role, it may be well worth staying around to see how things pan out.

Of course you may get no choice in the matter, in which case get straight into GET HIRED mode and start the job search. If you can manage it fast enough, you will benefit financially from the whole process. Although if the reason is (5), a sector-wide reduction, then it will take longer and be harder to obtain the usual 20% pay increase that a new position can offer.


 H I R E D Why change jobs (aside from being fired!)
  • It is a lot easier to get a promotion by changing companies than by being promoted internally, whether to fast-track your career to a principal or architect top IC role, or simply to get a pay rise.
  • Changing jobs gives you much wider experience, of different technology, approaches and cultures. Making you a better engineer.
  • If you have been in your current job over 10 years without significant internal promotions or changes of role, then it is detrimental to your CV, indicating you are stuck in a rut and unable to handle change, e.g. new technology.
  • You want to shift sectors.
    I changed from public sector web developer, to commercial cloud engineer with one move.
  • You want to get into new technology that is not used in your current role.
    I changed from a Python, Ruby config management automation engineer to a Kubernetes Golang engineer with another.
  • You want to change your role in tech, or leave it entirely. For example get out of sales as a solution architect and back into a more technical role as an SRE.
On that basis many software engineers change jobs every 2 or 3 years for part of their careers. It's expected; the average engineer at a FAANG stays less than 3 years.
Of course you probably need to be in a job for at least 2 years to fully master it.
If your CV has loads of similar positions where you barely made it past the probation period, it marks you out as a failure at those roles, which means failing hiring at the first step: the HR CV check.
Upskilling
The other problem is that changing jobs to change roles, even if it's just to use a new language or framework, can be blocked by roles requiring experience in that area on the CV to get interviewed in the first place. For software engineering that is less of an issue, since tech changes faster than any other sector.
You just need to prove you have a range of experience and software languages, and are willing to learn, early in a technology boom. To catch the cloud engineer bus, I got a job in it in 2016; the US cloud sector was $8 billion back then and is $600 billion now. Similarly I got on board with Golang and Kubernetes in 2019. In the first few years of a tech boom most companies will initially have to cross-train engineers without direct experience. The corollary is that in the current downturn, attempting to pivot to an established technology, which k8s has become, is going to be much harder.

Market rates
Clearly ML ops and AI data science are the current booming areas. Demand so far outstrips supply that switching to a more junior Python AI role in them may pay as well as a senior Django web developer role, for example.

So around £60k for a junior role, but in 3-4 years it should jump to at least £100k for a senior AI engineer. For US salaries add 30%, plus usually free medical, life insurance etc. The lower tax rates cancel out the higher cost of living in the US, so it's UK salary +30% in real terms*. Researching the going rate for the particular role, technical skills and sector you are applying for is a necessary part of the hiring process, so that you don't let recruitment bargain you down too low.

* Note that geographic software pay differences are why you often come across engineers of other nationalities emigrating to, and working in, the higher-paying countries: USA, Canada and Australia. I have worked with many people from the UK and Europe who live in the USA, and Indians who live there or in Europe, for example.
Of course as a cheap foreign worker myself, I too stick with US companies, partly because they pay rather more than UK ones, even if a lot less than I would get if I moved there 😉

Now is the time when such a switch will be easier to accomplish without having to work nights doing courses, certifications and personal projects, the usual means of demonstrating your ability without any work experience.
The caveat here is that moving jobs in a downturn, as we are arguably experiencing currently, can depress market salary rates, and if you are already at the top of those when made redundant, it can mean you have to take a pay cut for a year or two rather than face the cost of long-term unemployment.
Interview to Offer
The hiring process for an experienced software engineer role should take a month.

If not, then the recruitment is likely for a group of roles in an expansion process, and from screening and CV you are not one of the top candidates. You may be waiting on the backlog of potential interviewees for a couple of extra months before it properly kicks off.
Or you are told the post is no longer available, sorry!
Even if you would eventually get a post, the wait may stretch your redundancy settlement. Therefore I would not bother pursuing any application process that looks to be stretching past 6 weeks.
The start date will be 5 weeks from contract (partly to cater for notice, referee and compliance checks etc).

That makes it 2 months minimum from applying for a role to starting.


The process will consist of a technical assessment task and at least 3 interviews: screening, manager and technical.
There may be another for introductions to team mates / the office etc., which is unlikely to have any effect on the hiring decision unless you and your potential new manager take an instant personal dislike to each other.
The HR screening interview just checks you are a genuine candidate for the job. The manager interview similarly is more about checking you will fit in with the company and team, plus that you have basic personal communication skills.
The Technical Interview is what matters
Passing the technical interview is what really decides whether you will get a job offer. Sometimes the tech interview may be split into two: one more task- and questionnaire-based, the other more discussion. Often the initial task part will be given as a work-from-home exercise.

The technical interview will consist of technical questions to explore whether you have the knowledge and experience required, plus something to confirm you can write code and discuss that code, for a developer or SRE role. For the former it would likely be application code, whilst for the latter automation code.
For a more purely system administration / IT support role it will involve specifying your processes for resolving issues.

If you are unlucky and it is an in-person interview, you may have to whiteboard pseudocode live in response to a changing task described to you on the spot, although I have only had that once. More common, especially for hybrid / remote roles, is the take-away task, to be completed in a 'few hours' at most.

It is possible that either of the above could be replaced by another source for your code: talking through one of your open source packages, if you have any, or talking through one or two longer automated coding exercise assessed tasks. I have never come across either of these though.

The main point is that the core of any technical interview for a developer-related role will involve talking through code you have written, as a kicking-off point to check your understanding of the code: how it could be improved, how you would tackle scaling or a new exemplar functional requirement, its faults and features.

You will also be asked to talk through past code or technical work in a more generic manner, in response to standard questions along the lines of giving examples of past work that show how you fit the job.

Preparing for the Technical Interview

It doesn't take much to work out that a 20% pay rise is worth a day's worth of work a week.
Assuming you stay in your new job for 2 or 3 years, that is equivalent to 6 months' pay.
On that basis, even doing a week of work to apply to, and prepare for, a single job is still very well worth it, if you get the job.
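
As a back-of-the-envelope sketch of that arithmetic (the 20% rise and the roughly 30-month stay are just the illustrative assumptions above):

# Illustrative numbers only: a 20% rise held for ~30 months (2-3 years)
pay_rise = 0.20
tenure_months = 30
extra_pay_in_months = pay_rise * tenure_months
print(f"Extra pay over the stay: ~{extra_pay_in_months:.0f} months' salary")
# prints: Extra pay over the stay: ~6 months' salary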

Adopting a scattergun approach, i.e. applying with a generic CV and covering letter to 10 or more jobs, is a waste of time in my view. If you need a new job, then it should be one you are genuinely interested in and research. That means you should probably have a maximum of 3 tailored applications on the go at once. Even when I was made redundant (and about to get married) I think I limited myself to 4 job applications in total, with one primary one that thankfully I did end up getting.

There are many sites that can advise how best to do that, based on the framework I have outlined above for how your hiring will be decided. I think preparing some Challenge-Action-Result stories targeted at the details of the new employer is useful, plus spending a day or so refining that '2 hours' development task, researching the company, and preparing specific questions and perhaps suggestions for your interviewers.

Being a Technical Interviewer

From the other side of the table, candidates clearly need to show sufficient competency for the post. They may show it, but only within a totally different technical stack. Smaller companies tend to have less capacity and time to get people up to speed with new tech, so they will likely fail these candidates even though they are capable of doing the job eventually.
The technical interviewers will tend to pair on the assessment, to improve its consistency. Swapping partners for interviews regularly also helps.

The assessment process is likely to use some online system such as Jobvite or Greenhouse, where each interviewer assesses the candidate, finally summarising it all with a recommendation of strong pass, pass or fail. Sometimes this is for a specific post and grade; otherwise the assessment can include a grade recommendation. The manager then rubber-stamps that, assuming appropriate funding is available. HR's job is to beat the candidate down to the lowest reasonable price, without going so low the candidate walks away.

A healthy growing company will tend to have a rolling recruitment process as they expect to be increasing head count in proportion to customers and revenue. On that basis they will likely be aiming to recruit anyone with a good pass, plus maybe most of the passes too.

Given that engineering jobs are highly specialised and require relevant experience, I have not seen cases of way more interviewees than jobs. Currently, even with all the redundancies, there is still an undersupply of engineers.
Also HR's approach will be to set experience and skills prerequisites for the roles that keep the numbers making it to technical interview down to around double the number of vacancies, since each candidate takes out a day of work for each interviewing engineer to prep, interview and assess.

HIRING SUMMARY

You must pass each of the first 5 or 6 steps to get to the next one and get the job.

  1. HR checks the written application is from a plausible candidate
  2. FAANG-sized company: automated quiz / LeetCode-style challenge to reduce the numbers, because they get way more speculative applicants
  3. Recruiter chat to check the candidate is genuine and available
  4. CV skills / experience check vs other applicants to shortlist those worth interviewing
  5. Technical task, either a takeaway or a whiteboard / questionnaire interview
  6. Technical interview, in person or Zoom with engineers

  7. Manager interview / introduction to team mates. 
  8. Recruiter chat. Negotiate exact salary. Agree start date.
  9. Contract is signed, YOU ARE HIRED.






Categories: FLOSS Project Planets

Real Python: Python Sequences: A Comprehensive Guide

Planet Python - Wed, 2024-05-01 10:00

A phrase you’ll often hear is that everything in Python is an object, and every object has a type. This points to the importance of data types in Python. However, often what an object can do is more important than what it is. So, it’s useful to discuss categories of data types and one of the main categories is Python’s sequence.

In this tutorial, you’ll learn about:

  • Basic characteristics of a sequence
  • Operations that are common to most sequences
  • Special methods associated with sequences
  • Abstract base classes Sequence and MutableSequence
  • User-defined mutable and immutable sequences and how to create them

This tutorial assumes that you’re familiar with Python’s built-in data types and with the basics of object-oriented programming.

Get Your Code: Click here to download the free sample code that you’ll use to learn about Python sequences in this comprehensive guide.

Take the Quiz: Test your knowledge with our interactive “Python Sequences: A Comprehensive Guide” quiz. You’ll receive a score upon completion to help you track your learning progress.


Building Blocks of Python Sequences

It’s likely you used a Python sequence the last time you wrote Python code, even if you don’t know it. The term sequence doesn’t refer to a specific data type but to a category of data types that share common characteristics.

Characteristics of Python Sequences

A sequence is a data structure that contains items arranged in order, and you can access each item using an integer index that represents its position in the sequence. You can always find the length of a sequence. Here are some examples of sequences from Python’s basic built-in data types:

Python

>>> # List
>>> countries = ["USA", "Canada", "UK", "Norway", "Malta", "India"]
>>> for country in countries:
...     print(country)
...
USA
Canada
UK
Norway
Malta
India
>>> len(countries)
6
>>> countries[0]
'USA'

>>> # Tuple
>>> countries = "USA", "Canada", "UK", "Norway", "Malta", "India"
>>> for country in countries:
...     print(country)
...
USA
Canada
UK
Norway
Malta
India
>>> len(countries)
6
>>> countries[0]
'USA'

>>> # Strings
>>> country = "India"
>>> for letter in country:
...     print(letter)
...
I
n
d
i
a
>>> len(country)
5
>>> country[0]
'I'

Lists, tuples, and strings are among Python’s most basic data types. Even though they’re different types with distinct characteristics, they have some common traits. You can summarize the characteristics that define a Python sequence as follows:

  • A sequence is an iterable, which means you can iterate through it.
  • A sequence has a length, which means you can pass it to len() to get its number of elements.
  • An element of a sequence can be accessed based on its position in the sequence using an integer index. You can use the square bracket notation to index a sequence.

There are other built-in data types in Python that also have all of these characteristics. One of these is the range object:

Python

>>> numbers = range(5, 11)
>>> type(numbers)
<class 'range'>
>>> len(numbers)
6
>>> numbers[0]
5
>>> numbers[-1]
10
>>> for number in numbers:
...     print(number)
...
5
6
7
8
9
10

You can iterate through a range object, which makes it iterable. You can also find its length using len() and fetch items through indexing. Therefore, a range object is also a sequence.

You can also verify that bytes and bytearray objects, two of Python’s built-in data structures, are also sequences. Both are sequences of integers. A bytes sequence is immutable, while a bytearray is mutable.
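
For instance, a quick REPL session (an illustrative sketch, not an excerpt from the article) confirms that both behave as sequences, and that only bytearray allows item assignment:

>>> data = bytes([72, 105])
>>> len(data)
2
>>> data[0]
72
>>> mutable = bytearray(data)
>>> mutable[0] = 104
>>> mutable
bytearray(b'hi')
>>> data[0] = 104
Traceback (most recent call last):
  ...
TypeError: 'bytes' object does not support item assignment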

Special Methods Associated With Python Sequences

In Python, the key characteristics of a data type are determined using special methods, which are defined in the class definitions. The special methods associated with the properties of sequences are the following:

  • .__iter__(): This special method makes an object iterable using Python’s preferred iteration protocol. However, it’s possible for a class without an .__iter__() special method to create iterable objects if the class has a .__getitem__() special method that supports iteration. Most sequences have an .__iter__() special method, but it’s possible to have a sequence without this method.
  • .__len__(): This special method defines the length of an object, which is normally the number of elements contained within it. The len() built-in function calls an object’s .__len__() special method. Every sequence has this special method.
  • .__getitem__(): This special method enables you to access an item from a sequence. The square brackets notation can be used to fetch an item. The expression countries[0] is equivalent to countries.__getitem__(0). For sequences, .__getitem__() should accept integer arguments starting from zero. Every sequence has this special method. This method can also ensure an object is iterable if the .__iter__() special method is missing.

Therefore, all sequences have a .__len__() and a .__getitem__() special method and most also have .__iter__().
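
To make this concrete, here is a minimal sketch of a user-defined sequence; the Countdown class is an invented example, not something from the article. It defines only .__len__() and .__getitem__(), yet it supports len(), indexing, and even iteration through the .__getitem__() fallback described above:

class Countdown:
    """A minimal read-only sequence defining only __len__ and __getitem__."""

    def __init__(self, start):
        self.start = start

    def __len__(self):
        return self.start

    def __getitem__(self, index):
        if index < 0:
            index += len(self)  # support negative indexing
        if not 0 <= index < len(self):
            raise IndexError("Countdown index out of range")
        return self.start - index

>>> steps = Countdown(3)
>>> len(steps)
3
>>> steps[-1]
1
>>> list(steps)  # iteration falls back to __getitem__; no __iter__ defined
[3, 2, 1]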

However, it’s not sufficient for an object to have these special methods to be a sequence. For example, many mappings also have these three methods but mappings aren’t sequences.

A dictionary is an example of a mapping. You can find the length of a dictionary and iterate through its keys using a for loop or other iteration techniques. You can also fetch an item from a dictionary using the square brackets notation.

This characteristic is defined by .__getitem__(). However, .__getitem__() needs arguments that are dictionary keys and returns their matching values. You can’t index a dictionary using integers that refer to an item’s position in the dictionary. Therefore, dictionaries are not sequences.
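
A short session (again just an illustrative sketch) shows the difference: a dictionary supports len(), iteration, and the square brackets notation, but its keys aren’t integer positions:

>>> capitals = {"India": "New Delhi", "Norway": "Oslo"}
>>> len(capitals)
2
>>> capitals["Norway"]
'Oslo'
>>> capitals[0]
Traceback (most recent call last):
  ...
KeyError: 0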

Slicing in Python Sequences

Read the full article at https://realpython.com/python-sequences/ »


Categories: FLOSS Project Planets

Goal Sprint 2024

Planet KDE - Wed, 2024-05-01 09:00

Last week, like many other people, I was in Berlin for the Mega^WGoals sprint. Naturally the three goals (Sustainability, Accessibility, and Automatability; the three abilities) attracted a diverse crowd of people that also brought other topics, so it turned into a proper megasprint.

Being interested in all three goals made it a bit challenging to follow all relevant discussions unfortunately, but on the flip side it never got boring.

Most of my goal-related work was towards the automation goal. One thing I was working on is a CI job that checks for spelling mistakes in the code, so that those can be caught when doing an MR. A while ago I created a website that tracks which KDE projects are ported to Qt6. What started out as a joke for a talk turned out to be a useful tool for planning porting work. During the sprint I fixed the site to actually work correctly again and, most importantly, changed the text from “No” to “Yes” since most projects actually use Qt6 now. For a while now the site has also had auto-generated reports for other things, like showing which projects don’t yet have clang-format applied to them or which projects don’t enforce passing tests in the CI. I used the latter list to enable this for the remaining projects that don’t have failing tests right now, and prepared a change to the CI system that enforces passing tests by default. In the same spirit, others and I also fixed some currently failing tests. We also discussed the idea of extending the site with more checks and turning it into a proper KDE site that isn’t hosted on my personal infrastructure.

Harald, Carl, and I worked on a dashboard to show the CI status of our projects. This is something we haven’t really had since switching to GitLab CI, but it is very useful, e.g. as part of the release checklist. We do have a working prototype, but some things remain to be ironed out. As part of this we also fixed some of the currently failing builds.

My main contribution to the sustainability goal was debugging why Nate’s NeoChat was using too much CPU. With a team effort we eventually pinpointed this to an invisible animation constantly repainting the window, which was then promptly fixed.

In terms of accessibility I was mainly involved in discussions about challenges and new developments with accessibility on Wayland. Expect to hear more on this soon.

In “off-topic” topics there were plenty of discussions about visions, ideas, and challenges for our application development story. This included discussions on visions for design/UX, theming, API design, and software distribution. Being KDE’s Software Platform Engineer, it is part of my responsibilities to facilitate these kinds of discussions. Later this year I want to host a sprint dedicated to application design to discuss and establish our vision there. If you are interested, do reach out to me.

All in all it was a fun few days with great people. Thanks to MBition and Aleix for hosting us, and thanks to those donating to KDE to make these sprints possible.

Categories: FLOSS Project Planets

Colin Watson: Free software activity in April 2024

Planet Debian - Wed, 2024-05-01 07:34

My Debian contributions this month were all sponsored by Freexian.

  • I’m trying to get back into bugs.debian.org administration, so I spent some time catching up on my owner@bugs.debian.org mailbox and answering a number of support requests there.
  • I fixed a regression I’d introduced last year where groff’s PDF output had invalid date headers, both upstream and in Debian.
  • I released man-db 2.12.1.
  • openssh:
    • I did a little more testing of Luca Boccassi’s modifications to upstream’s inline systemd notification patch.
    • I did an extensive review of some of the choices in Debian’s OpenSSH packaging, in light of last month’s xz-utils backdoor.
    • I fixed a build failure on ppc64el, forwarded upstream.
    • I proposed reducing shared library linkage in tcp-wrappers; its maintainer accepted this by disabling NIS support.
    • I applied a suggestion to improve ordering of systemd services in relation to nss-user-lookup.target.
  • I updated putty to 0.81.
  • Python team:
  • I did some inconclusive investigation of flaky tests in gcr4. More work is needed there.
  • I proposed a patch for a build failure in gyoto, both upstream and in Debian.

You can support my work directly via Liberapay.

Categories: FLOSS Project Planets

Talking Drupal: Skills Upgrade #9

Planet Drupal - Wed, 2024-05-01 06:53

Welcome back to “Skills Upgrade” a Talking Drupal mini-series following the journey of a D7 developer learning D10. This is the final episode, 9.

Topics
  • Review status of Chad's Smart Date test
  • Panel discussion
    • Chad, What was your biggest takeaway?
    • Mike, How do you approach this type of one on one mentorship differently than your courses?
    • AmyJune, do you think there are other types of focused mentorship like this that would be valuable to the community?
    • Chad, what was the most surprising thing you learned in Modern Drupal vs Drupal 7?
    • Michael, what did you learn through this process?
    • How do you think people will use this journey to help their learning process?
    • Chad, what are your plans for your next contribution?
Resources

Chad's Drupal 10 Learning Curriculum & Journal Chad's Drupal 10 Learning Notes

The Linux Foundation is offering a discount of 30% off e-learning courses, certifications and bundles with the code DRUPAL24 (all uppercase), good until June 5th: https://training.linuxfoundation.org/certification-catalog/

Hosts

Nic Laflin - www.nlightened.net AmyJune Hineline - @volkswagenchick

Guests

Chad Hester - chadkhester.com @chadkhest Mike Anello - DrupalEasy.com @ultimike

Categories: FLOSS Project Planets
