Feeds

Nicola Iarocci: Microsoft MVP

Planet Python - Thu, 2024-07-11 09:11

Last night, I was at an outdoor theatre with Serena, watching Anatomy of a Fall (an excellent film). Outdoor theatres are becoming rare, which is a pity, and Arena del Sole is lovely with its strong vintage, 80s vibe. There’s little as pleasant as watching a film under the stars with your loved one on a quiet summer evening.

Anyway, during the intermission, I glanced at my e-mails and discovered I had again been granted the Microsoft MVP Award. It is the ninth consecutive year, and I’m grateful and happy the journey continues. At this point, I should put in some extra effort to reach the 10-year milestone next year.

Categories: FLOSS Project Planets

mark.ie: My Drupal Core Contributions for week-ending July 12th, 2024

Planet Drupal - Thu, 2024-07-11 08:59

Here's what I've been working on for my Drupal contributions this week. Thanks to Code Enigma for sponsoring the time to work on these.

Categories: FLOSS Project Planets

Real Python: Quiz: Build a Blog Using Django, GraphQL, and Vue

Planet Python - Thu, 2024-07-11 08:00

In this quiz, you’ll test your understanding of building a Django blog back end and a Vue front end, using GraphQL to communicate between them.

You’ll revisit how to run the Django server and a Vue application on your computer at the same time.

Categories: FLOSS Project Planets

Qt Creator 14 RC released

Planet KDE - Thu, 2024-07-11 06:59

We are happy to announce the release of Qt Creator 14 RC!

Categories: FLOSS Project Planets

Petter Reinholdtsen: More than 200 orphaned Debian packages moved to git, 216 to go

Planet Debian - Thu, 2024-07-11 06:30

In April, I started migrating orphaned Debian packages without any version control system listed in debian/control to git. This morning, my Debian QA page finally reached 200 QA packages migrated. In reality there are a few more, as the packages uploaded by someone else after my initial upload have disappeared from my QA uploads list. As I am running out of steam and will most likely focus on other parts of Debian moving forward, I hope someone else will find time to continue the migration and bring the number of orphaned packages without any version control system down to zero. Here is the updated recipe if someone wants to help out.

To locate packages to work on, the following one-liner can be used:

PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
  --username=udd-mirror udd -c "select source from sources \
    where release = 'sid' and (vcs_url ilike '%anonscm.debian.org%' \
    OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL \
    OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%' \
    order by random() limit 10;"

Pick a random package from the list and run the latest edition of the debian-snap-to-salsa script with the package name as its argument to prepare a git repository with the existing packaging. This will download old Debian packages from snapshot.debian.org. Note that very recent uploads will not be included, so check the package on tracker.debian.org. Next, run gbp buildpackage --git-ignore-new to verify that the package builds as it should, then visit https://salsa.debian.org/debian/ and make sure there is not already a git repository for the package there. I also ran git log -p debian/control and looked for Vcs entries to check whether the package used to have a git repository on Alioth that could be a useful starting point moving forward. If all this checked out, I created a new GitLab project below the Debian group on Salsa, pushed the package source there and uploaded a new version. I also tended to ensure build hardening was enabled, if that proved easy, and checked whether I could quickly fix any lintian issues or bug reports. If the process took more than 20 minutes, I dropped the package and moved on to another one.
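Condensed into shell form, the recipe looks roughly like this (a sketch only; the package name and the directory the script leaves behind are assumptions):

PKG=somepackage                      # hypothetical package picked from the query above
debian-snap-to-salsa "$PKG"          # prepare a git repository from snapshot.debian.org
cd "$PKG"                            # assuming the script checks the packaging out here
gbp buildpackage --git-ignore-new    # verify that the package still builds
git log -p debian/control            # look for old Vcs entries pointing to Alioth
# then create a project below https://salsa.debian.org/debian/,
# push the source there and upload a new version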

If I found patches in debian/patches/ that had not yet been forwarded upstream, I would send an email to make sure upstream knew about them. This has proved to be a valuable step, and caused several new releases for software that initially appeared abandoned. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Robin Wilson: Searching an aerial photo with text queries – a demo and how it works

Planet Python - Thu, 2024-07-11 05:35

Summary: I’ve created a demo web app where you can search an aerial photo of Southampton, UK using text queries such as "roundabout", "tennis court" or "ship". It uses vector embeddings to do this – which I explain in this blog post.

In this post I’m going to try and explain a bit more about how this works.

Firstly, I should explain that the only data used for the searching is the aerial image data itself – even though a number of these things will be shown on the OpenStreetMap map, none of that data is used, so you can also search for things that wouldn’t be shown on a map (like a blue bus).

The main technique that lets us do this is vector embeddings. I strongly suggest you read Simon Willison’s great article/talk on embeddings, but I’ll try and explain here too. An embedding model lets you turn a piece of data (for example, some text, or an image) into a constant-length vector – basically just a sequence of numbers. This vector would look something like [0.283, -0.825, -0.481, 0.153, ...] and would be the same length (often hundreds or even thousands of elements long) regardless of how long the data you fed into it was.

In this case, I’m using the SkyCLIP model which produces vectors that are 768 elements long. One of the key features of these vectors is that the model is trained to produce similar vectors for things that are similar in some way. For example, a text embedding model may produce a similar vector for the words "King" and "Queen", or "iPad" and "tablet". The ‘closer’ a vector is to another vector, the more similar the data that produced it.

The SkyCLIP model was trained on image-text pairs – so a load of images that had associated text describing what was in the image. SkyCLIP’s training data "contains 5.2 million remote sensing image-text pairs in total, covering more than 29K distinct semantic tags" – and these semantic tags and the text descriptions of them were generated from OpenStreetMap data.

Once we’ve got the vectors, how do we work out how close vectors are? Well, we can treat the vectors as encoding a point in 768-dimensional space. That’s a bit difficult to visualise – so imagine a point in 2- or 3-dimensional space as that’s easier, plotted on a graph. Vectors for similar things will be located physically closer on the graph – and one way of calculating similarity between two vectors is just to measure the multi-dimensional distance on a graph. In this situation we’re actually using cosine similarity, which gives a number between -1 and +1 representing the similarity of two vectors.
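To make that concrete, here is a minimal cosine similarity sketch in Python (illustrative only, not code from the app):

import numpy as np

def cosine_similarity(a, b):
    # +1 means pointing the same way, 0 orthogonal, -1 opposite
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0])))  # ~0.707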

So, we now have a way to calculate an embedding vector for any piece of data. The next step we take is to split the aerial image into lots of little chunks – we call them ‘image chips’ – and calculate the embedding of each of those chunks, and then compare them to the embedding calculated from the text query.

I used the RasterVision library for this, and I’ll show you a bit of the code. First, we generate a sliding window dataset, which will allow us to then iterate over image chips. We define the size of the image chip to be 200×200 pixels, with a ‘stride’ of 100 pixels which means each image chip will overlap the ones on each side by 100 pixels. We then configure it to resize the output to 224×224 pixels, which is the size that the SkyCLIP model expects as input.

ds = SemanticSegmentationSlidingWindowGeoDataset.from_uris(
    image_uri=uri,
    image_raster_source_kw=dict(channel_order=[0, 1, 2]),
    size=200,
    stride=100,
    out_size=224,
)

We then iterate over all of the image chips, run the model to calculate the embedding and stick it into a big array:

dl = DataLoader(ds, batch_size=24)

EMBEDDING_DIM_SIZE = 768
embs = torch.zeros(len(ds), EMBEDDING_DIM_SIZE)

with torch.inference_mode(), tqdm(dl, desc='Creating chip embeddings') as bar:
    i = 0
    for x, _ in bar:
        x = x.to(DEVICE)
        emb = model.encode_image(x)
        embs[i:i + len(x)] = emb.cpu()
        i += len(x)

# normalize the embeddings
embs /= embs.norm(dim=-1, keepdim=True)

embs.shape

We also do a fair amount of fiddling around to get the locations of each chip and store those too.
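For a regular grid, each chip’s origin follows directly from the chip size and stride; the app presumably gets these from RasterVision’s window geometries, so treat this as a simplified sketch:

def chip_origin(i, img_width, chip=200, stride=100):
    cols = (img_width - chip) // stride + 1  # chips per row
    row, col = divmod(i, cols)               # row-major order
    return (col * stride, row * stride)      # top-left pixel of chip i

print(chip_origin(5, img_width=1000))  # (500, 0)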

Once we’ve stored all of those (I’ll get on to storage in a moment), we need to calculate the embedding of the text query too – which can be done with code like this:

text = tokenizer(text_queries)
with torch.inference_mode():
    text_features = model.encode_text(text.to(DEVICE))
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_features = text_features.cpu()

It’s then ‘just’ a matter of comparing the text query embedding to the embeddings of all of the image chips, and finding the ones that are closest to each other.
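Because both the chip embeddings and the text embeddings were normalised above, cosine similarity reduces to a dot product, so the whole comparison is one matrix multiplication. A sketch reusing embs and text_features from the earlier snippets (illustrative, not the app’s exact code):

sims = embs @ text_features.T  # shape: (number of chips, number of queries)
top = sims[:, 0].topk(10)      # the ten chips most similar to the first query
print(top.indices, top.values)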

To do this, we can use a vector database. There are loads of different vector databases to choose from, but I’d recently been to a tutorial at PyData Southampton (I’m one of the co-organisers, and I strongly recommend attending if you’re in the area) which used the Pinecone serverless vector database, and they have a fairly generous free tier, so I thought I’d try that.

Pinecone, like all other vector databases, allows you to insert a load of vectors and their metadata (in this case, their location in the image) into the database, and then search the database to find the vectors closest to a ‘search vector’ you provide.

I won’t bother showing you all the code for this side of things: it’s fairly standard code for calling Pinecone APIs, mostly copied from their tutorials.
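Still, for orientation, the rough shape of that code is something like the following sketch (based on the current pinecone client; the index name and metadata fields here are invented for illustration):

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("aerial-chips")  # hypothetical index name

# insert chip embeddings, with each chip's map location as metadata
index.upsert(vectors=[
    {"id": "chip-0", "values": embs[0].tolist(), "metadata": {"x": 0.0, "y": 0.0}},
])

# retrieve the chips whose embeddings are closest to the text query embedding
results = index.query(vector=text_features[0].tolist(), top_k=10, include_metadata=True)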

I then wrapped this all up in a FastAPI API, and put a simple JavaScript front-end on it to display the results on a Leaflet web map. I also added some basic caching to stop us hitting the Pinecone API too frequently (as there is a limit to the number of API calls you can make on the free plan). And that’s pretty much it.

I hope the explanation made sense: have a play with the app here and post a comment with any questions.

Categories: FLOSS Project Planets

Qt for Android Supported Versions Guidelines

Planet KDE - Thu, 2024-07-11 05:14

Qt for Android usually supports a wide range of Android versions, some very old. To keep the supported versions at a level that is maintainable by Qt, especially for LTS releases, which are expected to live for three years, Qt for Android is adopting new guidelines for selecting the supported versions for a given Qt release. The hope is that this will make the selection clear and transparent, and help shape proper expectations of support for each Qt for Android release.

Categories: FLOSS Project Planets

Using Nix as a Yocto Alternative

Planet KDE - Thu, 2024-07-11 03:00

Building system images for embedded devices from the ground up is a very complex process that involves many different kinds of requirements for the build tooling around it. Traditionally, the most popular build systems used in this context are the Yocto project and buildroot.

These build systems make it easy to set up toolchains for cross-compilation to the embedded target architecture, so that the lengthy compilation process can be offloaded to a beefier machine. But they also help with all the little details that come up when building an image for what effectively amounts to an entire custom Linux distribution.

More often than not, an embedded Linux kernel image requires changing the kernel config, as well as setting compiler flags and patching files in the source tree directly. A good build system can take a lot of the pain out of these steps by simplifying them with a declarative interface.

Finally, there is a lot of value in deterministic and reproducible builds, as this allows one to get the very same output image regardless of the context and circumstances where and when the compilation is performed.

Introducing Nix

Today we take a look at Nix as an alternative to Yocto and buildroot. Nix is a purely functional language that fits all of the above criteria perfectly. The project started as a PhD thesis on purely functional software deployment and has been around for over 20 years already. In the last few years, it has gained a lot of popularity in the server and desktop Linux scene, due to its ability to configure an entire system and solve complex packaging-related use cases in a declarative fashion.

In the embedded scene, Nix is not yet as popular, but there have already been success stories of Nix being used as an alternative to Yocto. And with the vast collection of over 80,000 officially maintained packages in the nixpkgs repo (this is more than all official packages of Debian and Arch Linux combined), Nix certainly has an edge over the competition, as most software stacks are already packaged. For most common hardware you will also find an overlay of hardware-specific quirks in the nixos-hardware repository. However, since the user demographic of Nix is slightly different at the moment, for more obscure embedded platforms you are still much more likely to find an OpenEmbedded layer.

For the Nix syntax it is best to refer to the official documentation, but being a declarative language, the following parts should be easy to comprehend even without further introduction. The only uncommon caveat is the syntax for lambdas, which essentially boils down to this: { x, y }: x + y is a lambda that takes a set with two attributes x and y and returns their sum.
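For instance, applying such a lambda to an attribute set evaluates as one would expect (a throwaway example that can be tried in nix repl):

({ x, y }: x + y) { x = 1; y = 2; }   # evaluates to 3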

Cross-Compilation

On Nix there are two possible approaches to cross-compilation. The first one would be to just pull in the packages for the target architecture (in this case aarch64) and compile them on an x86_64 system by configuring qemu-user as a binfmt_misc handler for user space emulation. While this effectively cheats around the actual cross-compilation by emulating the target instruction set, it has some advantages: it simplifies the build process for all packages and, most importantly, it allows reusing the official package cache, which already has built binaries for most aarch64 packages. While most of the build process can be shortcut with this, packages that actually need to be built will build extremely slowly due to the emulation.

For that reason we use the second approach instead, which is actual cross-compilation: instead of pulling in packages normally, we can use the special pkgsCross.${targetArch} attribute to cross-compile our packages to whatever we pass as ${targetArch}. The majority of packages will just work, but occasionally a package needs extra configuration to cross-compile. For example, for Qt we need to set QT_HOST_PATH to point to a Qt installation on the build host, as some tools such as moc need to run on the build host during the actual build. The disadvantage of this approach is that the official Nix cache does not provide binaries for most packages, so we have to build everything ourselves.
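As a quick way to see the mechanism in action, a small package can be cross-built straight from nixpkgs (assuming a Nix installation with flakes enabled):

nix build nixpkgs#pkgsCross.aarch64-multiplatform.hello
file result/bin/hello   # reports an aarch64 ELF executable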

Of course builds are already cached locally by default (because they end up as a derivation in /nix/store), but it is also possible to set up a custom self-hosted Nix cache server, so that binaries have to be built only once even across multiple machines.

Building an image with Nix

As an example, we will look into building an entire system image for the Raspberry Pi 3 Model B from the ground up. This includes compiling a custom Linux kernel from source and building the entire graphics stack, including Mesa, plus the whole rest of the software stack. It also means we will build Qt 6 from source.

As an example application we will deploy GammaRay, which is already packaged in the official Nix repository. This illustrates the advantage of having the large collection of nixpkgs at our disposal. Building a custom Qt application would not be much more involved; for reference, take a look at how GammaRay itself is packaged.

Then at the end of the build pipeline, we will have an actual image that can be flashed onto the Raspberry Pi to boot a custom NixOS image with all the settings we have configured.

To build a system image, nixpkgs provides a lot of helper functions. For example, to build a normal bootable ISO that can install NixOS like the official installer, the isoImage module can be used. Even more helper functions are available in the nixos-generators repository. However, we do not want to create an “installer” image; instead we are interested in creating an image that can be booted straight away and already has all of the correct software installed. And because the Raspberry Pi uses an SD card, we can make use of the sd-card/sd-image.nix module for that. This module already does a lot of the extra work for us, i.e. it creates an MBR-partitioned SD card image that contains a FAT boot partition and an ext4 root partition. Of course it is possible to customize all these settings; for example, to add other partitions, we could simply append elements to the config.fileSystems attribute set, as sketched below.
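For instance, an extra data partition could be declared with a snippet along these lines (illustrative; the label and mount point are made up):

fileSystems."/data" = {
  device = "/dev/disk/by-label/data";
  fsType = "ext4";
};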

Leaving out some slight Nix flakes boilerplate (we will get to this at the end), with the following two snippets we would already create a bootable image:

nixosConfigurations.pi = system.nixos {
  imports = [
    "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
    nixosModules.pi
  ];
};

images.pi = nixosConfigurations.pi.config.system.build.sdImage;

This first snippet imports the sd-image module mentioned above and links to a further nixosModules.pi configuration, which we define in the following snippet and which we can use to configure the entire system to our liking. This includes installed packages, setup of users, boot flags and more.

nixosModules.pi = ({ lib, config, pkgs, ... }: {
  environment.systemPackages = with pkgs; [ gammaray ];

  users.groups = {
    pi = {
      gid = 1000;
      name = "pi";
    };
  };

  users.users = {
    pi = {
      uid = 1000;
      password = "pi";
      isSystemUser = true;
      group = "pi";
      extraGroups = ["wheel" "video"];
      shell = pkgs.bash;
    };
  };

  services = {
    getty.autologinUser = "pi";
  };

  time.timeZone = "Europe/Berlin";

  boot = {
    kernelPackages = lib.mkForce pkgs.linuxKernel.packages.linux_rpi3;
    kernelParams = ["earlyprintk" "loglevel=8" "console=ttyAMA0,115200" "cma=256M"];
  };

  networking.hostName = "pi";
});

Thus, with this configuration we install GammaRay, set up a user that is logged in by default, and use a custom Raspberry Pi Linux kernel as the default kernel to boot. Most of these configuration options should be self-explanatory, and they only show a glimpse of what is possible to configure. In total, there are over 10,000 official NixOS configuration options that can be searched with the official web interface.

The line where we add GammaRay to the system packages also automatically adds Qt 6 as a dependency. However, as mentioned previously, this does not work quite so well with cross-compilation. In order for it to build, we need to patch a dependency of GammaRay to add the QT_HOST_PATH variable to the cmake flags. What would involve bizarre gymnastics with most other build systems becomes incredibly simple with Nix: we just add an overlay that overrides the Qt6 package; there is no need to touch the definition of GammaRay at all:

nixpkgs.overlays = [
  (final: super: {
    qt6 = super.qt6.overrideScope' (qf: qp: {
      qtbase = qp.qtbase.overrideAttrs (p: {
        cmakeFlags = ["-DQT_HOST_PATH=${nixpkgs.legacyPackages.${hostSystem}.qt6.qtbase}"];
      });
    });
  })
];

Note how we pass the path to Qt built on the host system to QT_HOST_PATH. Due to lazy evaluation, this will build Qt for the host architecture (or rather download it from the Nix binary cache) and pass the resulting derivation as a string at evaluation time.

In order to quickly test an image, we can write a support script to test the output directly in qemu instead of having to flash it on real hardware:

qemuTest = pkgs.writeScript "qemuTest" ''
  zstd --decompress ${images.pi.outPath}/sd-image/*.img.zst -o qemu.img
  chmod +w qemu.img
  qemu-img resize -f raw qemu.img 4G
  qemu-system-aarch64 -machine raspi3b -kernel "${uboot}/u-boot.bin" -cpu cortex-a53 \
    -m 1G -smp 4 -drive file=qemu.img,format=raw \
    -device usb-net,netdev=net0 -netdev type=user,id=net0 \
    -usb -device usb-mouse -device usb-kbd -serial stdio
'';

Here we decompress the output image, resize it so that qemu can start it, and then use the U-Boot bootloader to finally boot it.

Taking a final look at our config, we now have the following flake.nix file:

{
  description = "Cross compile for Raspberry";

  inputs = {
    nixpkgs.url = "nixpkgs/nixos-23.11";
  };

  outputs = { self, nixpkgs }:
    let
      hostSystem = "x86_64-linux";
      system = nixpkgs.legacyPackages.${hostSystem}.pkgsCross.aarch64-multiplatform;
      pkgs = system.pkgs;
      uboot = pkgs.ubootRaspberryPi3_64bit;
    in rec {
      nixosConfigurations.pi = system.nixos {
        imports = [
          "${nixpkgs}/nixos/modules/installer/sd-card/sd-image-aarch64.nix"
          nixosModules.pi
        ];
      };

      images.pi = nixosConfigurations.pi.config.system.build.sdImage;

      qemuTest = pkgs.writeScript "qemuTest" ''
        zstd --decompress ${images.pi.outPath}/sd-image/*.img.zst -o qemu.img
        chmod +w qemu.img
        qemu-img resize -f raw qemu.img 4G
        qemu-system-aarch64 -machine raspi3b -kernel "${uboot}/u-boot.bin" -cpu cortex-a53 \
          -m 1G -smp 4 -drive file=qemu.img,format=raw \
          -device usb-net,netdev=net0 -netdev type=user,id=net0 \
          -usb -device usb-mouse -device usb-kbd -serial stdio
      '';

      nixosModules.pi = ({ lib, config, pkgs, ... }: {
        environment.systemPackages = with pkgs; [ uboot gammaray ];

        services = {
          getty.autologinUser = "pi";
        };

        users.groups = {
          pi = {
            gid = 1000;
            name = "pi";
          };
        };

        users.users = {
          pi = {
            uid = 1000;
            password = "pi";
            isSystemUser = true;
            group = "pi";
            extraGroups = ["wheel" "video"];
            shell = pkgs.bash;
          };
        };

        time.timeZone = "Europe/Berlin";

        boot = {
          kernelParams = ["earlyprintk" "loglevel=8" "console=ttyAMA0,115200" "cma=256M"];
          kernelPackages = lib.mkForce pkgs.linuxKernel.packages.linux_rpi3;
        };

        networking.hostName = "pi";

        nixpkgs.overlays = [
          (final: super: {
            # workaround for https://github.com/NixOS/nixpkgs/issues/154163
            makeModulesClosure = x: super.makeModulesClosure (x // { allowMissing = true; });

            qt6 = super.qt6.overrideScope' (qf: qp: {
              qtbase = qp.qtbase.overrideAttrs (p: {
                cmakeFlags = ["-DQT_HOST_PATH=${nixpkgs.legacyPackages.${hostSystem}.qt6.qtbase}"];
              });
            });

            # workaround for odbc not building
            unixODBCDrivers = super.unixODBCDrivers // {
              psql = super.unixODBCDrivers.psql.overrideAttrs (p: {
                nativeBuildInputs = with nixpkgs.legacyPackages.${hostSystem}.pkgsCross.aarch64-multiplatform.pkgs;
                  [unixODBC postgresql];
              });
            };
          })
        ];

        system.stateVersion = "23.11";
      });
    };
}

And that’s it, now we can build and test the entire image with:

nix build .#images.pi --print-build-logs
nix build .#qemuTest
./result

Note that this will take quite a while to build, as everything is compiled from source. This will also create a flake.lock file pinning all the inputs to a specific version, so that subsequent runs will be reproducible.

Conclusion

Nix has been growing a lot in recent years, and not without reason. The Nix language makes it possible to solve some otherwise very complicated packaging tasks in a very concise way. The fully hermetic and reproducible builds are a perfect fit for building embedded Linux images, and the vast collection of packages and configuration options allows most tasks to be performed without ever having to leave the declarative world.

However, there are also some downsides compared to the Yocto project. Because Nix is used less frequently in the embedded space, it is harder to find answers and support for embedded-related questions, and you are quickly on your own, especially when using more obscure embedded platforms.

And while the Nix syntax in and of itself is very simple, it should not go unmentioned that there is a lot of complexity around the language constructs such as derivations and how everything interacts with each other. Thus, there is definitely a steep learning curve involved, though most of this comes with the territory and is also true for the Yocto project.

Overall, then, Nix is a suitable alternative for building embedded system images (keeping in mind that some extra work is involved for more obscure embedded platforms), and its purely functional language makes it possible to solve most tasks in a very elegant way.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Using Nix as a Yocto Alternative appeared first on KDAB.

Categories: FLOSS Project Planets

PreviousNext: Co-contribution with clients: A revision UI API for all entity types

Planet Drupal - Thu, 2024-07-11 01:11

The tale of an eight-year, collaborative effort to build a generic revision UI into Drupal 10.1.0, bringing a major piece of functionality to core.

by lee.rowlands / 11 July 2024

As we discussed in our previous post, Improving Drupal with the help of your clients, we’re fortunate to work with a client like ServiceNSW that is committed to open-source contribution. So when their challenges require solutions that will also benefit the whole Drupal community, they're on board!

In the beginning, there were nodes

Since Drupal 4.7 was released in 2006, nodes have had a revision user interface (UI). The UI allows editors to view revision history and specific revisions, as well as revert and delete revisions.

A lot has changed since Drupal 4.7. We received revision support for many more entities, but Node remained the only one with a revision UI in core.

Supporting client needs through contrib 

Our client, Service NSW, makes heavy use of block content entities for Notices displayed throughout the site. These are regularly updated. Editors need to be able to see what has changed and when, revert to previous versions, and view revision logs when needed. 

Since Drupal 8, much of the special treatment of Node entities has been replaced with generic Entity API functionality. Nodes were no longer the only tool in the content-modelling toolbox, with one exception: the revision UI.

The code for node's revision UI lives in the node module. It’s dependent on hard-coded permission checking and uses routing and forms outside the entity API.

This meant that for every additional entity type for which Service NSW needed a revision UI, those parts needed to be recreated repeatedly.

As you can imagine, this approach quickly becomes hard to maintain due to the amount of duplication. 

The journey to core

Having identified that Drupal core needed a generic entity revision UI API (it already had generic APIs for entity routing, editing, viewing and access), we set to work on this missing piece of the puzzle.

We found an existing core issue for it, and in 2015, posted our first patch for it. 

This began an 8-year journey to bring a major piece of functionality to core.

Over the course of many re-rolls, we released contributed modules built on top of the patch:

Finally, with the release of Drupal 10.1.0 in 2023, any entity type could opt into a revision UI. The Drupal 10.1.0 release opted in Block Content entities, making that contributed module obsolete. Then, later in 2023, the release of Drupal 10.2.0 saw Media entities use this new API. In early 2024, support for Taxonomy terms was added and released in 10.3.0.

Challenges along the way

The biggest challenges encountered were keeping the patch up to date with core as it changed and navigating the contribution process. Over the years, there have been over 120 patch files and 300+ comments on the issue!

Another challenge was the lack of an access API for checking access to revisions. 

The entity API supported a set of entity access operations — view, update, delete — but no revision operations were considered. The node module had hard-coded permissions e.g. 'view all revisions' and 'revert all revisions'. 

To have a generic entity revision UI API, we needed a generic way to check access to the operations the UI would make available.

Initially, we tried to include this with the revision UI changes. However, it became increasingly difficult to land both major pieces of functionality simultaneously. So, in 2019, the access API was split into a separate issue, and the original issue was postponed.

With efforts from our team, Service NSW and many other individuals and companies in the Drupal community, this made it into Drupal core in 2021. It was first available in Drupal 9.3.0. Adding a whole new major access API is not without its challenges, though. Unfortunately, this change resulted in a security release shortly after 9.3.0 came out. Luckily it was caught and fixed before many sites had updated to 9.3.0.

Collaborative contribution

Adding a new feature to Drupal core is a large undertaking. Doing it in a client-agency collaboration provides an ideal model for how open source should work. 

Developers from PreviousNext and Service NSW worked with the broader Drupal community to bring this feature to fruition.

Our developers have experience contributing to core and were able to guide Service NSW developers through the process. Being credited on large features like this is a major feather in the cap for both individual developers and their organisations.

Wrapping up

Together, we helped integrate a generic revision UI into Drupal 10.1.0. All of the developers involved received issue credits for their work. 

This was a significant effort over eight years, requiring collaboration with individuals and organisations in the wider Drupal community to build consensus. This level of shared commitment helps drive the Drupal open source project forward, recognising that what benefits one can benefit all.

So, what are the next big features you and your clients could work on? Or is there something you want to bring to core, as an individual, group or organisation? Either way, we’d love to chat and collaborate!

Contributors
  • dpi
  • acbramley
  • jibran
  • manuel garcia
  • chr.fritsch
  • AaronMcHale
  • Nono95230
  • capysara
  • darvanen
  • ravi.shankar
  • Spokje
  • thhafner
  • larowlan
  • smustgrave
  • mstrelan
  • mikestar5
  • andregp
  • joachim
  • nterbogt
  • shubhangi1995
  • catch
  • mkalkbrenner
  • Berdir
  • Sam152
  • Xano
Categories: FLOSS Project Planets

Russ Allbery: podlators v6.0.0

Planet Debian - Wed, 2024-07-10 22:57

podlators is the collection of Perl modules and front-end scripts that convert POD documentation to *roff manual pages or text, possibly with formatting intended for pagers.

This release continues the simplifications that I've been doing in the last few releases and now uniformly escapes - characters and single quotes, disabling all special interpretation by *roff formatters and dropping the heuristics that were used in previous versions to try to choose between possible interpretations of those characters. I've come around to the position that POD simply is not semantically rich enough to provide sufficient information to successfully make a more nuanced conversion, and these days Unicode characters are available for authors who want to be more precise.

This version also drops support for Perl versions prior to 5.12 and switches to semantic versioning for all modules. I've added a v prefix to the version number, since that is the convention for Perl module versions that use semantic versioning.

This release also works around some changes to the man macros in groff 1.23.0 to force persistent ragged-right justification when formatted with nroff and fixes a variety of other bugs.

You can get the latest release from CPAN or from the podlators distribution page.

Categories: FLOSS Project Planets

GNU Taler news: KYCID, an operational OAuth2 integration of eKYC

GNU Planet! - Wed, 2024-07-10 18:00
In this bachelor thesis, Yann Doy presents his implementation of a concept of eKYC (electronic Know Your Customer procedure).
Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10: Syntax and structure of migration files

Planet Drupal - Wed, 2024-07-10 12:11

In the previous article, we saw what a migration file looks like. We made some changes without going too deep into explaining the syntax or structure of the file. Today, we are exploring the language in which migration files are written and the different sections it contains.

Read more / mauricio / Wed, 07/10/2024 - 09:11
Categories: FLOSS Project Planets

GSoC '24 Progress: Week 3 - 6

Planet KDE - Wed, 2024-07-10 12:00

Hi there! The past few weeks have been really busy with my final exams, so I had to slow down my work. Here’s a brief status report on my progress over the past 4 weeks:

I created a SubtitleEvent class, which can replace the original SubtitledTime class, to help us better manage subtitle event information. To distinguish subtitles on different layers, I also added basic display support for subtitle layers as multiple subtitle tracks.

Currently, I’m focused on refining these features. There are still some minor tasks to complete and bugs to fix. You can find more information at this MR.

Stay tuned!

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in June 2024

Planet Debian - Wed, 2024-07-10 11:08

Welcome to the June 2024 report from the Reproducible Builds project!

In our reports, we outline what we’ve been up to over the past month and highlight news items in software supply-chain security more broadly. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Next Reproducible Builds Summit dates announced
  2. GNU Guix patch review session for reproducibility
  3. New reproducibility-related academic papers
  4. Website updates
  5. Misc development news
  6. Reproducibility testing framework


Next Reproducible Builds Summit dates announced

We are very pleased to announce the upcoming Reproducible Builds Summit, set to take place from September 16th — 19th 2024 in Hamburg, Germany.

We are thrilled to host the seventh edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort. During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interesting in joining us this year, please make sure to read the event page which has more details about the event and location. We are very much looking forward to seeing many readers of these reports there.


GNU Guix patch review session for reproducibility

Vagrant Cascadian will be holding a Reproducible Builds session as part of the monthly Guix patch review series on July 11th at 17:00 UTC.

These online events are intended to encourage everyone to become a patch reviewer. The goal of reviewing patches is to help the Guix project accept contributions while maintaining quality standards, and to learn how to do patch reviews together in a friendly hacking session.


New reproducibility-related academic papers

A total of three separate scholarly papers related to Reproducible Builds were published this month:

An Industry Interview Study of Software Signing for Supply Chain Security was published by Kelechi G. Kalu, Tanmay Singla, Chinenye Okafor, Santiago Torres-Arias and James C. Davis of the Electrical and Computer Engineering department of Purdue University, Indiana, USA, and is concerned with:

To understand software signing in practice, we interviewed 18 high-ranking industry practitioners across 13 organizations. We provide possible impacts of experienced software supply chain failures, security standards, and regulations on software signing adoption. We also study the challenges that affect an effective software signing implementation.


DiVerify: Diversifying Identity Verification in Next-Generation Software Signing was written by Chinenye L. Okafor, James C. Davis and Santiago Torres-Arias also of Purdue University and is interested in:

Code signing enables software developers to digitally sign their code using cryptographic keys, thereby associating the code to their identity. This allows users to verify the authenticity and integrity of the software, ensuring it has not been tampered with. Next-generation software signing such as Sigstore and OpenPubKey simplify code signing by providing streamlined mechanisms to verify and link signer identities to the public key. However, their designs have vulnerabilities: reliance on an identity provider introduces a single point of failure, and the failure to follow the principle of least privilege on the client side increases security risks. We introduce the Diverse Identity Verification (DiVerify) scheme, which strengthens the security guarantees of next-generation software signing by leveraging threshold identity validations and scope mechanisms.


Felix Lagnöhed published their thesis on the Integration of Reproducibility Verification with Diffoscope in GNU Make. This work, amongst some other results:

[...] resulted in an extension of GNU make which is called rmake, where diffoscope — a tool for detecting differences between a large number of file types — was integrated into the workflow of make. rmake was later used to answer the posed research questions for this thesis. We found that different build paths and offsets are a big problem as three out of three tested Free and Open Source Software projects all contained these variations. The results also showed that gcc’s optimisation levels did not affect reproducibility, but link-time optimisation embeds a lot of unreproducible information in build artefacts. Lastly, the results showed that build paths, build ID’s and randomness are the three most common groups of variations encountered in the wild and potential solutions for some variations were proposed.


Pol Dellaiera completed his master thesis on Reproducibility in Software Engineering at University of Mons (UMons) under the supervision of Dr. Tom Mens, full professor and director of the Software Engineering Lab.

The thesis serves as an introduction to the concept of reproducibility in software engineering, offering a comprehensive overview of formalizations using mathematical notations for key concepts and an empirical evaluation of several key tools. By exploring various case studies, methodologies and tools, the research aims to provide actionable insights for practitioners and researchers alike. In a commitment to fostering openness and collaboration, the full thesis has been made publicly available for free access. Additionally, the source files for the thesis are hosted on GitHub, promoting transparency and inviting further exploration and contributions from the global software engineering community.


Website updates

There were a number of improvements made to our website this month, including Akihiro Suda very helpfully making the <h4> elements more distinguishable from the <h3> level [...][...] as well as adding a guide for Dockerfile reproducibility [...]. In addition, Fay Stegerman added two tools, apksigcopier and reproducible-apk-tools, to our Tools page.


Misc development news

In Debian this month, 4 reviews of Debian packages were added, 11 were updated and 14 were removed, adding to our knowledge about identified issues. Only one issue type was updated, though, explaining that we don’t vary the build path anymore.


On our mailing list this month, Bernhard M. Wiedemann wrote that whilst he had previously collected issues that introduce non-determinism, he has now moved on to discussing “mitigations”, in the sense of how we can avoid whole categories of problem “without patching an infinite number of individual packages”. In addition, Janneke Nieuwenhuizen announced the release of two versions of GNU Mes. [...][...]


In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


In NixOS, with the 24.05 release out, we have again validated that our minimal ISO reproduces by building it on a VM with software from the 2000s and no access to the binary cache.


What’s more, we continued to write patches in order to fix specific reproducibility issues, including Bernhard M. Wiedemann writing three patches (for qutebrowser, samba and systemd), Chris Lamb filing Debian bug #1074214 against the fastfetch package, and Arnout Engelen proposing fixes to refind and the Scala compiler.


Lastly, diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb uploaded two versions (270 and 271) to Debian, and made the following changes as well:

  • Drop Build-Depends on liblz4-tool in order to fix Debian bug #1072575. [...]
  • Update tests to support zipdetails version 4.004 that is shipped with Perl 5.40. [...]


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, a number of changes were made by Holger Levsen, including:

  • Marking the virt(32|64)c-armhf nodes as down. [...]
  • Granting a developer access to the osuosl4 node in order to debug a regression on the ppc64el architecture. [...]
  • Granting a developer access to the osuosl4 node. [...][...]

In addition, Mattia Rizzolo re-aligned the /etc/default/jenkins file with changes performed upstream [...] and changed how configuration files are handled on the rb-mail1 host [...], whilst Vagrant Cascadian documented the failure of the virt32c and virt64c nodes after initial investigation [...].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Categories: FLOSS Project Planets

Real Python: How Do You Choose Python Function Names?

Planet Python - Wed, 2024-07-10 10:00

One of the hardest decisions in programming is choosing names. Programmers often use this phrase to highlight the challenges of selecting Python function names. It may be an exaggeration, but there’s still a lot of truth in it.

There are some hard rules you can’t break when naming Python functions and other objects. There are also other conventions and best practices that don’t raise errors when you break them, but they’re still important when writing Pythonic code.

Choosing the ideal Python function names makes your code more readable and easier to maintain. Code with well-chosen names can also be less prone to bugs.

In this tutorial, you’ll learn about the rules and conventions for naming Python functions and why they’re important. So, how do you choose Python function names?

Get Your Code: Click here to download the free sample code that you’ll use as you learn how to choose Python function names.

In Short: Use Descriptive Python Function Names Using snake_case

In Python, the labels you use to refer to objects are called identifiers or names. You set a name for a Python function when you use the def keyword.

When creating Python names, you can use uppercase and lowercase letters, the digits 0 to 9, and the underscore (_). However, you can’t use digits as the first character. You can use some other Unicode characters in Python identifiers, but not all Unicode characters are valid. Not even 🐍 is valid!

Still, it’s preferable to use only the Latin characters present in ASCII. The Latin characters are easier to type and more universally found on most keyboards. Using other characters rarely improves readability and can be a source of bugs.

Here are some syntactically valid and invalid names for Python functions and other objects:

Name              Validity  Notes
number            Valid
first_name        Valid
first name        Invalid   No whitespace allowed
first_10_numbers  Valid
10_numbers        Invalid   No digits allowed at the start of names
_name             Valid
greeting!         Invalid   No ASCII punctuation allowed except for the underscore (_)
café              Valid     Not recommended
䜠奜              Valid     Not recommended
hello⁀world       Valid     Not recommended; connector punctuation characters and other marks are valid characters

However, Python has conventions about naming functions that go beyond these rules. One of the core Python Enhancement Proposals, PEP 8, defines Python’s style guide, which includes naming conventions.

According to PEP 8 style guidelines, Python functions should be named using lowercase letters and with an underscore separating words. This style is often referred to as snake case. For example, get_text() is a better function name than getText() in Python.

Function names should also describe the actions being performed by the function clearly and concisely whenever possible. For example, for a function that calculates the total value of an online order, calculate_total() is a better name than total().
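To make the contrast concrete, consider this small sketch (the function bodies are invented for the example):

# Descriptive snake_case name: the intent is clear at the call site
def calculate_total(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# Valid but vague: the reader has to guess what is being totalled
def total(p, t):
    return sum(p) * (1 + t)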

You’ll explore these conventions and best practices in more detail in the following sections of this tutorial.

What Case Should You Use for Python Function Names?

Several character cases, like snake case and camel case, are used in programming for identifiers to name the various entities. Programming languages have their own preferences, so the right style for one language may not be suitable for another.

Python functions are generally written in snake case. When you use this format, all the letters are lowercase, including the first letter, and you use an underscore to separate words. You don’t need to use an underscore if the function name includes only one word. The following function names are examples of snake case:

  • find_winner()
  • save()

Both function names include lowercase letters, and one of them has two English words separated by an underscore. You can also use the underscore at the beginning or end of a function name. However, there are conventions outlining when you should use the underscore in this way.

You can use a single leading underscore, such as with _find_winner(), to indicate that a function is meant only for internal use. An object with a leading single underscore in its name can be used internally within a module or a class. While Python doesn’t enforce private variables or functions, a leading underscore is an accepted convention to show the programmer’s intent.

A single trailing underscore is used by convention when you want to avoid a conflict with existing Python names or keywords. For example, you can’t use the name import for a function since import is a keyword. You can’t use keywords as names for functions or other objects. You can choose a different name, but you can also add a trailing underscore to create import_(), which is a valid name.

You can also use a single trailing underscore if you wish to reuse the name of a built-in function or other object. For example, if you want to define a function that you’d like to call max, you can name your function max_() to avoid conflict with the built-in function max().

Unlike the case with the keyword import, max() is not a keyword but a built-in function. Therefore, you could define your function using the same name, max(), but it’s generally preferable to avoid this approach to prevent confusion and ensure you can still use the built-in function.
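A quick sketch of that convention in practice (the function body is made up for illustration):

def max_(values):
    """Return the largest value, ignoring None entries."""
    return max(v for v in values if v is not None)

max_([3, None, 7])  # 7, while the built-in max() remains available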

Double leading underscores are also used for attributes in classes. This notation invokes name mangling, which makes it harder for a user to access the attribute and prevents subclasses from accessing them. You’ll read more about name mangling and attributes with double leading underscores later.

Read the full article at https://realpython.com/python-function-names/ »

Categories: FLOSS Project Planets

Mer Joyce: voices of the Open Source AI Definition

Open Source Initiative - Wed, 2024-07-10 09:36

The Open Source Initiative (OSI) is running a series of stories about a few of the people involved in the Open Source AI Definition (OSAID) co-design process. We’ll be featuring the voices of the volunteers who have helped shape and are shaping the Definition.

The OSI started researching the topic in 2022, and in 2023 began the co-design process of a new definition of Open Source that applies to AI. The OSI hired Mer Joyce, founder and principal of Do Big Good, as an independent consultant to lead the co-design process. She has worked for over a decade at the intersection of research, policy, innovation and social change.

Mer Joyce, process facilitator for the Open Source AI Definition

About co-design

Co-design, also called participatory or human-centered design, is a set of creative methods used to solve communal problems by sharing knowledge and power. The co-design methodology addresses the challenges of reaching an agreed definition within a diverse community (Costanza-Chock, 2020; Escobar, 2018; Creative Reaction Lab, 2018; Friedman et al., 2019).

As noted in MIT Technology Review’s article about the OSAID, “[t]he open-source community is a big tent… encompassing everything from hacktivists to Fortune 500 companies…. With so many competing interests to consider, finding a solution that satisfies everyone while ensuring that the biggest companies play along is no easy task.” (Gent, 2024). 

The co-design method allows for the integration of diverging perspectives into one just, cohesive and feasible standard. Support from such a significant and broad group of people also creates a tension to be managed between moving swiftly enough to deliver outputs that can be used operationally and taking the time to consult widely to understand the big issues and garner community buy-in. Having Mer as facilitator of the OSAID co-design, with her in-depth experience, has been important in ensuring the integrity of the process. 

The OSAID co-design process

The first step of the OSAID co-design process was to identify the freedoms needed for Open Source AI. After various online and in-person activities and discussions, including five workshops across the world, the community adopted the four freedoms for software, now adapted for AI systems:

  • Freedom to Use the system for any purpose and without having to ask for permission.
  • Freedom to Study how the system works and inspect its components.
  • Freedom to Modify the system for any purpose, including to change its output.
  • Freedom to Share the system for others to use with or without modifications, for any purpose.

The next step was the formation of four working groups to initially analyze four different AI systems and their components. To achieve better representation, special attention was given to diversity, equity and inclusion. Over 50% of the working group participants are people of color, 30% are black, 75% were born outside the US, and 25% are women, trans or nonbinary.

These working groups discussed and voted on which AI system components should be required to satisfy the four freedoms for AI. The components adopted are described in the Model Openness Framework developed by the Linux Foundation.

The vote compilation was performed based on the mean total votes per component (Ό). Components that received over 2Ό votes were marked as “required,” and those between 1.5Ό and 2Ό were marked “likely required.” Components that received between 0.5Ό and Ό were marked as “likely not required,” and those with less than 0.5Ό were marked “not required.” For example, with a mean of Ό = 4 votes per component, a component needed more than 8 votes to be marked “required.”

After the working groups evaluated legal frameworks and legal documents for each component, each working group published a recommendation report. The end result is the OSAID with a comprehensive definition checklist encompassing a total of 17 components. More working groups are being formed to evaluate how well other AI systems align with the Definition.

OSAID multi-stakeholder co-design process: from component list to a definition checklist

Meet Mer Joyce

Video recorded by Ezequiel Lanza, Open Source AI Evangelist at Intel

I am the process facilitator for the Open Source AI Definition, the Open Source Initiative project creating a definition of Open Source AI that will be a part of the stable public infrastructure of Open Source technology that everyone can benefit from, similar to the Open Source Definition that OSI currently stewards. The co-design of the Open Source AI Definition involves consulting with global stakeholders to ensure their vast range of needs are represented while integrating and weaving together the variety of different perspectives on what Open Source AI should mean.

If you would like to participate in the process, we’re currently on version 0.0.7. We will have a release candidate in June and a stable version in October. There is a public forum at discuss.opensource.org where anyone can create an account and make comments. As different versions are created, updates about our process are released here as well. I am available, as is the executive director of the OSI, to answer questions at bi-weekly town halls that are open for anyone to attend.

How to get involved

The OSAID co-design process is open to everyone interested in collaborating. There are many ways to get involved:

  • Join the working groups: be part of a team to evaluate various models against the OSAID.
  • Join the forum: support and comment on the drafts, record your approval or concerns to new and existing threads.
  • Comment on the latest draft: provide feedback on the latest draft document directly.
  • Follow the weekly recaps: subscribe to our newsletter and blog to be kept up-to-date.
  • Join the town hall meetings: participate in the online public town hall meetings to learn more and ask questions.
  • Join the workshops and scheduled conferences: meet the OSI and other participants at in-person events around the world.
One of the many OSAID workshops organized by Mer Joyce around the world
Categories: FLOSS Research

Petter Reinholdtsen: Some notes from the 2024 LinuxCNC Norwegian developer gathering

Planet Debian - Wed, 2024-07-10 08:45

The Norwegian The LinuxCNC developer gathering 2024 is over. It was a great and productive weekend, and I am sad that it is over.

Regular readers probably still remember what LinuxCNC is, but here is a quick summary for those that forgot: LinuxCNC is a free software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It eats G-code and produces motor movement and other changes to the physical world, while reading sensor input.

I am not quite sure about the total head count, as not all people were present at the gathering the entire weekend, but I believe it was close to 10 people showing their faces at the gathering. The "hard core" of the group, who stayed the entire weekend, were two from Norway, two from Germany and one from England. I am happy with the outcome of the gathering. We managed to wrap up a new stable LinuxCNC release, 2.9.3, and even tested it on real hardware within minutes of the release. The release notes for 2.9.3 are still being written, but should show up on the project site in the next few days. We managed to go through around twenty pull requests and merge them into either the stable release (2.9) or the development branch (master). There are still around thirty pull requests left to process, so we are not out of work yet. We even managed to fix/improve a slightly worn lathe, and experiment with running a mechanical clock using G-code.

The evening barbeque worked well both on Saturday and Sunday. It is quite fun to light up a charcoal grill using compressed air. Sadly the weather was not the best, so we stayed indoors most of the time.

This gathering was made possible partly with sponsoring from Redpill Linpro, Debian and the NUUG Foundation, and we are most grateful for the support. I would also like to thank the local school for lending us some furniture, and of course the rest of the members of the organizers team, Asle and Bosse, for their countless contributions. The gathering was such a success that we want to do it again next year.

We plan to organize the next Norwegian LinuxCNC developer gathering at the end of June next year, the weekend of Friday 27th to Sunday 29th of June 2025. I recommend you reserve the dates on your calendar today. Other related communities are also welcome to join in, for example those working on systems like FreeCAD and opencamlib, as I am sure we have much in common and sharing experiences would be very useful to all involved. We are of course already looking for sponsors for this gathering. The total budget for this gathering was around NOK 25,000 (around EUR 2,300), so our needs are quite modest. Perhaps a machine or tools company would like to help out the free software manufacturing community by sponsoring food, lodging and transport for such a gathering?

Categories: FLOSS Project Planets
