FLOSS Project Planets

Bounteous.com: Discover the Power of Drupal for Enhanced Operational Efficiency and Security for Healthcare Systems

Planet Drupal - Fri, 2024-06-28 18:39
Healthcare systems consistently rely on technology to provide accurate and timely care. Drupal, an integrated CMS, can help meet this goal.
Categories: FLOSS Project Planets

Bounteous.com: A Guide to the Latest Security Updates for Drupal 7 Users

Planet Drupal - Fri, 2024-06-28 18:39
Discover important security updates and changes happening in Drupal 7 and how you could benefit from upgrading to Drupal 10!
Categories: FLOSS Project Planets

Bounteous.com: Best Practices With Composable Drupal

Planet Drupal - Fri, 2024-06-28 18:39
Discover how Drupal enables organizations to adapt to evolving business needs with agility and ease.
Categories: FLOSS Project Planets

Bounteous.com: The Evolution of Drupal: Discover the Features D7 Users Are Missing Out On

Planet Drupal - Fri, 2024-06-28 18:39
Organizations still using Drupal 7 are missing out on the flexibility, customization options, scalability, and marketing capabilities available in the newer versions. Drupal 10 allows for the management of consistent and engaging digital experiences across various channels, enhances search engine optimization, and enables web teams to deliver content more efficiently.
Categories: FLOSS Project Planets

ImageX: Unlock the Incredible Diversity of Robust AI-Driven Workflows with the AI Interpolator Module in Drupal

Planet Drupal - Fri, 2024-06-28 18:27

Authored by Nadiia Nykolaichuk.

It seems like yesterday that people started cautiously embracing artificial intelligence with admiration, surprise, or even fear. Technology rushes forward, multiplying the number of potential ways to use AI on your website to streamline your workflows.

Categories: FLOSS Project Planets

About My Part in the Creation of the Plasma 6.1 Wallpaper "Reef"

Planet KDE - Fri, 2024-06-28 14:33

(German version of this article: https://wordsmith.social/felixernst/mein-anteil-an-der-erstellung-des-hintergrundbildes-von-plasma-6-1-reef)

Plasma 5

A short recap: In Plasma 5 we predominantly had wallpapers with geometric features. They showed digital representations of nature or were completely abstract, which I never really liked. Perhaps that was trendy for a while, or maybe technical Linux users enjoy such wallpapers, which are quite obviously made using a computer. However, these days we offer the best package of security, privacy, usability, and power not only for tech enthusiasts but for everyone. Therefore, it is in my opinion important that our wallpapers represent not just what we stand for but also what we want to enable. In the best case, we enthuse a broad public this way. We should move on from the purely technical towards what is human and incorporate the creative, inventive, or artistic, which will always be absent from machines. A down-to-earth example of this is wallpapers that look like they could be painted on a simple piece of paper.

Plasma 6.0

For the MegaRelease and Plasma 6.0, KDE arranged a competition for the next wallpaper. There was a jury, and even though I had no part in any of this, I was in full agreement with the jury about the winning image “Scarlet Tree”. As far as I know, the winning artist never made a public appearance, and the only thing I know of them is their pseudonym “axo1otl”, under which the wallpaper was published. As far as I know, the communication with them was handled by Niccolò Venerandi.

Plasma 6.1

From then on, the release of Plasma 6.1 was inching closer. One could ask: Do we really need a new wallpaper for every new version?

Ask our Promo Team and they will say: Yes, of course!

The reason for this is that we want to show images of the new Plasma version, but not every new Plasma version includes changes to the default user interface. It would be quite annoying if we moved buttons around or changed the design twice a year!

But when we change the wallpaper, it becomes obvious: this is a new version. The wallpaper then stands in for all the changes that happened below the surface.

Additionally, a new pretty image obviously also brings some colour and variety into users' lives, if they haven't already switched their wallpaper themselves.

The plan was to keep the wallpaper for Plasma 6.1 in a similar style as the Plasma 6.0 one. The same artist “axo1otl” painted another picture for us. Unfortunately, we only got to see it somewhat late:

The Visual Design Group was not enthralled by it. The image is quite full, and therefore some of us thought that it was not suitable as a wallpaper. This reluctance was strong enough that people started discussing what we could use as a wallpaper instead, even though we did not have much time to make a decision.

I thought the image was good enough, but I was in a minority with that opinion. However, there was no quick way to find a popular replacement either. Some suggestions went back to our old geometric abstract style from Plasma 5 times.

Additionally, the picture was a bit too pretty to simply drop it. So there were attempts to edit the “Reef” wallpaper in a way that might fit our purpose.

Editings

Niccolò used blur. Oliver Beard from Wales moved some of the elements of the painting out of the frame so the whole wallpaper would become calmer, like a backdrop. This was considered a step in the right direction.

Niccolò then combined both strategies. While watching these experiments, I noticed that the possibilities for adaptation were very restricted because nobody dared to personally add anything to the picture. That is why the reefs only grew in size in the images. Nobody made them smaller, because that would mean creating empty space which would then need to be filled, e.g. by adding new sand.

I already saw a future before us in which the time constraints would force us to publish such a hastily-constructed adaptation that hides or obscures many nice details of the original artwork. All that because we would not know how to help ourselves.

I did not want that to happen. I felt like I might and should perhaps be able to help here.

The thing is, I tend to work on the Dolphin file manager and regularly dive into the depths of its source code, so I know a thing or two about underwater landscapes. After all, I have been to two Dolphin meetings in the Mediterranean Sea in the last two years: in Barcelona and in Thessaloniki.

So I started editing the painting myself:

My plan was to make room so the image would seem more serene. Viewers should no longer feel like they are in the middle of a lively coral reef and more like they are wandering through the open sea. By moving the right reef to a new middleground and shrinking the castle, the depths and distances in the image grew. After some initial positive feedback, I added more and more of the missing elements.

Some contributors in the Visual Design Group did not like that the path at the bottom of the image did not lead to the castle anymore. To others, the path was generally an undesirable element which should be removed. I bent the visible end of it towards the castle:

The waves in the upper half of the image also needed to be completed. There were big holes where the two castle towers used to be.

The image above was the result of me working until 4 a.m. I only concluded once I considered the image good enough that I could honestly advocate for it to be a Plasma wallpaper. I hoped that my nightly work would ensure that we had a passable wallpaper ready in time for the Plasma 6.1 release.

When I awoke the next “morning,” I addressed feedback from the group. More people had voiced the opinion that they did not like the path. I had originally kept it because I tried my best to preserve as much of the original vision and technique of “axo1otl” as possible given my other changes.

Granted, the goal of this exercise was to make the wallpaper more calming. Removing elements goes hand in hand with that. It turned out that, for some, the path did not look like it was even leading to the castle. Others did not imagine themselves as wanderers on the path when they viewed the image.

So I removed it and also used that opportunity to improve the sand at the bottom edge of the screen so it would be closer in style to the sand I did not paint.

Finally, everyone was somewhat content with this. It might not be one of our best wallpapers of all time, but considering the time constraints it did not make a lot of sense to discuss this further.

However, we wanted to ensure that the original artist “axo1otl” was fine with the changes. The image would be published under their pseudonym after all.

The image was sent to them, and within one or two days they made a few final adjustments:

And what can I say? I like the changes. The shadows are better, and the drawing style of the water and sand I added was adjusted so one can no longer tell that they were painted by a different person. For this, some gradual colour transitions were replaced by discretely coloured steps.

So everything was fine and well, except the path reappeared. More generally, it seemed to me like the image was not based on my final version but on the one before that.

I do not know what happened there, but for me this was fine. Not everyone liked that the path reappeared, but considering that this is a rather minor detail, there was hardly any criticism.

Release

And then we released Plasma 6.1. However, due to unforeseen circumstances, the new wallpaper did not make it into the release! I will not elaborate on this topic, but I obviously was not happy to read that.

Furthermore, I noticed that the wallpaper that we nevertheless offered as a separate download was not the version “axo1otl” had sent us. It was my latest version. I hope “axo1otl” is not upset about that, but as far as I know, they will not create a new wallpaper for Plasma 6.2.

I have now created another version of the wallpaper based on “axo1otl”'s final version. The picture is identical to their version, aside from me removing the path. If you do not like the path, I would say that this is the best version for you. However, there are slight compression artefacts:

Plasma 6.2

For the next Plasma version our Promo Team wants a new wallpaper. There are already efforts to ensure that we will hopefully do a better job this time around.

I have suggested the creation of a new permanent category in KDE's forum in https://invent.kde.org/teams/vdg/issues/-/issues/52#note_972957 . I would want it to be a place for everyone to upload their self-made wallpapers. Maybe there are hobby artists out there who would enjoy doing that. I hope that some of the images would be great and well-suited as wallpapers for future Plasma versions to the benefit of us and everyone.

Categories: FLOSS Project Planets

Steinar H. Gunderson: This is how people think about security

Planet Debian - Fri, 2024-06-28 13:30

I borrowed a 3D printer with Octoprint set up, and happened to access it from work, whereupon I was greeted with a big scary message and a link to this blog post. Even though it is from 2018, there seems to be no retraction, so I figured it's an interesting insight into how people seem to think about security:

  • There is a “public internet” that is disjoint from your private network, and the only way something on the latter can be exposed to the former is if you “forward ports on your router”. (Hint: IPv6 prevalence is 45% and rising.)
  • There are no dangerous actors on your private network (e.g., nobody sets up a printer on a company network with a couple thousand hosts). Software that is safe to use on your private network can cause “a catastrophe to happen” if exposed to the internet (note that OctoPrint has now, as far as I know, passwords on by default; the linked ISC advisory is about completely open public instances).
  • There is no mention about TLS, or strong passwords. There is a mention about password rate limiting, but not that the service should be able to do that itself.
  • HTTP forwarding is safe even if port forwarding is not. Cloud(TM) forwarding is even safer. In fact, exposing your printer to a Discord channel is also a much better idea.
  • It is dangerous and difficult to have your reverse proxy on the same physical instance as the service it is proxying; it is “asking for trouble”.

I'm not against defense in depth. But I wonder whether this really still passes for best practice in 2024.

Categories: FLOSS Project Planets

Web Review, Week 2024-26

Planet KDE - Fri, 2024-06-28 11:52

Let’s go for my web review for the week 2024-26.

Chat Control and the New Panopticon - by Masayuki Hatta

Tags: tech, surveillance, privacy, cryptography, law

Very neat piece, shows quite well the problems with Chat Control-like laws. It’s been postponed this time, but expect it to come back somehow.

https://mhatta.substack.com/p/chat-control-and-the-new-panopticon


Cleantech has an enshittification problem

Tags: tech, politics, law

This is becoming an important industry. Regulation is needed to keep consumers from being caught in a mousetrap. This is necessary to reap the benefits of those technologies.

https://pluralistic.net/2024/06/26/unplanned-obsolescence/


Indirector

Tags: tech, cpu, security

A new type of attack targeting the CPU indirect branch predictor.

https://indirector.cpusec.org/


Polyfill supply chain attack hits 100K+ sites

Tags: tech, supply-chain, security, web

This is bad for two reasons: 1) people clearly put too much trust in random CDNs to distribute their dependencies, and 2) people don’t track dependency obsolescence properly.

https://sansec.io/research/polyfill-supply-chain-attack


The People’s AI – Doc Searls Weblog

Tags: tech, ai, machine-learning, gpt, foss, self-hosting

This ignores the energy consumption aspect. That said, it is spot on regarding the social and economic aspects of those transformer models. They have to be open and self-hostable.

https://doc.searls.com/2024/05/28/the-peoples-ai/


On the Paradox of Learning to Reason from Data

Tags: tech, ai, machine-learning, gpt, research

Further clues that transformer models can’t learn logic from data.

https://arxiv.org/abs/2205.11502


Scalable MatMul-free Language Modeling

Tags: tech, ai, machine-learning, gpt, research

Interesting paper showing a promising path to reducing the memory and compute requirements of transformer models. This is much more interesting than the race to ever more gigantic sizes.

https://arxiv.org/abs/2406.02528


Inside the tiny chip that powers Montreal subway tickets

Tags: tech, nfc, hardware

Nice reverse engineering of an NFC chip used in a disposable transportation ticket.

https://www.righto.com/2024/06/montreal-mifare-ultralight-nfc.html


Local, first, forever

Tags: tech, crdt, self-hosting, privacy

Interesting approach for using CRDTs through a file sync application. Something I would like to see generalized to traditional desktop applications.

https://tonsky.me/blog/crdt-filesync/


Reladiff

Tags: tech, databases, tools, tests

Interesting tool for diffing database tables. Should come in handy for tests.

https://reladiff.readthedocs.io/en/latest/index.html


Performance tip: avoid unnecessary copies – Daniel Lemire’s blog

Tags: tech, performance

Interesting case: when everything else gets faster, memory copies might start to become the bottleneck.

https://lemire.me/blog/2024/06/22/performance-tip-avoid-unnecessary-copies/


How much memory does a call to ‘malloc’ allocates? – Daniel Lemire’s blog

Tags: tech, system, memory

If you needed to be reminded that allocating small blocks of memory is a bad idea… here is a post explaining it.

https://lemire.me/blog/2024/06/27/how-much-memory-does-a-call-to-malloc-allocates/


How the STL uses explicit

Tags: tech, c++

Definitely not the rules you want to apply to your projects. Still, it’s interesting to know how the STL uses explicit.

https://quuxplusone.github.io/blog/2024/06/25/most-stl-ctors-arent-explicit-but-yours-still-should-be/


Breaking out of nested loops with generators | mathspp

Tags: tech, python

This is a useful construct in Python which is often forgotten.

https://mathspp.com/blog/breaking-out-of-nested-loops-with-generators
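
The core trick, wrapping the nested loops in a generator so that a single break exits them all, can be sketched like this (a minimal example, not taken from the article):

```python
def pairs(rows, cols):
    # Flatten two nested loops into a single iterable.
    for r in rows:
        for c in cols:
            yield r, c

found = None
for r, c in pairs(range(5), range(5)):
    if r * c == 6:
        found = (r, c)
        break  # one break now exits both loops at once

print(found)  # (2, 3)
```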


The plan-execute pattern

Tags: tech, algorithm, pattern, design, architecture

A nice pattern for separating decisions from actions in complex algorithms.

https://mmapped.blog/posts/29-plan-execute


Fighting Faults in Distributed Systems

Tags: tech, safety, distributed, failure

A nice zine introducing the topic of faults and failures in distributed systems.

https://decomposition.al/CSE138-2024-01/zines/zine-ali.pdf


From ZeroVer to SemVer: A Comprehensive List of Versioning Schemes in Open Source | Andrew Nesbitt

Tags: tech, project-management, version-control

A nice collection of versioning schemes. I definitely didn’t know them all.

https://nesbitt.io/2024/06/24/from-zerover-to-semver-a-comprehensive-list-of-versioning-schemes-in-open-source.html


Of Psion and Symbian - by Bradford Morgan White

Tags: tech, history

Another story of precursors in the tech space. They basically invented the palmtop and spawned Symbian, which was very much dominant on mobile for a while. The end of the Nokia story is a bit oversimplified for my taste, just glancing over Maemo, but it is forgivable since that wasn’t the focus of this piece.

https://www.abortretry.fail/p/of-psion-and-symbian


Neko: History of a Software Pet

Tags: tech, history, funny

I remember playing with this a long time ago… but it’s actually even older than I suspected.

https://eliotakira.com/neko/


Bye for now!

Categories: FLOSS Project Planets

mark.ie: My Drupal Core Contributions for week-ending June 28th, 2024

Planet Drupal - Fri, 2024-06-28 09:24

Here's what I've been working on for my Drupal contributions this week. Thanks to Code Enigma for sponsoring the time to work on these.

Categories: FLOSS Project Planets

Real Python: The Real Python Podcast – Episode #210: Creating a Guitar Synthesizer & Generating WAV Files With Python

Planet Python - Fri, 2024-06-28 08:00

What techniques go into synthesizing a guitar sound in Python? What higher-level programming and Python concepts can you practice while building advanced projects? This week on the show, we talk with Real Python author and core team member Bartosz Zaczyński about his recent step-by-step project, Build a Guitar Synthesizer: Play Musical Tablature in Python.


Categories: FLOSS Project Planets

Promet Source: How Population Size Shapes CMS Choices in Government

Planet Drupal - Fri, 2024-06-28 06:34
Takeaway: Government CMS preferences evolve with population size, shifting from specialized proprietary solutions for smaller entities to enterprise-level, often open-source platforms for larger ones. This shift reflects the need for greater scalability, flexibility, and advanced features as government entities grow and face more complex digital demands.
Categories: FLOSS Project Planets

Robin Wilson: A load of links…

Planet Python - Fri, 2024-06-28 06:14

For months now I’ve been collecting a load of links saying that I’ll get round to blogging them "soon". Well, I’m currently babysitting for a friend’s daughter (who is sleeping peacefully upstairs), so I’ve finally found time to write them up.

So, here are a load of links – a lot of them are geospatial- or data-related, but I hope you might find something here you like even if those aren’t your specialisms:

  • QGIS Visual Changelog for v3.36 – QGIS is great GIS software that I use a lot in my work. The reason this link is here though is because of their ‘visual changelog’ that they write for every new version – it is a really good way of showing what has changed in a GUI application, and does a great job of advertising the new features.
  • List of unusual units of measurement – a fascinating list from Wikipedia. I think my favourite are the barn (and outhouse/shed) and the shake – you’ll have to read the article to see what these are!
  • List of humorous units of measurement – linked from the above is this list of humorous units of measurement, including the ‘beard-second’ and the Smoot.
  • lonboard – a relatively new package for very efficiently creating interactive webmaps of geospatial data from Python. It is so much faster than geopandas.explore(), and is developing at a rapid pace. I’m using it a lot these days.
  • A new metric for measuring learning – an interesting post from Greg Wilson about the time it takes learners to realise that something was worth learning. I wonder what the values are for some things that I do a lot of – remote sensing, GIS, Python, cloud computing, cloud-native geospatial etc.
  • The first global 30-m land-cover dynamic monitoring product with fine classification system from 1985 to 2022 – an interesting dataset that I haven’t had a chance to investigate in detail yet, but it purports to give 30m global land cover every 5 years from 1985 and every year from 2000.
  • CyberChef – from GCHQ (yes, the UK signals intelligence agency) comes this very useful web-based tool for converting data formats, extracting data, doing basic image processing, extracting files and more. You can even build multiple tools into a ‘recipe’ or pipeline.
  • envreport – a simple Python script to report lots of information about the Python environment in a diffable form – I can see this being very useful
  • Antarctic glaciers named after satellites – seven Antarctic glaciers have been named after satellites, reflecting (ha!) the importance of satellites in monitoring Antarctica
  • MoMAColors and MetBrewer – these are color palettes derived from artwork at the Museum of Modern Art and the Metropolitan Museum of Art in New York. There are some beautiful sets of colours in here which I want to use in some future data visualisations.

  • geospatial-cli – a list of geospatial command-line tools. Lots of them I knew, but there were some gems I hadn’t come across before here too.
  • map-gl-tools – a nice Javascript package to add a load of syntactic sugar to the MapLibreJS library. I’ve recently started using this library and found things quite clunky (coming from a Leaflet background) so this really helped
  • CoolWalks – a nice research paper looking at creating walking routes on the shaded side of the street for various cities
  • Writing efficient code for GeoPandas and Shapely in 2023 – a very useful notebook showing how to do things efficiently in GeoPandas in 2023. There are a load of old ways of doing things which are no longer the most efficient!
  • Inside PostGIS: Calculating Distance – a post explaining how PostGIS’s distance calculation algorithms work
  • quackosm – a Python and CLI tool for reading OpenStreetMap data using DuckDB – either for future analysis in DuckDB or for export to GeoParquet
  • Comparing odc.stac.load and stackstac for raster composite workflow – fairly self-explanatory, but it’s interesting to see the differences between two widely-used tools in the STAC ecosystem
  • Towards Flexible Data Schemas – this article shows how the flexibility of data schemes in specifications like STAC really help their adoption and use for diverse purposes
Categories: FLOSS Project Planets

Luke Plant: Keeping things in sync: derive vs test

Planet Python - Fri, 2024-06-28 05:15

An extremely common problem in programming is that multiple parts of a program need to be kept in sync – they need to do exactly the same thing or behave in a consistent way. It is in response to this problem that we have mantras like “DRY” (Don’t Repeat Yourself), or, as I prefer it, OAOO, “Each and every declaration of behaviour should appear Once And Only Once”.

For both of these mantras, if you are faced with possible duplication of any kind, the answer is simply “just say no”. However, since programming mantras are to be understood as proverbs, not absolute laws, there are times that obeying this mantra can hurt more than it helps, so in this post I’m going to discuss other approaches.

Most of what I say is fairly language agnostic I think, but I’ve got specific tips for Python and web development.


The essential problem

To step back for a second, the essential problem that we are addressing here is that if making a change to a certain behaviour requires changing more than one place in the code, we have the risk that one will be forgotten. This results in bugs, which can be of various degrees of seriousness depending on the code in question.

To pick a concrete example, suppose we have a rule that says that items in a deleted folder get stored for 30 days, then expunged. We’re going to need some code that does the actual expunging after 30 days, but we’re also going to need to tell the user about the limit somewhere in the user interface. “Once And Only Once” says that the 30 days limit needs to be defined in a single place somewhere, and then reused.

There is a second kind of motivating example, which I think often crops up when people quote “Don’t Repeat Yourself”, and it’s really about avoiding tedious things from a developer perspective. Suppose you need to add an item to a menu, and you find out that first you’ve got to edit the MENU_ITEMS file to add an entry, then you’ve got to edit the MAIN_MENU constant to refer to the new entry, then you’ve got to define a keyboard shortcut in the MENU_SHORTCUTS file, then a menu icon somewhere else etc. All of these different places are in some way repeating things about how menus work. I think this is less important in general, but it is certainly life-draining as a developer if code is structured in this way, especially if it is difficult to discover or remember all the things that have to be done.

The ideal solution: derive

OAOO and DRY say that we aim to have a single place that defines the rule or logic, and any other place should be derived from this.

Regarding the simple example of a time limit displayed in the UI and used in the backend, this might be as simple as defining a constant e.g. in Python:

from datetime import timedelta

EXPUNGE_TIME_LIMIT = timedelta(days=30)

We then import and use this constant in both our UI and backend.

An important part of this approach is that the “deriving” process should be entirely automatic, not something that you can forget to do. In the case of a Python import statement, that is very easy to achieve, and relatively hard to get wrong – if you change the constant where it is defined in one module, any other code that uses it will pick up the change the next time the Python process is restarted.

Alternative solution: test

By "test", I mean ideally an automated test, but manual tests may also work if they are properly scripted. The idea is that you write a test that checks the behaviour or code is synced. Often, one (or more) of the places that need the behaviour will define it using some constant as above; let's say that is the "backend" code. Then, for the other instance, e.g. the UI, you would hard-code "30 days" without using the constant, but have a test that uses the backend constant to build a string and checks the UI for that string.
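
Applied to the 30-day example, a minimal sketch of this approach might look like the following (the notice string and test name are invented for illustration):

```python
from datetime import timedelta

# Backend: the canonical definition of the rule.
EXPUNGE_TIME_LIMIT = timedelta(days=30)

# UI: deliberately hard-coded wording, with no import from the backend.
DELETED_FOLDER_NOTICE = "Items in the deleted folder are removed after 30 days."

def test_notice_matches_backend_limit():
    # If EXPUNGE_TIME_LIMIT ever changes, this fails and reminds us
    # to update the UI copy as well.
    assert f"{EXPUNGE_TIME_LIMIT.days} days" in DELETED_FOLDER_NOTICE
```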

Examples

In the example above, it might be hard to see why you want to use the fundamentally less reliable, less automatic method I’m suggesting. So I now have to show some motivating examples where the “derive” method ends up losing to the cruder, simpler alternative of “test”.

Example 1 - external data sources

My first example comes from the project I’m currently working on, which involves creating CAM files from input data. Most of the logic for that is driven using code, but there are some dimensions that are specified as data tables by the engineers of the physical product.

These data tables look something like below. The details here aren’t important, and I’ve changed them – it’s enough to know that we are creating some physical “widgets” which need to have specific dimensions specified:

Widgets have length 150mm unless specified below

Widget id | Location | Length (mm)
----------|----------|------------
A         | start    | 100
A         | end      | 120
F         | start    | 105
F         | end      | 110

These tables are supplied at design-time rather than run-time i.e. they are bundled with the software and can’t be changed after the code is shipped. But it is still convenient to read them in automatically rather than simply duplicate the tables in my code by some process. So, for the body of the table, that’s exactly what my code does on startup – it reads the bundled XLSX/CSV files.

So we are obeying “derive” here — there is a single, canonical source of data, and anywhere that needs it derives it by an entirely automatic process.

But what about that “150mm” default value specified in the header of that table?

It would be possible to “derive” it by having a parser. Writing such a parser is not hard to do – for this kind of thing in Python I like parsy, and it is as simple as:

import parsy as P

default_length_parser = (
    P.string("Widgets have length ")
    >> P.regex(r"\d+").map(int)
    << P.string("mm unless specified below")
)

In fact I do something similar in some cases. But in reality, the “parser” here is pretty simplistic – it can’t deal with the real variety of English text that might be put into the sentence, and to claim I’m “deriving” it from the table is a bit of a stretch – I’m just matching a specific, known pattern. In addition, it’s probably not the case that any value for the default length would work – most likely if it was 10 times larger, there would be some other problem, and I’d want to do some manual checking.

So, let’s admit that we are really just checking for something expected, using the “test” approach. You can still define a constant that you use in most of the code:

DEFAULT_LENGTH_MM = 150

And then you test it is what you expect when you load the data file:

assert (
    worksheets[0].cell(1, 1).value
    == f"Widgets have length {DEFAULT_LENGTH_MM}mm unless specified below"
)

So, I’ve achieved my aim: a guard against the original problem of having multiple sources of information that could potentially be out of sync. But I’ve done it using a simple test, rather than a more complex and fragile “derive” that wouldn’t have worked well anyway.

By the way, for this specific project – we’re looking for another contract developer! It’s a very worthwhile project, and one I’m really enjoying – a small flexible team, with plenty of problem solving and fun challenges, so if you’re a talented developer and interested give me a shout.

Example 2 - defining UI behaviour for domain objects

Suppose you have a database that stores information about some kind of entity, like customers say, and you have different types of customer, represented using an enum of some kind, perhaps a string enum like this in Python:

from enum import StrEnum

class CustomerType(StrEnum):
    ENTERPRISE = "Enterprise"
    SMALL_FRY = "Small fry"  # Let’s be honest! Try not to let the name leak…
    LEGACY = "Legacy"

We need a way to edit the different customer types, and they are sufficiently different that we want quite different interfaces. So, we might have a dictionary mapping the customer type to a function or class that defines the UI. If this were a Django project, it might be a different Form class for each type:

CUSTOMER_EDIT_FORMS = {
    CustomerType.ENTERPRISE: EnterpriseCustomerForm,
    CustomerType.SMALL_FRY: SmallFryCustomerForm,
    CustomerType.LEGACY: LegacyCustomerForm,
}

Now, the DRY instinct kicks in and we notice that we now have two things we have to remember to keep in sync — any addition to the customer enum requires a corresponding addition to the UI definition dictionary. Maybe there are multiple dictionaries like this.

We could attempt to solve this by “deriving”, or some “correct by construction” mechanism that puts the creation of a new customer type all in one place.

For example, maybe we’ll have a base Customer class with get_edit_form_class() as an abstractmethod, which means it is required to be implemented. If I fail to implement it in a subclass, I can’t even construct an instance of the new customer subclass – it will throw an error.

from abc import ABC, abstractmethod

class Customer(ABC):
    @abstractmethod
    def get_edit_form_class(self):
        pass

class EnterpriseCustomer(Customer):
    def get_edit_form_class(self):
        return EnterpriseCustomerForm

class LegacyCustomer(Customer):
    ...  # etc.

I still need my enum value, or at least a list of valid values that I can use for my database field. Maybe I could derive that automatically by looking at all the subclasses?

CUSTOMER_TYPES = [
    cls.__name__.upper().replace("CUSTOMER", "")
    for cls in Customer.__subclasses__()
]

Or maybe an __init_subclass__ trick, and I can perhaps also set up the various mappings I’ll need that way?

It’s at this point you should stop and think. In addition to requiring you to mix UI concerns into the Customer class definitions, it’s getting complex and magical.

The alternative I’m suggesting is this: require manual syncing of the two parts of the code base, but add a test to ensure that you did it. All you need is a few lines after your CUSTOMER_EDIT_FORMS definition:

CUSTOMER_EDIT_FORMS = {
    # etc as before
}

for c_type in CustomerType:
    assert (
        c_type in CUSTOMER_EDIT_FORMS
    ), f"You've defined a new customer type {c_type}, you need to add an entry in CUSTOMER_EDIT_FORMS"

You could do this as a more traditional unit test in a separate file, but for simple things like this, I think an assertion right next to the code works much better. It really helps local reasoning to be able to look and immediately conclude “yes, I can see that this dictionary must be exhaustive because the assertion tells me so.” Plus you get really early failure – as soon as you import the code.

This kind of thing crops up a lot – if you create a class here, you’ve got to create another one over there, or add a dictionary entry etc. In these cases, I’m finding simple tests and assertions have a ton of advantages when compared to clever architectural contortions (or other things like advanced static typing gymnastics):

  • they are massively simpler to create and understand.

  • you can write your own error message in the assertion. If you make a habit of using really clear error messages, like the one above, your code base will literally tell you how to maintain it.

  • you can easily add things like exceptions. “Every Customer type needs an edit UI defined, except Legacy because they are read only” is an easy, small change to the above.

    • This contrasts with cleverer mechanisms, which might require relaxing other constraints to the point where you defeat the whole point of the mechanism, or create more difficulties for yourself.

  • the rule about how the code works is very explicit, rather than implicit in some complicated code structure, and typically needs no comment other than what you write in the assertion message.

  • you express and enforce the rule, with any complexities it gains, in just one place. Ironically, if you try to enforce this kind of constraint using type systems or hierarchies to eliminate repetition or the need for any kind of code syncing, you may find that when you come to change the constraint it actually requires touching far more places.

  • temporarily silencing the assertion while developing is easy and doesn’t have far reaching consequences.

Of course, there are many times when being able to automatically derive things at the code level, including some complex relationships between parts of the code, can be a win, and it’s the kind of thing you can do in Python with its many powerful techniques.

But my point is that you should remember the alternative: “synchronise manually, and have a test to check you did it.” Being able to add any kind of executable code at module level – the same level as class/function/constant definitions – is a Python super-power that you should use.

Example 3 - external polymorphism and static typing

A variant of the above problem is when, instead of an enum defining different types, I’ve got a set of classes that all need some behaviour defined.

Often we just use polymorphism where a base class defines the methods or interfaces needed and sub-classes provide the implementation. However, as in the previous case, this can involve mixing concerns e.g. user interface code, possibly of several types, is mixed up with the base domain objects. It also imposes constraints on class hierarchies.

Recently for these kind of cases, I’m more likely to prefer external polymorphism to avoid these problems. To give an example, in my current project I’m using the Command pattern or plan-execute pattern extensively, and it involves manipulating CAM objects using a series of command objects that look something like this:

@dataclass
class DeleteFeature:
    feature_name: str

@dataclass
class SetParameter:
    param_name: str
    value: float

@dataclass
class SetTextSegment:
    text_name: str
    segment: int
    value: str

Command: TypeAlias = DeleteFeature | SetParameter | SetTextSegment

Note that none of them share a base class, but I do have a union type that gives me the complete set.

It’s much more convenient to define the behaviour associated with these separately from these definitions, and so I have multiple other places that deal with Command, such as the place that executes these commands and several others. One example that requires very little code to show is where I’m generating user-presentable tables that show groups of commands. I convert each of these Command objects into key-value pairs that are used for column headings and values:

def get_command_display(command: Command) -> tuple[str, str | float | bool]:
    match command:
        case DeleteFeature(feature_name=feature_name):
            return (f"Delete {feature_name}", True)
        case SetParameter(param_name=param_name, value=value):
            return (param_name, value)
        case SetTextSegment(text_name=text_name, segment=segment, value=value):
            return (f"{text_name}[{segment}]", value)

This is giving me a similar problem to the one I had before: if I add a new Command, I have to remember to add the new branch to get_command_display.

I could split out get_command_display into a dictionary of functions, and apply the same technique as in the previous example, but it’s more work, a less natural fit for the problem and potentially less flexible.

Instead, all I need to do is add exhaustiveness checking with one more branch:

match command:
    # ... cases as before ...
    case _:
        assert_never(command)

Now, pyright will check that I didn’t forget to add branches here for any new Command. The error message is not controllable, in contrast to hand-written asserts, but it is clear enough.

The theme here is that additions in one part of the code require synchronised additions in other parts of the code, rather than being automatically correct “by construction”, but you have something that tests you didn’t forget.

Example 4 - generated code

In web development, ensuring consistent design and keeping different things in sync is a significant problem. There are many approaches, but let’s start with the simple case of using a single CSS stylesheet to define all the styles.

We may want a bunch of components to have a consistent border colour, and a first attempt might look like this (ignoring the many issues of naming conventions here):

.card-component,
.bordered-heading {
  border-color: #800;
}

This often becomes impractical when we want to organise by component, rather than by property, which introduces duplication:

.card-component {
  border-color: #800;
}

/* somewhere far away … */

.bordered-heading {
  border-color: #800;
}

Thankfully, CSS has variables, so the first application of “derive” is straightforward – we define a variable which we can use in multiple places:

:root {
  --primary-border-color: #800;
}

/* elsewhere */

.bordered-heading {
  border-bottom: 1px solid var(--primary-border-color);
}

However, as the project grows, we may find that we want to use the same variables in different contexts where CSS isn’t applicable. So the next step at this point is typically to move to Design Tokens.

Practically speaking, this might mean that we now have our variables defined in a separate JSON file. Maybe something like this (using a W3C draft spec):

{
  "primary-border-color": {
    "$value": "#800000",
    "$type": "color"
  },
  "primary-highlight-color": {
    "$value": "#FBC100",
    "$type": "color"
  }
}

From this, we can automatically generate CSS fragments that contain the same variables quite easily – for simple cases, this isn’t more than a 50 line Python script.
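A hypothetical version of such a generator, reduced to its core, might look like this (the token names and output layout are illustrative, not from any particular tool):

```python
import json

# Design tokens as they would be read from the JSON file on disk.
TOKENS_JSON = """
{
  "primary-border-color": {"$value": "#800000", "$type": "color"},
  "primary-highlight-color": {"$value": "#FBC100", "$type": "color"}
}
"""

def tokens_to_css(tokens_json: str) -> str:
    """Turn a flat design-tokens JSON document into a :root CSS block."""
    tokens = json.loads(tokens_json)
    lines = [":root {"]
    for name, token in tokens.items():
        lines.append(f"  --{name}: {token['$value']};")
    lines.append("}")
    return "\n".join(lines)

print(tokens_to_css(TOKENS_JSON))
```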

However, we’ve got some choices when it comes to how we put everything together. I think the general assumption in web development world is that a fully automatic “derive” is the only acceptable answer. This typically means you have to put your own CSS in a separate file, and then you have a build tool that watches for changes, and compiles your CSS plus the generated CSS into the final output that gets sent to the browser.

In addition, once you’ve bought into these kind of tools you’ll find they want to do extensive changes to the output, and define more and more extensions to the underlying languages. For example, postcss-design-tokens wants you to write things like:

.foo {
  color: design-token('color.background.primary');
}

And instead of using CSS variables in the output, it puts the value of the token right in to every place in your code that uses it.

This approach has various problems, in particular that you become more and more dependent on the build process, and the output gets further from your input. You can no longer use the Dev Tools built in to your browser to do editing – the flow of using Dev Tools to experiment with changing a single spacing or colour CSS variable for global changes is broken; you need your build tool. And you can’t easily copy changes from Dev Tools back into the source, because of the transformation step, and debugging can be similarly difficult. And then, you’ll probably want special IDE support for the special CSS extensions, rather than being able to lean on your editor simply understanding CSS, and any other tools that want to look at your CSS now need support, etc.

It’s also a lot of extra infrastructure and complexity to solve this one problem, especially when our design tokens JSON file is probably not going to change that often, or is going to have long periods of high stability. There are good reasons to want to be essentially build free. The current state of the art in this space is that to get your build tool to compile your CSS you add import './styles.css' in your entry point Javascript file! What if I don’t even have a Javascript file? I think I understand how this sort of thing came about, but don’t try to tell me that it’s anything less than completely bonkers.

Do we have an alternative to the fully automatic derive?

Using the “test” approach, we do. We can even stick with our single CSS file – we just write it like this:

/* DESIGN TOKENS START */
/* auto-created block - do not edit */
:root {
  --primary-border-color: #800000;
  --primary-highlight-color: #FBC100;
}
/* DESIGN TOKENS END */

/* the rest of our CSS here */

The contents of this block will be almost certainly auto-generated. We won’t have a process that fully automatically updates it, however, because this is the same file where we are putting our custom CSS, and we don’t want any possibility of lost work due to the file being overwritten as we are editing it.

On the other hand we don’t want things to get out of sync, so we’ll add a test that checks whether the current styles.css contains the block of design tokens that we expect to be there, based on the JSON. For actually updating the block, we’ll need some kind of manual step – maybe a script that can find and update the DESIGN TOKEN START block, maybe cog – which is a perfect little tool for this use case — or we could just copy-paste.
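The sync test itself can be as small as this sketch. The file contents are inlined here for illustration; in reality styles_css would be read from disk and expected_block produced by the same script that generates the block:

```python
# Current contents of styles.css (inlined for this sketch).
styles_css = """\
/* DESIGN TOKENS START */
/* auto-created block - do not edit */
:root {
  --primary-border-color: #800000;
  --primary-highlight-color: #FBC100;
}
/* DESIGN TOKENS END */

/* the rest of our CSS here */
"""

# What the generator currently produces from the design tokens JSON.
expected_block = (
    "/* DESIGN TOKENS START */\n"
    "/* auto-created block - do not edit */\n"
    ":root {\n"
    "  --primary-border-color: #800000;\n"
    "  --primary-highlight-color: #FBC100;\n"
    "}\n"
    "/* DESIGN TOKENS END */"
)

assert expected_block in styles_css, (
    "styles.css is out of sync with the design tokens JSON - "
    "re-run the token update step"
)
print("design tokens block is in sync")
```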

There are also slightly simpler solutions in this case, like using a CSS import if you don’t mind having multiple CSS files.

Conclusion

For all the examples above, the solutions I’ve presented might not work perfectly for your context. You might also want to draw the line at a different place than me. But my main point is that we don’t have to go all the way with a fully automatic derive solution to eliminate any manual syncing. Having some manual work plus a mechanism to test that two things are in sync is a perfectly legitimate solution, and it can avoid some of the large costs that come with structuring everything around “derive”.


Marknote 1.3

Planet KDE - Fri, 2024-06-28 05:00

It's been almost two months since the release of Marknote 1.2. Marknote is a rich text editor and note management tool using Markdown. Since the release a lot has changed and many new features have been added thanks to the work of all contributors!

User Interface

On small phone screens, you want as much space as possible to write and read your notes. Marknote now lets you hide the editor toolbar on mobile, so you can focus on the important stuff while having the formatting options just a tap away.

On desktop, you can now adjust the width of the note list and editor by dragging the separator, just like in other desktop applications.

You can now undo and redo changes using buttons in the UI, rather than relying on keyboard shortcuts, which is especially helpful if you're using a virtual keyboard that may not have a control key.

The application menu has been reorganized and simplified to make it less cluttered.

Carl's work on responsive images has finally landed in Qt and now also in Marknote. This means that images no longer get cropped on small screens.

Volker implemented clickable links! You can now ctrl-click links to open them.

Switching to Marknote

To make it easier to switch from your current notes application to Marknote, Marknote now offers several import options. For now, you can import your notes from KNotes and maildir.

Preferences

The settings no longer open in an overlay dialog; they are now shown in Kirigami Addons' new ConfigurationView.

Thanks to Garry Wang, you can now change the color theme directly from Marknote's preferences.

Bug fixes
  • The heading of the note list no longer overlaps the buttons if it gets too long; it is now hidden instead.
  • The list of notes no longer overlaps the header when scrolling.
  • Opening a note on a touch device no longer opens the options menu every time.
  • Thanks to Carl, file creation on Windows has been fixed so you can use the sketch feature there, too.
Get It

Marknote is available on Flathub.

Packager Section

You can find the package on download.kde.org and it has been signed with Carl's GPG key.


Matthew Palmer: Checking for Compromised Private Keys has Never Been Easier

Planet Debian - Thu, 2024-06-27 20:00

As regular readers would know, since I never stop banging on about it, I run Pwnedkeys, a service which finds and collates private keys which have been disclosed or are otherwise compromised. Until now, the only way to check if a key is compromised has been to use the Pwnedkeys API, which is not necessarily trivial for everyone.

Starting today, that’s changing.

The next phase of Pwnedkeys is to start offering more user-friendly tools for checking whether keys being used are compromised. These will typically be web-based or command-line tools intended to answer the question “is the key in this (certificate, CSR, authorized_keys file, TLS connection, email, etc) known to Pwnedkeys to have been compromised?”.

Opening the Toolbox

Available right now are the first web-based key checking tools in this arsenal. These tools allow you to:

  1. Check the key in a PEM-format X509 data structure (such as a CSR or certificate);

  2. Check the keys in an authorized_keys file you upload; and

  3. Check the SSH keys used by a user at any one of a number of widely-used code-hosting sites.

Further planned tools include “live” checking of the certificates presented in TLS connections (for HTTPS, etc), SSH host keys, command-line utilities for checking local authorized_keys files, and many other goodies.

If You Are Intrigued By My Ideas…

… and wish to subscribe to my newsletter, now you can!

I’m not going to be blogging every little update to Pwnedkeys, because that would probably get a bit tedious for readers who aren’t as intrigued by compromised keys as I am. Instead, I’ll be posting every little update in the Pwnedkeys newsletter. So, if you want to keep up-to-date with the latest and greatest news and information, subscribe to the newsletter.

Supporting Pwnedkeys

All this work I’m doing on my own time, and I’m paying for the infrastructure from my own pocket. If you’ve got a few dollars to spare, I’d really appreciate it if you bought me a refreshing beverage. It helps keep the lights on here at Pwnedkeys Global HQ.


GNU Health: Migrar, migrant, migràrem

GNU Planet! - Thu, 2024-06-27 15:48

The title of this article, “Migrar, migrant, migràrem”, comes from a beautiful poem written by Laia Porcar[1], which inspired the strikingly profound painting by Sara Belles[2], “Jo per tu, fill meu”. The artists reflect the migrants’ ordeal to provide a better life to their children and families, even at the cost of losing their own lives.

GNU Health[3] is a Social project with some technology behind it, and the mission at Sea-Eye is one of the best examples. After all, GNU Solidario[4] is an NGO that focuses on the advancement of Social Medicine.

We live in a world of injustice. Concentration of power, the social gradient and poverty rates keep on the rise. Artificial intelligence is in the hands of mega private corporations, targeting our privacy and feeding the macabre business of war. The fight for scarce natural resources such as lithium or coltan fuels coups in impoverished countries. Nature and non-human animals are used and abused as mere commodities. Our world turns a blind eye to the systematic crushing and eradication of civilian population by powerful armies. As a result, we live in a world where migration is not a choice, but the only way out for millions of human beings, even at the risk of becoming anonymous victims in the Atlantic ocean or Mediterranean sea mass graveyards.

“Jo per tu, fill meu”, by Sara Belles

But there is hope. The Sea-Eye mission is the end result of a network of solidarity, cooperation and empathy. The Free Software movement started by Richard Stallman[5]; Julian Sassenscheidt’s message on Mastodon and his presentation at GNU Health Con 2023[6]; the work of our representative in Germany, Gerald Wiese; the Chaos Computer Club[7]; the team from L’Aurora[8] providing logistic support to the Search and Rescue vessels; the phenomenal Sea-Eye family who made me feel at home: the cook, the crew on deck, the logistics and medical team who stoically stood through intensive hours of GNU Health training. Of course, Selene, the heart of GNU Solidario and the one that looks after the human and non-human family members while I’m away.

You will hardly see these people in the news, because most corporate-backed media neglect them and their organizations. Unlike some billionaire “philanthropists” that take the media spotlight, these anonymous heroes stand on the right side of history, making a difference on the present and future of those who need it most, with very limited resources.

Collage of several pictures during my stay at the Sea-eye

We’re very happy and proud to see that GNU Health can be of help to Sea-Eye in tasks such as guest registration, health evaluations, reporting, statistics and stock management. This is just the beginning and we will be optimizing and adding functionality on successive missions. That said, GNU Health will always play a secondary role compared to picking up somebody from the water and giving them a welcoming hug. Again, we’re a social project with a bit of technology behind it.

Drawings made by the children rescued at the Sea-eye

I’d like to finish with a reflection on the picture I took of some of the drawings done by children during their stay at the Sea-Eye. The drawings exist because the Sea-eye crew rescued those kids. Otherwise, their corpses would be at the bottom of the Mediterranean sea, along with thousands who tragically perished trying to find dignity in this world. Thank you, Sea-eye. You are priceless.

A final note: shame on those countries and governments that detain and punish Search and Rescue vessels. Saving lives is not a crime.

Love, freedom and happy hacking

You can obtain Sara Belles painting and Laia Porcar poem from L’Aurora solidarity shop[8]

  1. Laia Porcar : https://laravalerateatre.com/qui-som/
  2. Sara Belles . https://sarabelles.es/
  3. The GNU Health project. https://www.gnuhealth.org
  4. GNU Solidario. Advancing Social Medicine https://www.gnusolidario.org
  5. The GNU Operating System. https://www.gnu.org
  6. Search and rescue on the central Mediterranean migratory route. https://www.gnuhealthcon.org/2023/presentations/GHCon2023-Friday-07-Julian_Sassenscheidt-Search_and_rescue_on_the_central_Mediterranean_migratory_route.pdf
  7. The Chaos Computer Club (CCC) . https://www.ccc.de/en/
  8. L’Aurora suport. https://aurorasuport.org/

Jonathan McDowell: Sorting out backup internet #4: IPv6

Planet Debian - Thu, 2024-06-27 14:38

The final piece of my 5G (well, 4G) based backup internet connection I needed to sort out was IPv6. While Three do support IPv6 in their network they only seem to enable it for certain devices, and the MC7010 is not one of those devices, even though it also supports IPv6.

I use v6 a lot - over 50% of my external traffic, last time I looked. One suggested option was that I could drop the IPv6 Router Advertisements when the main external link went down, but I have a number of internal services that are only presented on v6 addresses so I needed to ensure clients in the house continued to have access to those.

As it happens I’ve used the Hurricane Electric IPv6 Tunnel Broker in the past, so my first pass was re-instating that. The 5G link has a real external IPv4 address, and it’s possible to update the endpoint using a simple HTTP GET. I added the following to my /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route where we are dealing with an interface IP change:

# Update IPv6 tunnel with Hurricane Electric
curl --interface $interface 'https://username:password@ipv4.tunnelbroker.net/nic/update?hostname=1234'

I needed some additional configuration to bring things up, so /etc/network/interfaces got the following, configuring the 6in4 tunnel as well as the low preference default route, and source routing via the 5g table, similar to IPv4:

pre-up ip tunnel add he-ipv6 mode sit remote 216.66.80.26
pre-up ip link set he-ipv6 up
pre-up ip addr add 2001:db8:1234::2/64 dev he-ipv6
pre-up ip -6 rule add from 2001:db8:1234::/64 lookup 5g
pre-up ip -6 route add default dev he-ipv6 table 5g
pre-up ip -6 route add default dev he-ipv6 metric 1000
post-down ip tunnel del he-ipv6

We need to deal with IPv4 changes for the tunnel endpoint, so modem-interface-route also got:

ip tunnel change he-ipv6 local $new_ip_address

/etc/nftables.conf had to be taught to accept the 6in4 packets from the tunnel in the input chain:

# Allow HE tunnel
iifname "sfp.31" ip protocol 41 ip saddr 216.66.80.26 accept

Finally, I had to engage in something I never thought I’d deal with; IPv6 NAT. HE provide a /48, and my FTTP ISP provides me with a /56, so this meant I could do a nice stateless 1:1 mapping:

table ip6 nat {
    chain postrouting {
        type nat hook postrouting priority 0
        oifname "he-ipv6" snat ip6 prefix to ip6 saddr map { 2001:db8:f00d::/56 : 2001:db8:666::/56 }
    }
}

This works. Mostly. The problem is that HE, not unreasonably, expect your IPv4 address to be pingable. And it turns out Three have some ranges that this works on, and some that it doesn’t. Which means it’s a bit hit and miss whether you can setup the tunnel.

I spent a while trying to find an alternative free IPv6 tunnel provider with a UK endpoint. There’s less call for them these days, so I didn’t manage to find any that actually worked (or didn’t have a similar pingable requirement). I did consider whether I wanted to end up with routes via a VM, as I described in the failover post, but looking at costings for VMs with providers who could actually give me an IPv6 range I decided the cost didn’t make it worthwhile; the VM cost ended up being more than the backup SIM is costing monthly.

Finally, it turns out happy eyeballs mostly means that when the 5G ends up on an IP that we can’t setup the IPv6 tunnel on, things still mostly work. Browser usage fails over quickly and it’s mostly my own SSH use that needs me to force IPv4. Purists will groan, but this turns out to be an acceptable trade-off for me, at present. Perhaps if I was seeing frequent failures the diverse routes approach to a VM would start to make sense, but for now I’m pretty happy with the configuration in terms of having a mostly automatic backup link take over when the main link goes down.

