Feeds

Luke Plant: Keeping things in sync: derive vs test

Planet Python - Fri, 2024-06-28 05:15

An extremely common problem in programming is that multiple parts of a program need to be kept in sync – they need to do exactly the same thing or behave in a consistent way. It is in response to this problem that we have mantras like “DRY” (Don’t Repeat Yourself), or, as I prefer it, OAOO, “Each and every declaration of behaviour should appear Once And Only Once”.

For both of these mantras, if you are faced with possible duplication of any kind, the answer is simply “just say no”. However, since programming mantras are to be understood as proverbs, not absolute laws, there are times that obeying this mantra can hurt more than it helps, so in this post I’m going to discuss other approaches.

Most of what I say is fairly language agnostic, I think, but I’ve got specific tips for Python and web development.


The essential problem

To step back for a second, the essential problem that we are addressing here is that if making a change to a certain behaviour requires changing more than one place in the code, we have the risk that one will be forgotten. This results in bugs, which can be of various degrees of seriousness depending on the code in question.

To pick a concrete example, suppose we have a rule that says that items in a deleted folder get stored for 30 days, then expunged. We’re going to need some code that does the actual expunging after 30 days, but we’re also going to need to tell the user about the limit somewhere in the user interface. “Once And Only Once” says that the 30 days limit needs to be defined in a single place somewhere, and then reused.

There is a second kind of motivating example, which I think often crops up when people quote “Don’t Repeat Yourself”, and it’s really about avoiding tedious things from a developer perspective. Suppose you need to add an item to a menu, and you find out that first you’ve got to edit the MENU_ITEMS file to add an entry, then you’ve got to edit the MAIN_MENU constant to refer to the new entry, then you’ve got to define a keyboard shortcut in the MENU_SHORTCUTS file, then a menu icon somewhere else etc. All of these different places are in some way repeating things about how menus work. I think this is less important in general, but it is certainly life-draining as a developer if code is structured in this way, especially if it is difficult to discover or remember all the things that have to be done.

The ideal solution: derive

OAOO and DRY say that we aim to have a single place that defines the rule or logic, and any other place should be derived from this.

Regarding the simple example of a time limit displayed in the UI and used in the backend, this might be as simple as defining a constant e.g. in Python:

from datetime import timedelta

EXPUNGE_TIME_LIMIT = timedelta(days=30)

We then import and use this constant in both our UI and backend.
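
As a minimal sketch of what that looks like in practice (the function names and module layout here are illustrative, not from the original post), both sides consume the same constant:

from datetime import timedelta

EXPUNGE_TIME_LIMIT = timedelta(days=30)  # defined once, e.g. in a shared limits module

# Backend code: uses the constant to decide what to expunge
def is_expired(deleted_at, now):
    return now - deleted_at > EXPUNGE_TIME_LIMIT

# UI code: derives the user-facing text from the same constant
def deleted_folder_notice():
    return f"Items in the deleted folder are removed after {EXPUNGE_TIME_LIMIT.days} days."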

An important part of this approach is that the “deriving” process should be entirely automatic, not something that you can forget to do. In the case of a Python import statement, that is very easy to achieve, and relatively hard to get wrong – if you change the constant where it is defined in one module, any other code that uses it will pick up the change the next time the Python process is restarted.

Alternative solution: test

By “test”, I mean ideally an automated test, but manual tests may also work if they are properly scripted. The idea is that you write a test that checks the behaviour or code is synced. Often, one (or more) of the instances that need the behaviour will define it using some constant as above – let’s say the “backend” code. Then, for one instance, e.g. the UI, you would hard code “30 days” without using the constant, but have a test that uses the backend constant to build a string, and checks the UI for that string.
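
A minimal sketch of what such a test might look like, assuming the backend constant is the timedelta from above; the rendering function and test name are hypothetical stand-ins for whatever produces your UI text:

from datetime import timedelta

EXPUNGE_TIME_LIMIT = timedelta(days=30)  # the backend's single definition

def render_deleted_folder_page():
    # UI code that hard codes the limit rather than importing the constant
    return "<p>Items in the deleted folder are removed after 30 days.</p>"

def test_deleted_folder_page_mentions_current_limit():
    # Build the expected string from the backend constant, then check the UI for it
    expected = f"removed after {EXPUNGE_TIME_LIMIT.days} days"
    assert expected in render_deleted_folder_page()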

Examples

In the example above, it might be hard to see why you want to use the fundamentally less reliable, less automatic method I’m suggesting. So I now have to show some motivating examples where the “derive” method ends up losing to the cruder, simpler alternative of “test”.

Example 1 - external data sources

My first example comes from the project I’m currently working on, which involves creating CAM files from input data. Most of the logic for that is driven using code, but there are some dimensions that are specified as data tables by the engineers of the physical product.

These data tables look something like below. The details here aren’t important, and I’ve changed them – it’s enough to know that we are creating some physical “widgets” which need to have specific dimensions specified:

Widgets have length 150mm unless specified below

Widget id   Location   Length (mm)
A           start      100
A           end        120
F           start      105
F           end        110

These tables are supplied at design-time rather than run-time i.e. they are bundled with the software and can’t be changed after the code is shipped. But it is still convenient to read them in automatically rather than duplicate the tables in my code by some manual process. So, for the body of the table, that’s exactly what my code does on startup – it reads the bundled XLSX/CSV files.

So we are obeying “derive” here — there is a single, canonical source of data, and anywhere that needs it derives it by an entirely automatic process.

But what about that “150mm” default value specified in the header of that table?

It would be possible to “derive” it by having a parser. Writing such a parser is not hard to do – for this kind of thing in Python I like parsy, and it is as simple as:

import parsy as P

default_length_parser = (
    P.string("Widgets have length ")
    >> P.regex(r"\d+").map(int)
    << P.string("mm unless specified below")
)

In fact I do something similar in some cases. But in reality, the “parser” here is pretty simplistic – it can’t deal with the real variety of English text that might be put into the sentence, and to claim I’m “deriving” it from the table is a bit of a stretch – I’m just matching a specific, known pattern. In addition, it’s probably not the case that any value for the default length would work – most likely if it was 10 times larger, there would be some other problem, and I’d want to do some manual checking.

So, let’s admit that we are really just checking for something expected, using the “test” approach. You can still define a constant that you use in most of the code:

DEFAULT_LENGTH_MM = 150

And then you test it is what you expect when you load the data file:

assert worksheets[0].cell(1, 1).value == f"Widgets have length {DEFAULT_LENGTH_MM}mm unless specified below"
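
For concreteness, here is a sketch of what that check might look like at load time, assuming the spreadsheet is read with openpyxl (the post doesn’t say which library is used, so treat this as illustrative):

from openpyxl import load_workbook

DEFAULT_LENGTH_MM = 150

def load_widget_table(path):
    workbook = load_workbook(path, read_only=True)
    sheet = workbook.worksheets[0]
    # Guard against the bundled data file drifting out of sync with the constant
    expected_header = f"Widgets have length {DEFAULT_LENGTH_MM}mm unless specified below"
    assert sheet.cell(1, 1).value == expected_header, (
        f"Data file header {sheet.cell(1, 1).value!r} does not match DEFAULT_LENGTH_MM"
    )
    return sheet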

So, I’ve achieved my aim: a guard against the original problem of having multiple sources of information that could potentially be out of sync. But I’ve done it using a simple test, rather than a more complex and fragile “derive” that wouldn’t have worked well anyway.

By the way, for this specific project – we’re looking for another contract developer! It’s a very worthwhile project, and one I’m really enjoying – a small flexible team, with plenty of problem solving and fun challenges, so if you’re a talented developer and interested give me a shout.

Example 2 - defining UI behaviour for domain objects

Suppose you have a database that stores information about some kind of entity, like customers say, and you have different types of customer, represented using an enum of some kind, perhaps a string enum like this in Python:

from enum import StrEnum

class CustomerType(StrEnum):
    ENTERPRISE = "Enterprise"
    SMALL_FRY = "Small fry"  # Let’s be honest! Try not to let the name leak…
    LEGACY = "Legacy"

We need a way to edit the different customer types, and they are sufficiently different that we want quite different interfaces. So, we might have a dictionary mapping the customer type to a function or class that defines the UI. If this were a Django project, it might be a different Form class for each type:

CUSTOMER_EDIT_FORMS = {
    CustomerType.ENTERPRISE: EnterpriseCustomerForm,
    CustomerType.SMALL_FRY: SmallFryCustomerForm,
    CustomerType.LEGACY: LegacyCustomerForm,
}

Now, the DRY instinct kicks in and we notice that we now have two things we have to remember to keep in sync — any addition to the customer enum requires a corresponding addition to the UI definition dictionary. Maybe there are multiple dictionaries like this.

We could attempt to solve this by “deriving”, or some “correct by construction” mechanism that puts the creation of a new customer type all in one place.

For example, maybe we’ll have a base Customer class with get_edit_form_class() as an abstractmethod, which means it is required to be implemented. If I fail to implement it in a subclass, I can’t even construct an instance of the new customer subclass – it will throw an error.

from abc import ABC, abstractmethod

class Customer(ABC):
    @abstractmethod
    def get_edit_form_class(self):
        pass

class EnterpriseCustomer(Customer):
    def get_edit_form_class(self):
        return EnterpriseCustomerForm

class LegacyCustomer(Customer):
    ...  # etc.

I still need my enum value, or at least a list of valid values that I can use for my database field. Maybe I could derive that automatically by looking at all the subclasses?

CUSTOMER_TYPES = [
    cls.__name__.upper().replace("CUSTOMER", "")
    for cls in Customer.__subclasses__()
]

Or maybe an __init_subclass__ trick, and I can perhaps also set up the various mappings I’ll need that way?

It’s at this point you should stop and think. In addition to requiring you to mix UI concerns into the Customer class definitions, it’s getting complex and magical.

The alternative I’m suggesting is this: require manual syncing of the two parts of the code base, but add a test to ensure that you did it. All you need is a few lines after your CUSTOMER_EDIT_FORMS definition:

CUSTOMER_EDIT_FORMS = {
    # etc as before
}

for c_type in CustomerType:
    assert (
        c_type in CUSTOMER_EDIT_FORMS
    ), f"You've defined a new customer type {c_type}, you need to add an entry in CUSTOMER_EDIT_FORMS"

You could do this as a more traditional unit test in a separate file, but for simple things like this, I think an assertion right next to the code works much better. It really helps local reasoning to be able to look and immediately conclude “yes, I can see that this dictionary must be exhaustive because the assertion tells me so.” Plus you get really early failure – as soon as you import the code.

This kind of thing crops up a lot – if you create a class here, you’ve got to create another one over there, or add a dictionary entry etc. In these cases, I’m finding simple tests and assertions have a ton of advantages when compared to clever architectural contortions (or other things like advanced static typing gymnastics):

  • they are massively simpler to create and understand.

  • you can write your own error message in the assertion. If you make a habit of using really clear error messages, like the one above, your code base will literally tell you how to maintain it.

  • you can easily add things like exceptions. “Every Customer type needs an edit UI defined, except Legacy because they are read only” is an easy, small change to the above (see the sketch after this list).

    • This contrasts with cleverer mechanisms, which might require relaxing other constraints to the point where you defeat the whole point of the mechanism, or create more difficulties for yourself.

  • the rule about how the code works is very explicit, rather than implicit in some complicated code structure, and typically needs no comment other than what you write in the assertion message.

  • you express and enforce the rule, with any complexities it gains, in just one place. Ironically, if you try to enforce this kind of constraint using type systems or hierarchies to eliminate repetition or the need for any kind of code syncing, you may find that when you come to change the constraint it actually requires touching far more places.

  • temporarily silencing the assertion while developing is easy and doesn’t have far reaching consequences.
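
For instance, the “exceptions” point above might look like this – a minimal sketch, assuming read-only legacy customers are the only exemption (the READ_ONLY_CUSTOMER_TYPES name is made up for illustration):

# Customer types that deliberately have no edit UI
READ_ONLY_CUSTOMER_TYPES = {CustomerType.LEGACY}

for c_type in CustomerType:
    if c_type in READ_ONLY_CUSTOMER_TYPES:
        continue  # Legacy customers are read only, so no edit form is required
    assert c_type in CUSTOMER_EDIT_FORMS, (
        f"You've defined a new customer type {c_type}, "
        f"you need to add an entry in CUSTOMER_EDIT_FORMS"
    )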

Of course, there are many times when being able to automatically derive things at the code level, including some complex relationships between parts of the code, can be a win, and it’s the kind of thing you can do in Python with its many powerful techniques.

But my point is that you should remember the alternative: “synchronise manually, and have a test to check you did it.” Being able to add any kind of executable code at module level – the same level as class/function/constant definitions – is a Python super-power that you should use.

Example 3 - external polymorphism and static typing

A variant of the above problem is when, instead of an enum defining different types, I’ve got a set of classes that all need some behaviour defined.

Often we just use polymorphism, where a base class defines the methods or interfaces needed and sub-classes provide the implementation. However, as in the previous case, this can involve mixing concerns – e.g. user interface code, possibly of several types, gets mixed up with the base domain objects. It also imposes constraints on class hierarchies.

Recently, for these kinds of cases, I’m more likely to prefer external polymorphism to avoid these problems. To give an example, in my current project I’m using the Command pattern or plan-execute pattern extensively, and it involves manipulating CAM objects using a series of command objects that look something like this:

from dataclasses import dataclass
from typing import TypeAlias

@dataclass
class DeleteFeature:
    feature_name: str

@dataclass
class SetParameter:
    param_name: str
    value: float

@dataclass
class SetTextSegment:
    text_name: str
    segment: int
    value: str

Command: TypeAlias = DeleteFeature | SetParameter | SetTextSegment

Note that none of them share a base class, but I do have a union type that gives me the complete set.

It’s much more convenient to define the behaviour associated with these separately from these definitions, and so I have multiple other places that deal with Command, such as the place that executes these commands and several others. One example that requires very little code to show is where I’m generating user-presentable tables that show groups of commands. I convert each of these Command objects into key-value pairs that are used for column headings and values:

def get_command_display(command: Command) -> tuple[str, str | float | bool]:
    match command:
        case DeleteFeature(feature_name=feature_name):
            return (f"Delete {feature_name}", True)
        case SetParameter(param_name=param_name, value=value):
            return (param_name, value)
        case SetTextSegment(text_name=text_name, segment=segment, value=value):
            return (f"{text_name}[{segment}]", value)

This is giving me a similar problem to the one I had before: if I add a new Command, I have to remember to add the new branch to get_command_display.

I could split out get_command_display into a dictionary of functions, and apply the same technique as in the previous example, but it’s more work, a less natural fit for the problem and potentially less flexible.

Instead, all I need to do is add exhaustiveness checking with one more branch:

match command:
    ...  # etc
    case _:
        assert_never(command)

Now, pyright will check that I didn’t forget to add branches here for any new Command. The error message is not controllable, in contrast to hand-written asserts, but it is clear enough.
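
Putting the pieces together, here is a minimal self-contained sketch of the pattern (assert_never is available from typing on Python 3.11+, and from typing_extensions on older versions; the body of each branch mirrors the function above):

from typing import assert_never

def get_command_display(command: Command) -> tuple[str, str | float | bool]:
    match command:
        case DeleteFeature(feature_name=feature_name):
            return (f"Delete {feature_name}", True)
        case SetParameter(param_name=param_name, value=value):
            return (param_name, value)
        case SetTextSegment(text_name=text_name, segment=segment, value=value):
            return (f"{text_name}[{segment}]", value)
        case _:
            # pyright flags this call if any member of the Command union is unhandled
            assert_never(command)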

The theme here is that additions in one part of the code require synchronised additions in other parts of the code, rather than being automatically correct “by construction”, but you have something that tests you didn’t forget.

Example 4 - generated code

In web development, ensuring consistent design and keeping different things in sync is a significant problem. There are many approaches, but let’s start with the simple case of using a single CSS stylesheet to define all the styles.

We may want a bunch of components to have a consistent border colour, and a first attempt might look like this (ignoring the many issues of naming conventions here):

.card-component,
.bordered-heading {
  border-color: #800;
}

This often becomes impractical when we want to organise by component, rather than by property, which introduces duplication:

.card-component {
  border-color: #800;
}

/* somewhere far away … */

.bordered-heading {
  border-color: #800;
}

Thankfully, CSS has variables, so the first application of “derive” is straightforward – we define a variable which we can use in multiple places:

:root {
  --primary-border-color: #800;
}

/* elsewhere */

.bordered-heading {
  border-bottom: 1px solid var(--primary-border-color);
}

However, as the project grows, we may find that we want to use the same variables in different contexts where CSS isn’t applicable. So the next step at this point is typically to move to Design Tokens.

Practically speaking, this might mean that we now have our variables defined in a separate JSON file. Maybe something like this (using a W3C draft spec):

{ "primary-border-color": { "$value": "#800000", "$type": "color" } "primary-hightlight-color": { "$value": "#FBC100", "$type": "color" } }

From this, we can automatically generate CSS fragments that contain the same variables quite easily – for simple cases, this isn’t more than a 50 line Python script.
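
As an illustration of how small that script can be, here is a hedged sketch; the file name and token format follow the JSON example above, and nothing here is from a real build tool:

import json

def tokens_to_css(tokens_path):
    # Turn each design token into a CSS custom property on :root
    with open(tokens_path) as f:
        tokens = json.load(f)
    lines = [":root {"]
    for name, token in tokens.items():
        lines.append(f"  --{name}: {token['$value']};")
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(tokens_to_css("design-tokens.json"))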

However, we’ve got some choices when it comes to how we put everything together. I think the general assumption in web development world is that a fully automatic “derive” is the only acceptable answer. This typically means you have to put your own CSS in a separate file, and then you have a build tool that watches for changes, and compiles your CSS plus the generated CSS into the final output that gets sent to the browser.

In addition, once you’ve bought into these kinds of tools you’ll find they want to make extensive changes to the output, and define more and more extensions to the underlying languages. For example, postcss-design-tokens wants you to write things like:

.foo {
  color: design-token('color.background.primary');
}

And instead of using CSS variables in the output, it puts the value of the token right in to every place in your code that uses it.

This approach has various problems, in particular that you become more and more dependent on the build process, and the output gets further from your input. You can no longer use the Dev Tools built into your browser to do editing – the flow of using Dev Tools to experiment with changing a single spacing or colour CSS variable for global changes is broken; you need your build tool. And you can’t easily copy changes from Dev Tools back into the source, because of the transformation step, and debugging can be similarly difficult. And then, you’ll probably want special IDE support for the special CSS extensions, rather than being able to lean on your editor simply understanding CSS, and any other tools that want to look at your CSS now need support too.

It’s also a lot of extra infrastructure and complexity to solve this one problem, especially when our design tokens JSON file is probably not going to change that often, or is going to have long periods of high stability. There are good reasons to want to be essentially build free. The current state of the art in this space is that to get your build tool to compile your CSS you add import './styles.css' in your entry point Javascript file! What if I don’t even have a Javascript file? I think I understand how this sort of thing came about, but don’t try to tell me that it’s anything less than completely bonkers.

Do we have an alternative to the fully automatic derive?

Using the “test” approach, we do. We can even stick with our single CSS file – we just write it like this:

/* DESIGN TOKENS START */
/* auto-created block - do not edit */
:root {
  --primary-border-color: #800000;
  --primary-highlight-color: #FBC100;
}
/* DESIGN TOKENS END */

/* the rest of our CSS here */

The contents of this block will almost certainly be auto-generated. We won’t have a process that fully automatically updates it, however, because this is the same file where we are putting our custom CSS, and we don’t want any possibility of lost work due to the file being overwritten as we are editing it.

On the other hand we don’t want things to get out of sync, so we’ll add a test that checks whether the current styles.css contains the block of design tokens that we expect to be there, based on the JSON. For actually updating the block, we’ll need some kind of manual step – maybe a script that can find and update the DESIGN TOKEN START block, maybe cog – which is a perfect little tool for this use case — or we could just copy-paste.
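
A sketch of what that test could look like, reusing the tokens_to_css helper idea from earlier (the helper and the file paths are assumptions for illustration, not from the original post):

def test_styles_css_contains_current_design_tokens():
    expected_block = tokens_to_css("design-tokens.json")
    with open("styles.css") as f:
        css = f.read()
    # Fails if someone changes the JSON without re-running the manual update step
    assert expected_block in css, (
        "styles.css is out of sync with design-tokens.json; "
        "regenerate the DESIGN TOKENS block"
    )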

There are also slightly simpler solutions in this case, like using a CSS import if you don’t mind having multiple CSS files.

Conclusion

For all the examples above, the solutions I’ve presented might not work perfectly for your context. You might also want to draw the line at a different place to me. But my main point is that we don’t have to go all the way with a fully automatic derive solution to eliminate any manual syncing. Having some manual work plus a mechanism to test that two things are in sync is a perfectly legitimate solution, and it can avoid some of the large costs that come with structuring everything around “derive”.

Categories: FLOSS Project Planets

Marknote 1.3

Planet KDE - Fri, 2024-06-28 05:00

It's been almost two months since the release of Marknote 1.2. Marknote is a rich text editor and note management tool using Markdown. Since the release a lot has changed and many new features have been added thanks to the work of all contributors!

User Interface

On small phone screens, you want as much space as possible to write and read your notes. Marknote now lets you hide the editor toolbar on mobile, so you can focus on the important stuff while having the formatting options just a tap away.

On desktop, you can now adjust the width of the note list and editor by dragging the separator, just like in other desktop applications.

You can now undo and redo changes using buttons in the UI, rather than relying on keyboard shortcuts, which is especially helpful if you're using a virtual keyboard that may not have a control key.

The application menu has been reorganized and simplified to make it less cluttered.

Carl's work on responsive images has finally landed in Qt and now also in Marknote. This means that images no longer get cropped on small screens.

Volker implemented clickable links! You can now ctrl-click links to open them.

Switching to Marknote

To make it easier to switch from your current notes application to Marknote, Marknote now offers several import options. For now, you can import your notes from KNotes and maildir.

Preferences

The settings no longer open in an overlay dialog; they are now shown in Kirigami Addons' new ConfigurationView.

Thanks to Garry Wang, you can now change the color theme directly from Marknote's preferences.

Bug fixes
  • The heading of the note list no longer overlaps the buttons if it gets too long; now it is hidden instead.
  • The list of notes no longer overlaps the header when scrolling.
  • Opening a note on a touch device no longer opens the options menu every time.
  • Thanks to Carl, file creation on Windows has been fixed so you can use the sketch feature there, too.
Get It

Marknote is available on Flathub.

Packager Section

You can find the package on download.kde.org and it has been signed with Carl's GPG key.

Categories: FLOSS Project Planets

Matthew Palmer: Checking for Compromised Private Keys has Never Been Easier

Planet Debian - Thu, 2024-06-27 20:00

As regular readers would know, since I never stop banging on about it, I run Pwnedkeys, a service which finds and collates private keys which have been disclosed or are otherwise compromised. Until now, the only way to check if a key is compromised has been to use the Pwnedkeys API, which is not necessarily trivial for everyone.

Starting today, that’s changing.

The next phase of Pwnedkeys is to start offering more user-friendly tools for checking whether keys being used are compromised. These will typically be web-based or command-line tools intended to answer the question “is the key in this (certificate, CSR, authorized_keys file, TLS connection, email, etc) known to Pwnedkeys to have been compromised?”.

Opening the Toolbox

Available right now are the first web-based key checking tools in this arsenal. These tools allow you to:

  1. Check the key in a PEM-format X509 data structure (such as a CSR or certificate);

  2. Check the keys in an authorized_keys file you upload; and

  3. Check the SSH keys used by a user at any one of a number of widely-used code-hosting sites.

Further planned tools include “live” checking of the certificates presented in TLS connections (for HTTPS, etc), SSH host keys, command-line utilities for checking local authorized_keys files, and many other goodies.

If You Are Intrigued By My Ideas…

… and wish to subscribe to my newsletter, now you can!

I’m not going to be blogging every little update to Pwnedkeys, because that would probably get a bit tedious for readers who aren’t as intrigued by compromised keys as I am. Instead, I’ll be posting every little update in the Pwnedkeys newsletter. So, if you want to keep up-to-date with the latest and greatest news and information, subscribe to the newsletter.

Supporting Pwnedkeys

All this work I’m doing on my own time, and I’m paying for the infrastructure from my own pocket. If you’ve got a few dollars to spare, I’d really appreciate it if you bought me a refreshing beverage. It helps keep the lights on here at Pwnedkeys Global HQ.

Categories: FLOSS Project Planets

GNU Health: Migrar, migrant, migràrem

GNU Planet! - Thu, 2024-06-27 15:48

The title of this article, “Migrar, migrant, migràrem“, comes from a beautiful poem written by Laia Porcar[1], that inspired the strikingly profound painting by Sara Belles [2] “Jo per tu, fill meu“. The artists reflect the migrants ordeal to provide a better life to their children and families, even at the cost of losing their own lives.

GNU Health[3] is a Social project with some technology behind and the mission at Sea-Eye is one of the best examples. After all, GNU Solidario[4] is a NGO that focuses in the advancement of Social Medicine.

We live in a world of injustice. Concentration of power, the social gradient and poverty rates keep rising. Artificial intelligence is in the hands of mega private corporations, targeting our privacy and feeding the macabre business of war. The fight for scarce natural resources such as lithium or coltan creates coups in impoverished countries. Nature and non-human animals are used and abused as mere commodities. Our world turns a blind eye to the systematic crushing and eradication of civilian population by powerful armies. As a result, we live in a world where migration is not a choice, but the only way out for millions of human beings, even at the risk of becoming anonymous victims in the Atlantic ocean or Mediterranean sea mass graveyards.

“Jo per tu, fill meu”, by Sara Belles

But there is hope. The Sea-Eye mission is the end result of a network of solidarity, cooperation and empathy. The Free Software movement started by Richard Stallman[5]; Julian Sassenscheidt's message on Mastodon and his presentation at GNU Health Con 2023[6]; the work of our representative in Germany, Gerald Wiese; the Chaos Computer Club[7]; the team from L’Aurora[8] providing logistic support to the Search and Rescue vessels; the phenomenal Sea-Eye family who made me feel at home: the cook, the crew on deck, the logistics and medical team who stoically stood intensive hours of GNU Health training. Of course, Selene, the heart of GNU Solidario and the one that looks after the human and non-human family members while I’m away.

You will hardly see these people in the news, because most corporate-backed media neglect them and their organizations. Unlike some billionaire “philanthropists” that take the media spotlight, these anonymous heroes stand on the right side of history, making a difference on the present and future of those who need it most, with very limited resources.

Collage of several pictures during my stay at the Sea-eye

We’re very happy and proud to see that GNU Health can be of help to Sea-Eye in tasks such as guests registration, health evaluations, reporting, statistics and stock management. This is just the beginning and we will be optimizing and adding functionality on successive missions. That said, GNU Health will always play a secondary role compared to picking up somebody from the water and giving them a welcoming hug. Again, we’re a social project with a bit of technology behind.

Drawings made by the children rescued at the Sea-eye

I’d like to finish with a reflection on the picture I took of some of the drawings done by children during their stay at the Sea-Eye. The drawings exist because the Sea-eye crew rescued those kids. Otherwise, their corpses would be at the bottom of the Mediterranean sea, along with thousands who tragically perished trying to find dignity in this world. Thank you, Sea-eye. You are priceless.

A final note: shame on those countries and governments that detain and punish Search and Rescue vessels. Saving lives is not a crime.

Love, freedom and happy hacking

You can obtain Sara Belles painting and Laia Porcar poem from L’Aurora solidarity shop[8]

  1. Laia Porcar : https://laravalerateatre.com/qui-som/
  2. Sara Belles . https://sarabelles.es/
  3. The GNU Health project. https://www.gnuhealth.org
  4. GNU Solidario. Advancing Social Medicine https://www.gnusolidario.org
  5. The GNU Operating System. https://www.gnu.org
  6. Search and rescue on the central Mediterranean migratory route . https://www.gnuhealthcon.org/2023/presentations/GHCon2023-Friday-07-Julian_Sassenscheidt-Search_and_rescue_on_the_central_Mediterranean_migratory_route.pdf
  7. The Chaos Computer Club (CCC) . https://www.ccc.de/en/
  8. L’Aurora suport. https://aurorasuport.org/
Categories: FLOSS Project Planets

Jonathan McDowell: Sorting out backup internet #4: IPv6

Planet Debian - Thu, 2024-06-27 14:38

The final piece of my 5G (well, 4G) based backup internet connection I needed to sort out was IPv6. While Three do support IPv6 in their network they only seem to enable it for certain devices, and the MC7010 is not one of those devices, even though it also supports IPv6.

I use v6 a lot - over 50% of my external traffic, last time I looked. One suggested option was that I could drop the IPv6 Router Advertisements when the main external link went down, but I have a number of internal services that are only presented on v6 addresses so I needed to ensure clients in the house continued to have access to those.

As it happens I’ve used the Hurricane Electric IPv6 Tunnel Broker in the past, so my plan was to re-instate that. The 5G link has a real external IPv4 address, and it’s possible to update the endpoint using a simple HTTP GET. I added the following to my /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route where we are dealing with an interface IP change:

# Update IPv6 tunnel with Hurricane Electric
curl --interface $interface 'https://username:password@ipv4.tunnelbroker.net/nic/update?hostname=1234'

I needed some additional configuration to bring things up, so /etc/network/interfaces got the following, configuring the 6in4 tunnel as well as the low preference default route, and source routing via the 5g table, similar to IPv4:

pre-up ip tunnel add he-ipv6 mode sit remote 216.66.80.26
pre-up ip link set he-ipv6 up
pre-up ip addr add 2001:db8:1234::2/64 dev he-ipv6
pre-up ip -6 rule add from 2001:db8:1234::/64 lookup 5g
pre-up ip -6 route add default dev he-ipv6 table 5g
pre-up ip -6 route add default dev he-ipv6 metric 1000
post-down ip tunnel del he-ipv6

We need to deal with IPv4 address changes for the tunnel endpoint, so modem-interface-route also got:

ip tunnel change he-ipv6 local $new_ip_address

/etc/nftables.conf had to be taught to accept the 6in4 packets from the tunnel in the input chain:

# Allow HE tunnel
iifname "sfp.31" ip protocol 41 ip saddr 216.66.80.26 accept

Finally, I had to engage in something I never thought I’d deal with; IPv6 NAT. HE provide a /48, and my FTTP ISP provides me with a /56, so this meant I could do a nice stateless 1:1 mapping:

table ip6 nat {
    chain postrouting {
        type nat hook postrouting priority 0
        oifname "he-ipv6" snat ip6 prefix to ip6 saddr map { 2001:db8:f00d::/56 : 2001:db8:666::/56 }
    }
}

This works. Mostly. The problem is that HE, not unreasonably, expect your IPv4 address to be pingable. And it turns out Three have some ranges that this works on, and some that it doesn’t. Which means it’s a bit hit and miss whether you can setup the tunnel.

I spent a while trying to find an alternative free IPv6 tunnel provider with a UK endpoint. There’s less call for them these days, so I didn’t manage to find any that actually worked (or didn’t have a similar pingable requirement). I did consider whether I wanted to end up with routes via a VM, as I described in the failover post, but looking at costings for VMs with providers who could actually give me an IPv6 range I decided the cost didn’t make it worthwhile; the VM cost ended up being more than the backup SIM is costing monthly.

Finally, it turns out happy eyeballs mostly means that when the 5G ends up on an IP that we can’t setup the IPv6 tunnel on, things still mostly work. Browser usage fails over quickly and it’s mostly my own SSH use that needs me to force IPv4. Purists will groan, but this turns out to be an acceptable trade-off for me, at present. Perhaps if I was seeing frequent failures the diverse routes approach to a VM would start to make sense, but for now I’m pretty happy with the configuration in terms of having a mostly automatic backup link take over when the main link goes down.

Categories: FLOSS Project Planets

The Drop Times: Gábor Hojtsy and Pamela Barone Share Their Perspectives on Starshot

Planet Drupal - Thu, 2024-06-27 12:22
Explore the transformative vision behind Drupal Starshot through the insights of Gábor Hojtsy and Pamela Barone. This ambitious initiative addresses Drupal's past criticisms by enhancing user experience and accessibility with innovative UI improvements and pre-packaged feature sets. Delve into the potential and challenges of Starshot and understand how the leadership team is steering Drupal towards a more user-friendly future.
Categories: FLOSS Project Planets

Python Insider: Python 3.13.0 beta 3 released

Planet Python - Thu, 2024-06-27 11:57

 

I'm pleased to announce the release of Python 3.13 beta 3.

https://www.python.org/downloads/release/python-3130b3/

 

This is a beta preview of Python 3.13

Python 3.13 is still in development. This release, 3.13.0b3, is the third of four beta release previews of 3.13.

Beta release previews are intended to give the wider community the opportunity to test new features and bug fixes and to prepare their projects to support the new feature release.

We strongly encourage maintainers of third-party Python projects to test with 3.13 during the beta phase and report issues found to the Python bug tracker as soon as possible. While the release is planned to be feature complete entering the beta phase, it is possible that features may be modified or, in rare cases, deleted up until the start of the release candidate phase (Tuesday 2024-07-30). Our goal is to have no ABI changes after beta 4 and as few code changes as possible after 3.13.0rc1, the first release candidate. To achieve that, it will be extremely important to get as much exposure for 3.13 as possible during the beta phase.

 

Please keep in mind that this is a preview release and its use is not recommended for production environments.

 Major new features of the 3.13 series, compared to 3.12

Some of the new major new features and changes in Python 3.13 are:

New features
  • A new and improved interactive interpreter, based on PyPy’s, featuring multi-line editing and color support, as well as colorized exception tracebacks.
  • An experimental free-threaded build mode, which disables the Global Interpreter Lock, allowing threads to run more concurrently. The build mode is available as an experimental feature in the Windows and macOS installers as well.
  • A preliminary, experimental JIT, providing the ground work for significant performance improvements.
  • The (cyclic) garbage collector is now incremental, which should mean shorter pauses for collection in programs with a lot of objects.
  • A modified version of mimalloc is now included, optional but enabled by default if supported by the platform, and required for the free-threaded build mode.
  • Docstrings now have their leading indentation stripped, reducing memory use and the size of .pyc files. (Most tools handling docstrings already strip leading indentation.)
  • The dbm module has a new dbm.sqlite3 backend that is used by default when creating new files.
  • The minimum supported macOS version was changed from 10.9 to 10.13 (High Sierra). Older macOS versions will not be supported going forward.
Typing

Removals and new deprecations
  • PEP 594 (Removing dead batteries from the standard library) scheduled removals of many deprecated modules: aifc, audioop, chunk, cgi, cgitb, crypt, imghdr, mailcap, msilib, nis, nntplib, ossaudiodev, pipes, sndhdr, spwd, sunau, telnetlib, uu, xdrlib, lib2to3.
  • Many other removals of deprecated classes, functions and methods in various standard library modules.
  • C API removals and deprecations. (Some removals present in alpha 1 were reverted in alpha 2, as the removals were deemed too disruptive at this time.)
  • New deprecations, most of which are scheduled for removal from Python 3.15 or 3.16.

(Hey, fellow core developer, if a feature you find important is missing from this list, let Thomas know.)

For more details on the changes to Python 3.13, see What’s new in Python 3.13. The next pre-release of Python 3.13 will be 3.13.0b4, currently scheduled for 2024-07-16.

More resources

Enjoy the new releases

Thanks to all of the many volunteers who help make Python Development and these releases possible! Please consider supporting our efforts by volunteering yourself or through organization contributions to the Python Software Foundation.

Your release team,
Thomas Wouters
Łukasz Langa
Ned Deily
Steve Dower 

 

Categories: FLOSS Project Planets

Python Software Foundation: Announcing the PSF Board Candidates for 2024!

Planet Python - Thu, 2024-06-27 10:57

What an exciting list! Please take a look at who is running for the PSF Board this year on the PSF Board Election 2024 Nominees page. This year there are 3 seats open on the PSF board. You can see who is currently on the board on the PSF Officers & Directors page. (Débora Azevedo, Kwon-Han Bae, and Tania Allard are at the end of their current terms.)

Board Election Timeline
  • Nominations open: Tuesday, June 11th, 2:00 pm UTC
  • Nomination cut-off: Tuesday, June 25th, 2:00 pm UTC
  • Voter application/affirmation cut-off date: Tuesday, June 25th, 2:00 pm UTC
  • Announce candidates: Thursday, June 27th
  • Voting start date: Tuesday, July 2nd, 2:00 pm UTC
  • Voting end date: Tuesday, July 16th, 2:00 pm UTC

Not sure what UTC is for you locally? Check using this timezone converter

Voting

Voting opens on Tuesday, July 2nd at 2:00 pm UTC, through Tuesday, July 16th, 2024 2:00 pm UTC. Check the Elections page to see how much time you have left to vote. 

If you are a voting member of the PSF that affirmed your intention to participate in this year’s election, you will receive an email from “OpaVote Voting Link <noreply@opavote.com>” with your ballot, the subject line will read “Python Software Foundation Board of Directors Election 2024”. If you haven’t seen your ballot by Wednesday, please first check your spam folder for a message from “noreply@opavote.com”. If you don’t see anything get in touch by emailing psf-elections@python.org so we can look into your account and make sure we have the most up-to-date email for you.

If you have questions about your membership status or the election, please email psf-elections@python.org. You are welcome to join the discussion about the PSF Board election on the PSF Discuss forum.

Categories: FLOSS Project Planets

Qt Creator 14 Beta2 released

Planet KDE - Thu, 2024-06-27 06:34

We are happy to announce the release of Qt Creator 14 Beta2!

Categories: FLOSS Project Planets

Greg Casamento: Free as in Freedom, not as in beer...

GNU Planet! - Thu, 2024-06-27 06:16

 So... recently I was working for a bit (sweat equity or so I thought) for a company by the name of ImmortalData.  The company is headed by a man by the name of Dale Amon.  I have worked, on and off, for them for about 2-3 years.   They are developing a piece of software that is used to extract data from their proprietary black box systems.  This piece of software uses GNUstep.   They were born from a previous company known as XCOR which was developing a space plane at the Mojave space port.   That company is now defunct.

Okay, so with that bit of history, I worked for a while for XCOR and then, because ImmortalData inherited the software, for them as well.  When I worked for XCOR it was as a contractor.  There have been issues with the software (some GNUstep bugs and some bugs due to problems introduced by Dale) that I have been asked to address.

At the end of a meeting a few weeks ago Dale made a comment like "Well, this issue seems like a GNUstep bug, so there is no reason we should have to pay for any of this" which hit an EXTREMELY sour note with me.

Later on that week I tried to clarify it with Dale, and it seems as though he was under the impression that since I was working on Free Software any changes or fixes TO that software should not be billable.   This is NOT true.  Additionally, the issue that they are experiencing is because of something THEY did, and it is not a GNUstep bug. 

I mentioned this in the previous post, but I feel strongly that this needs to be called out explicitly.   Free Software is free as in FREEDOM.  This means you are free to look at, examine, and modify the software as you see fit.   It does NOT mean services performed on that software on your behalf by someone other than you are free.

This development was VERY upsetting to me and I feel the need to make the above VERY clear.

Categories: FLOSS Project Planets

The Drop is Always Moving: ⚠️ Only one day left to buy a @DrupalConEur Early Bird ticket. Join the Starshot track to learn about the Admin UI redesign process, what's behind the Launch button, how will package security be improved further, what will...

Planet Drupal - Thu, 2024-06-27 06:12

⚠️ Only one day left to buy a @DrupalConEur Early Bird ticket. Join the Starshot track to learn about the Admin UI redesign process, what's behind the Launch button, how will package security be improved further, what will recipes allow you to do and more! https://events.drupal.org/barcelona2024/drupal-starshot-track

Categories: FLOSS Project Planets

joshics.in: The Biggest Challenges in Drupal 10 Migration and How to Overcome Them

Planet Drupal - Thu, 2024-06-27 06:04
By bhavinhjoshi | Thu, 06/27/2024 - 15:34 | Drupal 10

Welcome to the world of Drupal 10, a cutting-edge iteration of one of the most powerful and feature-rich content management systems (CMS) available today. Launched in December 2022, Drupal 10 builds on the strengths of its predecessors while offering a range of exciting new features and improvements.

But what makes Drupal 10 stand out? First and foremost, it comes packed with the latest innovations in technology, security, and design, ensuring your website meets the modern-day demands of online users.

One of the most significant advantages of Drupal 10 over previous versions is its improved performance. Drupal 10 is faster and more efficient, providing an enhanced user experience which can lead to higher user engagement, lower bounce rates, and ultimately, increased conversions.

Drupal 10 also provides better security with its automatic updates feature, without relying on third-party plugins. This means you can rest easy knowing your website is always up-to-date and protected against potential cyber threats.

Furthermore, Drupal 10 comes with a new administrative theme, Olivero, that is more accessible and user-friendly. This makes it easier for website administrators to manage and edit content.

Inclusivity is also a major focus in Drupal 10. The CMS is designed to be more accessible, addressing the needs of users with disabilities and making the internet a more inclusive space for everyone.

Migrating to Drupal 10 may present some challenges, but the benefits far outweigh the hurdles. In this post, we will explore some of these challenges and provide practical solutions to ease the migration process. So, let's get started.

 

The Challenge of Deprecated Modules

One of the key challenges you might face when migrating to Drupal 10 is dealing with deprecated modules. “Deprecated” in this sense means that these modules are no longer recommended for use and are slated for removal in future versions of Drupal.

In previous versions of Drupal, you may have installed certain modules to extend the functionality of your site. These modules could range from image sliders and SEO tools, to custom field types and formatting options. However, since Drupal 10 is a step forward in terms of technology and usability, some of these modules might not be compatible with the new version.

When these deprecated modules are not supported by Drupal 10, they can cause disruption to the functionality of your website during the migration process. It's as if you're trying to fit a square peg into a round hole. The two are simply not compatible, and forcing them together won't work.

For example, you may find that a custom field type provided by a deprecated module no longer works in Drupal 10. This could lead to data loss, or perhaps certain sections of your website not displaying correctly. If your website heavily relies on such deprecated modules, this could potentially cause significant disruption to your site's functionality and user experience.

 

The Solution: Identifying and Replacing Deprecated Modules

Addressing the issue of deprecated modules requires a two-step process: identifying them and then replacing them with suitable alternatives in Drupal 10.

Identifying Deprecated Modules

The first step is to identify which modules on your site are deprecated. Drupal makes this process relatively straightforward with the use of the Upgrade Status module. This module provides a comprehensive report of all the deprecated code that your site is using, including modules.

To use the Upgrade Status module, you simply need to install it on your Drupal site and run a scan. The module will produce a list of deprecated modules you're currently using, making it easy for you to see what needs changing.

Replacing Deprecated Modules

Once you've identified the deprecated modules, the next step is to find suitable replacements.

Start by researching if there are updated versions of these modules that are compatible with Drupal 10. Module maintainers often release updated versions for new Drupal releases. You can usually find this information on the module's page on the Drupal website.

If a deprecated module doesn't have an updated version, you'll need to find an alternative module that offers similar functionality. The Drupal community is a good place to start your search. You can also ask for recommendations from other Drupal users. It's likely that others have faced a similar issue and can recommend a suitable module.

In some cases, you might find that the functions provided by a deprecated module have been incorporated into Drupal 10's core. In this case, you simply need to enable the corresponding functionality in Drupal 10.

Remember, always test new modules on a development version of your site before installing them on your live site. This way, you can ensure that the new module works correctly and doesn't cause any issues.

While dealing with deprecated modules can be a bit of a headache, it can also be an opportunity to streamline your site and improve its functionality.

 

The Challenge of Custom Code

If you've been operating a Drupal site for some time, it's likely that you or your development team have written custom code to tailor the website to your specific needs. This custom code could include anything from unique themes to specific functionalities that are critical to your website's operation. While this helps make your site uniquely yours, it can pose challenges during the migration process to Drupal 10.

Primarily, some of your custom code may not be compatible with Drupal 10 due to the differences in code requirements and standards between different Drupal versions. Your custom code may be using functions or methods that are deprecated in Drupal 10, or the architecture of Drupal 10 may simply not support your custom code.

This incompatibility can lead to errors during your site's migration to Drupal 10, causing certain functionalities to break or not function as intended. In worst-case scenarios, incompatible custom code can even make your website inaccessible. This can lead to a poor user experience, potentially causing lost audience engagement or revenue.

The challenge posed by custom code, therefore, is twofold: you need to identify the custom code that's causing issues, and then update or rewrite this code to be compatible with Drupal 10. This process can be time-consuming and complex, requiring a deep understanding of Drupal's coding requirements and standards.

 

The Solution: Identifying and Refactoring Custom Code

Managing custom code for a Drupal 10 migration can seem daunting, but with the right approach, it doesn't have to be. Here's how to go about it:

Identifying Problematic Custom Code

Much like identifying deprecated modules, the first step is to identify which parts of your custom code may be problematic in Drupal 10. The Upgrade Status module comes in handy here as well. Beyond just identifying deprecated modules, this tool can also scan your custom code to find deprecated API use and other potential issues that could cause problems in Drupal 10.

Another tool you can use is the Drupal-check command-line tool. This tool uses the same underlying library as the Upgrade Status module to check custom code for deprecations and other potential pitfalls.

Refactoring Custom Code:

After identifying the troublesome parts of your custom code, the next step is to refactor them to be compatible with Drupal 10. Simply put, refactoring is the process of altering the code without changing its external behavior.

If the Upgrade Status module flagged some code as deprecated, the report will usually include suggestions for what to replace the deprecated code with. If it doesn't, the Drupal API documentation can be a helpful resource. You'll have to replace deprecated function calls, alter data structures, or even rearchitect some parts of your code to ensure compatibility.

In some cases, the changes required might be quite extensive, especially for code written for earlier versions of Drupal. If you're not comfortable doing this on your own, it may be worth hiring a Drupal developer with experience in migrations.

Finally, it's crucial to thoroughly test your changes to ensure that they work correctly and have not altered the expected behavior of your website. Automated testing tools can be a great help in this regard, ensuring your code is robust and ready for migration to Drupal 10.

Refactoring custom code for Drupal 10 can be an involved process, but it's a vital step in preparing your site for the migration. With careful planning and diligent testing, you can make your transition to Drupal 10 smoother and more successful.

 

The Challenge of Ensuring Consistent Performance

As with any major update or migration, moving to Drupal 10 can potentially impact the performance of your website. Performance, in this context, relates to how quickly your website loads, how smoothly it operates, and how well it manages the resources of the server it's hosted on.

While Drupal 10 is designed to be faster and more efficient than its predecessors, the migration process itself can lead to unexpected dips in performance. For instance, new modules or updated versions of existing ones may not be as optimised as those on your current site, slowing down load times. Similarly, potential compatibility issues with custom code may lead to increased server load, impacting website speed and overall performance.

Besides the technical aspects, user experience can also be affected during the migration process. Changes in layout due to a new theme or variations in navigational structures can disorient regular visitors, affecting user engagement and bounce rates.

These performance risks are a vital concern during migration because an optimally performing website is crucial for maintaining user engagement, SEO rankings, and overall user satisfaction.
 

 

The Solution: Monitoring Performance and Identifying Areas for Improvement

Monitoring your site’s performance pre, mid and post migration can play a pivotal role in ensuring a successful transition to Drupal 10. Here is how you can stay on top of it:

Benchmarking Performance Pre-Migration

Before you begin the migration process, document your website's current performance. This includes page load times, server response times, error rates, and any other relevant metrics. Use tools like Google PageSpeed Insights, GTMetrix, or Pingdom to gather this data. Having this information will allow you to compare performance before and after the migration, and identify any areas that need improvement.

Maintaining Performance During Migration

During the migration process, ensure that your website remains in an operational state. Regularly check for any potential performance drops or system errors. Drupal’s built-in watchdog logs and your server’s error logs are critical tools for this.

Optimising Performance Post-Migration

Once the migration is complete, return to your benchmark data and conduct the same tests again. If performance has dropped in any area, take steps to address this. Your solutions might include enabling caching, optimising images, reducing the number of HTTP requests, updating or replacing inefficient modules, or refactoring custom code for better performance.

Ensuring User Experience

Remember, performance isn't just about speed. User experience, which includes factors like site navigation and layout consistency, also plays a huge role. Use heat maps, session recording tools, and user feedback to understand how changes in Drupal 10 have affected the user experience and adjust accordingly.

Involving SEO

Ensure that your website's SEO hasn't been negatively impacted by the migration. Tools like Google Search Console can alert you to any crawl errors that might have occurred due to the migration. Also, ensure that any URLs that have changed due to the migration are properly redirected to prevent 404 errors.

By keeping a close eye on performance and being ready to take corrective action, you can ensure that your Drupal 10 migration is seamless and minimises disruption to your site's performance and user experience.

 

Conclusion

In this blog post, we have navigated through the challenges and solutions of migrating to Drupal 10, covering deprecated modules, custom code, and ensuring consistent performance.

Undeniably, the migration process can seem daunting, with the potential for bumps along the way. However, with meticulous planning, problem-solving, and performance monitoring, these hurdles can be overcome. The Upgrade Status module, refactoring custom code, and constant performance tracking are essential tools in your migration toolbox.

But, it's crucial to remember that while the migration to Drupal 10 requires effort, the rewards are absolutely worthwhile. With Drupal 10, you gain a faster, more secure, and highly efficient website that is designed to provide an enhanced user experience and keep you ahead in the digital space. It also ensures that your website is on the most recent and supported version of Drupal, protecting your online presence in the long run.

So, prepare yourself, embrace the upgrade, and take your website to new heights with Drupal 10. Remember, every challenge is an opportunity in disguise. Happy migrating!

Drupal Drupal 10 Drupal Planet Drupal migration
Categories: FLOSS Project Planets

The Smarter Way to Rust

Planet KDE - Thu, 2024-06-27 04:00

If you’ve been following our blog, you’re likely aware of Rust’s growing presence in embedded systems. While Rust excels in safety-by-design, it’s also common to find it integrated with C++. This strategic approach leverages the strengths of both languages, including extensive C++ capabilities honed over the years in complex embedded systems. Let’s delve into some key concepts for integrating Rust and C++.

Adding Rust to C++

If you’re adding Rust to an existing C++ project, you need to start in the right place. Begin by oxidizing (that is, converting code to Rust) areas that are bug-prone, difficult to maintain, or affected by security vulnerabilities. These are where Rust can offer immediate improvements. Focus on modules that are self-contained, have clean interfaces, and are primarily procedural rather than object-oriented. For example, libraries that handle media or image processing can be prime candidates for rewriting in Rust, as these are often vulnerable to memory safety issues. Parsers and input handling routines also stand to benefit from Rust’s guarantees of safety.

Deciding between Rusting outside-in or inside-out

As your project scales, weigh the merits of maintaining a C++ core with Rust components versus a Rust-centric application with C++ libraries. For smaller, newer projects, starting with Rust may help you avoid the complexities of dealing with C foreign function interfaces (FFIs). This decision may hinge on your safety priorities: if your project’s core tenet is safety, then a Rust-centric approach may be preferable. Conversely, if safety is needed only in certain areas of a C++ project, keeping the core in C++ could be more practical.

Another consideration is how your project handles multi-threading. Mixing threading and memory ownership between Rust and C++ is very complex and prone to mistakes. Depending on how your application uses threads, this may tilt the decision in the direction of either C++ or Rust as the main “host” application.

Keeping C++ where it excels

While Rust offers many advantages, particularly in safety, C++ has its own merits that shouldn’t be hastily dismissed. The decision to rewrite should be strategic, based on actual needs rather than a pursuit of language purity: the risk of introducing new bugs by rewriting well-tested, stable C++ code can outweigh the benefits of a Rust rewrite. Time-tested C++ code, particularly in areas like signal processing or cryptography, might be best left as is. Such code is often highly optimized, stable, and less prone to memory-related issues. As the saying goes, if it’s not broken, don’t “fix” it.

Navigating Rust limitations

Despite its growing ecosystem, Rust is still relatively young. Relying on packages maintained by small teams or single individuals carries inherent risks. Moreover, Rust is still evolving rapidly as a language, which can mean frequent updates and pose challenges for large-scale or long-lived projects. In certain scenarios, such as very large codebases, specific embedded support requirements, or projects with long development cycles, C++ may remain the more practical choice. It is wise to use C++ where stability and longevity are important, and Rust where safety is critical but some development fluidity is acceptable.

Summary

By combining the reliability of C++ with the safety of Rust, developers can engineer systems that endure while minimizing the risk of common programming pitfalls. If you’re interested in reading more about this topic, you’ll want to read our best practice guide on Rust/C++ integration, which was created in collaboration with Ferrous Systems co-founder Florian Gilcher.

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post The Smarter Way to Rust appeared first on KDAB.

Categories: FLOSS Project Planets

CKEditor: CKEditor Plugin Pack for Drupal | Release 1.1.0

Planet Drupal - Thu, 2024-06-27 03:20
Explore the latest CKEditor 5 Plugin Pack 1.1.0 and CKEditor 5 Premium Features 1.2.9 for Drupal 10.3. Enhance your content creation with new features like Templates, Auto Image, and Multi-level List. Discover seamless integration and improved editing tools.
Categories: FLOSS Project Planets

Tag1 Consulting: Migrating Your Data from Drupal 7 to Drupal 10: Customizing the generated migration

Planet Drupal - Thu, 2024-06-27 02:34

Previously, we explored generating migrations using the Migrate Upgrade module and managing them with Migrate Plus. Today, we cover migration plugins from Drupal Core. The two main methods differ in file patterns, locations, and change detection. Learn how to organize your code effectively and customize your approach for optimal results. This article is packed with practical tips and insights to make your migration smoother and more efficient. Get ahead of the curve – read our guide and migrate with confidence!

Categories: FLOSS Project Planets

Russ Allbery: Review: Lyorn

Planet Debian - Thu, 2024-06-27 00:05

Review: Lyorn, by Steven Brust

Series: Vlad Taltos #17
Publisher: Tor
Copyright: 2024
ISBN: 1-4668-8971-3
Format: Kindle
Pages: 274

Lyorn is the 17th Vlad Taltos book and a direct sequel to 2014's Hawk. (Yes, actual main story progress!) You do not want to start reading here; you would be hopelessly confused. When this series is complete, I want to re-read the entire thing from the beginning and pick up more of the bits I missed the first time.

Vlad is not, in fact, free to see his friends and get entangled in imperial politics again as I thought after Hawk. Despite the successes of that story, there is one remaining small problem: incredibly powerful magic users still want to kill him. His immediate solution is to shelter in a theater, since Draegaran theaters are well-known for their excellent magical shielding. This works well enough at first, but the theater is rehearsing a play about Draegaran politics that is highly offensive to the Lyorn and the theater may be shut down because of it. Vlad's enemies are also willing to lean on his friends to find him and kill him.

This series continues to be thoroughly enjoyable. Lyorn is "just" more of Vlad being Vlad, meddling in everyone's business and coming up with elaborate plans with too many moving parts that he somehow manages to pull off, but I'll happily read lots of books like that. Vlad is both anxious and grumpy, both of which give the plot some needed tension without being overwhelming. There are no truly major world-building revelations here (or, if there are, I missed them), but there's a lot of processing of what the reader learned in Tsalmoth. It's increasingly looking like the payoff from those revelations is going to be the series finale.

This is the first Vlad book that contains solid confirmation of where the series as a whole is headed. Brust mentioned some time ago that the last book is titled The Last Contract, and Lyorn comes close to stating explicitly what that contract will be. I am sure that it will be more complicated than it appears now and there are misdirections yet to come, but I am excited to see where Brust takes this idea.

Vlad has been insistently apolitical for much of the series, meddling in politics only when he has to or to help his friends and otherwise treating it as a system that he has to navigate and survive. That was the root of the conflict in Teckla, all the way back at the start of the series. This may be starting to change, and when Brust ties it together with the Jenoine, the Great Weapon Godslayer, and the rest of the world-building he's cued up, the results are going to be explosive.

Two books left: Chreotha and The Last Contract. I can hardly wait.

In every Vlad book, Brust plays some sort of structural game. This time, befitting the setting, it's a musical. The action is interspersed with quotes from a fictional history about the play the theater is putting on, a work called Song of the Presses about political censorship during the reign of a Lyorn emperor in the 14th cycle, thousands of years before the time of this book. This was, at times, nearly as interesting as the main plot. The chapters are also numbered like the acts and scenes of a play, although this I didn't notice as much since books often have that structure anyway.

Since this is a musical, there are also songs. Specifically, each chapter is introduced by a parody of songs from various musicals in our world, rewritten so that they fit within Brust's fictional musical. Brust is also a musician and a filker, so these songs are actually good, or at least they amused me a great deal. I'm not much of a musical fan and I could still hear the tune playing when I read most of them.

Lyorn is not so good that I would rave about it. It's one of those functional connective books of a series that advances the plot, tells a good story, and has some fun along the way. The guns on the mantelpiece of this world have not gone off yet, and Vlad is still maneuvering into position. But it's looking like we're going to get the conclusion, and it's going to be spectacular. If you have read this far, you will want to keep reading.

Followed by Chreotha, which may be a bit of a wait because apparently Brust is going to write The Last Contract first to make sure he ties up loose ends properly.

Rating: 7 out of 10

Categories: FLOSS Project Planets

Mark Dufour: Shed Skin restricted-Python-to-C++ compiler 0.9.9

Planet Python - Wed, 2024-06-26 21:11

I have just released version 0.9.9 of Shed Skin, a restricted-Python-to-C++ compiler. It comes with many changes under the hood to improve the code base. For example, Shakeeb has started adding type annotations to Shed Skin itself, whereas I have fixed many C++ compiler warnings. We have also replaced the old CPython-based dict and set implementations with STL unordered_map and unordered_set. While this may cost some performance, especially for small toy programs, it really helps with maintainability.

There was also a type inference improvement, which hasn't happened in a while since it wasn't needed. It was however needed to add a cool new example called "pycsg", which shows how one can perform CSG (constructive solid geometry) using binary space partitioning (BSP). After some investigation I decided to make the type inference slightly less "optimistic", which should make it scale a lot better, at the cost of being slower overall. In the end this new example runs about 15 times faster after compilation on my system (not 15%.. 15 times!).

Most notably on the outside, Shed Skin now comes with --floatXX/--intXXX options, which allow you to specify the desired float/int precision. Unfortunately, under Windows --int128 does not yet work, since C++ has not yet standardized 128-bit integers.. The relatively new othello2 and collatz examples now require --int64 and --int128, respectively, to function properly.
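
To make the "restricted" part concrete, below is the kind of implicitly statically typed program Shed Skin is aimed at. It is a hypothetical toy example, not one of the shipped examples, and the commands in the comments follow the usual shedskin-then-make workflow, which may differ slightly between releases.

    # collatz_demo.py -- a tiny, implicitly statically typed program of the
    # kind Shed Skin can compile (hypothetical example, not from the release).
    #
    # Typical workflow (details may vary by version):
    #   shedskin --int64 collatz_demo.py   # generate C++ plus a Makefile
    #   make                               # build the native executable

    def collatz_steps(n):
        # n stays an integer throughout, so type inference is straightforward
        steps = 0
        while n != 1:
            if n % 2 == 0:
                n //= 2
            else:
                n = 3 * n + 1
            steps += 1
        return steps

    def main():
        best_n = 1
        best_steps = 0
        for i in range(2, 200000):
            steps = collatz_steps(i)
            if steps > best_steps:
                best_n = i
                best_steps = steps
        print(best_n, best_steps)

    main()

The same file still runs under plain CPython, which makes it easy to check behaviour before and after compilation.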

For the full list of changes, please see the release notes.

I would like to take this opportunity to (once again) invite others to help out in Shed Skin development. There is always enough to do, also low-hanging fruit, on both the Python and the C++ side of things. I at least think it is rather fun to work on.. :) Let me know if you'd like to contribute but aren't sure what you could bring to the table.

Of course just sending in feedback or new example programs (especially if they fail!) can be very motivating as well to keep improving things.

Categories: FLOSS Project Planets

ImageX: The ECA Module: Setting Up Automated Actions For Various Scenarios on Your Drupal Website

Planet Drupal - Wed, 2024-06-26 13:39

Authored by Nadiia Nykolaichuk.

Your Drupal website is an advanced, powerful, and intelligent system capable of performing remarkable tasks. One of them is triggering automatic actions in response to certain events, which opens a treasure trove of options to meet your needs.

Categories: FLOSS Project Planets
