Feeds

Nonprofit Drupal posts: DrupalCon Portland 2024 Nonprofit Summit: Breakout Leaders Wanted

Planet Drupal - Thu, 2024-01-11 15:09

Hey nonprofit Drupal users! The DA is interested in supporting community-driven content that is specifically relevant to nonprofit organization staff and related agencies at DrupalCon North America in Portland, Oregon, at the Nonprofit Summit on May 9, 2024.

We are looking for volunteers who would be interested in giving back to the community by contributing some subject matter expertise via a day of informal breakout sessions or other group activities. We are open to ideas!

Who are we looking for?

Do you have some Drupal expertise or a recent experience with a Drupal project that you would like to share with others? Is there something about Drupal that you think is really cool that you would love to share with the nonprofit Drupal community?

What’s required?

You will not be required to make slides! You don’t need to have lots of (or any) speaking experience! All you need is a willingness to facilitate a discussion group or engaging activity around a particular topic, and some expertise or enthusiasm for that topic that you wish to share. 

How to Submit an Idea or Topic

Please fill out this form by February 13th and we will get back to you as soon as we are able. Thank you! https://forms.gle/MJthh68rsFeZsuVc8

Discussion leaders will be selected by the Nonprofit Summit Planning Committee and will be notified by the end of February.

Questions? 

Email nonprofitsummit@association.drupal.org.

Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in December 2023

Planet Debian - Thu, 2024-01-11 14:41

Welcome to the December 2023 report from the Reproducible Builds project! In these reports we outline the most important things that we have been up to over the past month. As a rather rapid recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries (more).

Reproducible Builds: Increasing the Integrity of Software Supply Chains awarded IEEE Software “Best Paper” award

In February 2022, we announced in these reports that a paper written by Chris Lamb and Stefano Zacchiroli was now available in the March/April 2022 issue of IEEE Software, titled Reproducible Builds: Increasing the Integrity of Software Supply Chains (PDF).

This month, however, IEEE Software announced that this paper has won their Best Paper award for 2022.


Reproducibility to affect package migration policy in Debian

In a post summarising the activities of the Debian Release Team at a recent in-person Debian event in Cambridge, UK, Paul Gevers announced a change to the way packages are “migrated” into the staging area for the next stable Debian release based on their reproducibility status:

The folks from the Reproducibility Project have come a long way since they started working on it 10 years ago, and we believe it’s time for the next step in Debian. Several weeks ago, we enabled a migration policy in our migration software that checks for regression in reproducibility. At this moment, that is presented as just for info, but we intend to change that to delays in the not so distant future. We eventually want all packages to be reproducible. To stimulate maintainers to make their packages reproducible now, we’ll soon start to apply a bounty [speedup] for reproducible builds, like we’ve done with passing autopkgtests for years. We’ll reduce the bounty for successful autopkgtests at that moment in time.


Speranza: “Usable, privacy-friendly software signing”

Kelsey Merrill, Karen Sollins, Santiago Torres-Arias and Zachary Newman have developed a new system called Speranza, which is aimed at reassuring software consumers that the product they are getting has not been tampered with and is coming directly from a source they trust. A write-up on TechXplore.com goes into some more details:

“What we have done,” explains Sollins, “is to develop, prove correct, and demonstrate the viability of an approach that allows the [software] maintainers to remain anonymous.” Preserving anonymity is obviously important, given that almost everyone—software developers included—value their confidentiality. This new approach, Sollins adds, “simultaneously allows [software] users to have confidence that the maintainers are, in fact, legitimate maintainers and, furthermore, that the code being downloaded is, in fact, the correct code of that maintainer.” []

The corresponding paper is published on the arXiv preprint server in various formats, and the announcement has also been covered in MIT News.


Nondeterministic Git bundles

Paul Baecher published an interesting blog post on Reproducible git bundles. For those who are not familiar with them, Git bundles are used for the “offline” transfer of Git objects without an active server sitting on the other side of a network connection. Anyway, Paul wrote about writing a backup system for his entire system, but:

I noticed that a small but fixed subset of [Git] repositories are getting backed up despite having no changes made. That is odd because I would think that repeated bundling of the same repository state should create the exact same bundle. However [it] turns out that for some repositories, bundling is nondeterministic.

Paul goes on to describe his solution: “forcing git to be single threaded makes the output deterministic”. The article was also discussed on Hacker News.
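
For anyone wanting to replicate this, here is a minimal sketch (in Python, purely for illustration) of creating a bundle with Git’s packing forced onto a single thread, which is the workaround the post describes; the ./repo path and the output filename are placeholders.

```python
import subprocess

# Minimal sketch: create a Git bundle with delta-compression threading
# forced to a single thread ("pack.threads=1"), the workaround described
# in the post for making repeated bundles of an unchanged repository
# byte-identical. The "./repo" path and output filename are placeholders.
subprocess.run(
    [
        "git", "-C", "./repo",    # repository to bundle (placeholder path)
        "-c", "pack.threads=1",   # disable multi-threaded packing
        "bundle", "create",
        "../repo.bundle",         # output file, relative to the repository
        "--all",                  # include all refs
    ],
    check=True,
)
```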


Output from libxslt now deterministic

libxslt is the XSLT C library developed for the GNOME project, where XSLT itself is an XML language to define transformations for XML files. This month, it was revealed that the result of the generate-id() XSLT function is now deterministic across multiple transformations, fixing many issues with reproducible builds. As the Git commit by Nick Wellnhofer describes:

Rework the generate-id() function to return deterministic values. We use a simple incrementing counter and store ids in the 'psvi' member of nodes which was freed up by previous commits. The presence of an id is indicated by a new "source node" flag.

This fixes long-standing problems with reproducible builds, see https://bugzilla.gnome.org/show_bug.cgi?id=751621

This also hardens security, as the old implementation leaked the difference between a heap and a global pointer, see https://bugs.chromium.org/p/chromium/issues/detail?id=1356211

The old implementation could also generate the same id for dynamically created nodes which happened to reuse the same memory. Ids for namespace nodes were completely broken. They now use the id of the parent element together with the hex-encoded namespace prefix.
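
As a rough illustration of the underlying issue (a Python analogy only, not the actual libxslt C code): identifiers derived from an object’s memory address typically vary between runs, and in the real implementation freed memory could be reused so two nodes could share an id, whereas identifiers drawn from an incrementing counter come out the same for the same document every time.

```python
import itertools

class Node:
    """Stand-in for a document node; only its identity matters here."""
    pass

nodes = [Node() for _ in range(3)]

# Address-based ids (analogous to the old behaviour): the values usually
# differ from run to run, so any output embedding them is nondeterministic.
address_ids = [f"id{id(n):x}" for n in nodes]

# Counter-based ids (analogous to the reworked generate-id()): the same
# document always yields the same sequence of ids.
counter = itertools.count(1)
counter_ids = [f"id{next(counter)}" for n in nodes]

print(address_ids)   # nondeterministic across runs
print(counter_ids)   # always ['id1', 'id2', 'id3']
```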


Community updates

A number of improvements were made to our website, including Chris Lamb fixing the generate-draft script so that it does not blow up if the input files have been corrupted today or even in the past [], Holger Levsen updating the Hamburg 2023 summit page to add a link to the farewell post [] and a picture of a Post-It note [], and Pol Dellaiera updating the paragraph about tar and the --clamp-mtime flag [].

On our mailing list this month, Bernhard M. Wiedemann posted an interesting summary on some of the reasons why packages are still not reproducible in 2023.

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including processing objdump symbol comment filter inputs as Python bytes (and not str) instances []. In addition, Vagrant Cascadian extended diffoscope support for GNU Guix [] and updated the version in that distribution to version 253 [].


“Challenges of Producing Software Bill Of Materials for Java”

Musard Balliu, Benoit Baudry, Sofia Bobadilla, Mathias Ekstedt, Martin Monperrus, Javier Ron, Aman Sharma, Gabriel Skoglund, César Soto-Valero and Martin Wittlinger (!) of the KTH Royal Institute of Technology in Sweden, have published an article in which they:

… deep-dive into 6 tools and the accuracy of the SBOMs they produce for complex open-source Java projects. Our novel insights reveal some hard challenges regarding the accurate production and usage of software bills of materials.

The paper is available on arXiv.


Debian Non-Maintainer campaign

As mentioned in previous reports, the Reproducible Builds team within Debian has been organising a series of online and offline sprints in order to clear the huge backlog of reproducible builds patches submitted by performing so-called NMUs (Non-Maintainer Uploads).

During December, Vagrant Cascadian performed a number of such uploads, including:

In addition, Holger Levsen performed three “no-source-change” NMUs in order to address the last packages without .buildinfo files in Debian trixie, specifically lorene (0.0.0~cvs20161116+dfsg-1.1), maria (1.3.5-4.2) and ruby-rinku (1.7.3-2.1).


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In December, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Fix matching packages for the R programming language. [][][]
    • Add a Certbot (https://certbot.eff.org/) configuration for the Nginx web server. []
    • Enable debugging for the create-meta-pkgs tool. [][]
  • Arch Linux-related changes:

    • The asp tool has been deprecated in favour of pkgctl; thanks to dvzrv for the pointer. []
    • Disable the Arch Linux builders for now. []
    • Stop referring to the /trunk branch / subdirectory. []
    • Use --protocol https when cloning repositories using the pkgctl tool. []
  • Misc changes:

In addition, node maintenance was performed by Holger Levsen [] and Vagrant Cascadian [].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

Categories: FLOSS Project Planets

Wayland really breaks things… Just for now?

Planet KDE - Thu, 2024-01-11 11:24

This post is in part a response to an aspect of Nate’s post “Does Wayland really break everything?“, but also my reflection on discussing Wayland protocol additions, a unique pleasure that I have been involved with for the past months1.

Some facts

Before I start I want to make a few things clear: The Linux desktop will be moving to Wayland2 – this is a fact at this point (and has been for a while), sticking to X11 makes no sense for future projects. From reading Wayland protocols and working with it at a much lower level than I ever wanted to, it is also very clear to me that Wayland is an exceptionally well-designed core protocol, and so are the additional extension protocols (xdg-shell & Co.). The modularity of Wayland is great, it gives it incredible flexibility and will for sure turn out to be good for the long-term viability of this project (and also provides a path to correct protocol issues in future, if one is found). In other words: Wayland is an amazing foundation to build on, and a lot of its design decisions make a lot of sense!

The shift towards people seeing “Linux” more as an application developer platform, and taking PipeWire and XDG Portals into account when designing for Wayland is also an amazing development and I love to see this – this holistic approach is something I always wanted!

Furthermore, I think Wayland removes a lot of functionality that shouldn’t exist in a modern compositor – and that’s a good thing too! Some of X11’s features and design decisions had clear drawbacks that we shouldn’t replicate. I highly recommend reading Nate’s blog post; it’s very good and goes into more detail. And due to all of this, I firmly believe that any advancement in the Wayland space must come from within the project.

But!

But! Of course there was a “but” coming – I think while developing Wayland-as-an-ecosystem we are now entrenched into narrow concepts of how a desktop should work. While discussing Wayland protocol additions, a lot of concepts clash, people from different desktops with different design philosophies debate the merits of those over and over again never reaching any conclusion (just as you will never get an answer out of humans whether sushi or pizza is the clearly superior food, or whether CSD or SSD is better). Some people want to use Wayland as a vehicle to force applications to submit to their desktop’s design philosophies, others prefer the smallest and leanest protocol possible, other developers want the most elegant behavior possible. To be clear, I think those are all very valid approaches.

But this also creates problems: By switching to Wayland compositors, we are already forcing a lot of porting work onto toolkit developers and application developers. This is annoying, but just work that has to be done. It becomes frustrating though if Wayland provides toolkits with absolutely no way to reach their goal in any reasonable way. For Nate’s Photoshop analogy: Of course Linux does not break Photoshop, it is Adobe’s responsibility to port it. But what if Linux were missing a crucial syscall that Photoshop needed for proper functionality, and Adobe couldn’t port it without that? In that case it becomes much less clear who is to blame for Photoshop not being available.

A lot of Wayland protocol work is focused on the environment and design, while applications, and the work needed to port them, often receive less consideration. I think this happens because the overlap between application developers and developers of the desktop environments is not necessarily large, and the overlap with people willing to engage with Wayland upstream is even smaller. The combination of Windows developers porting apps to Linux and having involvement with toolkits or Wayland is pretty much nonexistent. So they have less of a voice.

A quick detour through the neuroscience research lab

I have been involved with Freedesktop, GNOME and KDE for an incredibly long time now (more than a decade), but my actual job (besides consulting for Purism) is that of a PhD candidate in a neuroscience research lab (working on the morphology of biological neurons and its relation to behavior). I am mostly involved with three research groups in our institute, which is about 35 people. Most of us do all our data analysis on powerful servers which we connect to using RDP (with KDE Plasma as desktop). Since I joined, I have been pushing the envelope a bit to extend Linux usage to data acquisition and regular clients, and to have our data acquisition hardware interface well with it. Linux brings some unique advantages for use in research, besides the obvious one of having every step of your data management platform introspectable with no black boxes left, a goal I value very highly in research (but this would be its own blogpost).

In terms of operating system usage though, most systems are still Windows-based. Windows is what companies develop for, and what people use by default and are familiar with. The choice of operating system is very strongly driven by application availability, and WSL being really good makes this somewhat worse, as it removes the need for people to switch to a real Linux system entirely if there is the occasional software requiring it. Yet, we have a lot more Linux users than before, and use it in many places where it makes sense. I also developed novel data acquisition software that runs only on Linux and uses the abilities of the platform to their fullest extent. All of this resulted in me asking existing software and hardware vendors for Linux support a lot more often. Vendor-customer relationships in science are usually pretty good, and vendors do usually want to help out. Same for open source projects, especially if you offer to do Linux porting work for them… But overall, the ease of use, availability, and usability of the required applications rule supreme. Most people are not technically knowledgeable and just want to get their research done in the best way possible, getting the best results with the least amount of friction.

KDE/Linux usage at a control station for a particle accelerator at Adlershof Technology Park, Germany, for reference (by 25 years of KDE)3

Back to the point

The point of that story is this: GNOME, KDE, RHEL, Debian or Ubuntu: They all do not matter if the necessary applications are not available for them. And as soon as they are, the easiest-to-use solution wins. There are many facets of “easiest”: In many cases this is RHEL due to Red Hat support contracts being available, in many other cases it is Ubuntu due to its mindshare and ease of use. KDE Plasma is also frequently seen, as it is perceived a bit easier to onboard Windows users with it (among other benefits). Ultimately, it comes down to applications and 3rd-party support though.

Here’s a dirty secret: In many cases, porting an application to Linux is not that difficult. The thing that companies (and FLOSS projects too!) struggle with and will calculate the merits of carefully in advance is whether it is worth the support cost as well as continuous QA/testing. Their staff will have to do all of that work, and they could spend that time on other tasks after all.

So if they learn that “porting to Linux” not only means added testing and support, but also means to choose between the legacy X11 display server that allows for 1:1 porting from Windows or the “new” Wayland compositors that do not support the same features they need, they will quickly consider it not worth the effort at all. I have seen this happen.

Of course many apps use a cross-platform toolkit like Qt, which greatly simplifies porting. But this just moves the issue one layer down, as now the toolkit needs to abstract Windows, macOS and Wayland. And Wayland does not contain features to do certain things or does them very differently from e.g. Windows, so toolkits have no way to actually implement the existing functionality in a way that works on all platforms. So in Qt’s documentation you will often find texts like “works everywhere except for on Wayland compositors or mobile”4.

Many missing bits or altered behaviors are just papercuts, but those add up. And if users have a worse experience, this will translate into more support work, or into people not wanting to use the software on the respective platform.

What’s missing? Window positioning

SDI applications with multiple windows are very popular in the scientific world. For data acquisition (for example with microscopes) we often have one monitor with control elements and one larger one with the recorded image. There are also other configurations where multiple signal modalities are acquired, and the experimenter aligns windows exactly in the way they want and expects the layout to be stored and to be loaded upon reopening the application. Even in the image from Adlershof Technology Park above you can see this style of UI design, at mega-scale. Being able to pop out elements as windows from a single-window application to move them around freely is another frequently used paradigm, and immensely useful with these complex apps.

It is important to note that this is not a legacy design, but in many cases an intentional choice – these kinds of apps work incredibly well on larger screens or many screens and are very flexible (you can have any window configuration you want, and switch between them using the (usually) great window management abilities of your desktop).

Of course, these apps will work terribly on tablets and small form factors, but that is not the purpose they were designed for and nobody would use them that way.

I assumed for sure these features would be implemented at some point, but when it became clear that that would not happen, I created the ext-placement protocol which had some good discussion but was ultimately rejected from the xdg namespace. I then tried another solution based on feedback, which turned out not to work for most apps, and now proposed xdg-placement (v2) in an attempt to maybe still get some protocol done that we can agree on, exploring more options before pushing the existing protocol for inclusion into the ext Wayland protocol namespace. Meanwhile though, we can not port any application that needs this feature, while at the same time we are switching desktops and distributions to Wayland by default.

Window position restoration

Similarly, a protocol to save & restore window positions was already proposed in 2018, 6 years ago now, but it has still not been agreed upon, and may not even help multiwindow apps in its current form. The absence of this protocol means that applications can not restore their former window positions, and the user has to move them to their previous place again and again.

Meanwhile, toolkits can not adopt these protocols and applications can not use them and can not be ported to Wayland without introducing papercuts.

Window icons

Similarly, individual windows can not set their own icons, and not-installed applications can not have an icon at all because there is no desktop-entry file to load the icon from and no icon in the theme for them. You would think this is a niche issue, but for applications that create many windows, providing icons for them so the user can find them is fairly important. Of course it’s not the end of the world if every window has the same icon, but it’s one of those papercuts that make the software slightly less user-friendly. Even applications with fewer windows like LibrePCB are affected, so much so that they would rather run their app through Xwayland for now.

I decided to address this after I was working on data analysis of image data in a Python virtualenv, where my code and the Python libraries used created lots of windows all with the default yellow “W” icon, making it impossible to distinguish them at a glance. This is xdg-toplevel-icon now, but of course it is an uphill battle where the very premise of needing this is questioned. So applications can not use it yet.

Limited window abilities requiring specialized protocols

Firefox has a picture-in-picture feature, allowing it to pop out media from a media player as a separate floating window so the user can watch the media while doing other things. On X11 this is easily realized, but on Wayland the restrictions imposed on windows necessitate a different solution. The xdg-pip protocol was proposed for this specialized use case, but it is also not merged yet. So this feature does not work as well on Wayland.

Automated GUI testing / accessibility / automation

Automation of GUI tasks is a powerful feature, so is the ability to auto-test GUIs. This is being worked on, with libei and wlheadless-run (and stuff like ydotool exists too), but we’re not fully there yet.

Wayland is frustrating for (some) application authors

As you can see, there are valid applications and valid use cases that can not yet be ported to Wayland with the same feature range they enjoyed on X11, Windows or macOS. So, from an application author’s perspective, Wayland does break things quite significantly, because things that worked before can no longer work and Wayland (the whole stack) does not provide any avenue to achieve the same result.

Wayland does “break” screen sharing, global hotkeys, gaming latency (via “no tearing”) etc, however for all of these there are solutions available that application authors can port to. And most developers will gladly do that work, especially since the newer APIs are usually a lot better and more robust. But if you give application authors no path forward except “use Xwayland and be on emulation as second-class citizen forever”, it just results in very frustrated application developers.

For some application developers, switching to a Wayland compositor is like buying a canvas from the Linux shop that forces your brush to only draw triangles. But maybe for your avant-garde art, you need to draw a circle. You can approximate one with triangles, but it will never be as good as the artwork of your friends who got their canvases from the Windows or macOS art supply shop and have more freedom to create their art.

Triangles are proven to be the best shape! If you are drawing circles you are creating bad art!

Wayland, via its protocol limitations, forces a certain way to build application UX – often for the better, but also sometimes to the detriment of users and applications. The protocols are often fairly opinionated, a result of the lessons learned from X11. In any case though, it is the odd one out – Windows and macOS do not pose the same limitations (for better or worse!), and the effort to port to Wayland is orders of magnitude bigger, or sometimes, as with the multiwindow UI paradigm, impossible to achieve to the same level of polish. Desktop environments of course have a design philosophy that they want to push, and want applications to integrate as much as possible (same as macOS and Windows!). However, there are many applications out there, and pushing a design via protocol limitations will likely just result in fewer apps.

The porting dilemma

I spent probably way too much time looking into how to get applications cross-platform and running on Linux, often talking to vendors (FLOSS and proprietary) as well. Wayland limitations aren’t the biggest issue by far, but they do start to come up now, especially in the scientific space with Ubuntu having switched to Wayland by default. For application authors there is often no way to address these issues. Many scientists do not even understand why their Python script that creates some GUIs suddenly behaves weirdly because Qt is now using the Wayland backend on Ubuntu instead of X11. They do not know the difference and also do not want to deal with these details – even though they may be programmers as well, the real goal is not to fiddle with the display server, but to get to a scientific result somehow.

Another issue is portability layers like Wine which need to run Windows applications as-is on Wayland. Apparently Wine’s Wayland driver has some heuristics to make window positioning work (and I am amazed by the work done on this!), but that can only go so far.

A way out?

So, how would we actually solve this? Fundamentally, this excessively long blog post boils down to just one essential question:

Do we want to force applications to submit to a UX paradigm unconditionally, potentially losing out on application ports or keeping apps on X11 eternally, or do we want to throw them some rope to get as many applications ported over to Wayland, even though we might sacrifice some protocol purity?

I think we really have to answer that to make the discussions on wayland-protocols a lot less grueling. This question can be answered at the wayland-protocols level, but even more so it must be answered by the individual desktops and compositors.

If the answer for your environment turns out to be “Yes, we want the Wayland protocol to be more opinionated and will not make any compromises for application portability”, then your desktop/compositor should just immediately NACK protocols that add something like this and you simply shouldn’t engage in the discussion, as you reject the very premise of the new protocol: That it has any merit to exist and is needed in the first place. In this case contributors to Wayland and application authors also know where you stand, and a lot of debate is skipped. Of course, if application authors want to support your environment, you are basically asking them now to rewrite their UI, which they may or may not do. But at least they know what to expect and how to target your environment.

If the answer turns out to be “We do want some portability”, the next question obviously becomes where the line should be drawn and which changes are acceptable and which aren’t. We can’t blindly copy all X11 behavior; some porting work to Wayland is simply inevitable. Some written rules for that might be nice, but probably more importantly, if you agree fundamentally that there is an issue to be fixed, please engage in the discussions for the respective MRs! We for sure do not want to repeat X11 mistakes, and I am certain that we can implement protocols which provide the required functionality in a way that is a nice compromise in allowing applications a path forward into the Wayland future, while also being as good as possible and improving upon X11. For example, the toplevel-icon proposal is already a lot better than anything X11 ever had. Relaxing ACK requirements for the ext namespace is also a good proposed administrative change, as it allows some compositors to add features they want to support to the shared repository more easily, while also not mandating them for others. In my opinion, it would allow for a lot less friction between the two different ideas of how Wayland protocol development should work. Some compositors could move forward and support more protocol extensions, while more restrictive compositors could support fewer things. Applications can detect supported protocols at launch and change their behavior accordingly (ideally even abstracted by toolkits).

You may now say that a lot of apps are ported, so surely this issue can not be that bad. And yes, what Wayland provides today may be enough for 80-90% of all apps. But what I hope the detour into the research lab has done is convince you that this smaller percentage of apps matters. A lot. And that it may be worthwhile to support them.

To end on a positive note: When it came to porting concrete apps over to Wayland, the only real showstoppers so far5 were the missing window-positioning and window-position-restore features. I encountered them when porting my own software, and I got the issue as feedback from colleagues and fellow engineers. In second place was UI testing and automation support; the window-icon issue was mentioned twice, but being a cosmetic issue it likely simply hurts people less and they can ignore it more easily.

What this means is that the majority of apps are already fine, and many others are very, very close! A Wayland future for everyone is within our grasp!

I will also bring my two protocol MRs to their conclusion for sure, because as application developers we need clarity on what the platform (either all desktops or even just a few) supports and will or will not support in future. And the only way to get something good done is by contribution and friendly discussion.

Footnotes
  1. Apologies for the clickbait-y title – it comes with the subject
  2. When I talk about “Wayland” I mean the combined set of display server protocols and accepted protocol extensions, unless otherwise clarified.
  3. I would have picked a picture from our lab, but that would have needed permission first
  4. Qt has awesome “platform issues” pages, like for macOS and Linux/X11 which help with porting efforts, but Qt doesn’t even list Linux/Wayland as supported platform. There is some information though, like window geometry peculiarities, which aren’t particularly helpful when porting (but still essential to know).
  5. Besides issues with Nvidia hardware – CUDA for simulations and machine-learning is pretty much everywhere, so Nvidia cards are common, which causes trouble on Wayland still. It is improving though.
Categories: FLOSS Project Planets


Andy Wingo: micro macro story time

GNU Planet! - Thu, 2024-01-11 09:10

Today, a tiny tale: about 15 years ago I was working on Guile’s macro expander. Guile inherited this code from an early version of Kent Dybvig’s portable syntax expander. It was... not easy to work with.

Some difficulties were essential. Scope is tricky, after all.

Some difficulties were incidental, but deep. The expander is ultimately a function that translates Scheme-with-macros to Scheme-without-macros. However, it is itself written in Scheme-with-macros, so to load it on a substrate without macros requires a pre-expanded copy of itself, whose data representations need to be compatible with any incremental change, so that you will be able to use the new expander to produce a fresh pre-expansion. This difficulty could have been avoided by incrementally bootstrapping the library. It works once you are used to it, but it’s gnarly.

But then, some difficulties were just superfluously egregious. Dybvig is a totemic developer and researcher, but a generation or two removed from me, and when I was younger, it never occurred to me to just email him to ask why things were this way. (A tip to the reader: if someone is doing work you are interested in, you can just email them. Probably they write you back! If they don’t respond, it’s not you, they’re probably just busy and their inbox leaks.) Anyway in my totally speculatory reconstruction of events, when Dybvig goes to submit his algorithm for publication, he gets annoyed that “expand” doesn’t sound fancy enough. In a way it’s similar to the original SSA developers thinking that “phony functions” wouldn’t get published.

So Dybvig calls the expansion function “χ”, because the Greek chi looks like the X in “expand”. Fine for the paper, whatever paper that might be, but then in psyntax, there are all these functions named chi and chi-lambda and all sorts of nonsense.

In early years I was often confused by these names; I wasn’t in on the pun, and I didn’t feel like I had enough responsibility for this code to think what the name should be. I finally broke down and changed all instances of “chi” to “expand” back in 2011, and never looked back.

Anyway, this is a story with a very specific moral: don’t name your functions chi.

Categories: FLOSS Project Planets

KTextAddons 1.5.3

Planet KDE - Thu, 2024-01-11 07:29

KTextAddons is a library with various text handling addons used by Ruqola and the Kontact apps. It can be compiled for both Qt 5 and Qt 6, and distros are advised to compile two builds, one for each, until Ruqola is ported to Qt 6.

URL: https://download.kde.org/stable/ktextaddons/

SHA256: 8a52db8abfa8a9d68d2d291fb0f8be20659fd7899987b4dcafdf2468db0917dc

Changelog

  • Drop unused KXmlGui dependency
  • Adapt to new KConfigGroup API
  • As we exclude emojis we need to remove it from list and not exclude it
  • Use proxymodel when exclude emoticons were updated
  • Allow to exclude some specific emoticons (Need for ruqola)
  • Exclude mock engine => it’s for test
  • Remove generate pri support (removed in kf6)
Categories: FLOSS Project Planets

LN Webworks: How To Use Parameter Upcasting In Drupal

Planet Drupal - Thu, 2024-01-11 05:53

In Drupal 8 and later versions, the routing system allows developers to define routes for various pages in their applications. These routes can include placeholder elements in the path, which are essentially variables or dynamic values in the URL. These placeholders are enclosed in curly braces, such as {node} in the example I have provided.

How upcasting parameters work: Step-by-Step Process

Placeholder Elements


These are parts of the URL that are variable or dynamic. In the example, /node/{node}, {node} is a placeholder indicating that this part of the URL can vary. The placeholders are named to make them identifiable. In the example, {node} is named to indicate that it represents a node ID.

Upcasting

Upcasting refers to the process of converting a placeholder value from the URL into an actual object instance. In the example, the system converts the {node} placeholder value (a node ID) into a fully loaded node object.
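To make this concrete, here is a minimal, hypothetical sketch of a controller for a custom route whose path contains a {node} placeholder. The module name (my_module), class name, and route path are invented for illustration; the point is that because {node} is upcast, the controller receives a loaded node entity rather than the raw ID from the URL. In a custom my_module.routing.yml, the upcast is typically declared under the route's options (for example, parameters: node: type: 'entity:node'), or Drupal can infer it from the controller's type hint.

<?php

namespace Drupal\my_module\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\node\NodeInterface;

/**
 * Hypothetical controller for a route with path '/my-page/{node}'.
 */
class ExampleController extends ControllerBase {

  /**
   * Because the {node} parameter is upcast, $node arrives here as a loaded
   * node entity, not the numeric ID that appears in the URL.
   */
  public function view(NodeInterface $node) {
    return [
      '#markup' => $this->t('Viewing: @title', ['@title' => $node->label()]),
    ];
  }

}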

Categories: FLOSS Project Planets

Web Wash: Getting Started with Gutenberg (Block Editor) in Drupal (2024)

Planet Drupal - Thu, 2024-01-11 04:21

The Gutenberg Editor is a powerful page building tool for Drupal that utilizes a block building system to create and edit content.

This comprehensive guide walks you through each step of setting up and using the Gutenberg Editor.

From downloading and installing the Gutenberg module, to enabling it on your content types, and finally using it to create your content using blocks, this guide has you covered.

To get the most out of this guide, follow the video above.

Categories: FLOSS Project Planets

ImageX: Accessibility Elements, Part 5: Captions, Subtitles, Transcripts, and Audio Descriptions in Drupal

Planet Drupal - Wed, 2024-01-10 19:06

Multimedia content, such as engaging videos, insightful podcasts, and vibrant images, is meant to captivate, inform, and entertain website users. However, while creating an immersive world of multimedia experiences, it’s necessary to be mindful of the people with a wide range of impairments who need alternative ways to perceive the content. 

Categories: FLOSS Project Planets

Dirk Eddelbuettel: BH 1.84.0-1 on CRAN: New Upstream

Planet Debian - Wed, 2024-01-10 17:41

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 35.7 million package downloads.

Version 1.84.0 of Boost was released in December, following the regular Boost release schedule of April, August and December releases. As the commits and changelog show, we packaged it almost immediately and started testing, following our annual update cycle which strives to balance staying close enough to upstream against not stressing CRAN and the user base too much. The reverse-depends check revealed five packages requiring changes or adjustments, which is a pretty good outcome given the over three hundred direct reverse dependencies. So we opened issue #100 to coordinate the updates over the winter break, during which CRAN also closes (just as we did in previous years). Our sincere thanks to the two packages that had already updated beforehand, and to the one that updated today within hours (!!) of the BH upload it needed.

There are very few actual changes. We honoured one request (in issue #97) to add Boost QVM, bringing quaternion support to R. No other new changes needed to be made. There are a number of changes I have to make each time in BH, and it is worth mentioning them. Because CRAN cares about backwards compatibility and the ability to be used on minimal or older systems, we still adjust the filenames of a few files to fit the jurassic constraint of just over 100 characters per filepath present in some long-outdated versions of tar. Not a big deal. We also, and that is more controversial, silence a number of #pragma diagnostic messages for g++ and clang++ because CRAN insists on it; I have no choice in that matter. One warning we suppressed last year, but no longer do, concerns the C++14 standard that some Boost libraries now default to. Packages setting C++11 explicitly will likely get a note from CRAN about changing this; in most cases the fix should be trivial, as we only had to opt into (then) newer standards under old compilers. These days newer defaults help; R itself now defaults to C++17.

Changes in version 1.84.0-0 (2024-01-09)

Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

TestDriven.io: Django vs. Flask in 2024: Which Framework to Choose

Planet Python - Wed, 2024-01-10 16:43
In this article, we'll look at the best use cases for Django and Flask along with what makes them unique, from an educational and development standpoint.
Categories: FLOSS Project Planets

Nikola: Nikola v8.3.0 is out!

Planet Python - Wed, 2024-01-10 15:34

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v8.3.0. This release adds support for Python 3.12, with some other features and bugfixes.

Note that Nikola v8.3.0 no longer uses the Yapsy plugin manager, which has been replaced by a custom, minimal manager. The new Nikola Plugin Manager was tested with some typical plugins, but there might be issues if your plugins have an unusual structure or are outdated. Please update your plugins and report any bugs you may encounter.

The Nikola developers would also like to express discontent with Python’s efforts to remove features from the standard library, causing breakage without a solid reason, other than “it’s old”.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which is rebuilding only what has been changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola.

Changes Features
  • Implement a new plugin manager from scratch to replace Yapsy, which does not work on Python 3.12 due to Python 3.12 carelessly removing parts of the standard library (Issue #3719)

  • Support for Discourse as comment system (Issue #3689)

Bugfixes
  • Fix loading of templates from plugins with __init__.py files (Issue #3725)

  • Fix margins of paragraphs at the end of sections (Issue #3704)

  • Ignore .DS_Store files in listing indexes (Issue #3698)

  • Fix baguetteBox.js invoking in the base theme (Issue #3687)

  • Fix development (preview) server nikola auto for non-root SITE_URL, in particular when URL_TYPE is full_path. (Issue #3715)

For plugin developers

Nikola now requires the .plugin file to contain a [Nikola] section with a PluginCategory entry set to the name of the plugin category class. This was already required by plugins.getnikola.com, but you may have custom plugins that don’t have this set.

Categories: FLOSS Project Planets

PreviousNext: Real-time: Symfony Messenger Consume command and prioritised messages

Planet Drupal - Wed, 2024-01-10 14:39

The greatest advantage of Symfony Messenger is arguably the ability to send and process messages in a different thread almost immediately. This post covers the worker that powers this functionality.

by daniel.phin / 11 January 2024

This post is part 3 in a series about Symfony Messenger.

  1. Introducing Symfony Messenger integrations with Drupal
  2. Symfony Messenger’s messages and message handlers, and comparison with @QueueWorker
  3. Real-time: Symfony Messenger Consume command and prioritised messages
  4. Automatic message scheduling and replacing hook_cron
  5. Adding real-time processing to QueueWorker plugins
  6. Making Symfony Mailer asynchronous: integration with Symfony Messenger
  7. Displaying notifications when Symfony Messenger messages are processed
  8. Future of Symfony Messenger in Drupal

The Symfony Messenger integration, including the worker, is provided by the SM project. The worker is tasked with listening for messages ready to be dispatched from an asynchronous transport, such as the Doctrine database transport. The worker then re-dispatches the message onto the bus.

Some messages may be added to a bus with no particular execution time, in which case they are serialised by the original thread and then deserialised almost immediately by the consume command in a different thread.

Since Messenger has the concept of delaying messages until a particular date, the DelayStamp can be utilised. The consume command respects this stamp and will not redispatch a message until the time is right.
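As a rough sketch of a delayed dispatch (the SendReminder message class and the surrounding function are hypothetical, invented here for illustration), the delay is given to DelayStamp in milliseconds:

<?php

use Drupal\my_module\Message\SendReminder;
use Symfony\Component\Messenger\MessageBusInterface;
use Symfony\Component\Messenger\Stamp\DelayStamp;

// Dispatch a message that should not be handled for another five minutes.
// The worker leaves it on the transport until the delay has elapsed.
function queue_reminder(MessageBusInterface $bus): void {
  $bus->dispatch(new SendReminder(), [new DelayStamp(5 * 60 * 1000)]);
}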

The worker is found in the sm console application, rather than Drush. When SM is installed, Composer makes the application available in your bin directory, typically at /vendor/bin/sm.

The command takes one or more transports as arguments. For example, if you’re using the Doctrine transport, the command would be:

sm messenger:consume doctrine

Multiple instances of the worker may be run simultaneously to improve throughput.

[Image: the worker running, with messages output to stdout for demonstration purposes.]

Prioritised messages

The worker allows you to prioritise the processing of messages by the transport a message was dispatched to. Transport prioritisation is achieved by passing a space-separated list of transports as the command argument.

For example, given transports defined in a site-level services.yml file:

parameters:
  sm.transports:
    doctrine:
      dsn: 'doctrine://default?table_name=messenger_messages'
    highpriority:
      dsn: 'doctrine://default?table_name=messenger_messages_high'
    lowpriority:
      dsn: 'doctrine://default?table_name=messenger_messages_low'

In this case, the command would be sm messenger:consume highpriority doctrine lowpriority

Routing from messages to transports must also be configured appropriately. For example, you may decide Email messages are the highest priority. \Symfony\Component\Mailer\Messenger\SendEmailMessage would be mapped to highpriority:

parameters:
  sm.routing:
    Symfony\Component\Mailer\Messenger\SendEmailMessage: highpriority
    Drupal\my_module\LessImportantMessage: lowpriority
    '*': doctrine

More information on routing can be found in the previous post.

The transport a message is sent to may also be overridden on an individual message basis by utilising the Symfony\Component\Messenger\Stamp\TransportNamesStamp stamp, though for simplicity I’d recommend sticking to standard routing.
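If you do need the per-message override, a dispatch might look roughly like the following sketch. The SendReminder message class is hypothetical, and this assumes the stamp’s constructor accepts an array of transport names, as in recent Symfony releases:

<?php

use Drupal\my_module\Message\SendReminder;
use Symfony\Component\Messenger\MessageBusInterface;
use Symfony\Component\Messenger\Stamp\TransportNamesStamp;

// Force this particular dispatch onto the high priority transport,
// bypassing the routing configured for the message class.
function queue_urgent_reminder(MessageBusInterface $bus): void {
  $bus->dispatch(new SendReminder(), [new TransportNamesStamp(['highpriority'])]);
}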

Running the CLI application

The sm worker listens for and processes messages, and is designed to run forever. A variety of built-in flags are included, with the ability to quit when a memory or time limit is reached, or when a certain number of messages are processed or fail. Flags can be combined to process available messages and quit, much like drush queue:run; see the example below.
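For instance, assuming sm passes through the standard Symfony Messenger worker options, a worker that exits after processing 50 messages or after five minutes (whichever comes first) could be started with something like:

sm messenger:consume doctrine --limit=50 --time-limit=300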

Further information on how to use the worker in production can be found in the Consuming Messages (Running the Worker) documentation.

The next post covers Cron and Scheduled messages, a viable replacement for hook_cron.

Tagged Symfony, Symfony Messenger, Symfony Console, CLI
Categories: FLOSS Project Planets

The Drop Times: Essential Drupal Modules that Help you Prevent Spam

Planet Drupal - Wed, 2024-01-10 12:59
Discover essential modules and external services in Drupal designed to combat spam effectively, including CAPTCHA-based defences, non-CAPTCHA solutions, and external spam prevention services. Safeguard your website from unwanted traffic and malicious content with these powerful tools and strategies.
Categories: FLOSS Project Planets

Drupal Association blog: DrupalCon 2024 - How to Convince Your Boss (Sample Letter Enclosed)

Planet Drupal - Wed, 2024-01-10 12:57

DrupalCon 2024 is approaching soon, and you can’t wait to head to vibrant Portland. If this is you, you also must be stressed about persuading your boss to invest in your attendance at the Drupal event of the year. But don’t worry, we’ve got you covered! This article is your go-to resource, where you’ll find all the ammo you need to make your case. Let’s get started!

But First, Are You Convinced About Attending DrupalCon?

Naturally, your organization has various factors to weigh, with the primary concern being whether sending you to DrupalCon Portland is worth their investment. But the pivotal question is the value you see in it. Explore our list of strong reasons to attend DrupalCon 2024.

  • For everyone - DrupalCon, the biggest open-source event in North America, offers a unique experience for all Drupal enthusiasts—whether you're diving into Drupal for the first time or have been a community member for years. The benefits of attending are vast.
  • Training - Learn specific skills relevant to your role through targeted training at DrupalCon. Develop a deep knowledge of Drupal directly relevant to your career, ensuring a direct and positive return on investment.
  • Sessions - Dive into sessions led by the Drupal experts at DrupalCon. These are not just classes; they're conversations with the thought leaders who know their Drupal inside out. It's not just learning; it's getting hands-on wisdom from the best in the biz.
  • Keynotes - Want a front-row seat to the State of Drupal and the future of the web? Then you cannot miss out on DriesNote. Plus, there are other keynotes that'll fire up your imagination about what's possible in the digital world.
  • Networking - Imagine being in a room with thousands of Drupal enthusiasts at DrupalCon. It's a community buzzing with passion. Got Drupal questions? Tap into a wealth of knowledge and enthusiasm at one of the largest open-source communities. Hallway Tracks and Exhibition areas are the heart of networking at DrupalCon. Who knows, you might just score a selfie with Dries on your stroll!
  • Industry Summits - It's not just about networking—it's about conversing with peers who've been there, done that. Learn the nitty-gritty of industry best practices at industry summits like Higher Education Summit, Nonprofit Summit, Government Summit, and Community Summit. Discover how to tackle business challenges head-on with Drupal solutions, giving your job skills a serious boost.
  • Peer Connection - It’s a chance to connect with folks who share the same passion as you. Swap stories, share insights, and stay in the loop about the latest in Drupal. Learn firsthand from those who get your role and challenges.
  • Contribution Sprint - If you’re new to Drupal contributions, get hands-on guidance and sprint mentoring from experts. Whether you’re a coder or a non-coder, there are ways for everyone to participate in community contribution sprints and amplify the power of Drupal together.
Sample Letter to Your Boss

Here’s a sample letter to help you convince your boss about attending DrupalCon 2024. We guarantee they'll see the light!

Dear [Boss’s Name],

I am writing to express my strong interest in attending DrupalCon 2024 in Portland, Oregon, and to request your approval for participation in this significant event. I believe that attending DrupalCon will not only benefit my professional development but also contribute to the success of our team and the company as a whole.

Here are several compelling reasons why my attendance at DrupalCon is beneficial for us:

  • Industry Insights: Networking at Industry Summits will keep us updated on best practices and innovative solutions.

  • Strategic Vision: Keynotes, especially DriesNote, offer strategic insights vital for our long-term planning.

  • Community Engagement: Networking with thousands of community members ensures immediate answers and collaborations.

  • Role-Specific Learning: Connecting with peers in our specific roles provides insights into the latest in Drupal.

  • Contribution Sprint: Active participation contributes to Drupal's strength, enhancing our company's reputation.

I am seeking approval for the associated expenditures, which include:

EXPENSE                      AMOUNT
Airfare
Visa Fees (if required)
Ground Transportation
Hotel
Meals
Conference Ticket
TOTAL EXPENSE

[Add this line if you’re traveling from overseas] The Drupal Association can issue an official letter of invitation to obtain a visa for my travel to the United States.

The Drupal Association can also issue a Certificate of Attendance for the conference if required for our records.

Please accept this proposal to attend, as I'm confident in the significant return we will receive for the small investment. For more information on the event, please visit the conference website: https://events.drupal.org/portland2024.

I'm available to discuss this further at your earliest convenience.

Sincerely,
[Your Full Name]
[Your Position]
[Your Contact Information]

Categories: FLOSS Project Planets

A historic view of the practice to delay releasing Open Source software: OSI’s report

Open Source Initiative - Wed, 2024-01-10 10:00

The Open Source Initiative published today a new report that looks at the history of the business practice of delaying the release of code under freedom-respecting licenses. Since the early days of the Open Source movement, companies have experimented with finding a balance between granting their users the basic freedoms guaranteed by Open Source licenses and capitalizing on their investments in software development. One common approach, albeit with many different flavors, is what this report calls “Delayed Open Source Publication” (DOSP): “the practice of distributing or publicly deploying software under a proprietary license at first, then subsequently and in a planned fashion publishing that software’s source code under an Open Source license.”

The new report titled “Delayed Open Source Publication: A Survey of Historical and Current Practices” was authored by the team of Open Tech Strategies (Seth Schoen, James Vasile and Karl Fogel) based on crowdsourced interviews. Their research was made possible through a donation by Sentry and the financial contributions of OSI individual members. 

Like the authors, I found that the historical survey revealed numerous surprises, and what I found even more intriguing are the new questions raised (see Section 7) that beg for more dedicated research. 

I encourage you to give it a read and share it with others. We encourage feedback from the community: I hold office hours for OSI members and you can discuss this on Mastodon or LinkedIn.

Download the report.

The post A historic view of the practice to delay releasing Open Source software: OSI’s report appeared first on Voices of Open Source.

Categories: FLOSS Research

Real Python: Python range(): Represent Numerical Ranges

Planet Python - Wed, 2024-01-10 09:00

A range is a Python object that represents an interval of integers. Usually, the numbers are consecutive, but you can also specify that you want to space them out. You can create ranges by calling range() with one, two, or three arguments, as the following examples show:

>>> list(range(5))
[0, 1, 2, 3, 4]
>>> list(range(1, 7))
[1, 2, 3, 4, 5, 6]
>>> list(range(1, 20, 2))
[1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

In each example, you use list() to explicitly list the individual elements of each range. You’ll study these examples in more detail soon.

In this tutorial, you’ll learn how you can:

  • Create range objects that represent ranges of consecutive integers
  • Represent ranges of spaced-out numbers with a fixed step
  • Decide when range is a good solution for your use case
  • Avoid range in most loops

A range can sometimes be a powerful tool. However, throughout this tutorial, you’ll also explore alternatives that may work better in some situations. You can click the link below to download the code that you’ll see in this tutorial:

Get Your Code: Click here to download the free sample code that shows you how to represent numerical ranges in Python.

Construct Numerical Ranges

In Python, range() is built in. This means that you can always call range() without doing any preparations first. Calling range() constructs a range object that you can put to use. Later, you’ll see practical examples of how to use range objects.

You can provide range() with one, two, or three integer arguments. This corresponds to three different use cases:

  1. Ranges counting from zero
  2. Ranges of consecutive numbers
  3. Ranges stepping over numbers

You’ll learn how to use each of these next.

Count From Zero

When you call range() with one argument, you create a range that counts from zero and up to, but not including, the number you provided:

>>> range(5)
range(0, 5)

Here, you’ve created a range from zero to five. To see the individual elements in the range, you can use list() to convert the range to a list:

>>> list(range(5))
[0, 1, 2, 3, 4]

Inspecting range(5) shows that it contains the numbers zero, one, two, three, and four. Five itself is not a part of the range. One nice property of these ranges is that the argument, 5 in this case, is the same as the number of elements in the range.

Count From Start to Stop

You can call range() with two arguments. The first value will be the start of the range. As before, the range will count up to, but not include, the second value:

>>> range(1, 7)
range(1, 7)

The representation of a range object just shows you the arguments that you provided, so it’s not super helpful in this case. You can use list() to inspect the individual elements:

>>> list(range(1, 7))
[1, 2, 3, 4, 5, 6]

Observe that range(1, 7) starts at one and includes the consecutive numbers up to six. Seven is the limit of the range and isn’t included. You can calculate the number of elements in a range by subtracting the start value from the end value. In this example, there are 7 - 1 = 6 elements.

Count From Start to Stop While Stepping Over Numbers

Read the full article at https://realpython.com/python-range/ »


Categories: FLOSS Project Planets

Django Weblog: DSF membership now recognizes a much broader range of contributions to Django

Planet Python - Wed, 2024-01-10 07:00

Recently, the DSF made some changes to our bylaws to change the definition of DSF Membership. You can read the legalese of the new language in the meeting minutes for the October 12 board meeting, but here’s the short version: previously, individual membership required contribution of intellectual property (e.g. code or documentation); we’ve changed it so that individual membership now recognizes broader contributions to the DSF’s mission. That still includes code and docs, but now also includes many more activities: organizing a Django event, serving on a Working Group, maintaining a third-party app, moderating Django community spaces, and much more. (Corporate membership hasn’t changed; this just applies to individual membership.)

The DSF’s mission, as described in our bylaws, is: 

The Foundation's purposes shall include, but not be limited to, developing and promoting the Django framework for free and open public use among the worldwide web development community, protecting the framework's long-term viability, and advancing the state of the art in web development.

Membership, then, recognizes material contributions to that mission. This is deliberately broad and inclusive: we want to allow as broad a definition of “contribution” as possible – including, critically, contributions to the community as well as code contributions. But we do want those contributions to be “material”: we want to recognize substantial or sustained contributions, not one-offs or “drive-by” contributions.

Because this definition of “material” is somewhat deliberately vague, we’ve prepared an FAQ that outlines several examples of things we believe do and do not qualify someone for membership. Ultimately, though, if you’re not sure: please apply anyway! We generally try to err on the side of saying “yes”.

To join the DSF under these new, more inclusive rules, fill out the application form here. The Board approves new members at its monthly meeting, so you can expect to hear back within about a month.

Categories: FLOSS Project Planets
