Feeds

ImageX: The Benefits of a Composable CMS (And How Drupal Fits the Bill)

Planet Drupal - Fri, 2024-05-24 14:08

This article was updated in May 2024.

As a marketing leader, you need to drive traffic to your site and create a superior user experience. But you also need to push your content out to a variety of channels so you can reach your audience where they are.

To achieve your goals, you need a content management system (CMS) that’s flexible, scalable, and efficient. And if you’re researching your options, you’ve probably heard a lot about composable CMSs.

Categories: FLOSS Project Planets

1xINTERNET blog: CMS features every editor and marketer needs

Planet Drupal - Fri, 2024-05-24 08:00

Every marketer and content editor deserves solutions that deliver outstanding results. Check how a preconfigured Drupal CMS can make your daily work easier!

Categories: FLOSS Project Planets

Real Python: Quiz: How to Create Pivot Tables With pandas

Planet Python - Fri, 2024-05-24 08:00

In this quiz, you’ll test your understanding of how to create pivot tables with pandas.

By working through this quiz, you’ll review your knowledge of pivot tables and also expand beyond what you learned in the tutorial. For some of the questions, you’ll need to do some research outside of the tutorial itself.


Categories: FLOSS Project Planets

Web Review, Week 2024-21

Planet KDE - Fri, 2024-05-24 05:39

Let’s go for my web review for the week 2024-21.

Gender bias in open source: Pull request acceptance of women versus men

Tags: tech, foss, bias

A bit too GitHub-centric for my taste. Still, it shows some unwarranted bias, especially when outsiders to a project are identified as women. We should do better.

https://www.researchgate.net/publication/308716997_Gender_bias_in_open_source_Pull_request_acceptance_of_women_versus_men


BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days

Tags: tech, foss, version-control, linux, git, history

The often forgotten history behind the creation of Git. This article does a good job summarizing it.

https://graphite.dev/blog/bitkeeper-linux-story-of-git-creation


Pluralistic: The Coprophagic AI crisis

Tags: tech, ai, machine-learning, gpt, data

The training dataset crisis is looming in the case of large language models. They’ll sooner or later run out of genuine content to use… and the generated toxic waste will end up in training data, probably leading to dismal results.

https://pluralistic.net/2024/03/14/inhuman-centipede/#enshittibottification


Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue

Tags: tech, ai, machine-learning, gpt, google, data, quality

No, your model won’t get smarter just by throwing more training data at it… on the contrary.

https://www.404media.co/google-is-paying-reddit-60-million-for-fucksmith-to-tell-its-users-to-eat-glue/


How DeviantArt died: A.I. and greed turned a once-thriving community into a ghost town.

Tags: tech, ai, machine-learning, art, social-media, criticism

It is indeed sad to see another platform turn against its users. This was once a place to nurture young artists… it’s now another ad-driven platform full of AI-made scams.

https://slate.com/technology/2024/05/deviantart-what-happened-ai-decline-lawsuit-stability.html


OpenAI departures: Why can’t former employees talk, but the new ChatGPT release can? - Vox

Tags: tech, ai, machine-learning, gpt, criticism

Open is unsurprisingly only in the name… this company is really just a cult.

https://www.vox.com/future-perfect/2024/5/17/24158478/openai-departures-sam-altman-employees-chatgpt-release


New Windows AI feature records everything you’ve done on your PC | Ars Technica

Tags: tech, microsoft, windows, security, privacy

This is completely nuts… they really want to unleash a security and privacy nightmare. The irony is that, on the other hand, it does respect DRM content; we can see where the priorities are.

https://arstechnica.com/gadgets/2024/05/microsofts-new-recall-feature-will-record-everything-you-do-on-your-pc/


A Grand Unified Theory of the AI Hype Cycle

Tags: tech, ai, machine-learning, gpt, hype

Definitely this: it’s not the first time we’ve seen such a hype cycle around “AI”. When it bursts, the technology that created it is just not called “AI” anymore. I wonder how long this one will last, though.

https://blog.glyph.im/2024/05/grand-unified-ai-hype.html


A Plea for Sober AI | Drew Breunig

Tags: tech, ai, machine-learning, gpt, hype, criticism

Definitely too much hype around large models right now. This overshadows the more useful specialized models.

https://www.dbreunig.com/2024/05/16/sober-ai.html


Bing outage shows just how little competition Google search really has | Ars Technica

Tags: tech, google, microsoft, web, search

We’re still fairly dependent on just two major web indices… time for an index built as a commons for everyone to use?

https://arstechnica.com/gadgets/2024/05/bing-outage-shows-just-how-little-competition-google-search-really-has/


stract: web search done right

Tags: tech, web, search

Looks like an interesting new search engine.

https://github.com/StractOrg/stract?tab=readme-ov-file


The curious case of the missing period - Tjaart’s Substack

Tags: tech, email, debugging

Fascinating bug… the fine details of mundane protocols like SMTP can sometimes be surprising.

https://tjaart.substack.com/p/the-curious-case-of-the-missing-period


Firefox bookmark keywords for faster navigation

Tags: tech, firefox, bookmarks

Interesting Firefox feature I hadn’t noticed. Looks fairly nice; I’ll use it more.

https://blog.meain.io/2024/firefox-bookmark-keywords


CADmium: A Local-First CAD Program Built for the Browser

Tags: tech, web, frontend, cad, physics, mathematics

This gives a good idea of the important parts of a CAD program. It also lists a few of the usable libraries for building such a program in the browser.

https://mattferraro.dev/posts/cadmium


WebAssembly: A promising technology that is quietly being sabotaged

Tags: tech, webassembly, server

Where WebAssembly is, and where WebAssembly on the server is going… let’s hope it doesn’t become another CORBA.

https://kerkour.com/webassembly-wasi-preview2


Hartwork Blog · Clone arbitrary single Git commit

Tags: tech, git, ci

Neat trick, especially useful for CI uses.

https://blog.hartwork.org/posts/clone-arbitrary-single-git-commit/


Writing commit messages

Tags: tech, version-control, writing, communication

Very extensive guide on writing better commit messages. This is important; it’s a very central communication mechanism with other developers.

https://www.chiark.greenend.org.uk/~sgtatham/quasiblog/commit-messages/


UI Density || Matthew Ström, designer-leader

Tags: tech, gui, ux

Interesting discussion about UI density. What are we talking about? Is there value to it? Which aspects of a UI impact it? The conclusion makes it all very clear.

https://matthewstrom.com/writing/ui-density/


Bye for now!

Categories: FLOSS Project Planets

KDDockWidgets 2.1 Released

Planet KDE - Fri, 2024-05-24 05:00

KDDockWidgets has launched its latest version 2.1. This release comes packed with over 500 commits, offering enhanced stability over its predecessor, version 2.0, without introducing any breaking changes.

KDDockWidgets is a versatile framework for custom-tailored docking systems in Qt written by KDAB’s Sérgio Martins. For more information about its rich set of features, have a look at its GitHub repository.

What’s changed in version 2.1?

Here are the main highlights:

Bug Fixes:

For starters, KDDW 2.1 introduces a range of bug fixes aimed at enhancing stability and user experience. Less popular features like nested main windows and auto-hide received lots of attention, and window-manager-specific bugs, such as restoring maximized windows, were addressed.

KDDW is now memory-leak free; several singletons were leaking before. We’ve added a Valgrind GitHub Actions workflow to prevent regressions regarding leaks.

QtQuick:

KDDW 2.1 also introduces improvements for QtQuick. New features include an API for setting affinities via QML, support for mixing MDI with docking (similar to QtWidgets), and fixed DPI issues with icons in TitleBar.qml for better scaling at 150% and 200%. Additionally, it resolves issues such as MDI widgets not raising when clicked and various crashes related to MDI mode.

QtWidgets:

For QtWidgets, we’ve improved handling of the preferredSize argument when adding dock widgets; it was being ignored in some cases. DockWidget::closeEvent() can now be overridden to prevent closing. Several crashes were fixed, and we’ve added a GitHub Actions workflow which runs the tests against a Qt built with AddressSanitizer.

These enhancements improve the functionality and stability of KDDW 2.1 across different Qt environments.

Miscellaneous:

KDDW 2.1 brings miscellaneous updates, including an upgrade to nlohmann json v3.11.3, the addition of a standalone layouting example using the UI toolkit Slint, and extensive testing on CI. Additionally, Config::setLayoutSpacing(int) has been added for increased customization.

Learn about all the changes here. Let us know what you think.

More information about KDDockWidgets

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post KDDockWidgets 2.1 Released appeared first on KDAB.

Categories: FLOSS Project Planets

Julian Andres Klode: Observations in Debian dependency solving

Planet Debian - Fri, 2024-05-24 04:57

In my previous blog post, I explored The New APT 3.0 solver. Since then I have been at work in the test suite, making tests pass and fixing some bugs.

You see, for all intents and purposes, the new solver is a very stupid, naive DPLL SAT solver (it just so happens we don’t actually have any pure literals in there). We can control it in a bunch of ways:

  1. We can mark packages as “install” or “reject”
  2. We can order actions/clauses. When backtracking, the action that came later will be the first we try to backtrack on
  3. We can order the choices of a dependency - we try them left to right.

This is about all that we really want to do. We can’t, if we reach a conflict, say: “oh, but this conflict was introduced by that upgrade, and it seems more important, so let’s not backtrack on the upgrade request but on this dependency instead.”

This forces us to think about lowering the dependency problem into this form, such that we get not only formally correct solutions, but also semantically correct ones. This is nice because we can approach the issue systematically, rather than introducing ad-hoc rules as in the old solver, which had a “which of these packages should I flip the opposite way to break the conflict” kind of thinking.

Now our test suite has a whole bunch of these semantics encoded in it, and I’m going to share some problems and ideas for how to solve them. I can’t wait to fix these and the error reporting and then turn it on in Ubuntu and later Debian (the defaults change is a post-trixie change, let’s be honest).

apt upgrade is hard

The apt upgrade command implements a safe version of dist-upgrade: it essentially calculates the dist-upgrade and then undoes anything that would cause a package to be removed, but (unlike its apt-get counterpart) it allows the solver to install new packages.

Now, consider the following package is installed:

X Depends: A (= 1) | B

An upgrade from A=1 to A=2 is available. What should happen?

The classic solver would choose to remove X in a dist-upgrade and then upgrade A, so its answer is quite clear: keep back the upgrade of A.

The new solver however sees two possible solutions:

  1. Install B to satisfy X Depends A (= 1) | B.
  2. Keep back the upgrade of A

Which one does it pick? This depends on the order in which it sees the upgrade action for A and the dependency, as it will backjump chronologically. So

  1. If it gets to the dependency first, it marks A=1 for install to satisfy A (= 1). Then it gets to the upgrade request, which is just A Depends A (= 2) | A (= 1) and sees it is satisfied already and is content.

  2. If it gets to the upgrade request first, it marks A=2 for install to satisfy A (= 2). Then later it gets to X Depends: A (= 1) | B, sees that A (= 1) is not satisfiable, and picks B.
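
To make the order-dependence concrete, here is a toy chronological-backtracking solver in Python. This is a minimal sketch, not APT's code: "versions" are plain strings, solve() and conflicts() are invented names, and the only conflict rule is that two versions of the same package cannot both be installed.

def conflicts(choice, assignment):
    # Toy conflict rule: two versions of the same package, e.g. "A=1" vs "A=2".
    pkg = choice.split("=")[0]
    return any(name.split("=")[0] == pkg and name != choice and installed
               for name, installed in assignment.items())

def solve(clauses, assignment=None):
    # Clauses are tried in order; within a clause, choices go left to right.
    # Backtracking is chronological: the latest decision is retried first.
    if assignment is None:
        assignment = {}
    if not clauses:
        return assignment
    head, rest = clauses[0], clauses[1:]
    for choice in head:
        if assignment.get(choice) is False:   # marked "reject" up front
            continue
        if conflicts(choice, assignment):
            continue
        result = solve(rest, {**assignment, choice: True})
        if result is not None:
            return result
    return None  # exhausted: the previous decision level tries its next choice

# Dependency seen first: A=1 is chosen, the upgrade clause is already satisfied.
print(solve([["A=1", "B"], ["A=2", "A=1"]]))   # {'A=1': True}
# Upgrade request seen first: A=2 is chosen, so X's dependency falls back to B.
print(solve([["A=2", "A=1"], ["A=1", "B"]]))   # {'A=2': True, 'B': True}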

We have two ways to approach this issue:

  1. We always order upgrade requests last, so they will be kept back in case of conflicting dependencies
  2. We require that, for apt upgrade, a currently satisfied dependency must be satisfied by currently installed packages, hence eliminating B as a choice.

Recommends are hard too

If you have an X Recommends: A (= 1) and a new version of A, A (= 2), the solver currently will silently break the Recommends in some cases.

But let’s explore what the behavior of a X Recommends: A (= 1) in combination with an available upgrade of A (= 2) should be. We could say the rule should be:

  • An upgrade should keep back A instead of breaking the Recommends
  • A dist-upgrade should either keep back A or remove X (if it is obsolete)

This essentially leaves us the same choices as for the previous problem, but with an interesting twist. We can change the ordering (and we already did), but we could also introduce a new rule, “promotions”:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently satisfied, must continue to be satisfied, that is, it effectively is promoted to a Depends.

This neatly solves the problem for us. We will never break Recommends that are satisfied.

Likewise, we already have a Recommends demotion rule:

A Recommends in an installed package, or an upgrade to that installed package, where the Recommends existed in the installed version, that is currently unsatisfied, will not be further evaluated (it is treated like a Suggests is in the default configuration).
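
A minimal sketch of the promotion and demotion rules side by side (the function names are invented, purely illustrative):

def effective_strength(recommends, currently_satisfied):
    # `recommends` comes from an installed package, or from an upgrade to it
    # where the Recommends already existed in the installed version.
    if currently_satisfied(recommends):
        return "Depends"    # promotion: a satisfied Recommends must stay satisfied
    return "Suggests"       # demotion: an unsatisfied one is not evaluated further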

Whether we should be allowed to break Suggests with our decisions or not (the old autoremover did not, for instance) is a different decision. Should we promote currently satisfied Suggests to Depends as well? Should we follow currently satisfied Suggests so the solver sees them and doesn’t autoremove them, but treat them as optional?

tightening of versioned dependencies

Another case of versioned dependencies with alternatives that has complex behavior is something like

X Depends: A (>= 2) | B
X Recommends: A (>= 2) | B

In both cases, installing X should upgrade an installed A < 2 rather than install B. But a naive SAT solver might not: if your request to keep A installed is encoded as A (= 1) | A (= 2), then it first picks A (= 1). When it sees the Depends/Recommends, it will switch to B.

We can solve this again as in the previous example by ordering the “keep A installed” requests after any dependencies. Notably, we will enqueue the common dependencies of all A versions first before selecting a version of A, so something may select a version for us.

version narrowing instead of version choosing

A different approach to dealing with the issue of version selection is to not select a version until the very last moment. So instead of selecting a version to satisfy A (>= 2) we instead translate

Depends: A (>= 2)

into two rules:

  1. The package selection rule:

    Depends: A

    This ensures that any version of A is installed (i.e. it adds a version choice clause, A (= 1) | A (= 2), in an example with two versions of A).

  2. The version narrowing rule:

    Conflicts: A (<< 2)

    This would outright reject a choice of A (= 1).
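
Here is a sketch of that lowering in Python. The helper names are invented, and the toy version comparison ignores epochs, tildes and the rest of real Debian version ordering:

def satisfies(version, op, bound):
    # Toy comparison on integer versions only.
    return {"=": version == bound,
            ">=": version >= bound,
            "<<": version < bound}[op]

def lower_versioned_dep(pkg, op, bound, known_versions):
    # Package selection rule: any version of pkg, as a choice clause.
    selection = [f"{pkg}={v}" for v in known_versions]
    # Version narrowing rule: outright reject versions outside the
    # restriction, like a Conflicts: A (<< 2) would.
    rejected = [f"{pkg}={v}" for v in known_versions
                if not satisfies(v, op, bound)]
    return selection, rejected

selection, rejected = lower_versioned_dep("A", ">=", 2, [1, 2])
print(selection)   # ['A=1', 'A=2']  -- the version choice clause
print(rejected)    # ['A=1']         -- narrowed away before version selection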

So now we have 3 kinds of clauses:

  1. package selection
  2. version narrowing
  3. version selection

If we process them in that order, we should surely be able to find the solution that best matches the semantics of our Debian dependency model, i.e. selecting earlier choices in a dependency before later choices in the face of version restrictions.

This still leaves one issue: what if our maintainer did not use Depends: A (>= 2) | B but e.g. Depends: A (= 3) | B | A (= 2)? He’d expect us to fall back to B if A (= 3) is not installable, not to A (= 2). But we’d like to enqueue A and reject all choices other than 3 and 2. I think it’s fair to say: “Don’t do that, then” here.

Implementing strict pinning correctly

APT knows a single candidate version per package; this makes the solver relatively deterministic: it will only ever pick the candidate or an installed version. This also happens to significantly reduce the search space, which is good – less backtracking. An up-to-date system will only ever have one version per package that can be installed, so we never actually have to choose versions.

But of course, APT allows you to specify a non-candidate version of a package to install, for example:

apt install foo/oracular-proposed

The way this works is that the core component of the previous solver, the pkgDepCache, maintains what essentially amounts to an overlay of the policy that you could see with apt-cache policy.

The solver currently, however, validates allowed version choices against the policy directly, and hence finds these versions are not allowed and craps out. This is an interesting problem, because the solver should not be dependent on the pkgDepCache: the pkgDepCache initialization (Building dependency tree...) accounts for about half of the runtime of APT (until the Y/n prompt), and I’d really like to get rid of it.

But currently the frontend does go via the pkgDepCache. It marks the packages in there, building up what you could call a transaction, and then we translate it to the new solver, and once it is done, it translates the result back into the pkgDepCache.

The current implementation of “allowed versions” works by reducing the search space: in every dependency, we outright ignore any non-allowed versions. So if version 3 of A is ignored, a Depends: A would be translated into A (= 2) | A (= 1).

However, this has two disadvantages: (1) if we show you why A could not be installed, you don’t even see A (= 3) in the list of choices, and (2) you would need to keep the pkgDepCache around for the temporary overrides.

So instead of actually enforcing the allowed version rule by filtering, a more reasonable model is to apply it by marking every other version as not allowed when discovering the package in the from-depcache translation layer. This doesn’t really increase the search space either, but it solves both our problems: making overrides work and giving you a reasonable error message that lists all versions of A.
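
A rough sketch of the marking model (the solver and package APIs here are invented for illustration):

def add_package_from_depcache(solver, pkg, allowed_versions):
    # Keep every version visible to the solver, so error messages can list
    # all of them, but mark non-allowed versions as rejected up front.
    # `pkg` is a hypothetical object with a .versions list.
    for version in pkg.versions:
        solver.add_version(version)
        if version not in allowed_versions:
            solver.mark_reject(version)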

pulling up common dependencies to minimize backtracking cost

One of the common issues we have is that when we have a dependency group

`A | B | C | D`

we try them in order, and if one fails, we undo everything it did and move on to the next one. However, this perhaps isn’t the best way to operate.

I explained before that one thing we do is queue the common dependencies of a package (i.e. dependencies shared by all versions) when marking a package for install, but we don’t do this here: we have already lowered the representation of the dependency group into a list of versions, so we’d need to extract the package back out of it.

This can of course be done, but there may be a more interesting solution to the problem, in that we simply enqueue all the common dependencies. That is, we add n backtracking levels for n possible solutions:

  1. We enqueue the common dependencies of all possible solutions deps(A)&deps(B)&deps(C)&deps(D)
  2. We decide (adding a decision level) not to install D right now and enqueue deps(A)&deps(B)&deps(C)
  3. We decide (adding a decision level) not to install C right now and enqueue deps(A)&deps(B)
  4. We decide (adding a decision level) not to install B right now and enqueue A

Now if we need to backtrack from our choice of A we hopefully still have a lot of common dependencies queued that we do not need to redo. While we have more backtracking levels, each backtracking level would be significantly cheaper, especially if you have cheap backtracking (which admittedly we do not have, yet anyway).
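
For illustration, here is a set-based sketch of pulling up the common dependencies (real APT compares pointer arrays rather than hashing, as discussed below):

def common_dependencies(candidates, deps):
    # `deps` maps each candidate solution to its set of dependency groups.
    # A group is an ordered tuple, so X|Y and Y|X stay distinct, as in APT.
    first, *rest = candidates
    common = set(deps[first])
    for other in rest:   # roughly #A * (#B + #C + #D) comparisons overall
        common &= deps[other]
    return common

deps = {
    "A": {("libc",), ("libfoo", "libalt")},
    "B": {("libc",), ("libbar",)},
    "C": {("libc",), ("libfoo", "libalt")},
    "D": {("libc",)},
}
print(common_dependencies(["A", "B", "C", "D"], deps))   # {('libc',)}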

The caveat though is: It may be pretty expensive to find the common dependencies. We need to iterate over all dependency groups of A and see if they are in B, C, and D, so we have a complexity of roughly

#A * (#B+#C+#D)

Each dependency group we need to check (i.e. “is X|Y in B?”) meanwhile has linear cost: we need to compare the memory content of two pointer arrays containing the list of possible versions that solve the dependency group. This means that X|Y and Y|X are different dependencies of course, but that is to be expected – they are. But any dependency with the same order will have the same memory layout.

So really the cost is roughly N^4. This isn’t nice.

You can apply various heuristics here on how to improve that, or you can even apply binary logic:

  1. Enqueue common dependencies of A|B|C|D
  2. Move into the left half, enqueue of A|B
  3. Again divide and conquer and select A.

This has a significant advantage in long lists of choices, and also in the common case, where the first solution should be the right one.

Or again, if you enqueue the package and a version restriction instead, you already get the common dependencies enqueued for the chosen package at least.

Categories: FLOSS Project Planets

The Drop Times: Highlights from the First Ever Drupal Iberia Event

Planet Drupal - Fri, 2024-05-24 01:00
The first Drupal Iberia 2024 at PACT in Évora, Portugal, marked a pivotal moment for the Drupal communities of Spain and Portugal.
Categories: FLOSS Project Planets

Parabola GNU/Linux-libre: makepkg.conf change

GNU Planet! - Thu, 2024-05-23 21:06

Parabola's default makepkg.conf has long loaded /etc/makepkg.d/*.conf. As of makepkg 6.1.0, the program itself now loads /etc/makepkg.conf.d/*.conf, so this part of our makepkg.conf has been removed. Users who have /etc/makepkg.d/*.conf files need to move them to /etc/makepkg.conf.d/.

Categories: FLOSS Project Planets

Freexian Collaborators: Discover release 0.3.0 of the debusine software factory (by Colin Watson)

Planet Debian - Thu, 2024-05-23 20:00

Debusine is a Free Software project developed by Freexian to manage scheduling and distribution of Debian-related tasks to a network of worker machines. It was started some time back, but its development pace has recently increased significantly thanks to funding from the Sovereign Tech Fund. You can read more about it in its documentation.

For more background, Enrico Zini and Carles Pina i Estany gave a talk on Debusine in November 2023 at the mini-DebConf in Cambridge.

We described the work from our first funded milestone in a post to debian-devel-announce in March.

We’ve recently finished work on our second funded milestone, culminating in releasing version 0.3.0 to unstable. Our focus on this milestone was on new building blocks to allow us to automatically orchestrate QA tasks in bulk. Full details are in our release history document. As usual, debusine.debian.net is up to date with our latest work.

Collections

In the previous milestone, debusine could store artifacts and run tasks against those artifacts. However, on its own this required the user to do a lot of manual work, because the only way to refer to an artifact was by its ID.

We now have the concept of a collection, which can store references to other artifacts (or indeed to other collections) with some attached metadata. These are structured by category, so for example a debian:suite collection contains references to source and binary package artifacts with their names, versions, and architectures as metadata. This allows us to look up artifacts using a simple query language instead of just by ID.

At the moment, the main visible effect of this is that our Getting started with debusine tutorial no longer needs users of debusine.debian.net to create their own build environments before being able to submit other work requests: they can refer to existing environments using something like debian/match:codename=trixie:variant=sbuild instead.

We also have a basic user interface allowing you to browse existing collections, accessible via the relevant workspace (such as the default System workspace).

Workflows

We’ve always known that individual tasks were just a starting point: real-world requirements often involve chaining many tasks together, as many Debian developers already do using the Salsa CI pipeline. debusine intends to approach a similar problem from a different angle, defining common workflows that can be applied at the scale of a whole distribution without being tightly coupled to where each package’s code is hosted.

In time we intend to define a way for users to specify their own workflows, but rather than getting too bogged down in this we started by building a couple of predefined workflows into debusine. The update_environments workflow is used to create multiple build environments in bulk, while the sbuild workflow builds a source package for all the architectures that it supports and for which debusine has workers. (debusine.debian.net currently has amd64 and arm64 workers, supporting the amd64, arm64, armel, armhf, and i386 architectures between them.)

Upcoming work will build on this by adding more workflows that chain tasks together in various ways, such as workflows that build a package and run QA tasks on the results, or a workflow that builds a package and uploads the result to an upload queue.

Next steps

Our next planned milestone involves expanding debusine’s capability as a build daemon. For that, we already know that there are a number of specific extra workflow steps we need to add, and we’ve reached out to some members of Debian’s buildd team to ask for feedback on what they consider necessary. We hope to be able to replace some of Freexian’s own build infrastructure with debusine in the near future.

Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 268 released

Planet Debian - Thu, 2024-05-23 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 268. This version includes the following changes:

[ Chris Lamb ]
* Drop apktool from Build-Depends; we can still test our APK code via autopkgtests. (Closes: #1071410)
* Fix tests for 7zip version 24.05.
* Add a versioned dependency for at least version 5.4.5 for the xz tests; they fail under (at least xz 5.2.8). (Closes: reproducible-builds/diffoscope#374)

[ Vagrant Cascadian ]
* Relax Chris' versioned xz test dependency (5.4.5) to also allow version 5.4.1.

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

KDE Plasma 6.1 Beta Release

Planet KDE - Thu, 2024-05-23 20:00

Here are the major changes available in the Plasma 6.1 beta:

  • Triple buffering in KWin for smoother rendering and animations
  • Support for the Wayland Explicit Sync protocol, which should improve life for NVIDIA users in particular
  • Support for the Input Capture portal
  • Remote Desktop system integration to allow RDP clients to connect to Plasma desktops, plus a new page in System Settings for configuring this
  • New UX for Plasma's edit mode to make its modality more obvious and visually fancier
  • Added a configurable edge barrier between screens, to make it easier to hit UI elements touching the edges between screens. This also allows auto-hide panels on edges between screens to work properly
  • Fake session restore on Wayland that at least re-opens apps that were open last time, even if they don't get positioned in the same place. Support for real session restore is still being worked on
  • Support for syncing the color of your keyboard's RGB backlight with Plasma's accent color
  • Support for using the color profile embedded into the display, for displays that bundle these
  • Support in Discover for replacing end-of-support Flatpak apps with their replacements
  • Support for the battery conservation mode features on many Lenovo IdeaPad and Legion laptops
  • Support for passwordless screen locking, for using it as a screensaver in an environment without security concerns
  • You can now middle-click the Power & Battery widget to block and unblock automatic sleep and screen locking, and scroll over it to switch the active power profile
  • Slightly rounder corners, and more consistency between corner radius everywhere
  • Better window layout algorithm for Overview
  • The "Shake cursor to find it" effect has been enabled by default
  • New off-by-default effect to hide the mouse pointer after a period of inactivity
  • System Settings Keyboard page has been rewritten in QML
View full changelog
Categories: FLOSS Project Planets

Plasma Wayland Protocols 1.13.0

Planet KDE - Thu, 2024-05-23 20:00

Plasma Wayland Protocols 1.13.0 is now available for packaging.

This adds features needed for the Plasma 6.1 beta.

URL: https://download.kde.org/stable/plasma-wayland-protocols/
SHA256: dd477e352f5ff6e6ac686286c4b22b19bf5a4921b85ee5a7da02bb7aa115d57e
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell jr@jriddell.org

Full changelog:

  • plasma-window-management: add a stacking order object
  • output device, output management: add brightness setting
  • outputdevice,outputconfiguration: add a way to use the EDID-provided color profile
  • Enforce passing tests
  • output device, management: change the descriptions for sdr gamut wideness
Categories: FLOSS Project Planets

Promet Source: The Ultimate Guide to Drupal Migration for Higher Education

Planet Drupal - Thu, 2024-05-23 19:34
Note: This blog was first published on September 29, 2021, and has been updated to reflect new information and insights. Takeaway: We explore why you should migrate your higher education website from Drupal 7 to Drupal 10. From enhanced security and performance to powerful new features and integrations, Drupal 10 offers a platform that is purpose-built for the needs of modern universities.
Categories: FLOSS Project Planets

Luke Plant: pyastgrep and custom linting

Planet Python - Thu, 2024-05-23 15:07

A while back I released pyastgrep, which is a rewrite of astpath. It’s a tool that allows you to search for specific Python syntax elements using XPath as a query language.

As part of the rewrite, I separated out the layers of code so that it can now be used as a library as well as a command line tool. I haven’t committed to very much API surface area for library usage, but there is enough.

My main personal use of this has been for linting tasks or enforcing of conventions that might be difficult to do otherwise. I don’t always use this – quite often I’d reach for custom Semgrep rules, and at other times I use introspection to enforce conventions. However, there are times when both of these fail or are rather difficult.

Examples

Some examples of the kinds of rules I’m thinking of include:

  • Boolean arguments to functions/methods should always be “keyword only”.

    Keyword-only arguments are a big win in many cases, and especially when it comes to boolean values. For example, forcing delete_thing(True, False) to be something like delete_thing(permanent=True, force=False) is an easy win, and this is common enough that applying this as a default policy across the code base will probably be a good idea.

    The pattern can be distinguished easily at syntax level. Good:

    def foo(*, my_bool_arg: bool): ...

    Bad:

    def foo(my_bool_arg: bool): ...
  • Simple coding conventions like “Don’t use single letter variables like i or j as loop variables, use index or idx instead”.

    This can be found by looking for code like:

    for i, val in enumerate(...): ...

    You might not care about this, but if you do, you really want the rule to be applied as an automated test, not a nit-picky code review (see the query sketch after this list).

  • A Django-specific one: for inclusion tags, the tag names should match the template file name. This is nice for consistency and code navigation, plus I actually have some custom “jump to definition” code in my editor that relies on it for fast navigation.

    The pattern can again be seen quite easily at the syntax level. Good:

    @inclusion_tag("something/foo.html")
    def foo(): ...

    Bad:

    @inclusion_tag("something/bar.html")
    def foo(): ...
  • Any ’task’ (something decorated with @task) should be named foo_task or similar, in order to give a clue that it works as an asynchronous call, and its return value is just a promise object.
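
For instance, for the loop-variable convention above, a first attempt at a query might look like the line below. This is a guess based on the For/target structure that pyastdump shows; verify it against your own dump before relying on it:

pyastgrep './/For/target//Name[@id="i" or @id="j"]'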

There are many more examples you’ll come up with once you start thinking like this.

Method

Having identified the bad patterns we want to find and fix, my method for doing so looks as follows. It contains a number of tips and refinements I’ve made over the past few years.

First, I open a test file, e.g. tests/test_conventions.py, and start by inserting some example code – at least one bad example (the kind we are trying to fix), and one good example.

There are a few reasons for this:

  • First, I need to make sure I can prove life exists on earth, as John D. Cook puts it. I’ll say more about this later on.

  • Second, it gives me a deliberately simplified bit of code that I can pass to pyastdump.

  • Third, it provides some explanation for the test you are going to write, and a potentially rather hairy XPath expression.

I’ll use my first example above, keyword-only boolean args. I start by inserting the following text into my test file:

def bad_boolean_arg(foo: bool):
    pass

def good_boolean_arg(*, foo: bool):
    pass

Then, I copy both of these in turn to the clipboard (or both together if there isn’t much code, like in this case), and pass them through pyastdump. From a terminal, I do:

$ xsel | pyastdump -

I’m using the xsel Linux utility, you can also use xclip -out, or pbpaste on MacOS, or Get-Clipboard in Powershell.

This gives me some AST to look at, structured as XML:

<Module>
  <body>
    <FunctionDef lineno="1" col_offset="0" type="str" name="bad_boolean_arg">
      <args>
        <arguments>
          <posonlyargs/>
          <args>
            <arg lineno="1" col_offset="20" type="str" arg="foo">
              <annotation>
                <Name lineno="1" col_offset="25" type="str" id="bool">
                  <ctx>
                    <Load/>
                  </ctx>
                </Name>
              </annotation>
            </arg>
          </args>
          <kwonlyargs/>
          <kw_defaults/>
          <defaults/>
        </arguments>
      </args>
      <body>
        <Pass lineno="2" col_offset="4"/>
      </body>
      <decorator_list/>
    </FunctionDef>
  </body>
  <type_ignores/>
</Module>

In this case, the current structure of Python’s AST has helped us out a lot – it has separated out posonlyargs (positional only arguments), args (positional or keyword), and kwonlyargs (keyword only args). We can see the offending annotation containing a Name with id="bool" inside the args, when we want it only to be allowed as a keyword-only argument.

(Do we want to disallow boolean-annotated arguments as positional only? I’m leaning towards “no” here, as positional only is quite rare and usually a very deliberate choice).

I now have to construct an XPath expression that will find the offending XML nodes, but not match good examples. It’s pretty straightforward in this case, once you know the basics of XPath. I test it out straight away at the CLI:

pyastgrep './/FunctionDef/args/arguments/args/arg/annotation/Name[@id="bool"]' tests/test_conventions.py

If I’ve done it correctly, it should print my bad example, and not my good example.

Then I widen the net, omitting tests/test_conventions.py to search everywhere in my current directory.

At this point, I’ve probably got some real results that I want to address, but I might also notice there are other variants of the same thing I need to be able to match, and so I iterate, adding more bad/good examples as necessary.

Now I need to write a test. It’s going to look like this:

def test_boolean_arguments_are_keyword_only():
    assert_expected_pyastgrep_matches(
        """
        .//FunctionDef/args/arguments/args/arg/annotation/Name[@id="bool"]
        """,
        message="Function arguments with type `bool` should be keyword-only",
        expected_count=1,
    )

Of course, the real work is being done inside my assert_expected_pyastgrep_matches utility, which looks like this:

from pathlib import Path

from boltons import iterutils
from pyastgrep.api import Match, search_python_files

SRC_ROOT = Path(__file__).parent.parent.resolve()  # depends on project structure


def assert_expected_pyastgrep_matches(xpath_expr: str, *, expected_count: int, message: str):
    """
    Asserts that the pyastgrep XPath expression matches only `expected_count` times,
    each of which must be marked with a `pyastgrep: expected` comment.

    `message` is a message to be printed on failure.
    """
    xpath_expr = xpath_expr.strip()
    matches: list[Match] = [
        item for item in search_python_files([SRC_ROOT], xpath_expr) if isinstance(item, Match)
    ]
    expected_matches, other_matches = iterutils.partition(
        matches, key=lambda match: "pyastgrep: expected" in match.matching_line
    )
    if len(expected_matches) < expected_count:
        assert False, f"Expected {expected_count} matches but found {len(expected_matches)} for {xpath_expr}"
    assert not other_matches, (
        message
        + "\n Failing examples:\n"
        + "\n".join(
            f"    {match.path}:{match.position.lineno}:{match.position.col_offset}:{match.matching_line}"
            for match in other_matches
        )
    )

There is a bit of explaining to do now.

Being sure that you can “find life on earth” is especially important for a negative test like this. It would be very easy to have an XPath query that you thought worked but didn’t, as it might just silently return zero results. In addition, Python’s AST is not stable – so a query that works now might stop working in the future.

It’s like you have a machine that claims to be able to find needles in haystacks – when it comes back and says “no needles found”, do you believe it? To increase your confidence that everything works and continues to work, you place a few needles at locations that you know, then check that the machine is able to find those needles. When it claims “found exactly 2 needles”, and you can account for those, you’ve got much more confidence that it has indeed found the only needles.

So, it’s important to leave my bad examples in there.

But, I obviously don’t want the bad examples to cause the test to fail! In addition, I want a mechanism for exceptions. A simple mechanism I’ve chosen is to add the text pyastgrep: expected as a comment.

So, I need to change my bad example like this:

def bad_boolean_arg(foo: bool):  # pyastgrep: expected
    pass

I also pass expected_count=1 to indicate that I expect to find at least one bad example (or more, if I’ve added more bad examples).

Hopefully that explains everything assert_expected_pyastgrep_matches does. A couple more notes:

  • it uses boltons, a pretty useful set of Python utilities

  • it requires a SRC_ROOT folder to be defined, which will depend on your project, and might be different depending on which folder(s) you want to apply the convention to.

Now, everything is set up, and I run the test for real, hopefully locating all the bad usages. I work through them and fix, then leave the test in.

Tips
  • pyastgrep works strictly at the syntax level, so unlike Semgrep you might get caught out by aliases if you try to match on specific names:

    from foo import bar
    from foo import bar as foo_bar
    import foo

    # These all call the same function but look different in AST:
    foo.bar()
    bar()
    foo_bar()
  • There is, however, an advantage to this – you don’t need a real import to construct your bad examples; you can just use a Mock. For example, for my inclusion_tag example above, I have code like:

    from unittest.mock import Mock

    register = Mock()

    @register.inclusion_tag(filename="something/not_bad_tag.html")
    def bad_tag():  # pyastgrep: expected
        pass

    You can see the full code on GitHub.

  • You might be able to use a mixture of techniques:

    • A Semgrep rule bans one set of bad patterns (direct use of some thirdparty.func), requiring everyone to use your own wrapper, which is then constructed in such a way as to make it easier to apply a pyastgrep rule

    • Some introspection that produces a list of classes or functions to which some rule applies, then dynamically generates XPath expressions to pass to pyastgrep.

Conclusion

Syntax level searching isn’t right for every job, but it can be a powerful addition to your toolkit, and with a decent query language like XPath, you can do a surprising amount. Have a look at the pyastgrep examples for inspiration!

Categories: FLOSS Project Planets

Mike Driscoll: Episode 41 – Python Packaging and FOSS with Armin Ronacher

Planet Python - Thu, 2024-05-23 12:13

In this episode, I chatted with Armin Ronacher about his many amazing Python packages, such as pygments, flask, Jinja, Rye, and Click!

Specifically, we talked about the following:

  • How Flask came about
  • Favorite Python packages
  • Python packaging
  • and much more!
Links

The post Episode 41 – Python Packaging and FOSS with Armin Ronacher appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Wim Leers: XB week 1: 0.x branch opened!

Planet Drupal - Thu, 2024-05-23 10:16

Acquia is sponsoring me full-time to operate as the tech lead for Experience Builder — thanks!

Dries announced the formal start of the Experience Builder initiative at DrupalCon Portland 2024, on May 6. Shortly before DrupalCon, Drupal core product manager Lauri already shared the findings of the deep & wide research he conducted in prior months.

During the (entire!) month of March, Lauri walked some members of Acquia’s Drupal Acceleration Team (Ben “bnjmnm”, Ted “tedbow” Bowman, “hooroomoo”, Alex “effulgentsia” Bronstein, Tim Plunkett and me) as well as the lead front-end and lead back-end developers of Acquia’s Site Studio team (Felix Mazeikis and Jesse Baker) through the product requirements that were identified for Drupal to leapfrog its competitors on this front.
We spent that month understanding those requirements and doing an initial pass at sizing them. To be able to refine the estimates, we started building proof-of-concepts for the riskiest areas. For example, I started one for dynamically loading a different “design version”, and a few days later another one for validating the data model proposed by Alex.

These proof-of-concepts have been shared with long-time Drupal core contributors while they were being worked on — for example, we asked for feedback from Mateu “e0ipso” at Lullabot from the very start, since Single Directory Components are his brainchild. We asked for feedback from Lee “larowlan” Rowlands at PreviousNext, given his work on the decoupled Layout Builder. And so on.
They’re hacky as hell — the purpose was to explore connections between concepts and check viability.

At DrupalCon, Dries revealed that he’d love to see organizations using Drupal contribute back significantly to Starshot (the other announcement, which will include Experience Builder once it’s ready). So at DrupalCon, Lauri and I found many people asking us how to start contributing — an excellent new challenge to have!

We’re currently in an awkward phase for welcoming contributors, because despite a clear product ambition/vision, we are in the very early stages of defining the concrete UX (Acquia’s UX team is working on wireframes and did user testing at DrupalCon). And during DrupalCon, there was no code base to point to!

So, during the week after DrupalCon, hooroomoo got a 0.x branch of Experience Builder going, cooking up a delightful hodgepodge of various PoC branches we’d worked on.
On Thursday May 16, Lauri and I met with 6 (!!!) people of the PreviousNext team, where they have not only serious Drupal core expertise, but also deep Layout Builder and JS knowledge — they offered to run the asynchronous meetings in the #experience-builder Drupal Slack channel. They’ve used this pattern before with great success, and it is the only viable way to truly involve the global Drupal community.

By the end of the week I got GitLab CI pipelines going (PHPStan L8!). Ready for more serious work in week 2 :)

Categories: FLOSS Project Planets

The Drop Times: EvolveDrupal: Insights from Atlanta & What's Next in Montreal!

Planet Drupal - Thu, 2024-05-23 09:08
Dive into the highlights of EvolveDrupal Atlanta and brace yourself for major updates from Montreal! Get ready to be part of this exciting journey of innovation and collaboration.
Categories: FLOSS Project Planets

DrupalEasy: Ruminations on Drupal Starshot

Planet Drupal - Thu, 2024-05-23 08:03

a second official version of Drupal

- Dries Buytaert, in his blog post announcing Drupal Starshot

If you're a Drupal developer of any caliber and pay any attention to the goings-on in the Drupal community, then you no doubt have heard about Starshot, recently announced by Dries Buytaert at DrupalCon Portland 2024.

In this post, I'll do my best to not repeat everything that was announced, but rather to summarize, ask a question or two and offer an opinion or two…

The basics

Starshot will be a new download available on drupal.org that includes contributed modules and configuration to provide a superior out-of-the-box solution that is more usable/approachable/friendlier than Drupal core.

In my opinion, Starshot can be thought of (generally speaking, of course) as two-ish large tasks. First, full integration with Automatic Updates, Project Browser and Recipes (including full Recipe support in Project Browser). Second, Experience Builder, which is planned to be (roughly speaking) some combination of Layout Builder (or a replacement), Paragraphs, Single Directory Components, an in-browser styling tool and other modules/configuration to provide a best-of-breed page-building solution.

The -ish in two-ish from above is all the additional functionality that having full Recipes support will bring. IMHO, this is the star in Starshot.

Note: Experience Builder is not new - it is an evolution of the Next generation page builder initiative that started in 2023.

Why?

In his keynote, Dries spoke about the need for Starshot over the course of a few minutes, enumerating various reasons why he and others in the community feel this is a necessary task, including the fact that Drupal's UI is difficult to use for new users.

I think Mike Herchel summed the Why up nicely in his blog post:

It’s an acknowledgement of the perception that Drupal is archaic and/or legacy software, and that this perception needs to change.

Starshot is the Drupal community's effort to win back small- and medium-sized sites that don't currently even consider using Drupal.

Furthermore, one of the main goals of Drupal Starshot is to allow for faster innovation cycles, allowing Starshot to add functionality at a faster pace than Drupal core.

Timeline

There's a countdown clock to liftoff on drupal.org/starshot that is currently at a bit over 200 days. Based on this, Starshot will be ready by January 1, 2025. Dries mentioned in his keynote that the goal was to have an initial release by the end of 2024.

There's actually a very early prototype of a subset of the functionality available from phenaproxima (Adam G-H.) 

I'm a bit dubious about all of the mentioned functionality of Starshot being ready by the end of the year. More of my opinion on this in a bit…

Relationship to Drupal core

It's not a fork - that much is clear. It will include Drupal core, but have its own release schedule. I can only imagine that any time Drupal core has a security update, there will be a new Starshot release (as well as any time any of the contrib modules used in Starshot has a security release, I assume).

What's up with the "Launch" button?

One of my biggest questions after Dries' keynote was based on a mockup of a drupal.org page that he presented.

What happens when someone clicks "Launch"? I've been a proponent of the Drupal Association engaging/partnering with low-cost hosting providers to offer an easy way to host a Drupal site that relies on Automatic Updates and Project Browser. The community has invested a lot of time in both of these initiatives, and I feel that neither really has a hosting "home." What better way to officially launch these projects than hosting partners that fully support both, along with a meaningful site backup plan, all included in a low monthly hosting price? IMHO, this type of thing should definitely be one of the options behind the mysterious "Launch" button. Maybe the DA gets a small referral fee from the hosting providers as well?

Gábor Hojtsy writes in his blog post about Starshot, "Discussions around making simple hosting available for simpler sites was reignited and a WebAssembly-based browser-executed demo solution's feasibility is also explored again." He also mentioned the potential for a WebAssembly-based option in his DrupalCon Portland 2024 session about Drupal 11, as well as options for ephemeral (temporary) hosting solutions (think SimplyTest.me).

Will the plan and/or timeline change?

Absolutely. Dries and other folks already involved in Starshot admit that there are a lot of things still to figure out, decisions to be made and a lot of work to do to make all this a reality. Nothing is set in stone.

If I had a magic wand 🪄

As exciting as Experience Builder sounds, I'm worried that it is going to take a long time. In addition, as we've seen with the plethora of Layout Builder-related contrib modules, there is often no one-size-fits-all solution for page creation.

From my perspective, I think that Drupal Starshot (or Drupal CMS, or whatever we end up calling it) phase 1 should be Automatic Updates, Project Browser, Recipes, and a set of curated recipes geared towards page building. Experience Builder should be phase 2.

Being able to install recipes from Project Browser (leveraging Package Manager from Automatic Updates) will be a game-changer.

The way I look at it is that with full Recipes support, we don't have to have just one Experience Builder; we can have many. We can have simpler ones (sooner) and more intricate ones (later). We can have recipes that leverage Layout Builder and any number of the currently existing supporting contrib modules, or recipes that focus on Paragraphs. The cream will rise to the top as the various Experience Builder modules are written, tested, and released.

Simon Hobbs agrees that Recipes is the "secret sauce" of Starshot in his optimistic blog post.

Community reaction

In the two-ish weeks since Dries announced Starshot, Drupal agencies from around the world have weighed in with their support, including PreviousNext from Australia (blog post by Kim Pepper), 1xINTERNET from Europe (announcement), Specbee (blog post by Malabya Tewari), and (obviously) Acquia (United States).

In my conversations with folks while at DrupalCon Portland 2024, reactions were mostly positive, but some had concerns, the leading one being that (paraphrasing) the announcement feels like there were some internal (non-public) discussions about doing Starshot, followed by a "we are doing this" announcement by Dries. While I don't completely agree with this sentiment, I do understand it. The main pieces of Starshot have been open to discussion in the community, while the idea of putting them all together into a new "product" is something that (as far as I could tell) wasn't necessarily widely open for community input.

Additional resources
Categories: FLOSS Project Planets

Salsa Digital: Why Use Drupal?

Planet Drupal - Thu, 2024-05-23 08:00
1. Introduction to Drupal

What is a Content Management System (CMS)?

A CMS is a tool that helps you create and manage your website’s content without needing to write code or have much technical experience. It's like a platform that provides the base for your website. Some well-known CMSs include WordPress, which is great for beginners and has many plugins; Joomla, which works well for online stores; Magento, which suits larger e-commerce sites; and of course Drupal, which can be used to build any complex website. These tools make it easier to build and customise your site.

What is Drupal?

Drupal is an advanced, open-source CMS. It's perfect for making complex websites thanks to its flexibility and strong security features.
Categories: FLOSS Project Planets
