Feeds

Ned Batchelder: Debug helpers in coverage.py

Planet Python - Sun, 2023-11-12 16:40

Debugging in the coverage.py code can be difficult, so I’ve written a lot of helper code to support debugging. I just added some more.

These days I’m working on adding support in coverage.py for sys.monitoring. This is a new feature in Python 3.12 that completely changes how Python reports information about program execution. It’s a big change to coverage.py and it’s a new feature in Python, so while working on it I’ve been confused a lot.
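To give a flavor of the API, here is a minimal sketch of sys.monitoring (PEP 669) in use, registering a callback for function-start events. This is only an illustration of the API, not coverage.py's implementation; the helper name and structure are mine:

```python
import sys

def trace_calls(func):
    """Run func, recording each Python function start via sys.monitoring (3.12+)."""
    if sys.version_info < (3, 12):
        raise RuntimeError("sys.monitoring requires Python 3.12+")
    mon = sys.monitoring
    calls = []
    # Tool ids are shared global state; COVERAGE_ID is the slot meant for coverage tools.
    mon.use_tool_id(mon.COVERAGE_ID, "demo")
    try:
        mon.register_callback(
            mon.COVERAGE_ID,
            mon.events.PY_START,
            lambda code, offset: calls.append(code.co_name),
        )
        mon.set_events(mon.COVERAGE_ID, mon.events.PY_START)
        func()
    finally:
        mon.set_events(mon.COVERAGE_ID, mon.events.NO_EVENTS)
        mon.free_tool_id(mon.COVERAGE_ID)
    return calls
```

Python calls the registered callback with the code object and instruction offset each time a function starts, which is exactly the kind of hand-off that makes a good stack-trace logger so valuable.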

Some of the confusion has been about how sys.monitoring works, and some was eventually diagnosed as a genuine bug involving sys.monitoring. But all of it started as straight-up “WTF!?” confusion. My preferred debugging approach at times like this is to log a lot of detailed information and then pore over it.

For something like sys.monitoring where Python is calling my functions and passing code objects, it’s useful to see stack traces for each function call. And because I’m writing large log files it’s useful to be able to tailor the information to the exact details I need so I don’t go cross-eyed trying to find the clues I’m looking for.

I already had a function for producing compact log-friendly stack traces. For this work, I added more options to it. Now my short_stack function produces one line per frame, with options for which frames to include (it can omit the 20 or so frames of pytest before my own code is involved); whether to show the full file name, or an abbreviated one; and whether to include the id of the frame object:

                     _hookexec : 0x10f23c120 syspath:/pluggy/_manager.py:115
                    _multicall : 0x10f308bc0 syspath:/pluggy/_callers.py:77
            pytest_pyfunc_call : 0x10f356340 syspath:/_pytest/python.py:194
    test_thread_safe_save_data : 0x10e056480 syspath:/tests/test_concurrency.py:674
                     __enter__ : 0x10f1a7e20 syspath:/contextlib.py:137
                       collect : 0x10f1a7d70 cov:/control.py:669
                         start : 0x10f1a7690 cov:/control.py:648
                         start : 0x10f650300 cov:/collector.py:353
                 _start_tracer : 0x10f5c4e80 cov:/collector.py:296
                      __init__ : 0x10e391ee0 cov:/pep669_tracer.py:155
                           log : 0x10f587670 cov:/pep669_tracer.py:55
                   short_stack : 0x10f5c5180 cov:/pep669_tracer.py:93
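Stripped to its essence, a helper like this can be built on the standard inspect module. The following is a sketch with illustrative option names, not coverage.py's actual implementation:

```python
import inspect
import os

def short_stack(full_names: bool = False, show_ids: bool = False, skip: int = 0) -> str:
    """Return the current call stack as one compact line per frame."""
    lines = []
    # Skip this function's own frame, plus any extra frames the caller asks for.
    for info in inspect.stack()[1 + skip:]:
        filename = info.filename if full_names else os.path.basename(info.filename)
        frame_id = f"{id(info.frame):#x} " if show_ids else ""
        lines.append(f"{info.function:>30} : {frame_id}{filename}:{info.lineno}")
    return "\n".join(lines)
```

The `skip` parameter is what lets you drop the pytest machinery at the bottom of the stack, and `full_names`/`show_ids` correspond to the abbreviation and frame-id options described above.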

Once I had these options implemented in a quick way and they proved useful, I moved the code into coverage.py’s debug.py file and added tests for the new behaviors. This took a while, but in the end I think it was worth it. I don’t need to use these tools often, but when I do, I’m deep in a bad problem and I want to have a well-sharpened tool at hand.

Writing debug code is like writing tests: it’s just there to support you in development, it’s not part of “the product.” But it’s really helpful. You should do it. It could be something as simple as a custom __repr__ method for your classes to show just the information you need.
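For example, a hypothetical class whose repr shows only the two fields that matter while debugging, rather than every attribute:

```python
class Tracer:
    """A hypothetical tracer class with a debug-friendly repr."""

    def __init__(self, filename: str, active: bool):
        self.filename = filename
        self.active = active
        self.cache = {}  # large internal state we don't want cluttering log output

    def __repr__(self) -> str:
        # Show just the information you need, not the whole object state.
        return f"<Tracer {self.filename!r} active={self.active}>"
```

A log line like `print(f"starting {tracer}")` then stays readable even when the object carries a lot of state.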

It’s especially helpful when your code deals in specialized domains or abstractions. Your debugging code can speak the same language. Zellij was a small side project of mine to draw geometric art like this:

When the over-under weaving code wasn’t working right, I added some tooling to get debug output like this:

I don’t remember what the different colors, patterns, and symbols meant, but at the time it was very helpful for diagnosing what was wrong with the code.

Categories: FLOSS Project Planets

#! code: Drupal 10: Running Drupal Tests On GitHub Using Workflows

Planet Drupal - Sun, 2023-11-12 14:10

There are a number of different tools that allow you to validate and test a Drupal site. Inspecting your custom code allows you to adhere to coding standards and ensure that you stamp out common coding problems. Adding tests allows you to make certain that the functionality of your Drupal site works correctly.

If you have tests in your Drupal project then you ideally need to be running them at some point in your development workflow. Getting GitHub to run the tests when you push code or create a pull request gives you peace of mind that your test suite is being run at some point in your workflow. You also want to allow your tests to be run locally with ease, without having to remember lots of command line arguments.

In this article I will show how to set up validation and tests against a Drupal site and how to get GitHub to run these steps when you create a pull request. This assumes you have a Drupal 10 project that is controlled via composer.

Let's start with creating a runner using Makefile.

Makefile

A Makefile is an automation tool that allows developers to create a dependency structure of tasks that is then run using the "make" command. This file format was originally developed to assist with compiling complex projects, but it can easily be used to run any automation task you need.

For example, let's say that we want to allow a command to be run that has a number of different parameters. This might be a curl command or even an rsync command, where the order of the parameters is absolutely critical. To do this you would create a file called "Makefile" and add the following.

sync-files:
	rsync -avzh source/directory destination/directory

To run this you just need to type "make" followed by the name of the command, e.g. "make sync-files".

Read more

Categories: FLOSS Project Planets

Lukas Märdian: Netplan brings consistent network configuration across Desktop, Server, Cloud and IoT

Planet Debian - Sun, 2023-11-12 10:00

Ubuntu 23.10 “Mantic Minotaur” Desktop, showing network settings

We released Ubuntu 23.10 ‘Mantic Minotaur’ on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan has been the default tool to configure Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. In Ubuntu 23.10 this disparity in how to control the network stack on different Ubuntu platforms was closed by integrating NetworkManager with the underlying Netplan stack.

Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows for any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the “single source of truth” for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/, using Netplan’s common and declarative YAML format.

Netplan Desktop integration

On workstations, the most common scenario is for users to configure networking through NetworkManager’s graphical interface, instead of driving it through Netplan’s declarative YAML files. Netplan ships a “libnetplan” library that provides an API to access Netplan’s parser and validation internals, which is now used by NetworkManager to store any network interface configuration changes in Netplan. For instance, network configuration defined through NetworkManager’s graphical UI or D-Bus API will be exported to Netplan’s native YAML format in the common location at /etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily accessible to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core and is now enabled by default on Ubuntu 23.10 Desktop.

Migration of existing connection profiles

On installation of the NetworkManager package (network-manager >= 1.44.2-1ubuntu1) in Ubuntu 23.10, all your existing connection profiles from /etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan’s declarative YAML format and stored in its common configuration directory /etc/netplan/. 

The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as “sudo netplan get” or “sudo netplan status” without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:

Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan

In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan’s continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.

The future of Netplan

Netplan has established itself as the proven network stack across all variants of Ubuntu – Desktop, Server, Cloud, or Embedded. It has been the default stack across many Ubuntu LTS releases, serving millions of users over the years. With the bidirectional integration between NetworkManager and Netplan the final piece of the puzzle is implemented to consider Netplan the “single source of truth” for network configuration on Ubuntu. With Debian choosing Netplan to be the default network stack for their cloud images, it is also gaining traction outside the Ubuntu ecosystem and growing into the wider open source community.

Within the development cycle for Ubuntu 24.04 LTS, we will polish the Netplan codebase to be ready for a 1.0 release, coming with certain guarantees on API and ABI stability, so that other distributions and 3rd party integrations can rely on Netplan’s interfaces. First steps in that direction have already been taken, as the Netplan team reached out to the Debian community at DebConf 2023 in Kochi/India to evaluate possible synergies.

Conclusion

Netplan can be used transparently to control a workstation’s network configuration and plays hand-in-hand with many desktop environments through its tight integration with NetworkManager. It allows for easy network monitoring, using common graphical interfaces and provides a “single source of truth” to network administrators, allowing for configuration of Ubuntu Desktop fleets in a streamlined and declarative way. You can try this new functionality hands-on by following the “Access Desktop NetworkManager settings through Netplan” tutorial.


If you want to learn more, feel free to follow our activities on Netplan.io, GitHub, Launchpad, IRC or our Netplan Developer Diaries blog on discourse.

Categories: FLOSS Project Planets

death and gravity: reader 3.10 released – storage internal API

Planet Python - Sun, 2023-11-12 08:42

Hi there!

I'm happy to announce version 3.10 of reader, a Python feed reader library.

What's new? #

Here are the highlights since reader 3.9.

Storage internal API #

The storage internal API is now documented!

This is important because it opens up reader to using other databases than SQLite.

The protocols are mostly stable, but some changes are still expected. The long term goal is full stabilization, but at least one other implementation needs to exist before that, to work out any remaining kinks.

A SQLAlchemy backend would be especially useful, since it would provide access to a variety of database engines mostly out of the box. (Alas, I have neither the time nor a need for this at the moment. Interested in working on it? Let me know!)

Why not use SQLAlchemy from the start? #

In the beginning:

  • I wanted to keep things as simple as possible, so I stay motivated for the long term. I also wanted to follow a problem-solution approach, which cautions against solving problems you don't have. (Details on both here and here.)
  • By that time, I was already a SQLite fan, and due to the single-user nature of reader, I was relatively confident concurrency wouldn't be an issue.
  • I didn't know exactly where and how I would deploy the web app; sqlite3 being in the standard library made it very appealing.

Since then, I did come up with some of my own complexity – reader has a query builder and a migration system (albeit both of them tiny), and there were some concurrency issues. SQLAlchemy would have likely helped with the first two, but not with the last. Overall, I still think plain SQLite was the right choice at the time.
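For a sense of scale, a "tiny query builder" can be little more than something like this (an illustration of the idea, not reader's actual internals; the function and table names are made up):

```python
def build_select(table: str, columns: list[str], where: dict) -> tuple[str, tuple]:
    """Build a parameterized SELECT statement from simple equality filters."""
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    params: tuple = ()
    if where:
        clauses = " AND ".join(f"{name} = ?" for name in where)
        sql += f" WHERE {clauses}"
        params = tuple(where.values())
    return sql, params
```

Real builders grow ordering, joins, and pagination from here, which is where SQLAlchemy would have started to pay off.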

Deprecated sqlite3 datetime support #

The default sqlite3 datetime adapters/converters were deprecated in Python 3.12. Since adapters/converters apply to all database connections, reader does not have the option of registering its own (as a library, it should not change global stuff), so datetime conversions now happen in the storage. As an upside, this provided an opportunity to change the storage to use timezone-aware datetimes.
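A sketch of that approach: convert at the storage boundary instead of registering global sqlite3 adapters. This is illustrative code, not reader's actual storage:

```python
import sqlite3
from datetime import datetime, timezone

def datetime_to_db(dt: datetime) -> str:
    # Normalize to UTC and store as an ISO 8601 string.
    return dt.astimezone(timezone.utc).isoformat()

def datetime_from_db(value: str) -> datetime:
    # Round-trips as a timezone-aware datetime.
    return datetime.fromisoformat(value)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (updated TEXT)")
when = datetime(2023, 11, 12, 8, 42, tzinfo=timezone.utc)
conn.execute("INSERT INTO entries VALUES (?)", (datetime_to_db(when),))
(stored,) = conn.execute("SELECT updated FROM entries").fetchone()
assert datetime_from_db(stored) == when
```

Because the conversion lives in the storage code, it affects only reader's own connections, not every sqlite3 connection in the process.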

Share experimental plugin #

There's a new share web app plugin to add social sharing links to the entry page.

Ideally, this functionality should end up in a plugin that adds them to Entry.links (to be exposed in #320), so all reader users can benefit from it.

Python versions #

None this time, but Python 3.12 support is coming soon!

For more details, see the full changelog.

That's it for now.

Want to contribute? Check out the docs and the roadmap.

Learned something new today? Share this with others, it really helps!

What is reader? #

reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.

reader allows you to:

  • retrieve, store, and manage Atom, RSS, and JSON feeds
  • mark articles as read or important
  • add arbitrary tags/metadata to feeds and articles
  • filter feeds and articles
  • full-text search articles
  • get statistics on feed and user activity
  • write plugins to extend its functionality

...all these with:

  • a stable, clearly documented API
  • excellent test coverage
  • fully typed Python

To find out more, check out the GitHub repo and the docs, or give the tutorial a try.

Why use a feed reader library? #

Have you been unhappy with existing feed readers and wanted to make your own, but:

  • never knew where to start?
  • it seemed like too much work?
  • you don't like writing backend code?

Are you already working with feedparser, but:

  • want an easier way to store, filter, sort and search feeds and entries?
  • want to get back type-annotated objects instead of dicts?
  • want to restrict or deny file-system access?
  • want to change the way feeds are retrieved by using Requests?
  • want to also support JSON Feed?
  • want to support custom information sources?

... while still supporting all the feed types feedparser does?

If you answered yes to any of the above, reader can help.

The reader philosophy #
  • reader is a library
  • reader is for the long term
  • reader is extensible
  • reader is stable (within reason)
  • reader is simple to use; API matters
  • reader features work well together
  • reader is tested
  • reader is documented
  • reader has minimal dependencies
Why make your own feed reader? #

So you can:

  • have full control over your data
  • control what features it has or doesn't have
  • decide how much you pay for it
  • make sure it doesn't get closed while you're still using it
  • really, it's easier than you think

Obviously, this may not be your cup of tea, but if it is, reader can help.

Categories: FLOSS Project Planets

Lisandro Damián Nicanor Pérez Meyer: Mini DebConf 2023 in Montevideo, Uruguay

Planet Debian - Sun, 2023-11-12 06:41

Fifteen years, "la niña bonita" if you ask many of my fellow Argentinians, is how long I had been absent from any Debian-related face-to-face activity. It was about time to fix that. Thanks to Santiago Ruano Rincón and Gunnar Wolf, who prodded me to come, I finally attended the Mini DebConf Uruguay in Montevideo.

I took the opportunity to make my first trip by ferry, which is currently one of the best options to get from Buenos Aires to Montevideo, in my case through Colonia. Living ~700km to the south west of Buenos Aires city, the trip was long: it included a 10-hour bus ride, a ferry, and yet another bus... but of course, it was worth it.

In Buenos Aires' port I met Emmanuel eamanu Arias, a fellow Argentinian Debian Developer from La Rioja, so I had the pleasure to travel with him.

To be honest Gunnar already did a wonderful blog post with many pictures, I should have taken more.

I had the opportunity to talk about device trees, and even took a look at Gunnar's machine in order to find out why a DisplayPort port was not working with one kernel but did with another. At the same time I also had time to start packaging qt6-grpc. Sadly I was there just one entire day, as I arrived on Thursday afternoon and had to leave on Saturday after lunch, but we did have a lot of quality Debian time.

I'll repeat here what Gunnar already wrote:

We had a long, important conversation about an important discussion that we are about to present on debian-vote@lists.debian.org.

Stay tuned on that; I think this is something we should all get involved in.

All in all I already miss hacking with people in the same room. Meetings for us mean a lot of distance to be traveled (well, I live far away from almost everything), but I really should try to do this more often. Certainly more than just once every 15 years :-)

Categories: FLOSS Project Planets

Petter Reinholdtsen: New and improved sqlcipher in Debian for accessing Signal database

Planet Debian - Sun, 2023-11-12 06:00

For a while now I have wanted direct access to the Signal database of messages and channels in my Desktop edition of Signal. I prefer the enforced end-to-end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy as well as raising the cost of the mass surveillance that government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week and set out to update the package. In the process I orphaned it to make it more obvious to the next person looking at it that the package needs proper maintenance.

The version in Debian was around five years old, and quite a lot of changes had taken place upstream into the Debian maintenance git repository. After spending a few days importing the new upstream versions, realising that upstream did not care much for SONAME versioning as I saw library symbols being both added and removed with minor version number changes to the project, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Dug deep into the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition.

Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration using this simple JSON extraction command:

/usr/bin/jq -r '."key"' ~/.config/Signal/config.json
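The same key can also be read from Python, if you prefer (a sketch assuming the config.json layout shown above; the helper name is mine):

```python
import json
from pathlib import Path

def signal_db_key(config_path: Path) -> str:
    """Read the hexadecimal database key from Signal's config.json."""
    config = json.loads(config_path.read_text())
    return config["key"]

# key = signal_db_key(Path.home() / ".config/Signal/config.json")
# The key is then passed to sqlcipher as: PRAGMA key = "x'<key>'";
```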

Assuming the result from that command is 'secretkey', a hexadecimal number representing the key used to encrypt the database, one can now connect to the database and inject the encryption key to fetch information via SQL. Here is an example dumping the database structure:

% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations( id STRING PRIMARY KEY ASC, json TEXT, active_at INTEGER, type STRING, members TEXT, name TEXT, profileName TEXT , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys( id STRING PRIMARY KEY ASC, json TEXT );
CREATE TABLE items( id STRING PRIMARY KEY ASC, json TEXT );
CREATE TABLE sessions( id TEXT PRIMARY KEY, conversationId TEXT, json TEXT , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads( id STRING primary key, timestamp INTEGER, pending INTEGER, json TEXT );
CREATE TABLE sticker_packs( id TEXT PRIMARY KEY, key TEXT NOT NULL, author STRING, coverStickerId INTEGER, createdAt INTEGER, downloadAttempts INTEGER, installedAt INTEGER, lastUsed INTEGER, status STRING, stickerCount INTEGER, title STRING , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers( id INTEGER NOT NULL, packId TEXT NOT NULL, emoji STRING, height INTEGER, isCoverOnly INTEGER, lastUsed INTEGER, path STRING, width INTEGER, PRIMARY KEY (id, packId), CONSTRAINT stickers_fk FOREIGN KEY (packId) REFERENCES sticker_packs(id) ON DELETE CASCADE );
CREATE TABLE sticker_references( messageId STRING, packId TEXT, CONSTRAINT sticker_references_fk FOREIGN KEY(packId) REFERENCES sticker_packs(id) ON DELETE CASCADE );
CREATE TABLE emojis( shortName TEXT PRIMARY KEY, lastUsage INTEGER );
CREATE TABLE messages( rowid INTEGER PRIMARY KEY ASC, id STRING UNIQUE, json TEXT, readStatus INTEGER, expires_at INTEGER, sent_at INTEGER, schemaVersion INTEGER, conversationId STRING, received_at INTEGER, source STRING, hasAttachments INTEGER, hasFileAttachments INTEGER, hasVisualMediaAttachments INTEGER, expireTimer INTEGER, expirationStartTimestamp INTEGER, type STRING, body TEXT, messageTimer INTEGER, messageTimerStart INTEGER, messageTimerExpiresAt INTEGER, isErased INTEGER, isViewOnce INTEGER, sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER GENERATED ALWAYS AS ( json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1 ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT GENERATED ALWAYS AS (ifnull( expirationStartTimestamp + (expireTimer * 1000), 9007199254740991 )), shouldAffectActivity INTEGER GENERATED ALWAYS AS ( type IS NULL OR type NOT IN ( 'change-number-notification', 'contact-removed-notification', 'conversation-merge', 'group-v1-migration', 'keychange', 'message-history-unsynced', 'profile-change', 'story', 'universal-timer-notification', 'verified-change' ) ), shouldAffectPreview INTEGER GENERATED ALWAYS AS ( type IS NULL OR type NOT IN ( 'change-number-notification', 'contact-removed-notification', 'conversation-merge', 'group-v1-migration', 'keychange', 'message-history-unsynced', 'profile-change', 'story', 'universal-timer-notification', 'verified-change' ) ), isUserInitiatedMessage INTEGER GENERATED ALWAYS AS ( type IS NULL OR type NOT IN ( 'change-number-notification', 'contact-removed-notification', 'conversation-merge', 'group-v1-migration', 'group-v2-change', 'keychange', 'message-history-unsynced', 'profile-change', 'story', 'universal-timer-notification', 'verified-change' ) ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER GENERATED ALWAYS AS ( type IS 'group-v2-change' AND json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND json_extract(json, '$.groupV2Change.from') IS NOT NULL AND json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci') ), isGroupLeaveEventFromOther INTEGER GENERATED ALWAYS AS ( isGroupLeaveEvent IS 1 AND isChangeCreatedByUs IS 0 ), callId TEXT GENERATED ALWAYS AS ( json_extract(json, '$.callId') ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs( id TEXT PRIMARY KEY, queueType TEXT STRING NOT NULL, timestamp INTEGER NOT NULL, data STRING TEXT );
CREATE TABLE reactions( conversationId STRING, emoji STRING, fromId STRING, messageReceivedAt INTEGER, targetAuthorAci STRING, targetTimestamp INTEGER, unread INTEGER , messageId STRING);
CREATE TABLE senderKeys( id TEXT PRIMARY KEY NOT NULL, senderId TEXT NOT NULL, distributionId TEXT NOT NULL, data BLOB NOT NULL, lastUpdatedDate NUMBER NOT NULL );
CREATE TABLE unprocessed( id STRING PRIMARY KEY ASC, timestamp INTEGER, version INTEGER, attempts INTEGER, envelope TEXT, decrypted TEXT, source TEXT, serverTimestamp INTEGER, sourceServiceId STRING , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads( id INTEGER PRIMARY KEY ASC, timestamp INTEGER NOT NULL, contentHint INTEGER NOT NULL, proto BLOB NOT NULL , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients( payloadId INTEGER NOT NULL, recipientServiceId STRING NOT NULL, deviceId INTEGER NOT NULL, PRIMARY KEY (payloadId, recipientServiceId, deviceId), CONSTRAINT sendLogRecipientsForeignKey FOREIGN KEY (payloadId) REFERENCES sendLogPayloads(id) ON DELETE CASCADE );
CREATE TABLE sendLogMessageIds( payloadId INTEGER NOT NULL, messageId STRING NOT NULL, PRIMARY KEY (payloadId, messageId), CONSTRAINT sendLogMessageIdsForeignKey FOREIGN KEY (payloadId) REFERENCES sendLogPayloads(id) ON DELETE CASCADE );
CREATE TABLE preKeys( id STRING PRIMARY KEY ASC, json TEXT , ourServiceId NUMBER GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys( id STRING PRIMARY KEY ASC, json TEXT , ourServiceId NUMBER GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges( id TEXT PRIMARY KEY, category TEXT NOT NULL, name TEXT NOT NULL, descriptionTemplate TEXT NOT NULL );
CREATE TABLE badgeImageFiles( badgeId TEXT REFERENCES badges(id) ON DELETE CASCADE ON UPDATE CASCADE, 'order' INTEGER NOT NULL, url TEXT NOT NULL, localPath TEXT, theme TEXT NOT NULL );
CREATE TABLE storyReads ( authorId STRING NOT NULL, conversationId STRING NOT NULL, storyId STRING NOT NULL, storyReadDate NUMBER NOT NULL, PRIMARY KEY (authorId, storyId) );
CREATE TABLE storyDistributions( id STRING PRIMARY KEY NOT NULL, name TEXT, senderKeyInfoJson STRING , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers( listId STRING NOT NULL REFERENCES storyDistributions(id) ON DELETE CASCADE ON UPDATE CASCADE, serviceId STRING NOT NULL, PRIMARY KEY (listId, serviceId) );
CREATE TABLE uninstalled_sticker_packs ( id STRING NOT NULL PRIMARY KEY, uninstalledAt NUMBER NOT NULL, storageID STRING, storageVersion NUMBER, storageUnknownFields BLOB, storageNeedsSync INTEGER NOT NULL );
CREATE TABLE groupCallRingCancellations( ringId INTEGER PRIMARY KEY, createdAt INTEGER NOT NULL );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages( messageId STRING REFERENCES messages(id) ON DELETE CASCADE, sentAt INTEGER, readStatus INTEGER , conversationId STRING);
CREATE TABLE mentions ( messageId REFERENCES messages(id) ON DELETE CASCADE, mentionAci STRING, start INTEGER, length INTEGER );
CREATE TABLE kyberPreKeys( id STRING PRIMARY KEY NOT NULL, json TEXT NOT NULL, ourServiceId NUMBER GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
  callId TEXT PRIMARY KEY,
  peerId TEXT NOT NULL, -- conversation id (legacy) | uuid | groupId | roomId
  ringerId TEXT DEFAULT NULL, -- ringer uuid
  mode TEXT NOT NULL, -- enum "Direct" | "Group"
  type TEXT NOT NULL, -- enum "Audio" | "Video" | "Group"
  direction TEXT NOT NULL, -- enum "Incoming" | "Outgoing
  -- Direct: enum "Pending" | "Missed" | "Accepted" | "Deleted"
  -- Group: enum "GenericGroupCall" | "OutgoingRing" | "Ringing" | "Joined" | "Missed" | "Declined" | "Accepted" | "Deleted"
  status TEXT NOT NULL,
  timestamp INTEGER NOT NULL,
  UNIQUE (callId, peerId) ON CONFLICT FAIL
);
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages WHEN new.body IS NOT NULL AND new.isViewOnce = 1 BEGIN DELETE FROM messages_fts WHERE rowid = old.rowid; END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL BEGIN INSERT INTO messages_fts (rowid, body) VALUES (new.rowid, new.body); END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN DELETE FROM messages_fts WHERE rowid = old.rowid; DELETE FROM sendLogPayloads WHERE id IN ( SELECT payloadId FROM sendLogMessageIds WHERE messageId = old.id ); DELETE FROM reactions WHERE rowid IN ( SELECT rowid FROM reactions WHERE messageId = old.id ); DELETE FROM storyReads WHERE storyId = old.storyId; END;
CREATE VIRTUAL TABLE messages_fts USING fts5( body, tokenize = 'signal_tokenizer' );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages WHEN (new.body IS NULL OR old.body IS NOT new.body) AND new.isViewOnce IS NOT 1 AND new.storyId IS NULL BEGIN DELETE FROM messages_fts WHERE rowid = old.rowid; INSERT INTO messages_fts (rowid, body) VALUES (new.rowid, new.body); END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages BEGIN INSERT INTO mentions (messageId, mentionAci, start, length) SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci, bodyRanges.value ->> 'start' as start, bodyRanges.value ->> 'length' as length FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL AND messages.id = new.id; END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages BEGIN DELETE FROM mentions WHERE messageId = new.id; INSERT INTO mentions (messageId, mentionAci, start, length) SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci, bodyRanges.value ->> 'start' as start, bodyRanges.value ->> 'length' as length FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL AND messages.id = new.id; END;
sqlite>

Finally I have the tool I needed to inspect and process Signal messages, without using the vendor-provided client. Now on to transforming the data into a more useful format.
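As a hypothetical sketch of that transformation step, here is how the callsHistory table from an already-readable copy of the database could be dumped as JSON with Python's sqlite3 module. The function name and the column selection are my own, not from the original post.

```python
import json
import sqlite3

def dump_calls(db_path: str) -> str:
    """Dump the Signal call history as a JSON array of objects."""
    db = sqlite3.connect(db_path)
    db.row_factory = sqlite3.Row  # rows become name-addressable mappings
    rows = db.execute(
        "SELECT callId, peerId, mode, type, direction, status, timestamp "
        "FROM callsHistory ORDER BY timestamp"
    ).fetchall()
    db.close()
    return json.dumps([dict(r) for r in rows], indent=2)

# print(dump_calls("signal.db"))
```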

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Daniel Roy Greenfeld: TIL: Autoloading on Filechange using Watchdog

Planet Python - Sun, 2023-11-12 05:30

Using Watchdog to monitor changes to a directory so we can alter what we serve out as HTTP. Each segment is the evolution towards getting it to work.

Serving HTTP from a directory

I've done this for ages:

python -m http.server 8000 -d /path/to/files

Using http.server in a function

The Python docs aren't very clear on this, and rather than think hard about it I did this fun hack:

# cli.py
from pathlib import Path
from subprocess import check_call

def server(site: Path) -> None:
    check_call(['python', '-m', 'http.server', '8000', '-d', site])

Autoreloading on filechanges

A bit more involved, and can certainly be improved. Here goes:

# server.py
import functools
import http.server
import os
import socketserver
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

def build_handler(directory: Path):
    """Specify the directory of SimpleHTTPRequestHandler"""
    return functools.partial(http.server.SimpleHTTPRequestHandler, directory=directory)

class EventHandler(FileSystemEventHandler):
    def on_any_event(self, event):
        if event.is_directory:
            return
        elif event.event_type in ["created", "modified"]:
            print(f"Reloading server due to file change: {event.src_path}")
            os._exit(0)

def run_server(directory: Path, port: int = 8000):
    with socketserver.TCPServer(("", port), build_handler(directory)) as httpd:
        print(f"Serving on port {port}")
        httpd.serve_forever()

def server(directory: Path, port: int = 8000):
    """Serve files in the watched directory"""
    # Watch the directory
    event_handler = EventHandler()
    observer = Observer()
    observer.schedule(event_handler, directory, recursive=True)
    observer.start()
    try:
        # Run the HTTP server
        run_server(directory=directory, port=port)
    except KeyboardInterrupt:
        observer.stop()
        observer.join()

Usage:

# cli.py
from pathlib import Path

import server as file_server

def server(site: Path, port: int = 8000):
    file_server.server(directory=site, port=port)
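One detail worth noting: os._exit(0) only terminates the server process, so something outside it has to start a fresh one for an actual reload. A minimal supervisor loop along these lines (hypothetical, not part of the original TIL) would do:

```python
import subprocess
import sys

def supervise(argv: list[str]) -> int:
    """Re-run the command every time it exits cleanly (the reload signal
    used above); stop and return the code on any non-zero exit."""
    while True:
        code = subprocess.call(argv)
        if code != 0:
            return code
        print("Server exited for reload; restarting...")

# supervise([sys.executable, "server.py"])
```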
Categories: FLOSS Project Planets

KDE Ships Frameworks 5.112.0

Planet KDE - Sat, 2023-11-11 19:00

Sunday, 12 November 2023

KDE today announces the release of KDE Frameworks 5.112.0.

KDE Frameworks are 83 addon libraries to Qt which provide a wide variety of commonly needed functionality in mature, peer reviewed and well tested libraries with friendly licensing terms. For an introduction see the KDE Frameworks release announcement.

This release is part of a series of planned monthly releases making improvements available to developers in a quick and predictable manner.

New in this version Baloo
  • [PendingFile] Remove default constructor, METATYPE declaration
  • [PendingFile] Remove unused and incorrect setPath method
Extra CMake Modules
  • Rename prefix.sh.cmake to prefix.sh.in
KActivitiesStats
  • ResultSet: expose agent field
KCalendarCore
  • ICalFormat: don't shift all-day invite dates to UTC (bug 421400)
KConfig
  • kconfigwatcher: do not assert absolute paths
  • dbussanitizer: do not allow trailing slashes
  • dbussanitizer: qassertx to print the path
  • notify: don't try to send or receive dbus notifications on absolute paths
  • more aggressively sanitize dbus paths
KCoreAddons
  • Fix API docs generation for KPluginMetaDataOption enum values
  • Deprecate unused KStringHandler::isUtf8 & KStringHandler::from8Bit
KGlobalAccel
  • Add build option for KF6 coinstallability
KIO
  • KDirModel: Refactor _k_slotClear()
  • KDirModel: Replace 'slow' with 'fsType' naming
  • KDirModel: Reduce calls to isSlow()
  • KDirModel: Limit details fetching for network fs
Kirigami
  • Avatar: Add tests for cyrillic initials
  • Add support for cyrillic initials
KNotification
  • Adapt to notification API and permission changes in Android SDK 33 (bug 474643)
KTextEditor
  • Fix selection shrink when indenting (bug 329247)
NetworkManagerQt
  • Fix incorrect signal signature
  • Remove incorrect comment
  • Listen for both DBus service registration events and interface added events (bug 471870)
Security information

The released code has been GPG-signed using the following key:

pub rsa2048/58D0EE648A48B3BB 2016-09-05 David Faure <faure@kde.org>
Primary key fingerprint: 53E6 B47B 45CE A3E0 D5B7 4577 58D0 EE64 8A48 B3BB

Categories: FLOSS Project Planets

Gunnar Wolf: There once was a miniDebConf in Uruguay...

Planet Debian - Sat, 2023-11-11 16:59

Meeting Debian people for having a good time together, for some good hacking, for learning, for teaching… is always fun and welcome. It brings energy, life and joy. And this year, due to the six-month-long relocation my family and I decided to make to Argentina, I was unable to attend the real deal, DebConf23 in India.

And while I know DebConf is an experience like no other, this year I took part in two miniDebConfs. One I have already shared in this same blog: I was in MiniDebConf Tamil Nadu in India, followed by some days of pre-DebConf preparation and scouting in Kochi proper, where I got to interact with the absolutely great and loving team that prepared DebConf.

The other one is still ongoing (but close to finishing). Some months ago, I talked with Santiago Ruano, joking, as we were Spanish-speaking DDs announcing to the debian-private mailing list we’d be relocating to around Río de la Plata. And things… worked out normally: he has been in Uruguay for several months already, so he decided to rent a house for some days, and invite Debian people to do what we do best.

I left Paraná Tuesday night (and missed my online class at UNAM! Well, you cannot have everything, right?). I arrived early on Wednesday, and around noon came to the house of the keysigning (well, the place is properly called “Casa Key”, it’s a publicity agency that is also rented as a guesthouse in a very nice area of Montevideo, close to Nuevo Pocitos beach).

In case you don’t know it, Montevideo is on the Northern (or Eastern) shore of Río de la Plata, the widest river in the world (up to 300Km wide, with current and non-salty water). But most important for some Debian contributors: You can even come here by boat!

That first evening, we received Ilu, who was in Uruguay by chance for other issues (and we were very happy about it!) and a young and enthusiastic Uruguayan, Felipe, interested in getting involved in Debian. We spent the evening talking about life, the universe and everything… Which was a bit tiring, as I had to interface between Spanish and English, talking with two friends that didn’t share a common language 😉

On Thursday morning, I went out for an early walk at the beach. And lets say, if only just for the narrative, that I found a lost penguin emerging from Río de la Plata!

For those that don’t know (who’d be most of you, as he has not been seen at Debian events for 15 years), that’s Lisandro Damián Nicanor Pérez Meyer (or just lisandro), long-time maintainer of the Qt ecosystem, and one of our embedded world extraordinaires. So, after we got him dry and fed him fresh river fishes, he gave us a great impromptu talk about understanding and finding our way around the Device Tree Source files for development boards and similar machines, mostly in the ARM world.

From Argentina, we also had Emanuel (eamanu) crossing all the way from La Rioja.

I spent most of our first workday getting ① my laptop in shape to be useful as the driver for my online class on Thursday (which is no small feat — people that know the particularities of my much loved ARM-based laptop will understand), and ② running a set of tests again on my Raspberry Pi laboratory, which I had not updated in several months.

I am happy to say we are finally also building Raspberry Pi images for Trixie (Debian 13, Testing)! Sadly, I managed to burn my USB-to-serial-console (UART) adaptor, and could test neither those, nor the oldstable ones we are still building (which will probably soon be dropped, if for no other reason than to save disk space).

We enjoyed a lot of socialization time. An important highlight of the conference for me was that we reconnected with a long-lost DD, Eduardo Trápani, and got him interested in getting involved in the project again! This second day, another local Uruguayan, Mauricio, joined us together with his girlfriend, Alicia, and Felipe came again to hang out with us. Sadly, we didn’t get photographic evidence of them (nor the permission to post it).

The nice house Santiago got for us was very well equipped for a miniDebConf. There were a couple of rounds of pool played by those that enjoyed it (I was very happy just to stand around, take some photos and enjoy the atmosphere and the conversation).

Today (Saturday) is the last full-house day of miniDebConf; tomorrow we will be leaving the house by noon. It was also a very productive day! We had a long conversation about an important discussion that we are about to present on debian-vote@lists.debian.org.

It has been a great couple of days! Sadly, it’s coming to an end… But this at least gives me the opportunity (and moral obligation!) to write a long blog post. And to thank Santiago for organizing this, and Debian, for sponsoring our trip, stay, foods and healthy enjoyment!

Categories: FLOSS Project Planets

Matthias Klumpp: AppStream 1.0 released!

Planet KDE - Sat, 2023-11-11 14:48

Today, 12 years after the meeting where AppStream was first discussed and 11 years after I released a prototype implementation, I am excited to announce AppStream 1.0!

Check it out on GitHub, or get the release tarball or read the documentation or release notes!

Some nostalgic memories

I was not in the original AppStream meeting, since in 2011 I was extremely busy with finals preparations and ball organization in high school, but I still vividly remember sitting at school in the students’ lounge during a break and trying to catch the really choppy live stream from the meeting on my borrowed laptop (a futile exercise, I watched parts of the blurry recording later).

I was extremely passionate about getting software deployment to work better on Linux and to improve the overall user experience, and spent many hours on the PackageKit IRC channel discussing things with many amazing people like Richard Hughes, Daniel Nicoletti, Sebastian Heinlein and others.

At the time I was writing a software deployment tool called Listaller – this was before Linux containers were a thing, and building it was very tough due to technical and personal limitations (I had just learned C!). Then in university, when I intended to recreate this tool, but for real and better this time as a new project called Limba, I needed a way to provide metadata for it, and AppStream fit right in! Meanwhile, Richard Hughes was tackling the UI side of things while creating GNOME Software and needed a solution as well. So I implemented a prototype and together we pretty much reshaped the early specification from the original meeting into what would become modern AppStream.

Back then I saw AppStream as a necessary side-project for my actual project, and didn’t even consider myself the maintainer of it for quite a while (I hadn’t been at the meeting after all). All those years ago I had no idea that ultimately I was developing AppStream not for Limba, but for a new thing that would show up later, with an even more modern design called Flatpak. I also had no idea how incredibly complex AppStream would become and how many features it would have and how much more maintenance work it would be – and also not how ubiquitous it would become.

The modern Linux desktop uses AppStream everywhere now, it is supported by all major distributions, used by Flatpak for metadata, used for firmware metadata via Richard’s fwupd/LVFS, runs on every Steam Deck, can be found in cars and possibly many places I do not know yet.

What is new in 1.0? API breaks

The most important thing that’s new with the 1.0 release is a bunch of incompatible changes. For the shared libraries, all deprecated API elements have been removed and a bunch of other changes have been made to improve the overall API and especially make it more binding-friendly. That doesn’t mean that the API is completely new and nothing looks like before, though: when possible, the previous API design was kept, and some changes that would have been too disruptive have not been made. Regardless of that, you will have to port your AppStream-using applications. For some larger ones I already submitted patches to build with both AppStream versions, the 0.16.x stable series as well as 1.0+.

For the XML specification, some older compatibility for XML that had no or very few users has been removed as well. This affects for example release elements that reference downloadable data without an artifact block, which has not been supported for a while. For all of these, I checked to remove only things that had close to no users and that were a significant maintenance burden. So as a rule of thumb: If your XML validated with no warnings with the 0.16.x branch of AppStream, it will still be 100% valid with the 1.0 release.

Another notable change is that the generated output of AppStream 1.0 will always be 1.0 compliant, you can not make it generate data for versions below that (this greatly reduced the maintenance cost of the project).

Developer element

For a long time, you could set the developer name using the top-level developer_name tag. With AppStream 1.0, this is changed a bit. There is now a developer tag with a name child (that can be translated unless the translate="no" attribute is set on it). This allows future extensibility, and also allows to set a machine-readable id attribute in the developer element. This permits software centers to group software by developer more easily, without having to use heuristics. If we decide to extend the developer information per-app in future, this is also now possible. Do not worry though: the developer_name tag is also still read, so there is no high pressure to update. The old 0.16.x stable series also has this feature backported, so it can be available everywhere. Check out the developer tag specification for more details.

Scale factor for screenshots

Screenshot images can now have a scale attribute, to indicate an (integer) scaling factor to apply. This feature was a breaking change and therefore we could not have it for the longest time, but it is now available. Please wait a bit for AppStream 1.0 to become more widely deployed though, as using it with older AppStream versions may lead to issues in some cases. Check out the screenshots tag specification for more details.

Screenshot environments

It is now possible to indicate the environment a screenshot was recorded in (GNOME, GNOME Dark, KDE Plasma, Windows, etc.) via an environment attribute on the respective screenshot tag. This was also a breaking change, so use it carefully for now! If projects want to, they can use this feature to supply dedicated screenshots depending on the environment the application page is displayed in. Check out the screenshots tag specification for more details.
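As a rough illustration of the tags discussed above, here is a hypothetical metainfo fragment exercising the new developer element plus the scale and environment attributes, parsed with Python's xml.etree just to show the structure. The ids and values are invented; check the specification pages for the authoritative forms.

```python
import xml.etree.ElementTree as ET

# Invented fragment following the post's description of the 1.0 tags.
snippet = """
<component type="desktop-application">
  <id>org.example.App</id>
  <developer id="org.example">
    <name translate="no">Example Project</name>
  </developer>
  <screenshots>
    <screenshot type="default" environment="plasma">
      <image type="source" scale="2">https://example.org/shot.png</image>
    </screenshot>
  </screenshots>
</component>
"""
root = ET.fromstring(snippet)
dev = root.find("developer")
print(dev.get("id"), dev.findtext("name"))
# → org.example Example Project
```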

References tag

This is a feature more important for the scientific community and scientific applications. Using the references tag, you can associate the AppStream component with a DOI (Digital object identifier) or provide a link to a CFF file to provide citation information. It also allows linking to other scientific registries. Check out the references tag specification for more details.

Release tags

Releases can have tags now, just like components. This is generally not a feature that I expect to be used much, but in certain instances it can become useful with a cooperating software center, for example to tag certain releases as long-term supported versions.

Multi-platform support

Thanks to the interest and work of many volunteers, AppStream (mostly) runs on FreeBSD now, a NetBSD port exists, support for macOS was written and a Windows port is on its way! Thank you to everyone working on this!

Better compatibility checks

For a long time I thought that the AppStream library should just be a thin layer above the XML and that software centers should just implement a lot of the actual logic. This has not been the case for a while, but there were still a lot of complex AppStream features that were hard for software centers to implement and where it makes sense to have one implementation that projects can just use.

The validation of component relations is one such thing. This was implemented in 0.16.x as well, but 1.0 vastly improves upon the compatibility checks, so you can now just run as_component_check_relations and retrieve a detailed list of whether the current component will run well on the system. Besides better API for software developers, the appstreamcli utility also has much improved support for relation checks, and I wrote about these changes in a previous post. Check it out!

With these changes, I hope this feature will be used much more, and beyond just drivers and firmware.

So much more!

The changelog for the 1.0 release is huge, and there are many papercuts resolved and changes made that I did not talk about here, like us using gi-docgen (instead of gtkdoc) now for nice API documentation, or the many improvements that went into better binding support, or better search, or just plain bugfixes.

Outlook

I expect the transition to 1.0 to take a bit of time. AppStream has not broken its API for many, many years (since 2016), so a bunch of places need to be touched even if the changes themselves are minor in many cases. In hindsight, I should have also released 1.0 much sooner and it should not have become such a mega-release, but that was mainly due to time constraints.

So, what’s in it for the future? Contrary to what I thought, AppStream does not really seem to be “done” and feature complete at any point; there is always something to improve, and people come up with new usecases all the time. So, expect more of the same in future: bugfixes, validator improvements, documentation improvements, better tools and the occasional new feature.

Onwards to 1.0.1!

Categories: FLOSS Project Planets


ListenData: NumPy argmin() Function : Learn with Examples

Planet Python - Sat, 2023-11-11 09:13

In this tutorial, we will see how to use the NumPy argmin() function in Python along with examples.

To read this article in full, please click here. This post appeared first on ListenData.
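As a quick taste of what the tutorial covers, a minimal illustration of numpy.argmin (my own example, not taken from the article):

```python
import numpy as np

arr = np.array([[4, 2, 9],
                [1, 7, 5]])
print(np.argmin(arr))          # index into the flattened array → 3
print(np.argmin(arr, axis=0))  # row index of each column's minimum → [1 0 1]
print(np.argmin(arr, axis=1))  # column index of each row's minimum → [1 0]
```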
Categories: FLOSS Project Planets

Reproducible Builds: Reproducible Builds in October 2023

Planet Debian - Sat, 2023-11-11 07:39

Welcome to the October 2023 report from the Reproducible Builds project. In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

Reproducible Builds Summit 2023

Between October 31st and November 2nd, we held our seventh Reproducible Builds Summit in Hamburg, Germany!

Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort, and this instance was no different.

During this enriching event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. A number of concrete outcomes from the summit will be documented in the report for November 2023 and elsewhere.

Amazingly, the agenda and notes from all sessions are already online.

The Reproducible Builds team would like to thank our event sponsors who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.


Reflections on Reflections on Trusting Trust

Russ Cox posted a fascinating article on his blog prompted by the fortieth anniversary of Ken Thompson’s award-winning paper, Reflections on Trusting Trust:

[…] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically “can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?”

Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code… which Russ requests and subsequently dissects in great but accessible detail.


Ecosystem factors of reproducible builds

Rahul Bajaj, Eduardo Fernandes, Bram Adams and Ahmed E. Hassan from the Maintenance, Construction and Intelligence of Software (MCIS) laboratory within the School of Computing, Queen’s University in Ontario, Canada have published a paper on the “Time to fix, causes and correlation with external ecosystem factors” of unreproducible builds.

The authors compare various response times within the Debian and Arch Linux distributions including, for example:

Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.

A full PDF of their paper is available online, as are many other interesting papers on MCIS’ publication page.


NixOS installation image reproducible

On the NixOS Discourse instance, Arnout Engelen (raboof) announced that NixOS have created an independent, bit-for-bit identical rebuilding of the nixos-minimal image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:

You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn’t until this week that we were back to having everything reproducible and being able to validate the complete chain.

Congratulations to NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.


CPython source tarballs now reproducible

Seth Larson published a blog post investigating the reproducibility of the CPython source tarballs. Using diffoscope, reprotest and other tools, Seth documents his work that led to a pull request to make these files reproducible which was merged by Łukasz Langa.


New arm64 hardware from Codethink

Long-time sponsor of the project, Codethink, have generously replaced our old “Moonshot-Slides”, which they have hosted since 2016, with new KVM-based arm64 hardware. Holger Levsen integrated these new nodes into the Reproducible Builds’ continuous integration framework.


Community updates

On our mailing list during October 2023 there were a number of threads, including:

  • Vagrant Cascadian continued a thread about the implementation details of a “snapshot” archive server required for reproducing previous builds. []

  • Akihiro Suda shared an update on BuildKit, a toolkit for building Docker container images. Akihiro links to a interesting talk they recently gave at DockerCon titled Reproducible builds with BuildKit for software supply-chain security.

  • Alex Zakharov started a thread discussing and proposing fixes for various tools that create ext4 filesystem images. []

Elsewhere, Pol Dellaiera made a number of improvements to our website, including fixing typos and links [][], adding a NixOS “Flake” file [] and sorting our publications page by date [].

Vagrant Cascadian presented Reproducible Builds All The Way Down at the Open Source Firmware Conference.


Distribution work

distro-info is a Debian-oriented tool that can provide information about Debian (and Ubuntu) distributions such as their codenames (e.g. bookworm) and so on. This month, Benjamin Drung uploaded a new version of distro-info that added support for the SOURCE_DATE_EPOCH environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues.
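SOURCE_DATE_EPOCH is the Reproducible Builds convention for overriding “the current time” with a fixed timestamp, so that rebuilds don’t differ just because they ran later. A minimal sketch of how a tool typically honours it (illustrative only, not distro-info’s actual code):

```python
import os
import time

def build_timestamp() -> int:
    """Honour SOURCE_DATE_EPOCH if set, per the Reproducible Builds
    specification; otherwise fall back to the wall clock
    (which makes the build non-reproducible)."""
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return int(epoch)
    return int(time.time())

# A build environment pins the variable, so every rebuild sees the same time.
os.environ["SOURCE_DATE_EPOCH"] = "1700000000"
print(build_timestamp())  # 1700000000
```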

Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.


Software development

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

In addition, Chris Lamb fixed an issue in diffoscope so that, if the equivalent of file -i returns text/plain, it falls back to comparing the inputs as text files. This was originally filed as Debian bug #1053668 by Niels Thykier. [] This was then uploaded to Debian (and elsewhere) as version 251.
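The gist of the fix: when MIME detection yields only the generic text/plain, fall back to a plain text comparison rather than giving up. This simplified illustration is not diffoscope’s actual code; the function and file names are hypothetical:

```python
import difflib

def compare_files(a: str, b: str, mime: str) -> list[str]:
    """If MIME detection only produced the generic text/plain,
    fall back to a unified text diff instead of reporting the
    files as uncomparable."""
    if mime == "text/plain":
        with open(a) as fa, open(b) as fb:
            return list(difflib.unified_diff(
                fa.readlines(), fb.readlines(), fromfile=a, tofile=b))
    raise NotImplementedError(f"no specialised comparator for {mime}")

# Two small text files that differ in one line.
with open("a.txt", "w") as f:
    f.write("hello\nworld\n")
with open("b.txt", "w") as f:
    f.write("hello\nthere\n")

diff = compare_files("a.txt", "b.txt", "text/plain")
print("".join(diff))
```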


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework (available at tests.reproducible-builds.org) in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Refine the handling of package blacklisting, such as sending blacklisting notifications to the #debian-reproducible-changes IRC channel. [][][]
    • Install systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). []
    • Detect more cases of failures to delete schroots. []
    • Document various bugs in bookworm which are (currently) being manually worked around. []
  • Node-related changes:

  • Monitoring-related changes:

    • Remove unused Munin monitoring plugins. []
    • Complain less visibly about “too many” installed kernels. []
  • Misc:

    • Enhance the firewall handling on Jenkins nodes. [][][][]
    • Install the fish shell everywhere. []

In addition, Vagrant Cascadian added some packages and configuration for snapshot experiments. []


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

Categories: FLOSS Project Planets

Porting KDE Android apps to Qt6/KF6

Planet KDE - Sat, 2023-11-11 04:45

With the first Qt 6 based release of KDE software rapidly approaching, one thing has still been lagging behind in the porting: working Android APK packages. After recent changes to our build and deployment infrastructure, to Qt and to KDE Frameworks, we are getting closer to that though.

Code changes

One of the larger areas of source incompatible change in Qt 6 was the removal of the QtAndroidExtras module, affecting for example:

  • Interfacing with JNI code via QAndroidJniObject (now QJniObject in Qt6::Core).
  • Access to the context or activity handle via QtAndroid (now QAndroidApplication in Qt6::Core).
  • Permission checks and requests via QtAndroid (if you are lucky now with the new QPermission API in Qt6::Core, otherwise with private API in QtCore/private/qandroidextras_p.h).

A lot of this has been taken care of already though, as we have had Qt 6 Android coverage on the KDE CI for quite some time, and all of this is doable while still keeping compatibility with Qt 5.

For most of the following this is unfortunately not the case though, so those changes can only be applied when Qt 6 is required. The Java code is such a case: if you inherit from org.qtproject.qt5.android.bindings.QtActivity, for example, the qt5 part of the fully qualified class name is now just qt.

Build system and packaging changes

If your app doesn’t have a custom Java Activity class you’ll find the above fully qualified class name in the Android Manifest XML file; it needs to be changed there in the same way.

<application android:name="org.qtproject.qt.android.bindings.QtApplication"
             android:label="My App"
             android:icon="@mipmap/ic_launcher">
    <activity android:name="org.qtproject.qt.android.bindings.QtActivity"
              android:label="My App"
              ...
              android:exported="true">

If you have a custom build.gradle file instead of implicitly using the one from the Qt template, that will also need a few adjustments. Anything mentioning qt5 is an obvious candidate, but there’s likely more. The easiest way here is probably to rebase your customizations on top of the file provided by Qt in $PREFIX/src/android/templates/build.gradle.

And finally the CMake part of the build system will need changes as well. In KF5, ECM created APKs magically, but the way this was injected into the CMake toolchain file clashes with Qt 6 now having its own CMake toolchain file. So we need an explicit CMake API for creating APKs:

include(ECMAddAndroidApk)
ecm_add_android_apk(myapp ANDROID_DIR ${CMAKE_CURRENT_SOURCE_DIR}/android)

Craft changes

Building the APK with Craft is then mostly a matter of configuration. As most dependencies don’t have a Qt 6 based release yet, you will likely need to switch all of those to use the latest Git version.

[BlueprintSettings]
libs/qt.qtMajorVersion=6
kde/frameworks.version=master

In a few cases you might also encounter Craft blueprints that still unconditionally pull in Qt 5 and thus likely break the build. Dependencies on Qt modules that no longer exist in Qt 6 are a possible reason for example. A corresponding condition on the Qt version tends to fix that.

if CraftPackageObject.get("libs/qt").instance.subinfo.options.dynamic.qtMajorVersion == "5":
    self.runtimeDependencies["libs/qt5/qtquickcontrols2"] = None

CI/CD changes

With the Jenkins-based binary factory being up for retirement, Qt 6 based APKs can only be built on the GitLab CI on KDE Invent. So this has to be set up and configured differently than with Qt 5 as well.

Defining APK build jobs now happens in the .gitlab-ci.yml file in the corresponding repository, by including the respective templates in the same way other CI jobs are defined. In many cases this will also imply switching to the new style of including templates that Ingo described here.

include:
  - project: sysadmin/ci-utilities
    file:
      - /gitlab-templates/craft-android-apks.yml

The location for customized Craft settings also changed, those now go into a .craft.ini file at the top-level of the application repository. For now you’ll typically need at least the above mentioned version settings there.

Examples

A few apps already have these changes applied and might serve as examples:

Remaining issues

While for Kongress the above will result in a properly signed and working Qt 6 based APK, that’s unfortunately not the case for all apps yet. One of the larger known issues is that the Breeze Qt Quick Controls style doesn’t load on Android, so apps including that will still fail to start.

Categories: FLOSS Project Planets

This week in KDE: Wayland by default, de-framed Breeze, HDR games, rectangle screen recording

Planet KDE - Sat, 2023-11-11 00:58

Yep you read that right, we’ve decided to throw the lever and go Wayland by default! The three remaining showstoppers are in the process of being fixed and we expect them to be done soon–certainly before the final release of Plasma 6. So we wanted to make the change early to gather as much feedback as possible.

But that’s not all, of course. This was another big week! Read on to see the rest:

Plasma 6

(Includes all software to be released on the February 28th mega-release: Plasma 6, Frameworks 6, and apps from Gear 24.02)

General info

Open issues: 118

The Breeze app style has gotten the visual overhaul you’ve all dreamed of: no more frames within frames! Instead Breeze-themed apps adopt the clean design of modern Kirigami apps, with views separated from one another with single-pixel lines! (Carl Schwan, link 1, link 2, link 3, link 4, link 5, link 6, link 7, link 8, link 9, and link 10):

In the Plasma Wayland session, there’s now preliminary support for playing HDR-capable games when using an HDR-capable screen! (Xaver Hugl, link)

Spectacle has gained support for rectangular region screen recording! (Noah Davis, link)

System Settings’ Printers page has gotten a major overhaul and now includes the features internally that it used to direct you to external apps for. The result is much nicer and more integrated, without a cascading soup of dialog windows (Mike Noe, link):

The Plasma Panel settings have been redesigned again, and this time everything is in one dialog; no more nested sub-menus! This work fixed 14 open bug reports (Niccolò Venerandi and Marco Martin, link 1 and link 2):

Ark is now significantly faster to compress files using xz and zstd compression, as they are now multi-threaded (Zhangzhi Hu, link)

When you run Flatpak apps, they’ll no longer prompt you to approve or deny “background activity”; the whole concept of this has been removed as it was kinda sus and not useful at all (David Edmundson, link)

There’s now a simple setting to disable notification sounds systemwide, for those of you who don’t like them (Ismael Asensio, link 1, link 2, and link 3):

Improved Plasma’s start time rather significantly–up to a few whole seconds (Harald Sitter, link)

The double-click speed setting returns, and now lives on System Settings’ General Behavior page. Before you ask why it’s not on the mouse Page, it’s because it affects touchpads too and that has its own page, and duplicating the setting on both pages seemed messy and ugly (Nicolas Fella, link)

Syncing your Plasma settings to SDDM now also syncs your desired NumLock state on boot (Chandradeep Dey, link)

In QtWidgets-based apps, you can open the context menu for the selected thing with the Shift+F10 shortcut (Felix Ernst, link 1 and link 2)

You can now open System Monitor with the Meta+Escape shortcut (Arjen Hiemstra, link)

Significant Bugfixes

(This is a curated list of e.g. HI and VHI priority bugs, Wayland showstoppers, major regressions, etc.)

Fixed a wide variety of multi-screen issues related to screens sometimes not turning on at the right times or becoming visually frozen until going to another VT and back (Xaver Hugl, Plasma 6.0. Link)

Fixed a bug that could cause desktop icon positions to be mis-remembered, especially if the system has ever had multiple screens connected (Harald Sitter, Plasma 5.27.10. Link)

Fixed a bug that could cause Night Color to start transitioning to night mode at inappropriate times when using a certain combination of settings (Ismael Asensio, Plasma 5.27.10. Link)

Just in case you have a window that fails to set a minimum size, KWin no longer lets you resize it to a width of zero pixels, whereupon it would become invisible and impossible to find (Xaver Hugl, Plasma 6.0. Link)

In the “Get new [thing]” dialogs, items’ full descriptions are now visible, instead of getting cut off at some point (Ismael Asensio, Plasma 6.0. Link)

Other bug-related information of interest:

Automation & Systematization

Added a GUI test to make sure that Panel Edit Mode can be entered (Fushan Wen, link)

Added some GUI tests for functionality of the wallpaper chooser dialog (Fushan Wen, link)

Added some GUI tests for KRunner’s plugins and their presence in the relevant System Settings page (Fushan Wen, link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

We’re hosting our Plasma 6 fundraiser right now and need your help! Thanks to you we’re past the 60% mark, but we’re not there yet! So if you like the work we’re doing, spreading the wealth is a great way to share the love.

If you’re a developer, work on Qt6/KF6/Plasma 6 issues! Which issues? These issues. Plasma 6 is usable for daily driving now, but still in need of bug-fixing and polishing to get it into a releasable state by February.

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

Web Review, Week 2023-45

Planet KDE - Fri, 2023-11-10 19:20

A bit later than usual since I got a failure on my hosting infrastructure which required some love. Anyway, let’s go for my web review for the week 2023-45.

A new home and license (AGPL) for Synapse and friends

Tags: tech, matrix, licensing

Interesting move in the Matrix space. It’s nice to see them go for a dual license business model involving AGPLv3. I’m a bit more concerned about the CLA though. Let’s hope they set up something equivalent to the KDE-FreeQt Foundation via the Matrix Foundation. Otherwise, AFAICT, there’s no safeguard against some nefarious relicensing years down the line.

https://element.io/blog/element-to-adopt-agplv3/


Rule Ambiguity, Institutional Clashes, and Population Loss: How Wikipedia Became the Last Good Place on the Internet

Tags: tech, wikipedia, community, politics

Interesting exploration of the Wikipedia community dynamics. This explains quite a few things on its evolution. It highlights how it became a beacon of sanity in the insane political landscape we’re collectively facing.

https://www.cambridge.org/core/journals/american-political-science-review/article/rule-ambiguity-institutional-clashes-and-population-loss-how-wikipedia-became-the-last-good-place-on-the-internet/FC3F7B9CBF951DD30C2648E7DEFB65EE


Introducing Steam Deck OLED

Tags: tech, kde, hardware, repair

Looks like Valve is delivering on its promise to iterate further on their hardware. They also paid further attention to repairability, which is very welcome. It’ll put KDE products in an even better light now. 😉

https://www.steamdeck.com/en/oled


Critical vulnerability in Atlassian Confluence server is under “mass exploitation” | Ars Technica

Tags: tech, atlassian, security

This is indeed a very nasty vulnerability. It won’t improve my low trust in this product. They’ve been trying to phase it out for a while, and it shows now.

https://arstechnica.com/security/2023/11/critical-vulnerability-in-atlassian-confluence-server-is-under-mass-exploitation/


AI Entity Resolution: Bridging Records Across Human Languages - TerminusDB

Tags: tech, vector, databases, ai, machine-learning

Ever wondered what you can do with vector databases and LLMs? Here is an interesting use case.

https://terminusdb.com/blog/ai-entity-resolution/


tailspin: 🌀 A log file highlighter

Tags: tech, logging, tools, command-line

Looks like a nice tool. Should nicely complement my trusty lnav for unsupported formats.

https://github.com/bensadeh/tailspin


Ninja is enough build system | Max Bernstein

Tags: tech, tools, buildsystems, ninja

Interesting tidbits I didn’t know about. The little Python API provided to generate Ninja files could turn out interesting.

https://bernsteinbear.com//blog/ninja-is-enough/


dotree: A small, interactive command runner

Tags: tech, tools, command-line

Looks like a neat tool for the less common commands you still need to reach easily.

https://github.com/KnorrFG/dotree


Backtraces with strace :: Words from Shane

Tags: tech, system, tools, command-line

OK, I admit I missed the introduction of this flag in strace as well. Super interesting, it can definitely be useful.

https://shane.ai/posts/backtraces-with-strace/


git rebase: what can go wrong?

Tags: tech, git, tools

I tend to encourage people to master git rebase. In any case this comes with a few warnings, so do it with care. This article does a good job pointing out the caveats of the rebase command.

https://jvns.ca/blog/2023/11/06/rebasing-what-can-go-wrong-/


What Happens When You Enter a URL into a Browser

Tags: tech, web, browser, http, learning

Nothing groundbreaking if you already know about the topic. But very nice introductory resource for people who wish to learn about it. Nicely put together.

https://medium.com/@atakanserbes/web-navigation-demystified-what-happens-when-you-enter-a-url-into-a-browser-39d8f2043b19


What is a Query Optimizer for?

Tags: tech, sql, databases

Interesting view on the motives and overall behavior of query planners.

https://justinjaffray.com/what-is-a-query-optimizer-for/


5 Inconvenient Truths about TypeScript

Tags: tech, web, frontend, typescript

I like this kind of balanced view. Indeed, TypeScript isn’t all roses; still, it’s worth using in complex cases.

https://oida.dev/5-truths-about-typescript/


Going up in color bit depth

Tags: tech, graphics, colors

This is a nice trick when converting colors.

https://30fps.net/pages/bit-depths/


Shoelace: A forward-thinking library of web components.

Tags: tech, webcomponents, web, frontend

Another library of web components. This seems to be picking up, and it’s welcome.

https://shoelace.style/


A better explanation of the Liskov Substitution Principle

Tags: tech, object-oriented, teaching

One of the toughest object-oriented programming principles to apply properly in my opinion. At least it looks like we found a better way to teach it now.

https://www.hillelwayne.com/post/lsp/


10 hard-to-swallow truths they won’t tell you about software engineer job

Tags: tech, engineering, career

It sometimes feels a bit like caricature… but there’s some truth grounded in this article. The faster new software engineers internalize the proposed “truths”, the better for their own mental health.

https://www.mensurdurakovic.com/hard-to-swallow-truths-they-wont-tell-you-about-software-engineer-job/


no hello

Tags: tech, messaging, remote-working

Yes, we definitely shouldn’t use chats like the phone. I often fail at this, so it’s also a good reminder for me.

https://nohello.net/en/


Bye for now!

Categories: FLOSS Project Planets

First week of fulltime KDE

Planet KDE - Fri, 2023-11-10 19:00

Seems getting laid off was pretty good for me after all. Funny how things go sometimes.

A company that works on KDE stuff (and other Linuxy things), interviewed me for a fun job: "Wanna help us work on KDE Plasma?"

You bet I said YES!

So now I work 8 hours a day, 5 days a week, to improve KDE Plasma! It's contract work, but I hope the people there like me enough to keep me around. :) At least I am planning to be around for the long haul!

Anyway, this was my first week doing this job!

Everyone I work with is really nice and most of them I have already met during my contribution adventures.

The job itself for now has been about helping find and fix bugs in KDE Plasma. I've mostly concentrated on bugs that can appear when moving from 5 to 6, like migrating configs, etc.

There's some other stuff I'm doing as well, but they're not that visible to end user, necessarily. Like fixing warnings.

I am also focusing on learning the stack and hopefully will eventually get to work more on Flatpak-related and KWin-related things, since those interest me. I am also hoping to help with accessibility, like high-contrast color schemes and such. And who knows what else I will work on in the future!

Also due to my experience in test automation, I have taken on the sidequest to help with that part as well. I have been quite interested in how test automation of Linux desktop apps works, and there's quite cool stuff going on there.

In my personal KDE plans, I want to add color customization options for separators, since those can be used in high contrast themes. But since it would be any custom color the user wants, well, they can set it to anything. Anyhow, I will first prioritize fixing bugs and learning more.

Thank you so much for the company who took me under their wing. And thanks to my colleagues for helping me get into this and teaching me things, even when my questions can be a bit dumb at times.. :'D You know who you are!

Expect more posts in future about what I learn during this job! :)

Thanks for reading!

Categories: FLOSS Project Planets

Jonathan Dowland: Plato document reader

Planet Debian - Fri, 2023-11-10 17:03

Kobo Libra 2

text-handling in Plato

Until now, I haven't hacked my Kobo Libra 2 ereader, despite knowing it is a relatively open device. The default document reader (Nickel) does everything I need it to. Syncing the books via USB is tedious, but I don't do it that often.

Via Videah's blog post My E-Reader Setup, I learned of Plato, an alternative document reader.

Plato doesn't really offer any headline features that I need, but it cost me nothing to try it out, so I installed it (fairly painlessly) and launched it just once. The library view seems good, although I've not used it much: I picked a book and read it through[1], and I'm 60% through another[2]. I tend to read one ebook at a time.

The main reader interface is great: Just the text[3]. Page transitions are really, really fast. Tweaking the backlight intensity is a little slower than Nickel: menu-driven rather than an active scroll region (which is convenient in Nickel but easy to accidentally turn to 0% and hard to recover from in pitch black).

Now that I've started down the road of hacking the Kobo, I think I will explore wifi-syncing the library, perhaps using a variation on the hook scripts shared in Videah's blog post.

  1. Venomous Lumpsucker by Ned Beauman. It's fantastic. Guardian review
  2. There Is No Antimemetics Division by qntm
  3. I do miss Nickel's tiny progress bar somewhat: the only non-text bit of UX I left turned on.
Categories: FLOSS Project Planets

Fonts! Fonts! Fonts!

Planet KDE - Fri, 2023-11-10 14:44

Continuing with the design system talk, I wanted to show you my selection for my system’s font.

Ta dah! INTER

The Inter font family is no stranger to UI work. It is currently featured in a few OSes and is gaining prominence in the mobile space for its direct and sharp style.

I have been through a few fonts, and while users can decide to change this in our settings, it’s also very important to have a good selection by default. I imagine many users choose Plasma but are not the kind to tinker with the system too much. For them, selecting a strong, very readable font is very important.

If you want to know more about this font, you can find some good articles here:

https://github.com/rsms/inter

I have been designing with this font for some time and I am very happy with the results. Now, I should say that this is a personal preference that has good backing from industry use. However, it is not the “best” font there could ever be. That’s a subjective opinion and we all have one. It is also not a race, more like a phase.

Below you will see a spread of how this font is used in my design system. There are combinations that work really well between the Display and Text areas.

For an in-depth explanation of how to apply fonts to your system from a design system, we can use the Material Type System.

https://m2.material.io/design/typography/the-type-system.html#applying-the-type-scale
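At its core, a type scale like Material’s is just a base size multiplied along a fixed ratio to produce the Display/Text steps. A toy calculation of such a scale (the ratio and style names here are illustrative, not Material’s exact values):

```python
def type_scale(base: float = 16.0, ratio: float = 1.25, steps: int = 5) -> dict[str, float]:
    """Generate font sizes by repeatedly multiplying a base size by a ratio."""
    names = ["body", "subtitle", "title", "headline", "display"]
    return {name: round(base * ratio ** i, 1)
            for i, name in enumerate(names[:steps])}

scale = type_scale()
print(scale)  # {'body': 16.0, 'subtitle': 20.0, 'title': 25.0, ...}
```

Keeping the steps on a single ratio is what makes the Display and Text combinations feel harmonious rather than arbitrary.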

As you can see, the measurements are placed on the right of each row for a better explanation of how they should work. In a design application like Figma or Penpot, these font styles are easier to pick and apply, leading to faster designing.

I have included a PDF version in case the PNG is too small. Please note that the font sizes may be small in the graphic, but this does not mean they will be small when applied in a system.

I have included some screenshots of experimental mockups I am working on for you to see these fonts in action.

Typography (PDF download)
Categories: FLOSS Project Planets

KDE: Krita 5.2.1 Snap! KDE Gear 23.08.3 Snaps and KDE neon release

Planet KDE - Fri, 2023-11-10 13:45

Today https://kde.org/announcements/gear/23.08.3/ !

I have finished all the snaps and have released them to the stable channel; if the snap you are looking for hasn’t arrived yet, there is an MR open and it will land soon!

I have finished all the applications in KDE neon and they are available in Unstable; I am snapshotting the User edition, and they will be available shortly.

Krita 5.2.1 Snap is complete and released to stable channel!

Enjoy!

I fixed some issues with a few of our --classic snaps, namely in Wayland sessions, by bundling some missing Wayland Qt libs. They should no longer go BOOM upon launch.

The KF6 SDK snap is complete. In my next free time I will work on the runtime and launcher.

I'm down to the last part of the Akonadi snap build, so PIM snaps are coming soon.

Personal:

As many of you know, I have been out of proper employment for a year now. I had a hopeful project in the works, but it is out of my hands now; the new project holder was only allowed to give me part time, and even that is still not set in stone, with further delays. I understand that these things take time and refinement to go through.

I have put myself and my family in dire straits with my stubbornness and need to re-evaluate my priorities. I enjoy doing this work very much, but I also need to pay some very overdue bills, and, well, life costs money. With that said, I hope to have an interview next week with a local hospital that needs a Linux Administrator. Who knew someone in nowhere Arizona would have a Linux shop! Anyway, I will be going back to my grass roots; network administration is where I started way back in 1996.

I will still be around! Just not at the level I am now, obviously. I will still be in the project if they allow; I need 2 jobs to clean up this mess I have made for myself. In my spare time I will of course keep up with Debian, KDE neon, and Snaps!

If you can spare any change to help with my gas for the interview and the 45-minute commute until I get a paycheck, I would be super grateful. Hopefully I won’t have to ask for much longer. Thank you so much to everyone that has helped over the last year; it means the world to me.

https://gofund.me/b8b69e54

Categories: FLOSS Project Planets
