FLOSS Project Planets
Sandro Knauß: QML Dependency tracking in Debian
Dependency tracking in Debian can resolve symbol usage to a library and add that library to a package's list of dependencies; this has worked for years now. Meanwhile, the KDE community creates more and more QML-based applications. Unfortunately, QML is an interpreted language, so missing QML dependencies only become an issue at runtime.
To fix this I created dh_qmldeps, which searches for QML dependencies at build time and fails if it can't resolve a QML dependency.
I didn't write my own QML interpreter; dh_qmldeps just uses qmlimportscanner behind the scenes and processes its output further to resolve the QML modules to Debian packages.
The workflow is as follows:
The package is compiled normally and split into binary packages. Then dh_qmldeps scans the package content for QML files (.qml files, or qmldir for QML modules). All files found are scanned by qmlimportscanner, whose output is a list of QML modules the package depends on. As QML modules have standardized file paths, we can ask the Debian system which packages ship those paths. We end up with a list of Debian packages in the variable ${qml6:Depends}, which can be attached to the list of dependencies of the scanned package. A maintainer can also lower some dependencies to Recommends or Suggests, if needed.
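For illustration, here is a rough Python sketch of that resolution idea. This is not the actual dh_qmldeps implementation (see the source on salsa); it assumes qmlimportscanner's -rootPath option and JSON output, and uses dpkg -S for the path-to-package lookup:

```python
# Sketch: map QML imports under a directory to the Debian packages shipping them.
import json
import subprocess

def qml_module_packages(rootpath):
    scan = subprocess.run(
        ["qmlimportscanner", "-rootPath", rootpath],
        capture_output=True, text=True, check=True,
    )
    packages = set()
    for imp in json.loads(scan.stdout):
        # Only module imports resolved to a standardized file path can be
        # mapped; an unresolved import is where dh_qmldeps fails the build.
        if imp.get("type") != "module" or "path" not in imp:
            continue
        # Ask the Debian system which package ships this path.
        owner = subprocess.run(
            ["dpkg", "-S", imp["path"]],
            capture_output=True, text=True, check=True,
        )
        packages.add(owner.stdout.split(":")[0])
    return packages  # candidate content for ${qml6:Depends}
```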
You can find the source code on salsa and the usage documentation at https://qt-kde-team.pages.debian.net/dh_qmldeps.html.
Over the last weeks I enabled dh_qmldeps for nearly every package that creates a QML6 module package. The first bugs are solved, so it should now be usable for more packages.
By scanning all the code with qmlimportscanner, I found several imports of non-existent QML modules:
- import QtQuick3DPrivate qt6-multimedia - no Private QML module QTBUG-131753.
- import QtQuickPrivate qt6-graphs - no Private QML module QTBUG-131754.
- import QtQuickTimeline qt6-quicktimeline - the correct QML name is QtQuick.Timeline QTBUG-131755.
- import QtQuickControls2 qt6-webengine - looks like a porting bug as the QML6 modules name is QtQuick.Controls QTBUG-131756.
- import QtGraphicalEffects kquickimageeditor - the correct QML6 name is qt5compat.graphicaleffects; probably nobody noticed because it is only used in an example kquickimageeditor!7.
YEAH - the first milestone is reached: we can now handle QML modules easily.
For QML applications there is still room for improvement, though. In apps, the QML files are embedded inside the executable. Additionally, applications create internal QML modules that are shipped directly in the same executable. I'm still searching for a good way to analyse an executable to get a list of internal QML modules and a list of included QML files. Any ideas are welcome :)
As a workaround, dh_qmldeps currently scans all QML files in the application source code.
Dima Kogan: Strava track filtering validation
After years of seeing people's strava tracks, I became convinced that they insufficiently filter the data, resulting in over-estimating the effort. Today I did a bit of lazy analysis, and half-confirmed this: in the one case I looked at, strava reported reasonable elevation gain numbers, but greatly overestimated the distance traveled.
I looked at a single gps track of a long bike ride. This was uploaded to strava manually, as a .gpx file. I can imagine that different things happen if you use the strava app or some device that integrates with the service (the filtering might happen before the data hits the server, and the server could decide to not apply any more filtering).
I processed the data with a simple hysteretic filter, ignoring small changes in position and elevation, trying out different thresholds in the process. I completely ignore the timestamps, and only look at the differences between successive points. This handles the usual GPS noise; it does not handle GPS jumps, which I completely ignore in this analysis. Ignoring these would produce inflated elevation/gain numbers, but I'm working with a looong track, so hopefully this is a small effect.
Clearly this is not scientific, but it's something.
The code
Parsing .gpx is slow (this is a big file), so I cache that into a .vnl:
```python
import sys
import gpxpy

filename_in  = 'INPUT.gpx'
filename_out = 'OUTPUT.vnl'

with open(filename_in, 'r') as f:
    gpx = gpxpy.parse(f)

f_out = open(filename_out, 'w')

tracks = gpx.tracks
if len(tracks) != 1:
    print("I want just one track", file=sys.stderr)
    sys.exit(1)
track = tracks[0]

segments = track.segments
if len(segments) != 1:
    print("I want just one segment", file=sys.stderr)
    sys.exit(1)
segment = segments[0]

time0 = segment.points[0].time

print("# time lat lon ele_m", file=f_out)
for p in segment.points:
    print(f"{(p.time - time0).seconds} {p.latitude} {p.longitude} {p.elevation}",
          file=f_out)
```

And I process this data with the different filters (this is a silly Python loop, and is slow):
```python
#!/usr/bin/python3

import sys
import numpy as np
import numpysane as nps
import gnuplotlib as gp
import vnlog
import pyproj

geod = None
def dist_ft(lat0,lon0, lat1,lon1):
    global geod
    if geod is None:
        geod = pyproj.Geod(ellps='WGS84')
    return \
        geod.inv(lon0,lat0, lon1,lat1)[2] * 100./2.54/12.

f = 'OUTPUT.vnl'

track,list_keys,dict_key_index = \
    vnlog.slurp(f)

t      = track[:,dict_key_index['time' ]]
lat    = track[:,dict_key_index['lat'  ]]
lon    = track[:,dict_key_index['lon'  ]]
ele_ft = track[:,dict_key_index['ele_m']] * 100./2.54/12.

@nps.broadcast_define( ( (), ()), (2,))
def filter_track(ele_hysteresis_ft,
                 dxy_hysteresis_ft):
    dist         = 0.0
    ele_gain_ft  = 0.0
    lon_accepted = None
    lat_accepted = None
    ele_accepted = None

    for i in range(len(lat)):
        if ele_accepted is not None:
            dxy_here  = dist_ft(lat_accepted,lon_accepted,
                                lat[i],lon[i])
            dele_here = np.abs( ele_ft[i] - ele_accepted )

            if dxy_here < dxy_hysteresis_ft and dele_here < ele_hysteresis_ft:
                continue

            if ele_ft[i] > ele_accepted:
                ele_gain_ft += dele_here

            dist += np.sqrt(dele_here * dele_here +
                            dxy_here  * dxy_here)

        lon_accepted = lon[i]
        lat_accepted = lat[i]
        ele_accepted = ele_ft[i]

    # lose the last point. It simply doesn't matter
    dist_mi = dist / 5280.
    return np.array((ele_gain_ft, dist_mi))

Nele_hysteresis_ft    = 20
ele_hysteresis0_ft    = 5
ele_hysteresis1_ft    = 100
ele_hysteresis_ft_all = np.linspace(ele_hysteresis0_ft,
                                    ele_hysteresis1_ft,
                                    Nele_hysteresis_ft)

Ndxy_hysteresis_ft = 20
dxy_hysteresis0_ft = 5
dxy_hysteresis1_ft = 1000
dxy_hysteresis_ft  = np.linspace(dxy_hysteresis0_ft,
                                 dxy_hysteresis1_ft,
                                 Ndxy_hysteresis_ft)

# shape (Nele,Ndxy,2)
gain,distance = \
    nps.mv( filter_track( nps.dummy(ele_hysteresis_ft_all,-1),
                          dxy_hysteresis_ft),
            -1,0 )

# Stolen from mrcal
def options_heatmap_with_contours( plotoptions, # we update this on output
                                   *,
                                   contour_min           = 0,
                                   contour_max,
                                   contour_increment     = None,
                                   do_contours           = True,
                                   contour_labels_styles = 'boxed',
                                   contour_labels_font   = None):
    r'''Update plotoptions, return curveoptions for a contoured heat map'''

    gp.add_plot_option(plotoptions,
                       'set',
                       ('view equal xy',
                        'view map'))

    if do_contours:
        if contour_increment is None:
            # Compute a "nice" contour increment. I pick a round number that
            # gives me a reasonable number of contours
            Nwant = 10
            increment = (contour_max - contour_min)/Nwant
            # I find the nearest 1eX or 2eX or 5eX
            base10_floor = np.power(10., np.floor(np.log10(increment)))
            # Look through the options, and pick the best one
            m   = np.array((1., 2., 5., 10.))
            err = np.abs(m * base10_floor - increment)
            contour_increment = -m[ np.argmin(err) ] * base10_floor

        gp.add_plot_option(plotoptions,
                           'set',
                           ('key box opaque',
                            'style textbox opaque',
                            'contour base',
                            f'cntrparam levels incremental {contour_max},{contour_increment},{contour_min}'))

        if contour_labels_font is not None:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%d" font "{contour_labels_font}"' )
        else:
            gp.add_plot_option(plotoptions,
                               'set',
                               f'cntrlabel format "%.0f"' )

        plotoptions['cbrange'] = [contour_min, contour_max]

        # I plot 3 times:
        # - to make the heat map
        # - to make the contours
        # - to make the contour labels
        _with = np.array(('image',
                          'lines nosurface',
                          f'labels {contour_labels_styles} nosurface'))
    else:
        gp.add_plot_option(plotoptions, 'unset', 'key')
        _with = 'image'

    using = \
        f'({dxy_hysteresis0_ft}+$1*{float(dxy_hysteresis1_ft-dxy_hysteresis0_ft)/(Ndxy_hysteresis_ft-1)}):' + \
        f'({ele_hysteresis0_ft}+$2*{float(ele_hysteresis1_ft-ele_hysteresis0_ft)/(Nele_hysteresis_ft-1)}):3'
    plotoptions['_3d']     = True
    plotoptions['_xrange'] = [dxy_hysteresis0_ft,dxy_hysteresis1_ft]
    plotoptions['_yrange'] = [ele_hysteresis0_ft,ele_hysteresis1_ft]
    plotoptions['ascii']   = True # needed for using to work

    gp.add_plot_option(plotoptions, 'unset', 'grid')

    return \
        dict( tuplesize = 3,
              legend    = "", # needed to force contour labels
              using     = using,
              _with     = _with)

contour_granularity = 1000
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(gain) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(gain) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(gain,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Elevation gain (ft)",
        wait    = True,
        **curveoptions,
        **plotoptions,
        title   = 'Computed gain vs filtering parameters')

contour_granularity = 10
plotoptions = dict()
curveoptions = \
    options_heatmap_with_contours( plotoptions, # we update this on output
                                   # round down to the nearest contour_granularity
                                   contour_min = (np.min(distance) // contour_granularity)*contour_granularity,
                                   # round up to the nearest contour_granularity
                                   contour_max = ((np.max(distance) + (contour_granularity-1)) // contour_granularity) * contour_granularity,
                                   do_contours = True)
gp.add_plot_option(plotoptions, 'unset', 'key')
gp.add_plot_option(plotoptions, 'set', 'size square')
gp.plot(distance,
        xlabel  = "Distance hysteresis (ft)",
        ylabel  = "Elevation hysteresis (ft)",
        cblabel = "Distance (miles)",
        wait    = True,
        **curveoptions,
        **plotoptions,
        title   = 'Computed distance vs filtering parameters')
```

Results: gain
Strava says the gain was 46307ft. The analysis says:
These show the filtered gain for different values of the distance and gain hysteresis thresholds. The same data is shown at different zoom levels. There's no sweet spot, but we get 46307ft with a reasonable amount of filtering. Maybe 46307ft is even a bit low.
Results: distance
Strava says the distance covered was 322 miles. The analysis says:
Once again, there's no sweet spot, but we get 322 miles only if we apply no filtering at all. That's clearly too high, and is not reasonable. From the map (and from other people's strava routes) the true distance is closer to 305 miles. Why those people's strava numbers are more believable is anybody's guess.
Spinning Code: Architectures for Constituent Portals
One of the common needs, or at least desires, for CRM users is to link their CRM to some kind of constituent portal. There are several ways you can architect a good data pattern for your constituent portal. The trick is picking the right one for your organization.
This is the first in a series on what your choices are, and how to select the right one. The architecture you select will drive the implementation cost, maintenance costs, and scalability.
The Purpose of Your Constituent Portal
Before deciding on the architecture of your portal, you need to have a clear understanding of its intended purpose and expected scale. The purpose of the portal will also dictate the direction(s) data flows, and the scale will help you plan the long-term costs.
If you can’t clearly state why you need a constituent portal, you don’t need one.
Different organizations have different reasons to create portals. Companies that sell products may want a warranty or support portal. Nonprofits often want a donor portal that provides tax receipts and allows recurring donors to update their gift information. Member organizations want a place for members to see their benefits, update their information, and renew their memberships. And so on.
The purpose of your portal will determine the direction that data flows through it. There are essentially three directions that data can move.
Outbound Data
You can send data from your organization to your constituent. That could be things like receipts, membership status, or product warranty information: anything that is data you have that your member might need to see, but not update.
Data Exchange
You can allow constituents to update information. That could be address changes, support requests, donation schedule updates, event registration details, and more.
Data Network
Your third option is to allow your constituents to exchange information with each other. This could be any form of member community where they engage with each other.
When planning a network, remember you have to plan for content moderation and acceptable use policies.
Primary Portal Data Architectures
To support your portal, whatever the data flow is, there are three main data architectures you can choose from, each with different strengths and weaknesses. The more complex your architecture, the more options you will have for what it can support, but it will also increase costs and maintenance efforts. Bigger is not always better – sometimes it's just more expensive.
1. All-in-One Constituent Portals
In an all-in-one solution your constituent portal is part of your CRM. For obvious reasons this is the type of portal that your CRM vendor wants to sell you. In Salesforce land this means Experience Cloud. There are also many nonprofit-focused CRMs which include one in their solution. When your portal is part of your CRM you get a bunch of advantages:
- Data all lives in one place. There is no data sync to worry about setting up or maintaining.
- You have one vendor. That means you can centralize your support arrangement and billing.
- They are often fastest to implement. They are designed to be fast to market to help make them the obvious choice.
- They generally do not require a developer. Again, your CRM vendor really wants you to select this path, so they clear as many hurdles as they can.
- There is only one technology stack. By leveraging the technology of your CRM your investments in learning that technology carry over, at least in part, to your portal.
There are also downsides:
- New template system to learn. The CRM's portal likely has a different template system than the CMS on your main web site. That may make it hard to match your primary online branding.
- The vendors make a lot of assumptions. To provide the ease listed above, they make assumptions about how data flows, security, user experience, and how other elements should work – and those assumptions may not match your ideal solution.
- Costs often scale linearly. There are usually license costs that are scaled based on user count, meaning your costs grow with the portal at more or less a 1:1 rate. While there are price breaks and other incentives, this is the right expectation to have for estimating.
2. External Constituent Portals
In this pattern an external constituent portal moves the user experience from within your main CRM's sphere of influence into a separate platform. In my career I've mostly built these with Salesforce as the CRM and Drupal as the portal platform – it's a powerful pairing. The strengths and weaknesses here are more or less the mirror of the all-in-one portal.
- Your web team may already know the technology. If you use the same technology your web developers use for your main web site(s), they already know how to handle design and branding.
- You have greater control over the User Experience. The assumptions that go into the portal are yours not the ones imposed by the CRM.
- You have greater control over security, along with data flow and formats. Since the portal isn't built around assumptions you don't control, you have greater control and freedom.
- Costs typically grow more slowly. It is less common to have per-user license costs in this context so the cost curve is likely closer to logarithmic.
The downsides:
- You have to handle data synchronization. With two systems, data has to pass back and forth.
- You have two different vendors/platforms. Where one system kept everything centralized, you now have two different systems handling constituent data.
- This design is typically harder to implement. All those shortcuts your CRM provider had in mind for you are gone. Even if you use a purpose-built solution, it's going to take more time and effort.
- This approach generally requires a developer and/or data architect. To implement this pattern you need someone who understands both platforms – ideally both for setup and support.
3. Constituent Portals With a Data Proxy
In some situations it makes sense to insert a data proxy, or API layer, between your main CRM and your constituent portal. This creates a layer of abstraction, security, and data augmentation that can benefit a lot of organizations. While this is the most complex of the three main architectures, it is one I find is frequently overlooked – in part, I suspect, because no one partner benefits from selling it.
This setup certainly isn't for everyone, but when it's the right fit it makes a huge difference. It also allows for a significant number of variations – all of those arrows could be one-directional or two-directional.
An extra data layer makes the most sense when data from your CRM is going to multiple places and/or needs to be augmented by an external data source. For example think about an organization that provides trail maps for hiking or paddling. All that data about trails does not need to be in your CRM, but you probably want to know which members are interested in which trails. An organization like that may have a portal for members, a web site to find trails for non-members, apps for iPhone and Android, and perhaps a public API for other people to use to build their own web sites with. That’s a lot of API calls all with different data sets, all of which need to be very fast.
Your CRM is designed to be very stable and reliable – it is not designed to be very fast. What's more, you often have limits on the number of API calls included in your package, with extra fees charged if you go over. By inserting another layer between your CRM and your other needs, you can bypass these limitations.
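To make the proxy idea concrete, here is a minimal sketch in Python of a caching layer sitting between the various consumers and a CRM. All names, endpoints, and the Flask framework choice are illustrative assumptions, not a reference implementation; a real proxy would also need authentication, cache invalidation, and error handling:

```python
# Hypothetical caching proxy between client apps and a CRM.
import time

import requests            # assumed HTTP client
from flask import Flask, jsonify

app = Flask(__name__)

CRM_API = "https://crm.example.org/api"   # placeholder CRM endpoint
CACHE_TTL = 300                           # seconds between CRM refreshes
_cache: dict[str, tuple[float, dict]] = {}

def crm_get(path: str) -> dict:
    """Return CRM data, hitting the CRM only when the cached copy is stale."""
    now = time.time()
    hit = _cache.get(path)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]
    data = requests.get(f"{CRM_API}/{path}", timeout=10).json()
    _cache[path] = (now, data)
    return data

@app.route("/trails/<trail_id>")
def trail(trail_id: str):
    # The member portal, public site, mobile apps, and public API all read
    # from this layer, so the CRM sees at most one call per path per TTL.
    return jsonify(crm_get(f"trails/{trail_id}"))
```

Because every consumer reads from the proxy's cache, traffic can scale with your audience while CRM API usage stays roughly constant.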
There is another added benefit to this design pattern, and that is increased security. By creating an additional layer in your system, you are able to create a separation of concerns between the various systems. Each system should have access to only the data it absolutely needs from any of the others.
How to Pick?
There are lots of considerations that go into selecting the right architecture for your project – which is why that's getting its own post soon.
The post Architectures for Constituent Portals appeared first on Spinning Code.
KDE Gear 24.11.90 Available on Fedora 41 (COPR)
The Fedora KDE SIG is pleased to announce that KDE Gear 24.12 RC (24.11.90) is available on Fedora 41 via our @kdesig/kde-beta COPR repository.
Enjoy!
Enrico Zini: New laptop setup
My new Framework laptop (Framework Laptop 13 DIY Edition (AMD Ryzen™ 7040 Series)) arrived, all the hardware works out of the box on Debian Stable, and I'm very happy indeed.
This post has the notes of all the provisioning steps, so that I can replicate them again if needed.
Installing Debian 12
Debian 12's installer just worked, with Secure Boot enabled no less, which was nice.
The only glitch was an argument with the guided partitioner, which was uncooperative: I have been bitten before by a too-small /boot partition, and I wanted 1G for EFI and 1G for /boot, while the partitioner decided that 512MB was good enough. Frustratingly, there was no way of changing that, nor did I find a way to get more than 1G of swap, and I wanted enough swap to fit RAM for hibernation.
I let it install the way it pleased, then I booted into grml for a round of gparted.
The tricky part of that was resizing the root btrfs filesystem, which is in an LV, which is in a VG, which is in a PV, which is in LUKS. Here's a cheatsheet.
Shrink partitions:
- mount the root filesystem in /mnt
- btrfs filesystem resize 6G /mnt
- umount the root filesystem
- lvresize -L 7G vgname/lvname
- pvresize --setphysicalvolumesize 8G /dev/mapper/pvname
- cryptsetup resize --device-size 9G name
Note that I used increasing sizes because I don't trust that each tool represents sizes in a way that aligns to the byte. I'd be happy to find out that they do, but I didn't want to find out the hard way that they don't.
Resize with gparted:
Move and resize partitions at will. Shrinking first means it all takes a reasonable time, and you won't have to wait almost an hour for a terabyte-sized empty partition to be carefully moved around. Don't ask me why I know.
Regrow partitions:
- cryptsetup resize name
- pvresize /dev/mapper/pvname
- lvresize -l +100%FREE vgname/lvname
- mount the root filesystem in /mnt
- btrfs filesystem resize max /mnt
- umount the root filesystem
When I get a new laptop I have a tradition of trying to make it work with Gnome and Wayland, which has normally ended in frustration and a swift move to X11 and Xfce: I have a lot of long-time muscle memory involved in how I use a computer, and it needs to fit like prosthetics. I can learn to do a thing or two in a different way, but any papercut that makes me break flow and that I cannot fix will soon become a dealbreaker.
This applies to Gnome as present in Debian Stable.
General Gnome settings tips
I can list all available settings with:
```sh
gsettings list-recursively
```

which is handy for grepping things like hotkeys.
I can manually set a value with:
```sh
gsettings set <schema> <key> <value>
```

and I can reset it to its default with:
```sh
gsettings reset <schema> <key>
```

Some applications like Gnome Terminal use "relocatable schemas", and in those cases you also need to specify a path, which can be discovered using dconf-editor:
```sh
gsettings set <schema>:<path> <key> <value>
```

Install appindicators
First things first: apt install gnome-shell-extension-appindicator, then log out and in again: the Gnome Extension manager won't see the extension as available until you restart the whole session.
I have no idea why that is so, and I have no idea why a notification area is not present in Gnome by default, but at least now I can get one.
Fix font sizes across monitors
My laptop screen and monitor have significantly different DPIs, so:
```sh
gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
```

And in Settings/Displays, set a reasonable scaling factor for each display.
Disable Alt/Super as hotkey for the Overlay
Seeing my whole screen reorganize and reshuffle every time I accidentally press Alt leaves me disoriented and seasick:
```sh
gsettings set org.gnome.mutter overlay-key ''
```

Focus-follows-mouse and Raise-or-lower
My desktop is like my desk: messy and cluttered. I have lots of overlapping windows and I switch between them by moving the focus with the mouse; when the visible part is not enough, I have a handy hotkey mapped to raise-or-lower to bring forward what I need and send back what I don't need anymore.
Thankfully Gnome can be configured that way, with some work:
- In gnome-shell settings, keyboard, shortcuts, windows, set "Raise window if covered, otherwise lower it" to "Super+Escape"
- In gnome-tweak-tool, Windows, set "Focus on Hover"
This almost worked, but sometimes it didn't do what I wanted, like when I expected a window to come to the front but another window disappeared instead. I eventually figured out that by default Gnome delays focus changes by a perceivable amount, which is evidently too slow for the way I move around windows.
The amount cannot be shortened, but it can be removed with:
```sh
gsettings set org.gnome.shell.overrides focus-change-on-pointer-rest false
```

Mouse and keyboard shortcuts
Gnome has lots of preconfigured sounds, shortcuts, animations and other distractions that I do not need. They also either interfere with key combinations I want to use in terminals, or cause accidental window moves or resizes that make me break flow, or otherwise provide sensory overstimulation that really does not work for me.
It was a lot of work, and these are the steps I used to get rid of most of them.
Disable Super+N combinations that accidentally launch a questionable choice of programs:
```sh
for i in `seq 1 9`; do gsettings set org.gnome.shell.keybindings switch-to-application-$i '[]'; done
```

Gnome-Shell settings:
- Multitasking:
- disable hot corner
- disable active edges
- set a fixed number of workspaces
- workspaces on all displays
- switching includes apps from current workspace only
- Sound:
- disable system sounds
- Keyboard
- Compose Key set to Caps Lock
- View and Customize Shortcuts:
- Launchers
- launch help browser: remove
- Navigation
- move to workspace on the left: Super+Left
- move to workspace on the right: Super+Right
- move window one monitor …: remove
- move window one workspace to the left: Shift+Super+Left
- move window one workspace to the right: Shift+Super+Right
- move window to …: remove
- switch system …: remove
- switch to …: remove
- switch windows …: disabled
- Screenshots
- Record a screenshot interactively: Super+Print
- Take a screenshot interactively: Print
- Disable everything else
- System
- Focus the active notification: remove
- Open the application menu: remove
- Restore the keyboard shortcuts: remove
- Show all applications: remove
- Show the notification list: remove
- Show the overview: remove
- Show the run command prompt: remove (the default Gnome launcher is not for me), or Super+F2 (or remove to leave it to the terminal)
- Windows
- Close window: remove
- Hide window: remove
- Maximize window: remove
- Move window: remove
- Raise window if covered, otherwise lower it: Super+Escape
- Resize window: remove
- Restore window: remove
- Toggle maximization state: remove
- Launchers
- Custom shortcuts
- xfrun4, launching xfrun4, bound to Super+F2
- Accessibility:
- disable "Enable animations"
gnome-tweak-tool settings:
- Keyboard & Mouse
- Overview shortcut: Right Super. This cannot be disabled, but since my keyboard doesn't have a Right Super button, that's good enough for me. Oddly, I cannot find this in gsettings.
- Window titlebars
- Double-Click: Toggle-Maximize
- Middle-Click: Lower
- Secondary-Click: Menu
- Windows
- Resize with secondary click
Gnome Terminal settings:
Thankfully 10 years ago I took notes on how to customize Gnome Terminal, and they're still mostly valid:
- Shortcuts
- New tab: Super+T
- New window: Super+N
- Close tab: disabled
- Close window: disabled
- Copy: Super+C
- Paste: Super+V
- Search: all disabled
- Previous tab: Super+Page Up
- Next tab: Super+Page Down
- Move tab…: Disabled
- Switch to tab N: Super+Fn (only available after disabling overview)
- Switch to tab N with Alt+Fn cannot be configured in the UI: Alt+Fn is detected as simply Fn. It can however be set with gsettings:
```sh
for i in `seq 1 12`; do gsettings set org.gnome.Terminal.Legacy.Keybindings:/org/gnome/terminal/legacy/keybindings/ switch-to-tab-$i "<Alt>F$i"; done
```
- Profile
- Text
- Sound: disable terminal bell
Other hotkeys got in my way and had to be disabled the hard way:
```sh
for n in `seq 1 12`; do gsettings set org.gnome.mutter.wayland.keybindings switch-to-session-$n '[]'; done
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-down '[]'
gsettings set org.gnome.desktop.wm.keybindings move-to-workspace-up '[]'
gsettings set org.gnome.desktop.wm.keybindings panel-main-menu '[]'
gsettings set org.gnome.desktop.interface menubar-accel '[]'
```

Note that even after removing F10 from being bound to menubar-accel, and even though the only remaining gsettings binding to F10 is:
```sh
$ gsettings list-recursively|grep F10
org.gnome.Terminal.Legacy.Keybindings switch-to-tab-10 '<Alt>F10'
```

I still cannot quit Midnight Commander using F10 in a terminal, as that moves the focus to the window title bar. This looks like a Gnome bug, and a very frustrating one for me.
Appearance
Gnome-Shell settings:
- Appearance:
- dark mode
gnome-tweak-tool settings:
- Fonts
- Antialiasing: Subpixel
- Top Bar
- Clock/Weekday: enable (why is this not a default?)
Gnome Terminal settings:
- General
- Theme variant: Dark (somehow it wasn't picked up from the system settings)
- Profile
- Colors
- Background: #000
Gnome Shell Settings:
- Search
- disable application search
- Removable media
- set everything to "ask what to do"
- Default applications
- Web: Chromium
- Mail: mutt
- Calendar: sadly khal is not an option
- Video: mpv
- Photos: Geeqie
Set a delay between screen blank and lock: when the screen goes blank, it is important for me to be able to say "nope, don't blank yet!", and maybe switch on caffeine mode during a presentation without needing to type my password in front of cameras. No UI for this, but at least gsettings has it:
```sh
gsettings set org.gnome.desktop.screensaver lock-delay 30
```

Extensions
I enabled the Applications Menu extension, since it's impossible to find less famous applications in the Overview without knowing in advance how they're named in their desktop file. This stole a precious hotkey, which I had to disable in gsettings:
```sh
gsettings set org.gnome.shell.extensions.apps-menu apps-menu-toggle-menu '[]'
```

I also enabled:
- Removable Drive Menu: why is this not on by default?
- Workspace Indicator
- Ubuntu Appindicators (apt install gnome-shell-extension-appindicator and restart Gnome)
I didn't go and look for Gnome Shell extensions outside what is packaged in Debian, as I'm very wary about running JavaScript code randomly downloaded from the internet with full access over my data and desktop interaction.
I also made sure that the Gnome Shell Extensions web page still complains about the missing "GNOME Shell integration" browser extension: the web browser shouldn't be allowed to download random JavaScript from the internet and run it with full local access.
Yuck.
Run program dialog
The default run program dialog is almost, but not quite, totally useless to me, as it does not provide completion, not even for executable names in $PATH, and so it ends up being faster to open a new terminal window and type in there.
It's possible, in Gnome Shell settings, to bind a custom command to a key. The resulting keybinding will show up in gsettings, though it can only be located in a somewhat circuitous way: by grepping first, and then looking up the resulting path in dconf-editor:
```sh
gsettings list-recursively|grep custom-key
org.gnome.settings-daemon.plugins.media-keys custom-keybindings ['/org/gnome/settings-daemon/plugins/media-keys/custom-keybindings/custom0/']
```

I tried out several run dialogs present in Debian, with sad results, possibly due to most of them not being tested on wayland:
- fuzzel does not start
- gmrun is gtk2, last updated in 2016, but works fine
- kupfer segfaults as I type
- rofi shows, but can't get keyboard input
- shellex shows a white bar at top of the screen and lots of errors on stderr
- superkb wants to grab the screen for hotkeys
- synapse searched news on the internet as I typed, which is a big no for me
- trabucco crashes on startup
- wofi works but looks very much like an acquired taste, though it has some completion that makes it more useful than Gnome's run dialog
- xfrun4 (package xfce4-appfinder) struggles on wayland, being unable to center its window and with the pulldown appearing elsewhere in the screen, but it otherwise works
Both gmrun and xfrun4 seem like workable options, with xfrun4 being customizable with convenient shortcut prefixes, so xfrun4 it is.
TODO
- Figure out what is still binding F10 to the menu, and what I can do about it
- Figure out how to reduce the size of window titlebars, which to my taste should be unobtrusive and not take 2.7% of vertical screen size each. There's a minwaita theme which isn't packaged in Debian. There's a User Theme extension, and then the whole theming can of worms to open. For another day.
- Figure out if Gnome can be convinced to resize popup windows? Take the Gnome Terminal shortcut preferences for example: it takes ⅓ of the vertical screen and can only display ¼ of all available shortcuts, and I cannot find a valid reason why I shouldn't be allowed to enlarge it vertically.
- Figure out if I can place shortcut launcher icons in the top panel, and how
I'll try to update these notes as I investigate.
Conclusion so far
I now have something that seems to work for me. A few papercuts still to figure out, but they seem manageable.
It all feels a lot harder than it should be: for something intended to be minimal, Gnome defaults feel horribly cluttered and noisy to me, continuously getting in the way of getting things done until tamed into being out of the way unless called for. It felt like a device that boots into flashy demo mode, which needs to be switched off before actual use.
Thankfully it can be switched off, and now I have notes to do it again if needed.
gsettings oddly feels to me like a better UI than the interactive settings managers: it's more comprehensive, more discoverable, more scriptable, and more stable across releases. Most of the Q&A I found on the internet with guidance given on the UI was obsolete, while when given with gsettings command lines it kept being relevant. I also have the feeling that these notes would be easier to understand and follow if given as gsettings invocations instead of descriptions of UI navigation paths.
At some point I'll upgrade to Trixie and reevaluate things, and these notes will be a useful checklist for that.
Fingers crossed that this time I'll manage to stay on Wayland. If not, I know that Xfce is still there for me, and I can trust it to be both helpful and good at not getting in the way of my work.
Kirigami Addons 1.6.0
Kirigami Addons is a collection of additional components for Kirigami applications. This release brings mostly improvements to the FormCard module.
AboutPage
The about page provided by Kirigami Addons received many improvements. Joshua added icons to all the buttons.
I worked on the component section, which now contains more information about the default components as well as the underlying platform, and now has a button to copy all this information to the clipboard. This is super helpful when writing a bug report. There were also some small bug fixes; for example, the license dialog is now correctly sized.
RadioSelector
A new addition is the RadioSelector, a simple component that allows one to choose between two or more options in a horizontal layout. Strictly speaking it is not a brand-new component, as it has already been used in Itinerary and Marknote for a long time.
There is also a FormCard version of this, called FormRadioSelectorDelegate.
FormPlaceholderMessageDelegate
Another new component is FormPlaceholderMessageDelegate, which is basically a Kirigami.PlaceholderMessage, but instead of putting it in a ListView, this one is meant to be put inside a FormCard.
FormPlaceholderMessageDelegate for the health certificate
Other
Volker fixed the Android integration of the date picker. He also added support for static builds (required for iOS and probably helpful for other platforms).
Claudio fixed various issues with the DatePicker.
Joshua made the caption used in AlbumMaximizeComponent selectable with the mouse. He also fixed the separator for the IndicatorItemDelegate which only appeared after the first item.
I added icon support to FormSwitchDelegate, which is similar to what we already have in FormRadioDelegate and FormCheckDelegate.
Packager Section
Kirigami Addons 1.6.0 was tagged but the tarball is not yet available. I will update this post once it is.
OptiImage 1.0.0 is out!
The first release of OptiImage is finally out! OptiImage is a useful image compressor that supports PNG, JPEG, WebP and SVG file types. It doesn't do the compression itself, but uses various tools like oxipng under the hood.
OptiImage compressing screenshots
Thanks to Mathis Brüchert for his work on the icon and to Soumyadeep Ghosh for a bunch of bug fixes and pushing me to do the release.
Packager Section
OptiImage 1.0.0 was tagged but the tarball is not yet available. I will update this post once it is.
Real Python: Python's F-String for String Interpolation and Formatting
Python f-strings offer a concise and efficient way to interpolate variables, objects, and expressions directly into strings. By prefixing a string with f or F, you can embed expressions within curly braces ({}), which are evaluated at runtime.
This makes f-strings faster and more readable compared to older approaches like the modulo (%) operator or the string .format() method. Additionally, f-strings support advanced string formatting using Python’s string format mini-language.
By the end of this tutorial, you’ll understand that:
- An f-string in Python is a string literal prefixed with f or F, allowing for the embedding of expressions within curly braces {}.
- To include dynamic content in an f-string, place your expression or variable inside the braces to interpolate its value into the string.
- An f-string error in Python often occurs due to syntax issues, such as unmatched braces or invalid expressions within the string.
- F-string calculation in Python involves writing expressions within the curly braces of an f-string, which Python evaluates at runtime (see the short example after this list).
- Python 3.12 improved f-strings by allowing nested expressions and the use of backslashes.
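As a quick, minimal illustration of those points (plain Python in a REPL, independent of the downloadable sample code):

```python
>>> name = "Jane"
>>> age = 25
>>> f"Hello, {name}! Next year you'll be {age + 1}."
"Hello, Jane! Next year you'll be 26."
>>> f"Balance: ${5425.9292:.2f}"  # format mini-language in action
'Balance: $5425.93'
```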
This tutorial will guide you through the features and advantages of f-strings, including interpolation and formatting. By familiarizing yourself with these features, you’ll be able to effectively use f-strings in your Python projects.
Get Your Code: Click here to download the free sample code that shows you how to do string interpolation and formatting with Python’s f-strings.
Take the Quiz: Test your knowledge with our interactive “Python F-Strings” quiz. You’ll receive a score upon completion to help you track your learning progress:
Python F-Strings: In this quiz, you'll test your knowledge of Python f-strings. With this knowledge, you'll be able to include all sorts of Python expressions inside your strings.
Interpolating and Formatting Strings Before Python 3.6
Before Python 3.6, you had two main tools for interpolating values, variables, and expressions inside string literals:
- The string interpolation operator (%), or modulo operator
- The str.format() method
You’ll get a refresher on these two string interpolation tools in the following sections. You’ll also learn about the string formatting capabilities that these tools offer in Python.
The Modulo Operator (%)
The modulo operator (%) was the first tool for string interpolation and formatting in Python and has been in the language since the beginning. Here's what using this operator looks like in practice:
Python >>> name = "Jane" >>> "Hello, %s!" % name 'Hello, Jane!' Copied!In this quick example, you use the % operator to interpolate the value of your name variable into a string literal. The interpolation operator takes two operands:
- A string literal containing one or more conversion specifiers
- The object or objects that you’re interpolating into the string literal
The conversion specifiers work as replacement fields. In the above example, you use the %s combination of characters as a conversion specifier. The % symbol marks the start of the specifier, while the s letter is the conversion type and tells the operator that you want to convert the input object into a string.
If you want to insert more than one object into your target string, then you can use a tuple. Note that the number of objects in the tuple must match the number of format specifiers in the string:
Python >>> name = "Jane" >>> age = 25 >>> "Hello, %s! You're %s years old." % (name, age) 'Hello, Jane! You're 25 years old.' Copied!In this example, you use a tuple of values as the right-hand operand to %. Note that you’ve used a string and an integer. Because you use the %s specifier, Python converts both objects to strings.
You can also use dictionaries as the right-hand operand in your interpolation expressions. To do this, you need to create conversion specifiers that enclose key names in parentheses:
Python >>> "Hello, %(name)s! You're %(age)s years old." % {"name": "Jane", "age": 25} "Hello, Jane! You're 25 years old." Copied!This syntax provides a readable approach to string interpolation with the % operator. You can use descriptive key names instead of relying on the positional order of values.
When you use the % operator for string interpolation, you can use conversion specifiers. They provide some string formatting capabilities that take advantage of conversion types, conversion flags, and some characters like the period (.) and the asterisk (*). Consider the following example:
Python >>> "Balance: $%.2f" % 5425.9292 'Balance: $5425.93' >>> print("Name: %s\nAge: %5s" % ("John", 35)) Name: John Age: 35 Copied! Read the full article at https://realpython.com/python-f-strings/ »[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]
Real Python: What Does if __name__ == "__main__" Do in Python?
The if __name__ == "__main__" idiom is a Python construct that helps control code execution in scripts. It’s a conditional statement that allows you to define code that runs only when the file is executed as a script, not when it’s imported as a module.
When you run a Python script, the interpreter assigns the value "__main__" to the __name__ variable. If Python imports the code as a module, then it sets __name__ to the module’s name instead. By encapsulating code within if __name__ == "__main__", you can ensure that it only runs in the intended context.
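Neither behavior is specific to scripts you write; you can see both directly in a REPL session, which itself runs as top-level code:

```python
>>> __name__          # the running session is the top-level program
'__main__'
>>> import logging
>>> logging.__name__  # an imported module gets its own name instead
'logging'
```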
By the end of this tutorial, you’ll understand that:
- Python’s if __name__ == "__main__" idiom allows code to run only when the script is executed, not when it’s imported.
- The idiom checks if the __name__ variable equals "__main__", confirming that the script is the top-level module.
- Using this idiom helps prevent unintended code execution during module imports.
- It’s useful for adding script-specific logic, such as user input or test cases, without affecting module imports.
- Best practices suggest using this idiom minimally and placing it at the bottom of the script for clarity.
You’ve likely encountered Python’s if __name__ == "__main__" idiom when reading other people’s code. No wonder—it’s widespread! Understanding Python’s if __name__ == "__main__" idiom will help you to manage script execution and module imports effectively. In this tutorial you’ll explore its mechanics, appropriate usage, and best practices.
Take the Quiz: Test your knowledge with our interactive “Python Name-Main Idiom” quiz. You’ll receive a score upon completion to help you track your learning progress:
Python Name-Main Idiom: Test your knowledge of Python's if __name__ == "__main__" idiom by answering a series of questions! You've probably encountered the name-main idiom and might have even used it in your own scripts. But did you use it correctly?
Get Your Code: Click here to download the free sample code that you’ll use to learn about the name-main idiom.
In Short: It Allows You to Execute Code When the File Runs as a Script, but Not When It's Imported as a Module
For most practical purposes, you can think of the conditional block that you open with if __name__ == "__main__" as a way to store code that should only run when your file is executed as a script.
You’ll see what that means in a moment. For now, say you have the following file:
echo.py

```python
1  def echo(text: str, repetitions: int = 3) -> str:
2      """Imitate a real-world echo."""
3      echoes = [text[-i:].lower() for i in range(repetitions, 0, -1)]
4      return "\n".join(echoes + ["."])
5
6  if __name__ == "__main__":
7      text = input("Yell something at a mountain: ")
8      print(echo(text))
```

In this example, you define a function, echo(), that mimics a real-world echo by gradually printing fewer and fewer of the final letters of the input text.
Below that, in lines 6 to 8, you use the if __name__ == "__main__" idiom. This code starts with the conditional statement if __name__ == "__main__" in line 6. In the indented lines, 7 and 8, you then collect user input and call echo() with that input. These two lines will execute when you run echo.py as a script from your command line:
```sh
$ python echo.py
Yell something at a mountain: HELLOOOO ECHOOOOOOOOOO
ooo
oo
o
.
```

When you run the file as a script by passing the file object to your Python interpreter, the expression __name__ == "__main__" returns True. The code block under if then runs, so Python collects user input and calls echo().
Try it out yourself! You can download all the code files that you’ll use in this tutorial from the link below:
Get Your Code: Click here to download the free sample code that you’ll use to learn about the name-main idiom.
At the same time, if you import echo() in another module or a console session, then the nested code won’t run:
```python
>>> from echo import echo
>>> print(echo("Please help me I'm stuck on a mountain"))
ain
in
n
.
```

In this case, you want to use echo() in the context of another script or interpreter session, so you won't need to collect user input. Running input() would mess with your code by producing a side effect when importing echo.
When you nest the code that’s specific to the script usage of your file under the if __name__ == "__main__" idiom, then you avoid running code that’s irrelevant for imported modules.
Nesting code under if __name__ == "__main__" allows you to cater to different use cases:
- Script: When run as a script, your code prompts the user for input, calls echo(), and prints the result.
- Module: When you import echo as a module, then echo() gets defined, but no code executes. You provide echo() to the main code session without any side effects.
By implementing the if __name__ == "__main__" idiom in your code, you set up an additional entry point that allows you to use echo() right from the command line.
There you go! You’ve now covered the most important information about this topic. Still, there’s more to find out, and there are some subtleties that can help you build a deeper understanding of this code specifically and Python more generally.
Read the full article at https://realpython.com/if-name-main-python/ »
Real Python: Logging in Python
Logging in Python lets you record important information about your program’s execution. You use the built-in logging module to capture logs, which provide insights into application flow, errors, and usage patterns. With Python logging, you can create and configure loggers, set log levels, and format log messages without installing additional packages. You can also generate log files to store records for later analysis.
Using a logging library instead of print() calls gives you better control over log management, formatting, and output destinations. The logging module also allows you to set severity levels, simplifying the management and filtering of log data.
By the end of this tutorial, you’ll understand that:
- Logging in a computer involves recording program execution information for analysis.
- You can use logging for debugging, performance analysis, and monitoring usage patterns.
- No installation is needed for Python logging as the logging module is part of Python’s standard library.
- Logging in Python works by configuring loggers and setting log levels.
- Using a logging library provides structured logging and control over log output.
- You should prefer logging over print() because it decreases the maintenance burden and allows you to manage log levels.
- You can generate a log file in Python by setting filename and encoding through basicConfig().
You’ll do the coding for this tutorial in the Python standard REPL. If you prefer Python files, then you’ll find a full logging example as a script in the materials of this tutorial. You can download this script by clicking the link below:
Get Your Code: Click here to download the free sample code that you’ll use to learn about logging in Python.
Take the Quiz: Test your knowledge with our interactive “Logging in Python” quiz. You’ll receive a score upon completion to help you track your learning progress:
Logging in Python: In this quiz, you'll test your understanding of Python's logging module. With this knowledge, you'll be able to add logging to your applications, which can help you debug errors and analyze performance.
If you commonly use Python’s print() function to get information about the flow of your programs, then logging is the natural next step for you. This tutorial will guide you through creating your first logs and show you how to make logging grow with your projects.
Starting With Python's Logging Module
The logging module in Python's standard library is a ready-to-use, powerful module that's designed to meet the needs of beginners as well as enterprise teams.
Note: Since logs offer a variety of insights, the logging module is often used by other third-party Python libraries, too. Once you’re more advanced in the practice of logging, you can integrate your log messages with the ones from those libraries to produce a homogeneous log for your application.
To leverage this versatility, it's a good idea to get a better understanding of how the logging module works under the hood. For example, you could take a stroll through the logging module's source code.
The main component of the logging module is something called the logger. You can think of the logger as a reporter in your code that decides what to record, at what level of detail, and where to store or send these records.
Exploring the Root Logger
To get a first impression of how the logging module and a logger work, open the Python standard REPL and enter the code below:
```python
>>> import logging
>>> logging.warning("Remain calm!")
WARNING:root:Remain calm!
```

The output shows the severity level before each message along with root, which is the name the logging module gives to its default logger. This output shows the default format, which can be configured to include things like a timestamp or other details.
In the example above, you’re sending a message on the root logger. The log level of the message is WARNING. Log levels are an important aspect of logging. By default, there are five standard levels indicating the severity of events. Each has a corresponding function that can be used to log events at that level of severity.
Note: There’s also a NOTSET log level, which you’ll encounter later in this tutorial when you learn about custom logging handlers.
Here are the five default log levels, in order of increasing severity:
| Log Level | Function | Description |
| --- | --- | --- |
| DEBUG | logging.debug() | Provides detailed information that's valuable to you as a developer. |
| INFO | logging.info() | Provides general information about what's going on with your program. |
| WARNING | logging.warning() | Indicates that there's something you should look into. |
| ERROR | logging.error() | Alerts you to an unexpected problem that's occurred in your program. |
| CRITICAL | logging.critical() | Tells you that a serious error has occurred and may have crashed your app. |

The logging module provides you with a default logger that allows you to get started with logging without needing to do much configuration. However, the logging functions listed in the table above reveal a quirk that you may not expect:
```python
>>> logging.debug("This is a debug message")
>>> logging.info("This is an info message")
>>> logging.warning("This is a warning message")
WARNING:root:This is a warning message
>>> logging.error("This is an error message")
ERROR:root:This is an error message
>>> logging.critical("This is a critical message")
CRITICAL:root:This is a critical message
```

Notice that the debug() and info() messages didn't get logged. This is because, by default, the logging module logs the messages with a severity level of WARNING or above. You can change that by configuring the logging module to log events of all levels.
Adjusting the Log Level
To set up your basic logging configuration and adjust the log level, the logging module comes with a basicConfig() function. As a Python developer, this camel-cased function name may look unusual to you as it doesn't follow the PEP 8 naming conventions:
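The full article walks through this in detail; as a minimal preview, in a fresh interpreter session (basicConfig() only takes effect if the root logger hasn't been configured yet):

```python
>>> import logging
>>> logging.basicConfig(level=logging.DEBUG)
>>> logging.debug("This will now show up")
DEBUG:root:This will now show up
```

Per the summary above, passing filename (and optionally encoding) to the same basicConfig() call sends the records to a log file instead of the console.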
Read the full article at https://realpython.com/python-logging/ »
EuroPython: EuroPython Society 2024 fellows
Hi everyone! A warm welcome to the newly elected EuroPython Society Fellows in the year 2024.
- Laís Carvalho
- Cyril Bitterich
EuroPython Society Fellows
EuroPython Society Fellows have contributed significantly towards our mission, the EuroPython conference and the Society as an organisation. They are eligible for a lifetime free attendance of the EuroPython conference and will be listed on our EuroPython Society Fellow Grant page in recognition of their work.
Laís has been volunteering with the EPS since 2020 and is currently serving as a board member for 2023–2024. Theofanis wrote this when nominating her for the EuroPython Society fellowship:
Lais is the "superhero" of this year's EuroPython Organization. She's constantly trying to improve our processes, our way of operating, to push for diversity and inclusion, and to make things happen. Apart from that, she's always passionate about the EuroPython Society Mission, actioning on community outreach, expanding EPS communication channels and helping, in action, EPS to support the local python communities. I have no doubt that with her key contribution, 2024 will be one of the most impactful years for the mission of EPS.

Cyril started volunteering for EuroPython in 2022. He started helping the Ops and Programme teams initially, and his firefighting skills proved invaluable as he supported other teams and the general organisation, making him a de facto member of the 2024 core organisers team. Theofanis Petkos put it best in his email nominating Cyril for the EuroPython Society fellowship:
Cyril is one of the most dedicated volunteers I've ever met. You'll argue with him, he'll care, you'll ask for his help. He will have input, he will review everything on time. Constantly trying to help, to include more people and always passionate about documenting all of our processes. Cyril is one of the people I'm very proud I have worked with and I'm sure with his contribution EPS will make a lot of steps forward.

The EuroPython Society Board would like to congratulate and thank all the above new Fellows for their tireless work towards our mission! If you want to send in your nomination, check out our Fellowship page and get in touch!
Many thanks,
EuroPython Society
https://www.europython-society.org/
Aurelien Jarno: UEFI Unified Kernel Image for Debian Installer on riscv64
On the riscv64 port, the default boot method is UEFI, with U-Boot typically used as the firmware. This approach aligns more closely with other architectures and avoids developing riscv64-specific code. For advanced users, booting using U-Boot and extlinux is possible, thanks to the kernel being built with CONFIG_EFI_STUB=y.
The same applies to the Debian Installer, which is provided as ISO images in various sizes and formats like on other architectures. These images can be put on a USB drive or an SD-card and booted directly from U-Boot in UEFI mode. Some users prefer to use the netboot "image", which in practice consists of a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files.
However, booting this in UEFI mode is not straightforward, unless you use a TFTP server, which is also not trivial. Less known to users, there is also a corresponding mini.iso image, which contains all of the above plus a bootloader. This offers a simpler alternative for installation, but depending on your (vendor) U-Boot version this still requires going through installation media.
Systemd version 257-rc2 comes with a great new feature: the ability to include multiple DTB files in a single UKI (Unified Kernel Image) file, with systemd-stub automatically loading the appropriate one for the current hardware. A UKI file combines a UEFI boot stub program, a Linux kernel image, an optional initrd, and further resources in a single UEFI PE file. This finally solves the DTB problem in the UEFI world for distributions, as a single image can work on multiple boards.
Building upon this, debian-installer on riscv64 now also creates a UEFI UKI mini.efi image, which contains systemd-stub, a Linux kernel, an initrd, plus a set of Device Tree Blob (DTB) files. Using this image also ensures that the system is booted in UEFI mode. Booting it with debian-installer is as simple as:
load mmc 0:1 $kernel_addr_r mini.efi # (can also be done using tftpboot, wget, etc.)
bootefi $kernel_addr_r

Additional parameters can be passed to the image using the U-Boot bootargs environment variable. For instance, to boot in rescue mode:
setenv bootargs "rescue/enable=true"

October/November in KDE Itinerary
In the two months since the previous summary, KDE Itinerary got a new trip map view, per-trip statistics and better Android integration, and we have been preparing for Transitous' move to MOTIS v2, to name just a few things.
New Features

Trip map

The move to a per-trip timeline view described in the previous issue enables a bunch of new per-trip features.
The first of those is a map view for all activities and locations related to a trip.
Trip map view in Itinerary.

Locations and intermediate stops are interactive and show additional information when clicked.
Trip statistics

Itinerary now also shows a summary of the traveled distance, the estimated CO₂ emissions and the total cost for each trip. The cost is based on information extracted from travel documents or entered manually for individual entries, and is automatically converted into your home currency if currency conversion is enabled in the settings.
Trip statistics in Itinerary.

Pre-defined locations for journey searches

All locations involved in the current trip are now added to the location search history for train or bus journey searches. They appear similarly to entries from past searches (but unlike those cannot be deleted).
This avoids having to re-enter locations that you'll likely need for public transport searches during a trip.
New timeline layout

There's ongoing work on an updated timeline layout, in particular for public transport journeys, making the effects of delays and disruptions easier to see. It's not yet finished and integrated at the time of writing (but might very well be by the time this post is published), so it will be covered next time. In the meantime you'll probably get to see pictures of it in one of the next "This Week in KDE Apps" posts on Planet KDE.
Infrastructure Work

MOTIS v2

Transitous, the community-run free and open public transport routing service used by Itinerary, is preparing to move to the next major version of the MOTIS routing engine.
This meant implementing support for the new MOTIS API in KPublicTransport. The new API provides more control over access and egress modes and transfer times, and returns more detailed information for foot paths (more details here).
Transfer foot path shown on a map.

MOTIS v2 brings considerable performance improvements for OSM-based routing, which should make it viable for Transitous to deploy door-to-door routing rather than just the current stop-to-stop routing. That is also a prerequisite for eventually enabling support for shared vehicle routing.
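For the curious, a door-to-door routing query against the new API looks roughly like the following. The endpoint and parameter names here are assumptions based on MOTIS v2's OTP-style REST interface as deployed by Transitous, so consult the MOTIS documentation before relying on them:

# Assumed endpoint and parameters, sketch only: request a door-to-door
# journey between two coordinates from the Transitous MOTIS v2 instance.
curl 'https://api.transitous.org/api/v1/plan?fromPlace=52.5200,13.4050&toPlace=48.1374,11.5755'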
Android platform integration

There have also been a number of improvements to Android platform integration, most of which benefit not just Itinerary but all KDE Android apps.
- Fixed the translation lookup order when non-US English is used as one of multiple languages in the system settings.
- Fixed deploying and loading of Qt’s own translation catalogs.
- Implemented support for the 24h time format platform settings in locales that usually use a 12h format (CR 600295, available in Qt 6.8.1).
- Enabled support for app-specific language selection in Android 13 or higher (see the sketch after this list).
- Foundational work on changing the application language at runtime in Qt and KDE Frameworks. For complete support, re-evaluating locale-API-related expressions in QML is still missing (CR 599626).
There are some more details in a dedicated post about this.
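As a side note, on stock Android, opting an app into the Android 13 per-app language selection is done by declaring its supported locales in the manifest. A minimal sketch of the standard Android mechanism follows; the locales_config.xml file name is a convention, and this is not necessarily how KDE's Android tooling wires it up:

<!-- AndroidManifest.xml: reference the locale list -->
<application android:localeConfig="@xml/locales_config">

<!-- res/xml/locales_config.xml: the locales the app supports -->
<locale-config xmlns:android="http://schemas.android.com/apk/res/android">
    <locale android:name="en"/>
    <locale android:name="de"/>
    <locale android:name="fr"/>
</locale-config>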
Matrix session verification

While trip synchronization over Matrix unfortunately didn't get done in time for 24.12 after all, there's nevertheless progress here.
Most notable is support for Matrix session verification, which is a prerequisite for end-to-end encrypted (E2EE) communication.
Emoji-based Matrix session verification.

After a fix in libQuotient, the trip syncing code can now also handle attached documents, using Matrix's built-in E2EE file sharing.
Lobbying & Politics

Last week I attended a networking event hosted by DELFI, together with a couple of others representing various FOSS and Open Data projects and communities. DELFI is the entity producing Germany's national aggregated public transport datasets. Those are used by Transitous, and thus indirectly by Itinerary, and getting in direct contact with the people working on this is obviously useful.
- Most of what I heard was sensible and aligns with what we'd like to see as well, e.g. around quality gates for the datasets. One major exception was Deutsche Bahn's refusal to allow DELFI to publish their realtime data. Coincidentally, a new law mandating exactly that had its first reading in the second chamber of the German parliament that very day, so fortunately the pressure to publish this data just keeps increasing.
- Lacking a better "official" channel, some of the DELFI teams actually use the community-provided GitHub issue repositories as feedback channels for data issues. That works; any channel to get fixes upstream is a good one.
- The current realtime feed is supposed to cover about 70% of the static data and to contain up to 22k concurrent trips according to DELFI, which is very far from what we actually see in Transitous currently. Where exactly that difference comes from isn't fully understood yet, but knowing the expectation and having people to talk to should help with resolving that.
This was the first time DELFI ran such an event open to external participants. It is good to see some things slowly turning in the right direction after years of pushing by the community. There is still much more to wish for, of course, such as deeper collaboration and integration between the stop registry and OSM.
Fixes & Improvements

Travel document extractor

- New or improved extractors for Agoda, Booking.com, Eurostar/Thalys, Flixbus, lu.ma, NH Hotels, NS, planway.com, Renfe, SBB, Thai State Railway, Trenitalia and VietJet Air.
- Support for automatic price extraction from Apple Wallet files that use the currencyCode field.
All of this has been made possible thanks to your travel document donations!
Public transport data

- Unfortunately Navitia ended their service, so the corresponding backend has been removed. For most affected areas there are fortunately alternatives available by now, such as Transitous.
- Added support for the NS onboard API.
- Translated backend information is now reloaded correctly when the system language changes.
- Implausible turns in railway paths found in German GTFS shapes or Hafas responses are now filtered out.
- The journey query API now has support for indicating a required bike transport, for specifying direct transportation preferences and for returning rental vehicle booking deep links.
- More Wikidata logo properties are considered for line logos now.
- Favorite locations used in a trip are now also part of a trip export.
- Transfers can now be added in more cases.
- Statistics also include transfers and live data where available, which should yield more accurate results.
- Incremental import of multi-traveler reservations has been fixed.
- The weather forecast on the entire day of departure and day of arrival is now included in the trip timeline.
- Checking for updates and downloading map data are now per-trip rather than global actions.
- Added a confirmation question before clearing the stop search history.
- Enabled importing GIF images on Android.
- Fixed importing DB return tickets via the online import.
- Fixed barcode scanning when using Itinerary as a Flatpak.
- Fixed display of canceled journeys.
- Fixed initial Matrix account setup so that it works without needing to restart the app.
- A fix for QtLocation map updates sometimes getting stuck until an application restart is still stuck in review (affects not just Itinerary).
Feedback and travel document samples are very much welcome, as are all other forms of contributions. Feel free to join us in the KDE Itinerary Matrix channel.
This Week in Plasma: Disable-able KWin Rules
This week there was a flurry of UI polishing work and a nice new feature to go along with the usual background level of bug-fixing. Some of the changes are quite consequential, addressing things that have been minor pain points for years. So hopefully this will be a crowd-pleasing week! If that's the case, consider directing your pleased-ness at KDE's year-end fundraiser! As of the time of writing, we're at 98% of our goal, and it would be amazing to get to 100% by the end of November!
Notable New Features

It's now possible to temporarily disable KWin window rules rather than fully deleting them. (Ismael Asensio, 6.3.0. Link)
Notable UI Improvements

Discover no longer shows you a bunch of irrelevant information about Flatpak runtime packages. (Nate Graham, 6.2.4. Link)
The User Switcher widget now has a more sensible default height, fitting its content better. (Blazer Silving, 6.2.4. Link)
When you've disabled window thumbnails in the Task Manager widget, it now shows normal tooltips for open windows, rather than nothing at all. (Nate Graham, 6.3.0. Link)
Wireless headphones that properly expose battery information (as opposed to headsets, which include a microphone) now get a better icon in the Power and Battery widget and in low battery notifications. (Kai Uwe Broulik, 6.3.0. Link 1 and link 2)
KWin's Slide Back effect now has a duration that better matches the duration of other effects and animations, and responds more predictably and consistently to non-default global animation speed settings. (Blazer Silving, 6.3.0. Link)
On Ubuntu-based distros, the icon that appears in the System Tray to alert you to an available major update is now symbolic when using the Breeze icon theme, matching other such icons. (Nate Graham, Frameworks 6.9. Link)
Resizing windows for Qt-Quick-based apps should now look significantly better and smoother. (David Edmundson, Qt 6.9.0. Link)
Qt is now capable of displaying color emojis interspersed with black-and-white text when using the default font settings in Plasma. (Eskil Abrahamsen Blomfeldt, Qt 6.9.0. Link)
Notable Bug Fixes

Fixed a case where Plasma could crash when you dismiss a notification about network changes. (Nicolas Fella, 6.2.4. Link)
Fixed a case where KWin could crash after running out of file descriptors when using certain non-Intel GPU drivers. (Xaver Hugl, 6.2.5. Link)
System Settings no longer crashes when you plug in a mouse while viewing the Mouse page. (Nicolas Fella, 6.2.5. Link)
Fixed a strange issue that would cause notifications to be mis-positioned after the first time you dragged any widgets that were on the desktop. This turned out to have been caused by the Plasma config file having old crusty System Tray widgets in it left over from prior Plasma customizations, which were competing for control over the positions of notifications. Now they're cleaned up properly, which also reduces memory usage, removes a ton of cruft in the config file, and may resolve other mysterious and random-seeming issues with notifications being positioned incorrectly. (Marco Martin, 6.2.5. Link 1 and link 2)
Fixed a bug that could make panels in "Fit Content" mode sometimes be too small when Plasma loads. (Niccolò Venerandi, 6.3.0. Link)
Fixed one of the last remaining known bugs relating to desktop icons shifting around: this time due to always-visible panels loading after the desktop and sometimes pushing the icons away. Now that doesn't happen anymore! (Akseli Lahtinen, 6.3.0. Link)
Fixed a regression, caused by a change elsewhere in the stack interacting poorly with some questionable code on our side, that made the highlight effect on the tab bar in the expanded view for the active network in the Networks widget invisible. (Harald Sitter, 6.3.0. Link)
Fixed a regression introduced with Frameworks 6.7 that caused many pieces of selected text in Plasma and QtQuick-based apps to become inappropriately de-selected after right-clicking on them. (Akseli Lahtinen, Frameworks 6.9. Link)
Made KWin more robust against apps that send faulty HDR metadata, so that it is less likely to crash when encountering this condition. (Xaver Hugl, 6.2.5. Link)
Made Plasma more robust against faulty widgets; it is now more likely to show a comprehensible error message rather than crashing. (Nicolas Fella, 6.2.5. Link)
Made Plasma more robust against malformed .desktop files; it is now similarly robust against more types of broken files. (Alexander Lohnau, 6.2.4. Link)
Other bug information of note:
- 3 Very high priority Plasma bugs (up from 2 last week). Current list of bugs
- 37 15-minute Plasma bugs (up from 35 last week). Current list of bugs
- 125 KDE bugs of all kinds fixed over the last week. Full list of bugs
How You Can Help

KDE has become important in the world, and your time and contributions have helped us get there. As we grow, we need your support to keep KDE sustainable.
You can help KDE by becoming an active community member and getting involved somehow. Each contributor makes a huge difference in KDE — you are not a number or a cog in a machine!
You don’t have to be a programmer, either. Many other opportunities exist:
- Filter and confirm bug reports, maybe even identify their root cause
- Contribute designs for wallpapers, icons, and app interfaces
- Design and maintain websites
- Translate user interface text items into your own language
- Promote KDE in your local community
- …And a ton more things!
You can also help us by donating to our yearly fundraiser! Any monetary contribution — however small — will help us cover operational costs, salaries, travel expenses for contributors, and in general just keep KDE bringing Free Software to the world.
To get a new Plasma feature or a bugfix mentioned here, feel free to push a commit to the relevant merge request on invent.kde.org.
Russell Coker: Links November 2024
Interesting news about NVidia using RISC-V CPUs in all their GPUs [1]. Hopefully they will develop some fast RISC-V cores.
Interesting blog post about using an 8K TV as a monitor; I'm very tempted to do this [2].
Interesting lecture at the seL4 symposium about the PANCAKE language for verified systems programming [5]. The idea that "if you are verifying your code, types don't help much" is interesting.
Interesting lecture from the seL4 summit about Cog's work building a commercial virtualised phone [7]. He talks about not building a "brick of a smartphone that's obsolete 6 months after release"; is he referring to the Librem5?
Linus Tech Tips did a walk-through of an Intel fab; I learned a few things about CPU manufacturing [9].
Interesting lecture about TEE on RISC-V with the seL4 kernel [11].
The quackery of Master Bates, which allegedly removes the need for glasses, is still going around [13].
- [1] https://tinyurl.com/2396vhdt
- [2] https://daniel.lawrence.lu/blog/y2023m12d15/
- [3] https://tinyurl.com/25d3n66c
- [4] https://notes.pault.ag/complex-for-whom/
- [5] https://www.youtube.com/watch?v=uAHVV0Lzopw
- [6] https://www.youtube.com/watch?v=oyWB3jLj_P8
- [7] https://www.youtube.com/watch?v=lI_Rn-riJ-Y
- [8] https://github.com/TravMurav/Qcom-Secure-Launch
- [9] https://www.youtube.com/watch?v=2ehSCWoaOqQ
- [10] https://x.com/gak_pdx/status/1860448570446610562
- [11] https://www.youtube.com/watch?v=kRgLIb4eO8U
- [12] https://diziet.dreamwidth.org/19151.html
- [13] https://www.youtube.com/watch?v=nvUB-tORITk
FSF Blogs: FSD meeting recap 2024-11-29
Dirk Eddelbuettel: RcppAPT 0.0.10: Maintenance
A new version of the RcppAPT package arrived on CRAN earlier today. RcppAPT connects R to the C++ library behind the awesome apt, apt-get, apt-cache, … commands (and their cache) which power Debian, Ubuntu and other derivative distributions.
RcppAPT allows you to query the (Debian or Ubuntu) package dependency graph at will, with build-dependencies (if you have deb-src entries), reverse dependencies, and all other goodies. See the vignette and examples for illustrations.
This release moves the C++ compilation standard from C++11 to C++17. I had removed the C++11 setting last year as compilation 'by compiler default' worked well enough. But the version at CRAN still carried it, which started to lead to build failures on Debian unstable, so it was time for an update. And rather than implicitly relying on C++17 as selected by the last two R releases, we made it explicit. Otherwise a few of the regular package and repository updates have been made, but no new code or features were added. The NEWS entries follow.
Changes in version 0.0.10 (2024-11-29)

- Package maintenance updating continuous integration script versions as well as the coverage link from README, and switching to Authors@R
- C++ compilation standard updated to C++17 to comply with libapt-pkg
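Making the standard explicit in an R package is typically a one-line setting in src/Makevars; a minimal sketch of that standard mechanism from Writing R Extensions (not necessarily the exact change made in RcppAPT):

# src/Makevars (sketch): request the C++17 standard explicitly
# instead of relying on the compiler default
CXX_STD = CXX17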
Courtesy of my CRANberries, there is also a diffstat report for this release. A bit more information about the package is available here as well as at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Anwesha Das: Keynote at PyLadiesCon!
Since the very inception of my journey in Python and PyLadies, I have always dreamed of having a PyLadies conference, a celebration of PyLadies. There were conversations here and there, but nothing came to fruition back then. In 2023, Mariatta, Cheuk, Maria Jose, and many more PyLadies volunteers around the globe made this dream come true, and we had our first-ever PyLadiesCon.
I submitted a talk for the first-ever PyLadiesCon (how could I not?), and it was rejected. In 2024, I missed the CFP deadline. I was sad. Would I never be able to participate in PyLadiesCon?
On October 10th, 2024, I had my talk at PyCon NL. I woke up early to practice. Then I saw an email from PyLadiesCon, titled "Invitation to be a Keynote Speaker at PyLadiesCon". The panic call went to Kushal Das: "Check if there is any attack on the Python server? I got a spam email about PyLadiesCon, and the address is correct." "No, nothing," replied Kushal after checking. Wait, then WHAT??? PyLadiesCon wants me to give the keynote. THE KEYNOTE at PyLadiesCon.
Thank you Audrey for conceptualizing and creating PyLadies, our home.
And here I am now. I will give the keynote on 7 December 2024 at PyLadiesCon, on how PyLadies gave me purpose. See you all there.
Dreams do come true.