Feeds
Matt Layman: Cloud Migration Beginning - Building SaaS #202
KDE Ships Frameworks 6.6.0
Friday, 13 September 2024
KDE today announces the release of KDE Frameworks 6.6.0.
KDE Frameworks are 72 addon libraries to Qt which provide a wide variety of commonly needed functionality in mature, peer reviewed and well tested libraries with friendly licensing terms. For an introduction see the KDE Frameworks release announcement.
This release is part of a series of planned monthly releases making improvements available to developers in a quick and predictable manner.
New in this version

Attica
- Ci: add Alpine/musl job. Commit.
- Set up crash handling for baloo_file. Commit.
- Remove 48px draw-freehand symlink. Commit. Fixes bug #491887
- Add info(-symbolic) icon symlinks. Commit.
- Add new 64px dialog icons. Commit.
- Add icon for Apple Wallet passes bundle. Commit.
- Add battery icons with power profile emblems. Commit. See bug #483805
- Add remaining symbolic icons required for Discover. Commit.
- Update accessibility icons. Commit.
- Ci: add Alpine/musl job. Commit.
- Add system-reboot-update and system-shutdown-update icons. Commit.
- Add a couple of missing monochrome category icons. Commit.
- Don't generate symlinks for app icons if we don't install the icons. Commit.
- Fix issues with zoom-map icons. Commit.
- Add Spinbox-specific decrease and increase icons. Commit. See bug #491312
- Make list-remove look like a red X. Commit.
- ECMQueryQt: don't cache QUERY_EXECUTABLE. Commit.
- Add fallback value for SASL_PATH. Commit.
- Add SASL_PATH to prefix.sh so that libkdexoauth2.so is found. Commit.
- Allow qml target to be actually optional. Commit.
- Fix FindLibExiv2 version detection from header. Commit.
- Ci: add Alpine/musl job. Commit.
- ECMEnableSanitizers: fix greedy linker parameter replacement. Commit.
- Add private code option to ecm_add_qtwayland_(client/server)_protocol. Commit.
- Add a PRIVATE_CODE option to ecm_add_wayland_server_protocol. Commit.
- Add [PRIVATE_CODE] also to the second signature of ecm_add_wayland_server_protocol. Commit.
- Unify format string usage. Commit.
- Avoid double assignment. Commit.
- Remove unnecessary escape. Commit.
- COLS_IN_ALPHA_INDEX has been deprecated. Commit.
- Footer.html correct URL for trademark_kde_gear_black_logo.png. Commit.
- Ci: add Alpine/musl job. Commit.
- HelperSupport: don't send debug message on application shutting down. Commit.
- Export KCalendarCore namespace to QML. Commit.
- Add read support for xCal events. Commit.
- Add KF7 TODOs to make ICalFormat::fromString methods static. Commit.
- Refactor libical <-> KCalendarCore enum conversion. Commit.
- Avoid computing the next recurrence interval based on an invalid time. Commit.
- Use passkey to avoid issues with private constructor. Commit.
- Additional public API. Commit.
- [kcolorschememanager] Fix crash. Commit. Fixes bug #492408
- Add KColorSchemeManager::instance in favor of public constructor. Commit.
- KStandardAction: Use windowIcon for AboutApp icon again. Commit.
- Fix macro documentation. Commit.
- Ci: add Alpine/musl job. Commit.
- Fix warning from staterc migration when there's no "old" file to migrate. Commit.
- Update KF6 TODO comments to KF7 given that they weren't addressed in KF6. Commit.
- Make the dependency on QML optional. Commit.
- ExportUrlsToPortal: check for dbus error. Commit.
- KDirWatch: don't try inotify again if it has already failed. Commit.
- Relicense some files from lgpl2-only to lgpl2.1-or-later. Commit.
- Ci: add Alpine/musl job. Commit.
- KPluginMetaData: Avoid reading metadata from plugin loader twice. Commit.
- Kcoreaddons_add_plugin: Fix typo in error message. Commit.
- Fix configuring error when QtQml is not around. Commit.
- Document that KCrash::initialize should be called after KAboutData. Commit.
- Drop ptrace forwarding code. Commit.
- Ci: add Alpine/musl job. Commit.
- Ci: add Alpine/musl job. Commit.
- UserMetadata: fix Win k_setxattr. Commit.
- [OfficeExtractor] Do not add word/line count if nothing has been extracted. Commit.
- [OfficeExtractor] Only try to extract content if PlainText is requested. Commit.
- [OfficeExtractor] Remove duplicate findExecutable calls, fix debug output. Commit.
- Cmake: Use KDE_INSTALL_FULL_LIBEXECDIR_KF instead of manual path mangling. Commit. Fixes bug #491462
- Add missing initializer for "Empty" PropertyInfo displayName. Commit.
- [Taglib] Use non-deprecated constructors for MPEG::File/FLAC::File. Commit.
- [Taglib] Replace deprecated length() with lengthInSeconds(). Commit.
- Ci: add Alpine/musl job. Commit.
- Port towards QNativeInterface. Commit.
- Fail at CMake configure time if xcb Qt feature is not enabled. Commit.
- Waylandclipboard: Don't explicitly clear when transferring sources. Commit.
- [kjobwidgets] Store window in a QPointer. Commit. See bug #491637. See bug #448532
- Ci: add Alpine/musl job. Commit.
- KOverlayIconEngine: Adjust to API change in Qt 6.8 in scaled pixmap hook. Commit.
- Generate wayland code with PRIVATE_CODE. Commit.
- Ci: add Alpine/musl job. Commit.
- Spinboxdoc. Commit.
- Formatting. Commit.
- Ci: add Alpine/musl job. Commit.
- Fix test to actually use a QDoubleSpinBox as intended here. Commit.
- Unambiguous documentation of formatString. Commit.
- Allow building without breeze-icons. Commit.
- Extend initTheme to ensure we properly follow the system colors. Commit.
- Add path for Android to theme locations. Commit.
- Take logical pixels in KIconEngine::createPixmap. Commit.
- [kiconengine] Adapt to Qt behavior change in scaledPixmap. Commit. Fixes bug #491677
- XCF: fix crash. Commit.
- README update. Commit.
- RGB: added options support. Commit.
- PCX: added options support. Commit.
- Fix crash on malformed files. Commit.
- Test: skip kfilewidgettest focus test in Wayland. Commit.
- KFileWidget: Fix selecting directories. Commit.
- [trash] Fix restoring entries with absolute paths in disk-local trash. Commit. Fixes bug #463751
- Add missing include. Commit.
- Also search kservices5 for service menus. Commit.
- Accept service menus that use ServiceTypes to specify their types. Commit. Fixes bug #478030
- Ignore application/x-kde-onlyReplaceEmpty in paste dialog. Commit. Fixes bug #492006
- Apply 1 suggestion(s) to 1 file(s). Commit.
- PasteDialog: hide application/x-kde-* formats from the combobox. Commit.
- PreviewJob: remove obsolete support X-KDE-Protocols aware thumbnailer. Commit.
- Fix documentation for ThumbnailRequest. Commit.
- Remove dead code for changing job priorities. Commit.
- Remove unused sslMetaData member. Commit.
- Remove unneeded friend. Commit.
- KFileWidget: Enable word wrapping for the message widget. Commit.
- Add missing include. Commit.
- Remove unused function. Commit.
- Consistently use WITH_QTDBUS instead of USE_DBUS. Commit.
- Previewjob: use contains instead of supportsMimetype. Commit.
- Document variable purposes and move EntryInfo definition to the start. Commit.
- KPropertiesDialog: Add "unknown" fallback. Commit.
- KPropertiesDialog: Use original URL for extra fields. Commit.
- Remove unneeded QPointer usage. Commit.
- PreviewJob: some refactoring. Commit.
- KUrlNavigator: Support modifiers on return similar to web browsers. Commit.
- PreviewJob: fix warnings and a todo. Commit.
- KDynamicJobTracker: Use widgets fallback if server says job tracker is required. Commit.
- Correctly escape unit names. Commit. Fixes bug #488854
- KRecentDocument: add removeApplication and removeUrl. Commit. See bug #480276
- Gui/kprocessrunner: normalize working directory. Commit. Fixes bug #490966
- DropJob: special-case "downloading http URLs" drop with better text. Commit.
- Don't show "Move" item in drop menu for source files accessed using http. Commit. Fixes bug #389600
- Set up crash handling for kiod. Commit.
- Deprecate leftovers from HTTP cache control. Commit.
- KUrlNavigator: Decode url title fully. Commit.
- Make sure KCrash works for kioworker. Commit.
- Previewjob: Use thumbnailer files for any mimetypes we don't have a plugin for. Commit.
- Disable cachegen. Commit. Fixes bug #488326
- PlaceholderMessage: Remove the icon opacity if the message is actionable. Commit.
- ToolBarLayout: Add test for dynamic actions. Commit.
- Read willShowOnActive value as Variant and convert to Bool. Commit.
- PrivateActionToolButton: Replace onVisibleChanged with Connections. Commit.
- Fix registration name for WheelEvent. Commit.
- [icon] Only reload icon from theme if the theme has that icon. Commit. Fixes bug #491806. Fixes bug #491854. Fixes bug #491848
- PrivateActionToolButton: Hide menu if button is hidden. Commit. Fixes bug #486107
- Allow recoloring of Android icon theme. Commit.
- Ci: add Alpine/musl job. Commit.
- [icon] Fix icon colors when using Plasma platformtheme and QIcon source. Commit. Fixes bug #491274
- ShadowedImage: Expose Image.status via a readonly alias. Commit.
- PromptDialog: fix buttons overflow. Commit.
- Relicense Chip to LGPL. Commit.
- Relicense LoadingPlaceholder to LGPL. Commit.
- Remove unused license text. Commit.
- Convert license statements to SPDX. Commit.
- Ci: add Alpine/musl job. Commit.
- Fix build without Qml. Commit.
- KColumnHeadersModel: Fix manual test. Commit.
- KExtraColumnsProxyModel: port to Qt 6.8's QIdentityProxyModel::setHandleSourceLayoutChanges. Commit.
- Ci: add Alpine/musl job. Commit.
- Only install D-Bus interface files when actually building with D-Bus. Commit.
- Add especially crappy magic to deal with transient parents in actions. Commit. Fixes bug #491083
- Make staticxmlprovider (more) reentrant. Commit.
- Make AtticaProvider reentrant. Commit.
- Typos--. Commit.
- Don't set desktop file name for XDG activation token. Commit.
- Add missing include guard. Commit.
- Ci: add Alpine/musl job. Commit.
- Ci: add Alpine/musl job. Commit.
- Make tests a bit faster. Commit.
- Fix clashing and missing keyboard accelerators. Commit.
- Add help texts for new editing commands. Commit.
- Move sort implementation to C++. Commit. Fixes bug #478250
- Move natsort to C++ and implement it using QCollator. Commit.
- Move the sortuniq, uniq implementation to C++. Commit. Fixes bug #478250
- Read and write font features to config. Commit.
- Fix doc.text() when first block is empty. Commit.
- Fix block splitting. Commit.
- Try to make test more robust. Commit.
- Restore previous indentation test mode based on individual files. Commit.
- Store startlines in the buffer instead of block. Commit.
- Doc: Fix code example for plugin hosting. Commit.
- No 10 second timeouts, the CI is not that consistent fast. Commit.
- Try to make test more stable. Commit.
- Improve encoding detection. Commit. Fixes bug #487594
- Fix grouping on config dialog page. Commit. Fixes bug #490617
- Optimize cursorToOffset. Commit.
- Don't indent on tab when in block selection mode. Commit. Fixes bug #448695
- Fix selection printing. Commit. Fixes bug #415570
- Ci: add Alpine/musl job. Commit.
- Fix unused-variable warning (with clang) when HAVE_SPEECH isn't set. Commit.
- Deprecate KPluralHandlingSpinBox. Commit.
- Hide toolbar when not needed (e.g. in tabbed view). Commit.
- FontChoose: Allow setting font features when selecting font. Commit. Fixes bug #479686
- Expand tabbar in KPageView. Commit.
- [kjobwidgets] Store window in a QPointer. Commit. Fixes bug #491637. See bug #448532
- Support page headers in KPageView. Commit.
- Generate wayland code with PRIVATE_CODE. Commit.
- Port to KStandardActions where possible. Commit.
- Ci: add Alpine/musl job. Commit.
- Ci: add Alpine/musl job. Commit.
- Fix WITH_QUICK=OFF by moving ECMQmlModule behind the conditional. Commit.
- Ci: add Alpine/musl job. Commit.
- Ci: add Alpine/musl job. Commit.
- Revert "fail if none of the plugins can be built". Commit.
- Fail if none of the plugins can be built. Commit.
- Update file spellcheckhighlighter.cpp. Commit.
- Quick: Silence valgrind warnings. Commit.
- Fix uic warning. Commit.
- Ci: add Alpine/musl job. Commit.
- Inc version. Commit.
- Update QFace IDL Definition to support single line comments. Commit.
- Odin.xml: Multiple fixes to the syntax. Commit.
- QFace: WordDetect without trailing space. Commit.
- Bash: add \E, \uHHHH and \UHHHHHHHH String Escape, @k parameter transformation and & in string substitution. Commit.
- Zsh: add \E, \uHHHH, \UHHHHHHHH and single \ as String Escape. Commit.
- Add definition for the QFace IDL. Commit.
- Modelines: add missing variables and delete some that don't work. Commit.
- Add Kate Config syntax (.kateconfig file). Commit.
- Modelines: fix spaces after remove-trailing-spaces value; multiple values now stop parsing. Commit.
- Modelines: fix indent-mode value and remove deprecated variables. Commit.
- Detect-identical-context.py: add -p to show duplicate content. Commit.
- XML: add parameter entity declaration symbol (% in ). Commit.
- Indexer: suggest removing .* and .*$ from RegExpr with lookAhead=1. Commit.
- PHP: add { in double quote string as a Backslash Code. Commit. Fixes bug #486372
- Zsh: fix escaped line in brace condition. Commit.
- Bash: fix escaped line in brace condition. Commit. Fixes bug #487978
- Nix: fix string in attribute access. Commit. Fixes bug #491436
- Optimize AbstractHighlighterPrivate::ensureDefinitionLoaded (highlighter_benchmark is 1.1% faster). Commit.
- Ci: add Alpine/musl job. Commit.
- Indexer: suggest replacing RegExpr with Int or Float when possible. Commit.
- Add JSX as an alternative name for JavaScript React (JSX) and TSX for TypeScript React (TSX). Commit.
- Indexer: check name and alternativeNames conflict; update alternativeNames for generated files. Commit.
- Optimize Repository::addCustomSearchPath. Commit.
- Replace QList contextDatas with std::vector: implicit sharing is not useful. Commit.
- Prefer range-based loops to loop over iterators. Commit.
- Optimize Definition::foldingEnabled() by calculating the result during loading. Commit.
- Replace DefinitionRef type with DefinitionData* in immediateIncludedDefinitions. Commit.
- Remove code duplication related to context resolution. Commit.
- Replace std::cout / std::cerr with fprintf. Commit.
- Use QStringView as key for format. Commit.
- Replace QStringView::mid,left,right with sliced. Commit.
- Replace QString::split with QStringTokenizer to avoid unnecessary list construction. Commit.
- Orgmode: add syntax highlighting to some languages. Commit.
- Optimize Repository::definition[s]For*(). Commit.
- Reduce QFileInfo usage. Commit.
- Theme_contrast_checker.py: add --scores to modify rating values. Commit.
- Theme_contrast_checker.py: displays the selected color space. Commit.
- Theme_contrast_checker.py: fix label inversion between bold and normal text. Commit.
- Ksyntaxhighlighter6: add --background-role parameter to display different possible theme backgrounds. Commit.
- Ksyntaxhighlighter6: rename ansi256Colors format to ansi256. Commit.
- Theme_contrast_checker.py: fix help of -M parameter. Commit.
- Added a link to all available syntaxes and how to test a file in an isolated environment. Commit.
- Python: raw-string with lowercase r are highlighted as regex. Commit.
- JSON: fix float that start with 0. Commit.
- JSON: add jsonlines and asciicast v2 format extensions. Commit.
- Ci: add Alpine/musl job. Commit.
Dirk Eddelbuettel: RcppArmadillo 14.0.2-1 on CRAN: Updates
Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1164 other packages on CRAN, downloaded 36.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 595 times according to Google Scholar.
Conrad released two small incremental releases to version 14.0.0. We did not immediately bring these to CRAN as we have to be mindful of the desired upload cadence of "once every one or two months". But as 14.0.2 has been stable for a few weeks, we now decided to bring it to CRAN. Changes since the last CRAN release are summarised below, and overall fairly minimal. On the package side, we reorder what citation() returns, and now follow CRAN requirements via Authors@R.
Changes in RcppArmadillo version 14.0.2-1 (2024-09-11)
- Upgraded to Armadillo release 14.0.2 (Stochastic Parrot)
- Optionally use C++20 memory alignment
- Minor corrections for several corner-cases
- The order of items displayed by citation() is reversed (Conrad in #449)
- The DESCRIPTION file now uses an Authors@R field with ORCID IDs
Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Four Kitchens: Get ready for Drupal 11: An essential guide
Backend Engineer
A graduate of the University of Costa Rica with a passion for programming, Yuvania is driven to constantly improve, study, and learn new technologies to be better every day.
Preparing for Drupal 11 is crucial to ensure a smooth transition, and we're here to help you make it easy and efficient. This guide offers clear steps to update your environment and modules, perform thorough tests, and use essential tools like Upgrade Status and Drupal Rector.
Don't fall behind! Making sure your site is ready for the new features and improvements Drupal 11 brings will make the upgrade work quick and easy.
Read on to learn how to keep your site updated and future-proof.
Ensure your environment is ready
- Upgrade to PHP 8.3: Ensure optimal performance and compatibility with Drupal 11
- Use Drush 13: Make sure you have this version available in your development or sandbox environment
- Database requirements: Ensure your database meets the requirements for Drupal 11:
- MySQL 8.0
- PostgreSQL 16
- Web server: Drupal 11 requires Apache 2.4.7 or higher. Keep your server updated to avoid compatibility issues.
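A quick way to sanity-check those minimums is a small shell helper. This is a sketch: the `ver_ge` function and the sample version strings are illustrative, not from the original guide; in practice you would feed in the output of `php -v`, `mysqld --version`, and `apachectl -v`.

```shell
# ver_ge A B — succeeds when version A is >= version B (uses GNU sort -V)
ver_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare sample installed versions against the Drupal 11 minimums above
ver_ge "8.3.10" "8.3"   && echo "PHP OK"
ver_ge "8.0.39" "8.0"   && echo "MySQL OK"
ver_ge "2.4.62" "2.4.7" && echo "Apache OK"
```

Note that `sort -V` compares version components numerically, so 2.4.62 correctly ranks above the required 2.4.7.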
Upgrade to Drupal 10.3. Before migrating to Drupal 11, update your site to Drupal 10.3 to handle all deprecations properly. Drupal 10.3 defines all deprecated code to be removed in Drupal 11, making it easier to prepare for the next major update.
Update contributed modules. Use Composer to update all contributed modules to versions compatible with Drupal 11. The Upgrade Status module will help identify deprecated modules and APIs. Ensure all modules are updated to avoid compatibility issues.
Fix custom code. Use Drupal Rector to identify and fix deprecations in your custom code. Drupal Rector automates much of the update process, leaving "to do" comments where manual intervention is needed. Perform a manual review of critical areas to ensure everything functions correctly.
Run tests in a safe environment. Conduct tests in a safe environment, such as a local sandbox or cloud IDE. It's likely to fail at first, but it's essential to run multiple tests until you achieve a successful result. Use:
- composer update --dry-run to simulate the update without making changes
- composer why-not drupal/core 11.0 if there are issues, identify which dependencies require an earlier version of Drupal
Compatibility tools. Install and use the Upgrade Status module to ensure your site is ready. This module provides a detailed report on your site's compatibility with Drupal 11. Check for compatibility issues in contributed projects on Drupal.org using the Project Update Bot.
Back up everything. Before updating, ensure you have a complete backup of your code and database. This is crucial to restore your site if something goes wrong during the update.
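As a sketch, the code side of that backup can be as simple as a dated tarball. The paths below are placeholders for illustration only; the database dump would come from a tool such as `drush sql-dump` or `mysqldump`.

```shell
# Create a dated tarball of the codebase (placeholder paths for illustration)
site="$(mktemp -d)"             # stand-in for your Drupal docroot
echo '<?php' > "$site/index.php"

backup_dir="$(mktemp -d)"
tar -czf "$backup_dir/code-$(date +%F).tar.gz" -C "$site" .
ls "$backup_dir"                # one dated code-*.tar.gz file
```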
Considerations for immediate upgrade
You may wonder if you should upgrade your site to Drupal 11 as soon as it's available. Here are some pros and cons to consider:
- Maybe no: Sites can wait until Drupal 10 LTS (long-term support) ends (mid-to-late 2026) and then upgrade. This gives contributed modules time to be fully ready for the update.
- Maybe yes: Upgrading early lets you take advantage of new features and improvements but may introduce new bugs. Additionally, if everyone waits to upgrade, it could delay the readiness of contributed modules for the new version.
While Drupal 10 will be supported for some time, it's advisable to stay ahead with these updates to use the improvements they offer and ensure a smoother, optimized transition.
By following these steps and considerations, your Drupal site will be well prepared for the transition to Drupal 11, ensuring a smooth and uninterrupted experience. Get ready for the new and exciting features Drupal 11 has to offer!
References
- Are You Ready for Drupal 11?
- Drupal 11 on Acquia
- YouTube: Preparing for Drupal 11
- Getting Ready for Drupal 11 – Slide Deck
The post Get ready for Drupal 11: An essential guide appeared first on Four Kitchens.
Will Kahn-Greene: Switching from pyenv to uv
The 0.4.0 release of uv does everything I currently do with pip, pyenv, pipx, pip-tools, and pipdeptree. Because of that, I'm in the process of switching to uv.
This blog post covers switching from pyenv to uv.
History
2024-08-29: Initial writing.
2024-09-12: Minor updates and publishing.
I'm running Ubuntu Linux 24.04. I have pyenv installed using the automatic installer. pyenv is located in $HOME/.pyenv/bin/.
I have the following Pythons installed with pyenv:
$ pyenv versions
  system
  3.7.17
  3.8.19
  3.9.19
* 3.10.14 (set by /home/willkg/mozilla/everett/.python-version)
  3.11.9
  3.12.3

I'm not sure why I have 3.7 still installed. I don't think I use that for anything.
My default version is 3.10.14 for some reason. I'm not sure why I haven't updated that to 3.12, yet.
In my 3.10.14, I have the following Python packages installed:
$ pip freeze
appdirs==1.4.4
argcomplete==3.1.1
attrs==22.2.0
cffi==1.15.1
click==8.1.3
colorama==0.4.6
diskcache==5.4.0
distlib==0.3.8
distro==1.8.0
filelock==3.14.0
glean-parser==6.1.1
glean-sdk==50.1.4
Jinja2==3.1.2
jsonschema==4.17.3
MarkupSafe==2.0.1
MozPhab==1.5.1
packaging==24.0
pathspec==0.11.0
pbr==6.0.0
pipx==1.5.0
platformdirs==4.2.1
pycparser==2.21
pyrsistent==0.19.3
python-hglib==2.6.2
PyYAML==6.0
sentry-sdk==1.16.0
stevedore==5.2.0
tomli==2.0.1
userpath==1.8.0
virtualenv==20.26.2
virtualenv-clone==0.5.7
virtualenvwrapper==6.1.0
yamllint==1.29.0

That probably means I installed the following in the Python 3.10.14 Python environment:
MozPhab
pipx
virtualenvwrapper
Maybe I installed some other things for some reason lost in the sands of time.
Then I had a whole bunch of things installed with pipx.
I have many open source projects all of which have a .python-version file listing the Python versions the project uses.
I think that covers the start state.
Steps
First, I made a list of things I had.
I listed all the versions of Python I have installed so I know what I need to reinstall with uv.
$ pyenv versions

I listed all the packages I have installed in my 3.10.14 environment (the default one).
$ pip freeze

I listed all the packages I installed with pipx.
$ pipx list
I uninstalled all the packages I installed with pipx.
$ pipx uninstall PACKAGE

Then I uninstalled pyenv and everything it uses. I followed the pyenv uninstall instructions:
$ rm -rf $(pyenv root)

Then I removed the bits in my shell that add to the PATH and set up pyenv and virtualenvwrapper.
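That shell-cleanup step can be sketched like this. The grep patterns are an assumption about what the pyenv installer added to the rc file; review your own rc file before deleting anything.

```shell
# Remove pyenv/virtualenvwrapper lines from a shell rc file
cleanup_rc() {
  grep -v -E 'pyenv|virtualenvwrapper' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Demonstration against a throwaway file:
rc="$(mktemp)"
printf '%s\n' \
  'export PATH="$HOME/.pyenv/bin:$PATH"' \
  'eval "$(pyenv init -)"' \
  'alias ll="ls -l"' > "$rc"
cleanup_rc "$rc"
cat "$rc"   # only the unrelated alias line remains
```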
Then I started a new shell that didn't have all the pyenv and virtualenvwrapper stuff in it.
Then I installed uv using the uv standalone installer.
Then I ran uv --version to make sure it was installed.
Then I installed the shell autocompletion.
Note
I have a dotfiles thing and separate out bashrc changes by what changes them. You can see my home-grown thing that works for me here:
https://github.com/willkg/dotfiles
These instructions are specific to my home-grown dotfiles thing.
$ echo 'eval "$(uv generate-shell-completion bash)"' >> ~/dotfiles/bash.d/20-uv.bash

Then I started a new shell to pick up those changes.
Then I installed Python versions:
$ uv python install 3.8 3.9 3.10 3.11 3.12
Searching for Python versions matching: Python 3.10
Searching for Python versions matching: Python 3.11
Searching for Python versions matching: Python 3.12
Searching for Python versions matching: Python 3.8
Searching for Python versions matching: Python 3.9
Installed 5 versions in 8.14s
 + cpython-3.8.19-linux-x86_64-gnu
 + cpython-3.9.19-linux-x86_64-gnu
 + cpython-3.10.14-linux-x86_64-gnu
 + cpython-3.11.9-linux-x86_64-gnu
 + cpython-3.12.5-linux-x86_64-gnu

When I type "python", I want it to be a Python managed by uv. Also, I like having "pythonX.Y" symlinks, so I created a uv-sync script which creates symlinks to uv-managed Python versions:
https://github.com/willkg/dotfiles/blob/main/dotfiles/bin/uv-sync
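The kind of symlinking such a script does can be illustrated with a fake layout. The directory names below are made up for the demonstration; uv's real interpreter install paths differ, and the actual script is linked above.

```shell
# Fake uv-style interpreter layout, then pythonX.Y / python symlinks
root="$(mktemp -d)"
mkdir -p "$root/cpython-3.12.5/bin" "$root/bin"
touch "$root/cpython-3.12.5/bin/python3.12"
chmod +x "$root/cpython-3.12.5/bin/python3.12"

ln -sf "$root/cpython-3.12.5/bin/python3.12" "$root/bin/python3.12"
ln -sf "$root/bin/python3.12" "$root/bin/python"   # bare "python" -> newest
ls -l "$root/bin"
```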
Then I installed all my tools using uv tool install.
$ uv tool install PACKAGE

For tox, I had to install the tox-uv package in the tox environment:
$ uv tool install --with tox-uv tox

Now I've got everything I do mostly working.
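Reinstalling everything from the pipx list made earlier can be scripted. This dry-run sketch just prints the commands; the list file and package names are stand-ins, not taken from the post.

```shell
# Print (rather than run) uv tool install for each saved package name
tools="$(mktemp)"
printf '%s\n' MozPhab yamllint > "$tools"   # stand-in for the saved pipx list
while read -r pkg; do
  echo "would run: uv tool install $pkg"    # drop the echo to actually install
done < "$tools"
```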
So what does that give me?
I installed uv and I can upgrade uv using uv self update.
Python interpreters are managed using uv python. I can create symlinks to interpreters using the uv-sync script. Adding new interpreters and removing old ones is pretty straightforward.
When I type python, it opens up a Python shell with the latest uv-managed Python version. I can type pythonX.Y and get specific shells.
I can use tools written in Python and manage them with uv tool including ones where I want to install them in an "editable" mode.
I can write scripts that require dependencies and it's a lot easier to run them now.
I can create and manage virtual environments with uv venv.
Next steps
- Delete all the .python-version files I've got.
- Update documentation for my projects and add a uv tool install PACKAGE option to installation instructions.
- Probably discover some additional things to add to this doc.
mark.ie: My LocalGov Drupal contributions for week-ending September 13th, 2024
This week's big issue was building a prototype for "Axe Thrower" so we can "throw" multiple URLs at AXE at the same time.
git revert name and Akademy
I reverted my name back to Jonathan Riddell and have now made a new uid for my PGP key; you can get the updated one on keyserver.ubuntu.com or my contact page or my Launchpad page.
Here’s some pics from Akademy
KDE neon Akademy Session
KDE Akademy is meeting this week in Würzburg.
We just had a BoF session where we discussed:
- The current progress on rebasing to Ubuntu 24.04
- The current state of KDE neon Core, our amazing forthcoming Snap-based distro
- Fixing up broken and bitrotting infrastructure
- Moving to KDE invent
- Issues with Plasma 6 upgrades and looking at integrating more QA tests
- How the proposed KDE Linux™ distro fits into the mix
James Bennett: Know your Python container types
This is the last of a series of posts I’m doing as a sort of Python/Django Advent calendar, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post for an introduction.
Python contains multitudes
There are a lot of container types available in the Python standard library, and it can be confusing sometimes to keep track of them all. So since it's …
Real Python: Quiz: Python 3.13: Free-Threading and a JIT Compiler
In this quiz, you’ll test your understanding of the new features in Python 3.13.
By working through this quiz, you’ll revisit how to compile a custom Python build, disable the Global Interpreter Lock (GIL), enable the Just-In-Time (JIT) compiler, determine the availability of new features at runtime, assess the performance improvements in Python 3.13, and make a C extension module targeting Python’s new ABI.
Mario Hernandez: Migrating your Drupal theme from Patternlab to Storybook
Building a custom Drupal theme nowadays is a more complex process than it used to be. Most themes require some kind of build tool such as Gulp, Grunt, Webpack or others to automate many of the repetitive tasks we perform when working on the front-end: tasks like compiling and minifying code, compressing images, linting code, and many more. As Atomic Web Design became a thing, things got more complicated because now, if you are building components, you need a styleguide or Design System to showcase and maintain those components. One of those design systems for me has been Patternlab. I started using Patternlab in all my Drupal projects almost ten years ago with great success. In addition, Patternlab has been the design system of choice at my place of work, but one of my immediate tasks was to work on migrating to a different design system. We have a small team but were very excited about the challenge of finding and using a more modern and robust design system for our large multi-site Drupal environment.
Enter Storybook
After looking at various options for a design system, Storybook seemed to be the right choice for us for a couple of reasons: one, it has been around for about 10 years and during this time it has matured significantly, and two, it has become a very popular option in the Drupal ecosystem. In some ways, Storybook follows the same model as Drupal: it has a pretty active community and a very healthy ecosystem of plugins to extend its core functionality.
Storybook looks very promising as a design system for Drupal projects and with the recent release of Single Directory Components or SDC, and the new Storybook module, we think things can only get better for Drupal front-end development. Unfortunately for us, technical limitations in combination with our specific requirements, prevented us from using SDC or the Storybook module. Instead, we built our environment from scratch with a stand-alone integration of Storybook 8.
INFO: At the time of our implementation, TwigJS did not have the capability to resolve SDC's namespace. It appears this has been addressed and using SDC should now be possible with this custom setup. I haven't personally tried it and therefore I can't confirm.
Our process and requirements
In choosing Storybook, we went through a rigorous research and testing process to ensure it would not only solve our immediate problems with our current environment, but would also be around as a long-term solution. As part of this process, we also tested several available options like Emulsify and Gesso, which would be great options for anyone looking for a ready-to-go system out of the box. Some of our requirements included:
1. No component refactoring
The first and non-negotiable requirement was to be able to migrate components from Patternlab to a new design system with as little refactoring as possible. We have a decent number of components which were built within the last year, and the last thing we wanted was to have to rebuild them because we are switching design systems.
2. A new front-end build workflow
I personally have been faithful to Gulp as a front-end build tool for as long as I can remember because it did everything I needed done in a very efficient manner. The Drupal project we maintain also used Gulp, but as part of this migration, we wanted to see what other options were out there that could improve our workflow. The obvious choice seemed to be Webpack, but as we looked closer into this we learned about ViteJS, "Next Generation Frontend Tooling". Vite delivers on its promise of being "blazing fast", and its ecosystem is great and growing, so we went with it.
3. No more Sass, in favor of PostCSS
CSS has drastically improved in recent years. It is now possible with plain CSS to do many of the things you used to only be able to do with Sass or a similar CSS preprocessor. Eliminating Sass from our workflow meant we would also be able to get rid of many other node dependencies related to Sass. The goal for this project was to use plain CSS in combination with PostCSS, and one bonus of using Vite is that it offers PostCSS processing out of the box without additional plugins or dependencies. Of course, if you want to do more advanced PostCSS processing you will probably need some external dependencies.
Building a new Drupal theme with Storybook
Let's go over the steps to build the base of your new Drupal theme with ViteJS and Storybook. This will be at a high level, to call out only the most important and Drupal-related parts. This process will create a brand new theme. If you already have a theme you would like to use, make the appropriate changes to the instructions.
1. Set up Storybook with ViteJS
ViteJS
- In your Drupal project, navigate to your themes directory (i.e. /web/themes/custom/)
- Run the following command:
- When prompted, select the framework of your choice, for us the framework is React.
- When prompted, select the variant for your project, for us this is JavaScript
After the setup finishes you will have a basic Vite project running.
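The scaffold command itself is not shown in the excerpt above; the standard Vite scaffold, which is presumably what was used, looks like this (the project name "storybook" is a placeholder):

```shell
# Scaffold a new Vite project; the prompts that follow are where
# you pick React and then JavaScript, as described above.
npm create vite@latest storybook
```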
Storybook- Be sure your system is running NodeJS version 18 or higher
- Inside the newly created theme, run this command:
- After installation completes, you will have a new Storybook instance running
- If Storybook didn't start on its own, start it by running:
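The commands for these two steps are not shown in the excerpt; based on Storybook's standard CLI, they are presumably:

```shell
# Initialize Storybook inside the Vite project
# (it detects the React + Vite setup automatically)
npx storybook@latest init

# Start the Storybook dev server manually if it didn't auto-start
npm run storybook
```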
Twig templates are server-side templates which are normally rendered to HTML by Drupal using TwigPHP, but Storybook is a JavaScript tool. TwigJS is the JavaScript equivalent of TwigPHP, which lets Storybook understand Twig. Let's install all the dependencies needed for Storybook to work with Twig.
- If Storybook is still running, press Ctrl + C to stop it
- Then run the following command:
- vite-plugin-twig-drupal: If you are using Vite like we are, this is a Vite plugin that handles transforming Twig files into a JavaScript function that can be used with Storybook. This plugin includes the following:
- Twig or TwigJS: This is the JavaScript implementation of the Twig PHP templating language. This allows Storybook to understand Twig.
Note: TwigJS may not always be in sync with the version of TwigPHP in Drupal, and you may run into issues when using certain Twig functions or filters; however, we are adding other extensions that may help with the incompatibility issues.
- drupal-attribute: Adds the ability to work with Drupal attributes.
- twig-drupal-filters: TwigJS implementation of Twig functions and filters.
- html-react-parser: This extension is key for Storybook to parse HTML code into React elements.
- @modyfi/vite-plugin-yaml: Transforms a YAML file into a JS object. This is useful for passing the component's data to React as args.
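The install command itself is not shown above; given the packages just listed, it would presumably be something like:

```shell
# Install the Twig-related dev dependencies described above
npm install --save-dev vite-plugin-twig-drupal twig twig-drupal-filters \
  html-react-parser @modyfi/vite-plugin-yaml
```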
Update your vite.config.js so it makes use of the new extensions we just installed, as well as configuring the namespaces for our components.
import { defineConfig } from "vite"
import yml from '@modyfi/vite-plugin-yaml';
import twig from 'vite-plugin-twig-drupal';
import { join } from "node:path"

export default defineConfig({
  plugins: [
    twig({
      namespaces: {
        components: join(__dirname, "./src/components"),
        // Other namespaces may be added.
      },
    }),
    // Allows Storybook to read data from YAML files.
    yml(),
  ],
})

Storybook configuration
Out of the box, Storybook comes with main.js and preview.js inside the .storybook directory. These two files are where a lot of Storybook's configuration is done. We are going to define the location of our components, the same location as we did in vite.config.js above (we'll create this directory shortly). We are also going to do a quick config inside preview.js for handling Drupal filters.
- Inside .storybook/main.js file, update the stories array as follows:
- Inside .storybook/preview.js, update it as follows:
- If Storybook is still running, press Ctrl + C to stop it
- Inside the src directory, create the components directory. Alternatively, you could rename the existing stories directory to components.
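As a sketch of what those two .storybook edits might look like — the stories glob and the twig-drupal-filters wiring below are assumptions based on the packages installed earlier, not necessarily the author's exact code:

```javascript
// .storybook/main.js – point Storybook at src/components
export default {
  stories: ["../src/components/**/*.stories.@(js|jsx)"],
  framework: {
    name: "@storybook/react-vite",
    options: {},
  },
};
```

```javascript
// .storybook/preview.js – register Drupal's Twig filters with TwigJS
// so templates that use them render inside Storybook.
import Twig from "twig";
import twigDrupal from "twig-drupal-filters";

twigDrupal(Twig);

export default {
  parameters: {},
};
```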
With the current system in place we can start building components. We'll start with a very simple component to try things out first.
- Inside src/components, create a new directory called title
- Inside the title directory, create the following files: title.yml and title.twig
- Inside title.yml, add the following:
- Inside title.twig, add the following:
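As an illustrative sketch (not necessarily the author's exact markup), title.yml would hold default values for the level, modifier, text, and url keys, and title.twig might look like this:

```twig
{# title.twig – renders a heading, optionally wrapped in a link.
   The variable names match the keys supplied by title.yml. #}
<h{{ level|default(2) }} class="title{{ modifier ? ' ' ~ modifier }}">
  {% if url %}
    <a href="{{ url }}">{{ text }}</a>
  {% else %}
    {{ text }}
  {% endif %}
</h{{ level|default(2) }}>
```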
We now have a simple title component that will print a title for anything you want. The level key allows us to change the heading level of the title (i.e. h1, h2, h3, etc.), the modifier key allows us to pass a modifier class to the component, and the url key will be helpful when our title needs to be a link to another page or component.
Currently the title component is not available in Storybook. Storybook uses a special file to display each component as a story; the file name is component-name.stories.jsx.
- Inside title create a file called title.stories.jsx
- Inside the stories file, add the following:
- If Storybook is running you should see the title story. See example below:
- Otherwise start Storybook by running:
With Storybook running, the title component should look like the image below:
The controls highlighted at the bottom of the title allow you to change the values of each of the fields for the title.
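For reference, a minimal stories file for this setup, sketched under the assumption that the Vite plugins installed earlier handle the Twig and YAML imports, could look like this:

```javascript
// src/components/title/title.stories.jsx
// title.twig is compiled to a JS function by vite-plugin-twig-drupal,
// and title.yml becomes a plain object via @modyfi/vite-plugin-yaml.
import parse from "html-react-parser";
import title from "./title.twig";
import data from "./title.yml";

export default {
  title: "Components/Title",
  // Render the Twig output as React elements so Storybook can display it.
  render: (args) => parse(title(args)),
  // The YAML data becomes the story's default, editable args.
  args: { ...data },
};
```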
I wanted to start with the simplest of components, the title, to show how Storybook, with help from the extensions we installed, understands Twig. The good news is that the same approach we took with the title component works on even more complex components. Even the React code we wrote does not change much on large components.
In the next blog post, we will build more components that nest smaller components, and we will also add Drupal related parts and configuration to our theme so we can begin using the theme in a Drupal site. Finally, we will integrate the components we built in Storybook with Drupal so our content can be rendered using the component we're building. Stay tuned. For now, if you want to grab a copy of all the code in this post, you can do so below.
Resources

In closing
Getting to this point was a team effort, and I'd like to thank Chaz Chumley, a Senior Software Engineer, who did a lot of the configuration discussed in this post. In addition, I am thankful to the Emulsify and Gesso teams for letting us pick their brains during our research. Their help was critical in this process.
I hope this was helpful and if there is anything I can help you with in your journey of a Storybook-friendly Drupal theme, feel free to reach out.
Mario Hernandez: Responsive images in Drupal - a series
Images are an essential part of a website. They enhance the appeal of the site and make the user experience a more pleasant one. The challenge is finding the balance between enhancing the look of your website through the use of images and not jeopardizing performance. In this guide, we'll dig deep into how to find that balance by going over knowledge, techniques and practices that will provide you with a solid understanding of the best way to serve images to your visitors using the latest technologies and taking advantage of the advances of web browsers in recent years.
Hi, I hope you are ready to dig into responsive images. This is a seven-part guide that will cover everything you need to know about responsive images and how to manage them in a Drupal site. Although the exercises in this guide are Drupal-specific, the core principles of responsive images apply to any platform you use to build your sites.
Where do we start?
Choosing Drupal as your CMS is a great place to start. Drupal has always been ahead of the game when it comes to managing images, providing features such as image compression, image styles, responsive image styles, and a media library, to mention a few. All these features, and more, come out of the box in Drupal. In fact, most of what we will cover in this guide will be solely out-of-the-box Drupal features. We may touch on third-party or contrib techniques or tools, but only to let you know what's available, not as a hard requirement for managing images in Drupal.
It is important to become well-versed in the tools available in Drupal for managing images. Only then will you be able to make the most of those tools. Don't worry though; this guide will provide you with a lot of knowledge about all the pieces that take part in building a solid system for managing and serving responsive images.
Let's start by breaking down the topics this guide will cover:
- What are responsive images?
- Art Direction using the <picture> HTML element
- Image resolution switching using srcset and sizes attributes
- Image styles and Responsive image styles in Drupal
- Responsive images and Media
- Responsive images, wrapping up
A responsive image is one whose dimensions adjust to changes in screen resolution. The concept of responsive images is one that developers and designers have been struggling with ever since Ethan Marcotte published his famous blog post, Responsive Web Design, back in 2010, followed by his book of the same title. The concept itself is pretty straightforward: serve the right image to any device type based on various factors such as screen resolution, internet speed, device orientation, viewport size, and others. The technique for achieving this is not as easy. I can honestly say that over 10 years after responsive images were introduced, we are still trying to figure out the best way to render images that are responsive. Read more about responsive images.
So if the concept of responsive images is so simple, why don't we have one standard for effectively implementing it? Well, images are complicated. They bring with them all sorts of issues that can negatively impact a website if not properly handled. Some of these issues include: resolution, file size or weight, file type, bandwidth demands, browser support, and more.
Some of these issues have been resolved by the fast internet speeds available nowadays, better browser support for file types such as webp, as well as excellent image compression technologies. However, there are still some issues that will probably never go away, and that's what makes this topic so complicated. One issue in particular is using poorly compressed images that are extremely big in file size. Unfortunately, this is often in the hands of people who lack the knowledge to create images that are light in weight and properly compressed. So it's up to us, developers, to anticipate the problems and proactively address them.
Ways to improve image files for your website
If you are responsible for creating or working with images in an image editor such as Photoshop, Illustrator, GIMP, and others, you have great tools at your disposal to ensure your images are optimized and sized properly. You can play around with the image quality scale as you export your images and ensure they are not bigger than they need to be. There are many other tools that can help you with compression. One I've been using for years is a little app called ImageOptim, which lets you drop your images into it and compresses them, saving you some file size and improving compression.
Depending on your requirements and environment, you could also look at using different file types for your images. One highly recommended image type is webp. With the ability to do lossless and lossy compression, webp provides significant improvements in file size while still maintaining your images' high quality. Browser support for webp is excellent, as it is supported by all major browsers, but do some research before you start using it, as there are some hosting platforms that do not support webp.
To give you an example of how good webp is, the image in the header of this blog post was originally exported from Photoshop as a .JPG, which resulted in a 317KB file size. This is not bad at all, but then I ran the image through the ImageOptim app and the file size was reduced to 120KB. That's a 62% file size reduction. Then I exported the same image from Photoshop but this time in .webp format and the file size became 93KB. That's 71% in file size reduction compared to the original JPG version.
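Those reduction figures check out; a one-line helper confirms the arithmetic:

```javascript
// Percentage reduction in file size, rounded to the nearest integer.
const reduction = (originalKB, optimizedKB) =>
  Math.round((1 - optimizedKB / originalKB) * 100);

console.log(reduction(317, 120)); // 62 – JPG after ImageOptim
console.log(reduction(317, 93)); // 71 – webp export
```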
A must-have CSS rule in your project
By now it should be clear that the goal for serving images on any website is to use the responsive images approach. The way you implement responsive images on your site may vary depending on your platform, available tools, and skillset. Regardless, the following CSS rule should always be part of your project's base CSS styles and should apply to all images on your site:
img {
  display: block;
  max-width: 100%;
}

Easy, right? That's it, we're done!
The CSS rule above will in fact make your images responsive (images will automatically adapt to the width of their container/viewport). This rule should be added to your website's base styles so every image on your website becomes responsive by default. However, this should not be the extent of your responsive images solution. Although your images will be responsive with the CSS rule above, it does not address image compression or optimization, and this will result in performance issues if you are dealing with extremely large file sizes. Take a look at this example where the rule above is being used. Resize your browser to any width, including super small, to simulate a mobile device. Notice how the image automatically adapts to the width of the browser. Here's the problem though: the image in this example measures 5760x3840 pixels and weighs 6.7 MB. This means that even if your browser width is super narrow, and the image is resized to a very small visual size, you are still loading an image that is 6.7 MB. Not good.
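Fixing that requires serving differently sized files, not just scaling one huge image down. As a preview of the resolution-switching technique from the topic list above (the file names here are made up for illustration):

```html
<img
  src="/images/hero-1200.webp"
  srcset="/images/hero-480.webp 480w,
          /images/hero-960.webp 960w,
          /images/hero-1200.webp 1200w"
  sizes="(max-width: 600px) 100vw, 1200px"
  alt="Hero image">
```

With srcset and sizes in place, a narrow viewport downloads the 480-pixel-wide file instead of the full-size original.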
In the next post of this series, we will begin the process of implementing a solution for handling responsive images the right way.
Navigate posts within this series
Oliver Davies' daily list: When did you last deploy to production?
If you've experienced issues or are worried about deploying changes to production, on a Friday or another day, when did you last deploy something?
Can you make deployments smaller and more frequent?
Deploying regularly makes each deployment less risky and having a smaller changeset makes it easier to find and fix any issues that arise.
I'm much happier deploying to production if I've already done so that day, or at least that week.
If it has been longer than that, or if the changeset is large, issues become more likely and take longer to resolve.
Python⇒Speed: It's time to stop using Python 3.8
Upgrading to new software versions is work, and work that doesn't benefit your software's users. Users care about features and bug fixes, not how up-to-date you are.
So it's perhaps not surprising how many people still use Python 3.8. As of September 2024, about 14% of packages downloaded from PyPI were for Python 3.8. This includes automated downloads as part of CI runs, so it doesn't mean 3.8 is used in 14% of applications, but that's still 250 million packages installed in a single day!
Still, you can only delay upgrading for so long, and for Python 3.8, the time to upgrade is as soon as possible. Python 3.8 is reaching its end of life at the end of October 2024.
No more bug fixes.
No more security fixes.
Still not convinced? Let's see why you want to upgrade.
Read more...

Python⇒Speed: When should you upgrade to Python 3.13?
Python 3.13 will be out October 1, 2024, but should you switch to it immediately? And if you shouldn't upgrade just yet, when should you?
Immediately after the release, you probably didn't want to upgrade just yet. But from December 2024 onwards, upgrading is definitely worth trying, though it may not succeed. To understand why, we need to consider Python packaging, the software development process, and the history of past releases.
Read more...

Plasma Wayland Protocols 1.14.0
Plasma Wayland Protocols 1.14.0 is now available for packaging.
This adds features needed for the Plasma 6.2 beta.
URL: https://download.kde.org/stable/plasma-wayland-protocols/
SHA256: 1a4385ecfc79f7589f07381cab11c3ff51f6e2fa4b73b78600d6ad096394bf81
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Riddell jr@jriddell.org
Full changelog:
- add a protocol for externally controlled display brightness
- output device: add support for brightness in SDR mode
- plasma-window: add client geometry + bump to v18
- Add warnings discouraging third party clients using internal desktop environment protocols
Akademy 2024 - The Akademy of Many Changes
This year's Akademy in Würzburg, Germany was all about resetting priorities, refocusing goals, and combining individual projects into a final result greater than the sum of its parts.
A shift, or more accurately a broadening of interest, in the KDE community has been gradually emerging over the past few years, and reached a new peak at this year's event. The conference largely focused on delivering high-quality, cutting-edge Free Software to end users. However, the keynote "Only Hackers will Survive" by tech activist and environmentalist Joanna Murzyn, and Joseph De Veaugh-Geiss' "Opt In? Opt Out? Opt Green!" talk, took attendees on a left turn by addressing growing concerns about the impact of IT on the environment and discussing how Free Software can help curb CO2 emissions.
Joanna Murzyn explains the impact irresponsible IT developments have on the environment.

KDE has always had a social conscience. The community's vision and mission of providing users with the tools to control their digital lives and protect their privacy is now combined with concern for the preservation of the planet we live on.
But KDE can do more than one thing at a time, and our software is also undergoing profound changes under the hood. Joshua Goins, for example, is working on ways to enable framework, Plasma, and application development in Rust. According to Joshua, Rust has many advantages, such as its memory safety capabilities, while adding a Qt front-end to Rust projects is much easier than many people believe.
This is in line with one of the new goals adopted by the community during this Akademy: Nicolas Fella, Plasma contributor and key developer of KDE's all-important frameworks, will champion "Streamlined Application Development Experience", a goal aimed at making it easier for new developers to access KDE technologies.
And onboarding new developers is what the "KDE Needs You! 🫵" goal is about. While the growing popularity of KDE software is great, growth puts more stress on the teams that create the applications, environments, and underlying engines and frameworks. Add to that the fact that the veteran developers are getting gray around the temples, and you need a constant influx of new contributors to keep the community and its projects running. The "KDE Needs You! 🫵" goal aims to address this challenge and is being spearheaded by members of the Promo and Mentoring teams. The champions aim to formalize and strengthen KDE's processes for recruiting active contributors, and to make recruiting active contributors to projects a priority and an ongoing task for the community.
These two goals aim to benefit the community and projects in general. KDE's third goal, "We care about your input", will build on their work: proposed by Jakob Petsovits and Gernot Schiller, "input" in this case refers to "input from devices". The goal addresses the fact that there are still a lot of languages and hardware that are not optimally supported by Plasma and KDE applications, and with the move to Wayland, some devices have even temporarily lost a measure of support they enjoyed on X11. Jakob, Gernot, and their team of supporters plan to solve this problem methodically, working to make KDE's software work smoothly and effortlessly on drawing tablets, accessibility devices, and game controllers, as well as software virtual keyboards and input methods for users of the Chinese, Japanese, and Korean languages.
One way to achieve the desired level of integration is to control the entire software stack, right down to the operating system. This is also in the works for KDE, as Harald Sitter is working on a new technologically advanced operating system tentatively named "KDE Linux". This operating system aims to break free of the constraints currently limiting KDE's existing Neon OS and offer a superior experience for KDE's developers, enthusiast users, and everyday users.
KDE Linux's base system will be immutable, meaning that nothing will be able to change critical parts of the system, such as the /etc, /usr, and /bin directories. User applications will be installed via self-contained packages, such as Flatpaks and Snaps. Adventurous users and developers will be able to overlay anything they want on top of the base system in a non-destructive and reversible way, without ever having to touch the core and risk not-easily-fixable breakage.
This will help provide users with a solid, stable, and secure environment, without sacrificing the ability to run the latest and greatest KDE software.
As the proof of the pudding is in the eating, Harald surprised the audience when he revealed towards the end of his talk that his entire presentation had been delivered using KDE Linux!
The user-facing side of KDE is also changing with the work of Arjen Hiemstra and Andy Betts, and the Visual Design Group at large. Arjen is working on Union, a new theming engine that will eventually replace the different ways of styling in KDE. Up until now, developers and designers have had to deal with multiple ways of doing styling, some of which are quite difficult to use. Union, as the name implies, will end the fragmentation and provide something that is both easier for developers to maintain and more flexible for designers to work with.
And then Andy Betts told us about Plasma Next, while introducing the audience to the concept of Design Systems: management systems that allow all aspects of large collections of icons, components, and other graphical assets to be controlled. Using design systems, the VDG is developing a new look for Plasma and KDE applications that may very well replace KDE's current Breeze theme, and be consistently applied across all KDE apps and Plasma, to boot!
A first look at Dolphin rocking Plasma Next icons.

This means that in the not-too-distant future, KDE users will be able to enjoy a rock-solid and elegant operating system with a brilliant look to match!
In other Akademy news...
- Kevin Ottens added another KDE success story to the list, telling us how a wine-producing company in Australia has been using hundreds of desktops running KDE Plasma for more than 10 years.
- Natalie Clarius explained how she is working on adapting Plasma to work better on our vendor partners' hardware products.
- In the "Openwashing" panel moderated by Markus Felner, Cornelius Schumacher of KDE, Holger Dyroff of OwnCloud, Richard Heigl of HalloWelt! and Leonhard Kugler of OpenCode took on companies that call themselves "open source" but are anything but.
- David Schlanger shared the harsh truth about AI, his wishful positive vision for the technology, and then faced questions and comments from Lydia Pintscher and Eike Hein in the keynote session on day 2.
- Ben Cooksley, Volker Kraus, Hannah von Reth, and Julius Künzel took a deep dive into the frameworks, services, and utilities that help KDE projects get their products to users quickly and on a wide variety of platforms.
The prestigious KDE Awards were given to:
- Friedrich W.H. Kossebau for his work on the Okteta hex editor.
- Albert Astals Cid for his work on the Qt patch collection, KDE Gear release maintenance, i18n, and many, many other things.
- Nicolas Fella received his award for his work on KDE Frameworks and Plasma.
- As is traditional, the Akademy organizing team was awarded for putting on a fun, interesting and safe event for the entire KDE community.
If you would like to see the talks as they happened, the unedited videos are currently available on YouTube. Cut and edited versions with slides will be available soon on both YouTube and PeerTube.
KDE Gear 24.08.1
Over 180 individual programs plus dozens of programmer libraries and feature plugins are released simultaneously as part of KDE Gear.
Today they all get new bugfix source releases with updated translations, including:
- filelight: Update the sidebar when deleting something from the context menu (Commit, fixes bug #492155)
- kclock: Fix timer circle alignment (Commit, fixes bug #481170)
- kwalletmanager: Start properly when invoked from the menu on Wayland (Commit, fixes bug #492138)
Distro and app store packagers should update their application packages.
- 24.08 release notes for information on tarballs and known issues.
- Package download wiki page
- 24.08.1 source info page
- 24.08.1 full changelog
Copyright law makes a case for requiring data information rather than open datasets for Open Source AI
The Open Source Initiative (OSI) is running a blog series to introduce some of the people who have been actively involved in the Open Source AI Definition (OSAID) co-design process. The co-design methodology allows for the integration of diverging perspectives into one just, cohesive and feasible standard. Support and contribution from a significant and broad group of stakeholders is imperative to the Open Source process and is proven to bring diverse issues to light, deliver swift outputs and garner community buy-in.
This series features the voices of the volunteers who have helped shape and are shaping the Definition.
Meet Felix Reda

Photo Credit: CC-BY 4.0 International, Volker Conradus, volkerconradus.com.

Felix Reda (he/they) has been an active contributor to the Open Source AI Definition (OSAID) co-design process, bringing his personal interest and expertise in copyright reform to the online forums. Working in digital policy for over ten years, including serving as a member of the European Parliament from 2014 to 2019 and working with the strategic litigation NGO Gesellschaft für Freiheitsrechte (GFF), Felix is currently the director of developer policy at GitHub. He is also an affiliate of the Berkman Klein Center for Internet and Society at Harvard and serves on the board of the Open Knowledge Foundation Germany. He holds an M.A. in political science and communications science from the University of Mainz, Germany.
Data information as a viable alternative
Note: The original text was contributed by Felix Reda to the discussions happening on the Open Source AI forum as a response to Stefano Maffulli's post on how the draft Open Source AI Definition arrived at its current state, the design principles behind the data information concept, and the constraints (legal and technical) it operates under.
When we look at applying Open Source principles to the subject of AI, copyright law comes into play, especially for the topic of training data access. Open datasets have been a continuous discussion point in the collaborative process of writing the Open Source AI Definition. I would like to explain why the concept of data information is a viable alternative for the purposes of the OSAID.
The definition of Open Source software has an access element and a legal element: the access element being the availability of the source code, and the legal element being a license rooted in the copyright protection given to software. The underlying assumption is that the entity making software available as Open Source is the rights holder in the software and is therefore entitled to make the source code available without infringing the copyright of a third party, and to license it for re-use. To the extent that third-party copyright-protected material is incorporated into the Open Source software, that material must itself be released under a compatible Open Source license that also allows redistribution.
When it comes to AI, the situation is fundamentally different: The assumption that an Open Source AI model will only be trained on copyright-protected material that the developer is entitled to redistribute does not hold. Different copyright regimes around the world, including the EU, Japan and Singapore, have statutory exceptions that explicitly allow text and data mining for the purposes of AI training. The EU text and data mining exceptions, which I know best, were introduced with the objective of facilitating the development of AI and other automated analytical techniques. However, they only allow the reproduction of copyright-protected works (aka copying), but not the making available of those works (aka posting them on the internet).
That means that an Open Source AI definition that required the republication of the complete dataset in order for an AI model to qualify as Open Source would categorically exclude Open Source AI models from the ability to rely on the text and data mining exceptions in copyright. That is despite the fact that the legislator explicitly decided that under certain circumstances (for example, allowing rights holders to declare a machine-readable opt-out from training outside of the context of scientific research) the use of copyright-protected material for the purposes of training AI models should be legal. This result would be particularly counterproductive because it would render Open Source AI models illegal even in situations where the reproducibility of the dataset would be complete by the standards discussed on the OSAID forum.
Examples
Imagine an AI model that was trained on publicly accessible text on the internet that was version-controlled, for which the rights holder had not declared an opt-out, but which the rights holder had also not put under a permissive license (all rights reserved). Using this text as training data for an AI model would be legal under copyright law, but re-publishing the training dataset would be illegal. Publishing information about the training dataset that included the version of the data that was used, when and how it was retrieved from which website, and how it was tokenized would meet the requirements of the OSAID v 0.0.8 if (and only if) it put a skilled person in the position to build their own dataset to recreate an equivalent system.
Neither the developer of the original Open Source AI model nor the skilled person recreating it would violate copyright law in the process, unlike in the scenario that required publication of the dataset. Including a requirement in the OSAID to publish the data, in which the AI developer typically does not hold the copyright, would have little added benefit but would drastically reduce the material that could be used for training, despite the existence of explicit legal permissions to use that content for AI training. I don't think that would be wise.
The international concern of public domain
While I support the creation of public domain datasets that can be republished without restrictions, I would like to caution against pointing to these efforts as a solution to the problem of copyright in training datasets. Public domain status is not harmonized internationally: what is in the public domain in one jurisdiction is routinely protected by copyright in other parts of the world. For example, in US discourse it is often assumed that works generated by US government employees are in the public domain. They are only in the public domain in the US, while they are copyright-protected in other jurisdictions.
The same goes for works in which copyright has expired: although the Berne Convention allows signatory countries to limit the copyright term on works until protection in the work's country of origin has expired, exceptions to this rule are permitted. For example, although the first incarnation of Mickey Mouse has recently entered the public domain in the US, it is still protected by copyright in Germany due to an obscure bilateral copyright treaty between the US and Germany from 1892. Copyright protection is not conditional on registration of a work, and no even remotely comprehensive, reliable rights information on the copyright status of works exists. Good luck to an Open Source AI developer trying to stay on top of all of these legal pitfalls.
Bottom line
There are solid legal permissions for using copyright-protected works for AI training (reproductions). There are no equivalent legal permissions for incorporating copyright-protected works into publishable datasets (making available). What an Open Source AI developer thinks is in the public domain and therefore publishable in an open dataset regularly turns out to be copyright-protected after all, at least in some jurisdictions.
Unlike reproductions, which only need to comply with the copyright law of the country in which the reproduction takes place, making content available online needs to be legal in all jurisdictions from which the content can be accessed. If the OSAID required the publication of the dataset, this would routinely lead to situations where Open Source AI models could not be made accessible across national borders, impeding their collaborative improvement, one of the great strengths of Open Source. I doubt that with such a restrictive definition, Open Source AI would gain any practical significance. Tragically, the text and data mining exceptions that were designed to facilitate research collaboration and innovation across borders would then only support proprietary AI models, while excluding Open Source AI. The concept of data information will help us avoid that pitfall while staying true to Open Source principles.
How to get involved
The OSAID co-design process is open to everyone interested in collaborating. There are many ways to get involved:
- Join the forum: share your comment on the drafts.
- Leave a comment on the latest draft: provide precise feedback on the text of the latest draft.
- Follow the weekly recaps: subscribe to our monthly newsletter and blog to be kept up-to-date.
- Join the town hall meetings: we're increasing the frequency to weekly meetings where you can learn more, ask questions and share your thoughts.
- Join the workshops and scheduled conferences: meet the OSI and other participants at in-person events around the world.
Glyph Lefkowitz: Python macOS Framework Builds
When you build Python, you can pass various options to ./configure that change aspects of how it is built. There is documentation for all of these options, and they are things like --prefix to tell the build where to install itself, --without-pymalloc if you have some esoteric need for everything to go through a custom memory allocator, or --with-pydebug.
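You can recover which of these options a given interpreter was actually built with from the standard library: sysconfig exposes the configure-time settings, and on Unix-like builds the CONFIG_ARGS variable records the ./configure invocation. A minimal sketch (the exact set of variables varies by platform, so treat missing values as normal):

```python
# Inspect how this Python was configured, using only the stdlib.
# CONFIG_ARGS holds the recorded ./configure arguments on Unix builds;
# on other platforms it may be absent.
import sysconfig

prefix = sysconfig.get_config_var("prefix")        # where --prefix pointed
config_args = sysconfig.get_config_var("CONFIG_ARGS")

print("prefix:", prefix)
print("configure args:", config_args)
```

On a macOS Framework build you would typically see --enable-framework among those recorded arguments.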
One of these options only matters on macOS, and its effects are generally poorly understood. The official documentation just says "Create a Python.framework rather than a traditional Unix install." But… do you need a Python.framework? If you're used to running Python on Linux, then a "traditional Unix install" might sound pretty good; more consistent with what you are used to.
If you use a non-Framework build, most stuff seems to work, so why should anyone care? I have mentioned it as a detail in my previous post about Python on macOS, but even I didn't really explain why you'd want it, just that it was generally desirable.
The traditional answer to this question is that you need a Framework build "if you want to use a GUI", but this is demonstrably not true. At first it might not seem so, since the go-to Python GUI test is "run IDLE"; many non-Framework builds also omit Tkinter because they don't ship a Tk dependency, so IDLE won't start. But other GUI libraries work fine. For example, uv tool install runsnakerun / runsnake will happily pop open a GUI window, Framework build or not. So it bears some explaining.
Wait, what is a "Framework" anyway?
Let's back up and review an important detail of the mac platform.
On macOS, GUI applications are not just an executable file; they are organized into a bundle, a directory with a particular layout that includes metadata and launches an executable. A thing that, on Linux, might live in a combination of /bin/foo for its executable and /share/foo/ for its associated data files is instead, on macOS, bundled together into Foo.app, and those components live in specified locations within that directory.
A framework is also a bundle, but one that contains a library. Since they are directories, Applications can contain their own Frameworks and Frameworks can contain helper Applications. If /Applications is roughly equivalent to the Unix /bin, then /Library/Frameworks is roughly equivalent to the Unix /lib.
App bundles are contained in a directory with a .app suffix, and frameworks are a directory with a .framework suffix.
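To make that layout concrete, here is a sketch that builds a toy Foo.app skeleton in a temporary directory and reads its metadata back the way platform tooling conceptually would. The bundle identifier and the stripped-down structure are illustrative assumptions, not a complete real bundle:

```python
# Sketch of a minimal macOS .app bundle layout (simplified; real bundles
# carry more metadata and are normally produced by Xcode or build tools).
import pathlib
import plistlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
contents = root / "Foo.app" / "Contents"
(contents / "MacOS").mkdir(parents=True)   # the actual executable lives here
(contents / "Resources").mkdir()           # icons, data files, etc.

# Info.plist is the metadata the platform reads to answer questions about
# the app; "com.example.foo" is a hypothetical identifier.
info = {
    "CFBundleIdentifier": "com.example.foo",
    "CFBundleExecutable": "Foo",
}
(contents / "Info.plist").write_bytes(plistlib.dumps(info))

# Reading it back, roughly what an API like [NSBundle mainBundle] consults:
meta = plistlib.loads((contents / "Info.plist").read_bytes())
print(meta["CFBundleIdentifier"])  # com.example.foo
```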
So what do you need a Framework for in Python?
The truth about Framework builds is that there is not really one specific thing that you can point to that works or doesn't work, where you "need" or "don't need" a Framework build. I was not able to quickly construct an example that trivially fails in a non-framework context for this post, but I didn't try that many different things, and there are a lot of different things that might fail.
The biggest issue is not actually the Python.framework itself. The metadata on the framework is not used for much outside of a build or linker context. However, Pythonâs Framework builds also ship with a stub application bundle, which places your Python process into a normal application(-ish) execution context all the time, which allows for various platform APIs like [NSBundle mainBundle] to behave in the normal, predictable ways that all of the numerous, various frameworks included on Apple platforms expect.
Various Apple platform features might want to ask a process questions like "what is your unique bundle identifier?" or "what entitlements are you authorized to access?", and even beginning to answer those questions requires information stored in the application's bundle.
Python does not ship with a wrapper around the core macOS "cocoa" API itself, but we can use pyobjc to interrogate this. After installing pyobjc-framework-cocoa, I can do this:
```
>>> import AppKit
>>> AppKit.NSBundle.mainBundle()
```

On a non-Framework build, it might look like this:

```
NSBundle </Users/glyph/example/.venv/bin> (loaded)
```

But on a Framework build (even in a venv in a similar location), it might look like this:

```
NSBundle </Library/Frameworks/Python.framework/Versions/3.12/Resources/Python.app> (loaded)
```

This is why, at various points in the past, GUI access required a framework build, since connections to the window server would just be rejected for Unix-style executables. But that was an annoying restriction, so it was removed at some point, or at least, the behavior was changed. As far as I can tell, this change was not documented. But other things like user notifications or geolocation might need to identify an application for preferences or permissions purposes, respectively. Even something as basic as "what is your app icon" for what to show in alert dialogs is information contained in the bundle. So if you use a library that wants to make use of any of these features, it might work, or it might behave oddly, or it might silently fail in an undocumented way.
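Incidentally, you can also ask the interpreter itself which kind of build it is, without touching any GUI APIs: the sysconfig variable PYTHONFRAMEWORK holds the framework name on Framework builds and is empty everywhere else. A small sketch:

```python
# Detect a macOS Framework build from inside Python. PYTHONFRAMEWORK is
# the framework name (normally "Python") on Framework builds, and an
# empty string or None on non-Framework builds, including Linux builds.
import sysconfig

framework = sysconfig.get_config_var("PYTHONFRAMEWORK")
if framework:
    print(f"Framework build: {framework}.framework")
else:
    print("non-Framework build")
```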
This might seem like undocumented, unnecessary cruft, but it is that way because it's just basic stuff the platform expects to be there for many of its features.
/etc/ builds
Still, this might seem like a strangely vague description of this feature, so it might be helpful to examine it via a metaphor to something you are more familiar with. If you're familiar with more Unix-style application development, consider a junior developer, let's call him Jim, asking you whether he should use an "/etc build" or not as a basis for his Docker containers.
What is an "/etc build"? Well, base images like ubuntu come with a bunch of files in /etc, and Jim just doesn't see the point of any of them, so he likes to delete everything in /etc just to make things simpler. It seems to work so far. More experienced Unix engineers he has asked react negatively and make a face when he tells them this, and seem to think that things will break. But his app seems to work fine, and none of these engineers can demonstrate some simple function breaking, so what's the problem?
Off the top of your head, can you list all the features that the files in /etc are needed for? Why not? Jim thinks it's weird that all this stuff is undocumented, and it must just be unnecessary cruft.
If Jim were to come back to you later with a problem like "it seems like hostname resolution doesn't work sometimes" or "ls says all my files are owned by 1001 rather than the user name I specified in my Dockerfile", you'd probably say "please, put /etc back; I don't know exactly what file you need, but lots of things just expect it to be there".
This is what a Framework vs. a non-Framework build is like. A Framework build just includes all the pieces of the build that the macOS platform expects to be there. What pieces do what features need? It depends. It changes over time. And the stub that Python's Framework builds include may not be sufficient for some more esoteric stuff anyway. For example, if you want to use a feature that needs a bundle that has been signed with custom entitlements to access something specific, like the virtualization API, you might need to build your own app bundle. To extend our analogy with Jim, the fact that /etc exists and has the default files in it won't always be sufficient; sometimes you have to add more files to /etc, with quite specific contents, for some features to work properly. But "don't get rid of /etc (or your application bundle)" is pretty good advice.
Do you ever want a non-Framework build?
macOS does have a Unix subsystem, and many Unix-y things work for Unix-y tasks. If you are developing a web application that mostly runs on Linux anyway and never care about using any features that touch the macOS-specific parts of your mac, then you probably don't have to care all that much about Framework builds. You're not going to be surprised one day by non-Framework builds suddenly being unable to use some basic Unix facility like sockets or files. As long as you are aware of these limitations, it's fine to install non-Framework builds. I have a dozen or so Pythons on my computer at any given time, and many of them are not Framework builds.
Framework builds do have some small drawbacks. They tend to be larger, they can be a bit more annoying to relocate, and they typically want to live in a location like /Library or ~/Library. You can move Python.framework into an application bundle according to certain rules, as any bundling tool for macOS will have to do, but it might not work in random filesystem locations. This may make managing a really large number of Python versions more annoying.
The main reason to use a non-Framework build, though, is if you are building a tool that manages a fleet of Python installations to perform some automation that needs to know about Python installs, and you want to write one simple tool that does stuff on Linux and on macOS. If you know you don't need any platform-specific features, don't want to spend the (not insignificant!) effort to cover those edge cases, and you get a lot of value from that level of consistency (for example, a teaching environment or an interdisciplinary development team with a lot of platform diversity), then a non-Framework build might be a better option.
Why do I care?
Personally, I think it's important for Framework builds to be the default for most users, because I think that as much stuff should work out of the box as possible. Any user who sees a neat library that lets them get control of some chunk of data stored on their mac - map data, health data, game center high scores, whatever it is - should be empowered to call into those APIs and deal with that data for themselves.
Apple already makes it hard enough with their thicket of code-signing and notarization requirements for distributing software, aggressive privacy restrictions which prevent API access to some of this data in the first place, all these weird Unix-but-not-Unix filesystem layout idioms, sandboxing that restricts access to various features, and the use of esoteric abstractions like mach ports for communication behind the scenes. We don't need to make it even harder by making the way that you install your Python be a surprise gotcha variable that determines whether or not you can use an API like "show me a user notification when my data analysis is done" or "don't do a power-hungry data analysis when I'm on battery power", especially if it kinda-sorta works most of the time, but only fails on certain patch releases of certain versions of the operating system, because an implementation detail of a proprietary framework changed in the meanwhile to require an application bundle where it didn't before, or vice versa.
More generally, I think that we should care about empowering users with local computation and platform access on all platforms, Linux and Windows included. This just happens to be one particular quirk of how native platform integration works on macOS specifically.
Acknowledgments
Thank you to my patrons who are supporting my writing on this blog. For this one, thanks especially to long-time patron Hynek, who requested it specifically. If you like what you've read here and you'd like to read more of it, or you'd like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like "how can we set up our Mac developers' laptops with Python".