Feeds

Dirk Eddelbuettel: RcppArmadillo 14.0.2-1 on CRAN: Updates

Planet Debian - Thu, 2024-09-12 18:29

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1164 other packages on CRAN, downloaded 36.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 595 times according to Google Scholar.

Conrad released two small incremental releases following version 14.0.0. We did not immediately bring these to CRAN as we have to be mindful of the desired upload cadence of ‘once every one or two months’. But as 14.0.2 has been stable for a few weeks, we decided to bring it to CRAN now. Changes since the last CRAN release are summarised below and are overall fairly minimal. On the package side, we reorder what citation() returns, and now follow CRAN requirements via Authors@R.

Changes in RcppArmadillo version 14.0.2-1 (2024-09-11)
  • Upgraded to Armadillo release 14.0.2 (Stochastic Parrot)

    • Optionally use C++20 memory alignment

    • Minor corrections for several corner-cases

  • The order of items displayed by citation() is reversed (Conrad in #449)

  • The DESCRIPTION file now uses an Authors@R field with ORCID IDs
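For readers who haven't used it, an Authors@R field in an R package's DESCRIPTION file has this general shape; the email and ORCID shown here are placeholders for illustration, not the package's actual values:

```r
Authors@R: c(
    person("Dirk", "Eddelbuettel", role = c("aut", "cre"),
           email = "maintainer@example.org",
           comment = c(ORCID = "0000-0000-0000-0000")))
```

The person() helper replaces the older free-form Author and Maintainer fields, and CRAN now expects this structured form.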

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Four Kitchens: Get ready for Drupal 11: An essential guide

Planet Drupal - Thu, 2024-09-12 15:19

Yuvania Castillo

Backend Engineer

A graduate of the University of Costa Rica with a passion for programming, Yuvania is driven to constantly improve, study, and learn new technologies to be better every day.


Preparing for Drupal 11 is crucial to ensure a smooth transition, and we’re here to help you make it easy and efficient. This guide offers clear steps to update your environment and modules, perform thorough tests, and use essential tools like Upgrade Status and Drupal Rector.

Don’t fall behind! Making sure your site is ready for the new features and improvements Drupal 11 brings will make the upgrade work quick and easy.

Read on to learn how to keep your site updated and future-proof.

Ensure your environment is ready
  • Upgrade to PHP 8.3: Ensure optimal performance and compatibility with Drupal 11
  • Use Drush 13: Make sure you have this version available in your development or sandbox environment
  • Database requirements: Ensure your database meets the requirements for Drupal 11:
    • MySQL 8.0
    • PostgreSQL 16
  • Web server: Drupal 11 requires Apache 2.4.7 or higher. Keep your server updated to avoid compatibility issues.

Upgrade to Drupal 10.3. Before migrating to Drupal 11, update your site to Drupal 10.3 to handle all deprecations properly. Drupal 10.3 defines all deprecated code to be removed in Drupal 11, making it easier to prepare for the next major update.

Update contributed modules. Use Composer to update all contributed modules to versions compatible with Drupal 11. The Upgrade Status module will help identify deprecated modules and APIs. Ensure all modules are updated to avoid compatibility issues.
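The core and contrib updates described above typically look something like the following with Composer. This is a sketch; the package constraints are illustrative and should be adjusted to your project's layout:

```
# Update Drupal core to 10.3 first, pulling dependencies along.
composer require 'drupal/core-recommended:^10.3' \
  'drupal/core-composer-scaffold:^10.3' --update-with-all-dependencies

# Install and enable Upgrade Status to audit readiness for Drupal 11.
composer require drupal/upgrade_status
drush en upgrade_status

# Then update contributed modules to Drupal 11-compatible releases.
composer update "drupal/*" --with-dependencies
```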

Fix custom code. Use Drupal Rector to identify and fix deprecations in your custom code. Drupal Rector automates much of the update process, leaving “to do” comments where manual intervention is needed. Perform a manual review of critical areas to ensure everything functions correctly.
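Under the commonly used palantirnet/drupal-rector package, that workflow looks roughly like this (paths assume a standard Composer project layout; check the project's README for the current setup steps):

```
# Add Drupal Rector as a dev dependency.
composer require --dev palantirnet/drupal-rector

# Copy the starter configuration into the project root.
cp vendor/palantirnet/drupal-rector/rector.php .

# Preview the changes Rector would make to custom modules...
vendor/bin/rector process web/modules/custom --dry-run

# ...then apply them.
vendor/bin/rector process web/modules/custom
```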

Run tests in a safe environment. Conduct tests in a safe environment, such as a local sandbox or cloud IDE. It’s likely to fail at first, but it’s essential to run multiple tests until you achieve a successful result. Use:

  • composer update --dry-run to simulate the update without making changes
  • composer why-not drupal/core 11.0 to identify, if there are issues, which dependencies require an earlier version of Drupal

Compatibility tools. Install and use the Upgrade Status module to ensure your site is ready. This module provides a detailed report on your site’s compatibility with Drupal 11. Check for compatibility issues in contributed projects on Drupal.org using the Project Update Bot.

Back up everything. Before updating, ensure you have a complete backup of your code and database. This is crucial to restore your site if something goes wrong during the update.
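A minimal backup sketch with Drush and tar might look like this (the paths and exclude patterns are illustrative; your hosting platform may also provide its own backup tooling):

```
# Database snapshot with Drush.
drush sql:dump --gzip --result-file=../backups/pre-d11-upgrade.sql

# Tarball of the codebase, excluding rebuildable artifacts.
tar -czf ../backups/pre-d11-code.tar.gz \
  --exclude=vendor --exclude=web/sites/default/files .
```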

Considerations for immediate upgrade

You may wonder if you should upgrade your site to Drupal 11 as soon as it’s available. Here are some pros and cons to consider:

  • Maybe no: Sites can wait until the Drupal 10 LTS (long-term support) ends (mid-to-late 2026) and then upgrade. This gives contributed modules time to be fully ready for the update.
  • Maybe yes: Upgrading early lets you take advantage of new features and improvements but may introduce new bugs. Additionally, if everyone waits to upgrade, it could delay the readiness of contributed modules for the new version.

While Drupal 10 will be supported for some time, it’s advisable to stay ahead with these updates to use the improvements they offer and ensure a smoother, optimized transition.

By following these steps and considerations, your Drupal site will be well prepared for the transition to Drupal 11, ensuring a smooth and uninterrupted experience. Get ready for the new and exciting features Drupal 11 has to offer!

The post Get ready for Drupal 11: An essential guide appeared first on Four Kitchens.

Categories: FLOSS Project Planets

Will Kahn-Greene: Switching from pyenv to uv

Planet Python - Thu, 2024-09-12 12:00
Premise

The 0.4.0 release of uv does everything I currently do with pip, pyenv, pipx, pip-tools, and pipdeptree. Because of that, I'm in the process of switching to uv.

This blog post covers switching from pyenv to uv.

History
  • 2024-08-29: Initial writing.

  • 2024-09-12: Minor updates and publishing.

Start state

I'm running Ubuntu Linux 24.04. I have pyenv installed using the automatic installer. pyenv is located in $HOME/.pyenv/bin/.

I have the following Pythons installed with pyenv:

$ pyenv versions
  system
  3.7.17
  3.8.19
  3.9.19
* 3.10.14 (set by /home/willkg/mozilla/everett/.python-version)
  3.11.9
  3.12.3

I'm not sure why I have 3.7 still installed. I don't think I use that for anything.

My default version is 3.10.14 for some reason. I'm not sure why I haven't updated that to 3.12, yet.

In my 3.10.14, I have the following Python packages installed:

$ pip freeze
appdirs==1.4.4
argcomplete==3.1.1
attrs==22.2.0
cffi==1.15.1
click==8.1.3
colorama==0.4.6
diskcache==5.4.0
distlib==0.3.8
distro==1.8.0
filelock==3.14.0
glean-parser==6.1.1
glean-sdk==50.1.4
Jinja2==3.1.2
jsonschema==4.17.3
MarkupSafe==2.0.1
MozPhab==1.5.1
packaging==24.0
pathspec==0.11.0
pbr==6.0.0
pipx==1.5.0
platformdirs==4.2.1
pycparser==2.21
pyrsistent==0.19.3
python-hglib==2.6.2
PyYAML==6.0
sentry-sdk==1.16.0
stevedore==5.2.0
tomli==2.0.1
userpath==1.8.0
virtualenv==20.26.2
virtualenv-clone==0.5.7
virtualenvwrapper==6.1.0
yamllint==1.29.0

That probably means I installed the following in the Python 3.10.14 environment:

  • MozPhab

  • pipx

  • virtualenvwrapper

Maybe I installed some other things for some reason lost in the sands of time.

Then I had a whole bunch of things installed with pipx.

I have many open source projects, all of which have a .python-version file listing the Python versions the project uses.

I think that covers the start state.

Steps

First, I made a list of things I had.

  • I listed all the versions of Python I have installed so I know what I need to reinstall with uv.

    $ pyenv versions
  • I listed all the packages I have installed in my 3.10.14 environment (the default one).

    $ pip freeze
  • I listed all the packages I installed with pipx.

    $ pipx list

I uninstalled all the packages I installed with pipx.

$ pipx uninstall PACKAGE

Then I uninstalled pyenv and everything it uses. I followed the pyenv uninstall instructions:

$ rm -rf $(pyenv root)

Then I removed the bits in my shell that add to the PATH and set up pyenv and virtualenvwrapper.

Then I started a new shell that didn't have all the pyenv and virtualenvwrapper stuff in it.

Then I installed uv using the uv standalone installer.

Then I ran uv --version to make sure it was installed.

Then I installed the shell autocompletion.

Note

I have a dotfiles thing and separate out bashrc changes by what changes them. You can see my home-grown thing that works for me here:

https://github.com/willkg/dotfiles

These instructions are specific to my home-grown dotfiles thing.

$ echo 'eval "$(uv generate-shell-completion bash)"' >> ~/dotfiles/bash.d/20-uv.bash

Then I started a new shell to pick up those changes.

Then I installed Python versions:

$ uv python install 3.8 3.9 3.10 3.11 3.12
Searching for Python versions matching: Python 3.10
Searching for Python versions matching: Python 3.11
Searching for Python versions matching: Python 3.12
Searching for Python versions matching: Python 3.8
Searching for Python versions matching: Python 3.9
Installed 5 versions in 8.14s
 + cpython-3.8.19-linux-x86_64-gnu
 + cpython-3.9.19-linux-x86_64-gnu
 + cpython-3.10.14-linux-x86_64-gnu
 + cpython-3.11.9-linux-x86_64-gnu
 + cpython-3.12.5-linux-x86_64-gnu

When I type "python", I want it to be a Python managed by uv. Also, I like having "pythonX.Y" symlinks, so I created a uv-sync script which creates symlinks to uv-managed Python versions:

https://github.com/willkg/dotfiles/blob/main/dotfiles/bin/uv-sync

Then I installed all my tools using uv tool install.

$ uv tool install PACKAGE

For tox, I had to install the tox-uv package in the tox environment:

$ uv tool install --with tox-uv tox

Now I've got everything I do mostly working.

So what does that give me?

I installed uv and I can upgrade uv using uv self update.

Python interpreters are managed using uv python. I can create symlinks to interpreters using the uv-sync script. Adding new interpreters and removing old ones is pretty straightforward.

When I type python, it opens up a Python shell with the latest uv-managed Python version. I can type pythonX.Y and get specific shells.

I can use tools written in Python and manage them with uv tool including ones where I want to install them in an "editable" mode.

I can write scripts that require dependencies and it's a lot easier to run them now.
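Those dependency-carrying scripts use inline script metadata (PEP 723), which uv reads when you run `uv run script.py`. A minimal sketch; the metadata block is a specially formatted comment, so the file is still plain Python:

```python
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
# uv reads the comment block above and runs the script in an
# ephemeral environment with the listed dependencies installed.
import json

data = {"runner": "uv", "spec": "PEP 723"}
print(json.dumps(data))
```

With an empty dependency list this behaves like plain Python, but adding, say, "requests" to dependencies makes uv provision it automatically on each run.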

I can create and manage virtual environments with uv venv.

Next steps

Delete all the .python-version files I've got.

Update documentation for my projects and add a uv tool install PACKAGE option to installation instructions.

Probably discover some additional things to add to this doc.

Categories: FLOSS Project Planets

mark.ie: My LocalGov Drupal contributions for week-ending September 13th, 2024

Planet Drupal - Thu, 2024-09-12 11:29

This week's big issue was building a prototype for "Axe Thrower" so we can "throw" multiple URLs at AXE at the same time.

Categories: FLOSS Project Planets

git revert name and Akademy

Planet KDE - Thu, 2024-09-12 10:33

I reverted my name back to Jonathan Riddell and have now made a new uid for my PGP key, you can get the updated one on keyserver.ubuntu.com or my contact page or my Launchpad page.

Here’s some pics from Akademy

Categories: FLOSS Project Planets

KDE neon Akademy Session

Planet KDE - Thu, 2024-09-12 09:24

KDE Akademy is meeting this week in Würzburg.

We just had a BoF session where we discussed:

  • The current progress to rebasing on Ubuntu 24.04
  • The current state of KDE neon Core, our amazing forthcoming Snap-based distro
  • Fixing up broken and bitrotting infrastructure
  • Moving to KDE invent
  • Issues with Plasma 6 upgrades and looking at integrating more QA tests
  • How the proposed KDE Linux™ distro fits into the mix
Categories: FLOSS Project Planets

James Bennett: Know your Python container types

Planet Python - Thu, 2024-09-12 09:02

This is the last of a series of posts I’m doing as a sort of Python/Django Advent calendar, offering a small tip or piece of information each day from the first Sunday of Advent through Christmas Eve. See the first post for an introduction.

Python contains multitudes

There are a lot of container types available in the Python standard library, and it can be confusing sometimes to keep track of them all. So since it’s …

Read full entry

Categories: FLOSS Project Planets

Real Python: Quiz: Python 3.13: Free-Threading and a JIT Compiler

Planet Python - Thu, 2024-09-12 08:00

In this quiz, you’ll test your understanding of the new features in Python 3.13.

By working through this quiz, you’ll revisit how to compile a custom Python build, disable the Global Interpreter Lock (GIL), enable the Just-In-Time (JIT) compiler, determine the availability of new features at runtime, assess the performance improvements in Python 3.13, and make a C extension module targeting Python’s new ABI.
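On the "determine the availability of new features at runtime" point, one way to check whether your interpreter is a free-threaded build is via the build configuration; this sketch returns False on a standard GIL-enabled build:

```python
import sys
import sysconfig

# Py_GIL_DISABLED is set to 1 in free-threaded (PEP 703) builds of
# CPython 3.13; on regular builds it is 0 or absent.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print(sys.version_info[:2], free_threaded_build)
```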

Categories: FLOSS Project Planets

Mario Hernandez: Migrating your Drupal theme from Patternlab to Storybook

Planet Drupal - Thu, 2024-09-12 00:48

Building a custom Drupal theme nowadays is a more complex process than it used to be. Most themes require some kind of build tool such as Gulp, Grunt, Webpack or others to automate many of the repetitive tasks we perform when working on the front-end. Tasks like compiling and minifying code, compressing images, linting code, and many more. As Atomic Web Design became a thing, things got more complicated because now if you are building components you need a styleguide or Design System to showcase and maintain those components. One of those design systems for me has been Patternlab. I started using Patternlab in all my Drupal projects almost ten years ago with great success. In addition, Patternlab has been the design system of choice at my place of work, but one of my immediate tasks was to work on migrating to a different design system. We have a small team but were very excited about the challenge of finding and using a more modern and robust design system for our large multi-site Drupal environment.

Enter Storybook

After looking at various options for a design system, Storybook seemed to be the right choice for us for a couple of reasons: one, it has been around for about 10 years and during this time it has matured significantly, and two, it has become a very popular option in the Drupal ecosystem. In some ways, Storybook follows the same model as Drupal: it has a pretty active community and a very healthy ecosystem of plugins to extend its core functionality.

Storybook looks very promising as a design system for Drupal projects and with the recent release of Single Directory Components or SDC, and the new Storybook module, we think things can only get better for Drupal front-end development. Unfortunately for us, technical limitations in combination with our specific requirements, prevented us from using SDC or the Storybook module. Instead, we built our environment from scratch with a stand-alone integration of Storybook 8.

INFO: At the time of our implementation, TwigJS did not have the capability to resolve SDC's namespace. It appears this has been addressed and using SDC should now be possible with this custom setup. I haven't personally tried it and therefore I can't confirm.

Our process and requirements

In choosing Storybook, we went through a rigorous research and testing process to ensure it would not only solve our immediate problems with our current environment, but also be around as a long-term solution. As part of this process, we also tested several available options like Emulsify and Gesso, which would be great options for anyone looking for a ready-to-go system out of the box. Some of our requirements included:

1. No components refactoring

The first and non-negotiable requirement was to be able to migrate components from Patternlab to a new design system with as little refactoring as possible. We have a decent amount of components which have been built within the last year, and the last thing we wanted was to have to rebuild them because we were switching design systems.

2. A new Front-end build workflow

I personally have been faithful to Gulp as a front-end build tool for as long as I can remember because it did everything I needed done in a very efficient manner. The Drupal project we maintain also used Gulp, but as part of this migration, we wanted to see what other options were out there that could improve our workflow. The obvious choice seemed to be Webpack, but as we looked closer into this we learned about ViteJS, "The Next Generation Frontend Tooling". Vite delivers on its promise of being "blazing fast", and its ecosystem is great and growing, so we went with it.

3. No more Sass in favor of PostCSS

CSS has drastically improved in recent years. It is now possible with plain CSS to do many of the things you used to only be able to do with Sass or a similar CSS preprocessor. Eliminating Sass from our workflow meant we would also be able to get rid of many other node dependencies related to Sass. The goal for this project was to use plain CSS in combination with PostCSS, and one bonus of using Vite is that it offers PostCSS processing out of the box without additional plugins or dependencies. Of course, if you want to do more advanced PostCSS processing you will probably need some external dependencies.
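As one example of formerly Sass-only features now available in plain CSS, custom properties and native nesting cover much of what we used Sass variables and nesting for. This is an illustrative sketch, not from our actual codebase:

```css
:root {
  --color-primary: #0050a0;
}

.card {
  color: var(--color-primary);

  /* Native CSS nesting, no preprocessor required. */
  & .card__title {
    font-weight: 700;
  }
}
```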

Building a new Drupal theme with Storybook

Let's go over the steps to build the base of your new Drupal theme with ViteJS and Storybook. This will be at a high level to call out only the most important and Drupal-related parts. This process will create a brand new theme. If you already have a theme you would like to use, make the appropriate changes to the instructions.

1. Setup Storybook with ViteJS

ViteJS
  • In your Drupal project, navigate to the theme's directory (i.e. /web/themes/custom/)
  • Run the following command:
npm create vite@latest storybook
  • When prompted, select the framework of your choice, for us the framework is React.
  • When prompted, select the variant for your project, for us this is JavaScript

After the setup finishes you will have a basic Vite project running.

Storybook
  • Be sure your system is running NodeJS version 18 or higher
  • Inside the newly created theme, run this command:
npx storybook@latest init --type react
  • After installation completes, you will have a new Storybook instance running
  • If Storybook didn't start on its own, start it by running:
npm run storybook

TwigJS

Twig templates are server-side templates which are normally rendered to HTML by Drupal with TwigPHP, but Storybook is a JS tool. TwigJS is the JavaScript equivalent of TwigPHP, which lets Storybook understand Twig. Let's install all the dependencies needed for Storybook to work with Twig.

  • If Storybook is still running, press Ctrl + C to stop it
  • Then run the following command:
npm i -D vite-plugin-twig-drupal html-react-parser twig-drupal-filters @modyfi/vite-plugin-yaml
  • vite-plugin-twig-drupal: If you are using Vite like we are, this is a Vite plugin that handles transforming twig files into a Javascript function that can be used with Storybook. This plugin includes the following:
    • Twig or TwigJS: This is the JavaScript implementation of the Twig PHP templating language. This allows Storybook to understand Twig.
      Note: TwigJS may not always be in sync with the version of Twig PHP in Drupal and you may run into issues when using certain Twig functions or filters; however, we are adding other extensions that may help with the incompatibility issues.
    • drupal attribute: Adds the ability to work with Drupal attributes.
  • twig-drupal-filters: TwigJS implementation of Twig functions and filters.
  • html-react-parser: This extension is key for Storybook to parse HTML code into react elements.
  • @modyfi/vite-plugin-yaml: Transforms a YAML file into a JS object. This is useful for passing the component's data to React as args.
ViteJS configuration

Update your vite.config.js so it makes use of the new extensions we just installed, as well as configuring the namespaces for our components.

import { defineConfig } from "vite"
import yml from '@modyfi/vite-plugin-yaml';
import twig from 'vite-plugin-twig-drupal';
import { join } from "node:path"

export default defineConfig({
  plugins: [
    twig({
      namespaces: {
        components: join(__dirname, "./src/components"),
        // Other namespaces may be added.
      },
    }),
    // Allows Storybook to read data from YAML files.
    yml(),
  ],
})

Storybook configuration

Out of the box, Storybook comes with main.js and preview.js inside the .storybook directory. These two files are where a lot of Storybook's configuration is done. We are going to define the location of our components, the same location we used in vite.config.js above (we'll create this directory shortly). We are also going to do a quick config inside preview.js for handling Drupal filters.

  • Inside .storybook/main.js file, update the stories array as follows:
stories: [
  "../src/components/**/*.mdx",
  "../src/components/**/*.stories.@(js|jsx|mjs|ts|tsx)",
],
  • Inside .storybook/preview.js, update it as follows:
/** @type { import('@storybook/react').Preview } */
import Twig from 'twig';
import drupalFilters from 'twig-drupal-filters';

function setupFilters(twig) {
  twig.cache();
  drupalFilters(twig);
  return twig;
}

setupFilters(Twig);

const preview = {
  parameters: {
    controls: {
      matchers: {
        color: /(background|color)$/i,
        date: /Date$/i,
      },
    },
  },
};

export default preview;

Creating the components directory
  • If Storybook is still running, press Ctrl + C to stop it
  • Inside the src directory, create the components directory. Alternatively, you could rename the existing stories directory to components.
Creating your first component

With the current system in place we can start building components. We'll start with a very simple component to try things out first.

  • Inside src/components, create a new directory called title
  • Inside the title directory, create the following files: title.yml and title.twig
Writing the code
  • Inside title.yml, add the following:
--- level: 2 modifier: 'title' text: 'Welcome to your new Drupal theme with Storybook!' url: 'https://mariohernandez.io'
  • Inside title.twig, add the following:
<h{{ level|default(2) }}{% if modifier %} class="{{ modifier }}"{% endif %}>
  {% if url %}
    <a href="{{ url }}">{{ text }}</a>
  {% else %}
    <span>{{ text }}</span>
  {% endif %}
</h{{ level|default(2) }}>

We have a simple title component that will print a title of anything you want. The level key allows us to change the heading level of the title (i.e. h1, h2, h3, etc.), the modifier key allows us to pass a modifier class to the component, and the url will be helpful when our title needs to link to another page or component.
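With the default values from title.yml, the Twig template above should render markup along these lines:

```html
<h2 class="title">
  <a href="https://mariohernandez.io">Welcome to your new Drupal theme with Storybook!</a>
</h2>
```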

Currently the title component is not available in storybook. Storybook uses a special file to display each component as a story, the file name is component-name.stories.jsx.

  • Inside title create a file called title.stories.jsx
  • Inside the stories file, add the following:
/**
 * First we import the `html-react-parser` extension to be able to
 * parse HTML into react.
 */
import parse from 'html-react-parser';

/**
 * Next we import the component's markup and logic (twig), data schema (yml),
 * as well as any styles or JS the component may use.
 */
import title from './title.twig';
import data from './title.yml';

/**
 * Next we define a default configuration for the component to use.
 * These settings will be inherited by all stories of the component,
 * shall the component have multiple variations.
 * `component` is an arbitrary name assigned to the default configuration.
 * `title` determines the location and name of the story in Storybook's sidebar.
 * `render` uses the parser extension to render the component's html to react.
 * `args` uses the variables defined in title.yml as react arguments.
 */
const component = {
  title: 'Components/Title',
  render: (args) => parse(title(args)),
  args: { ...data },
};

/**
 * Export the Title and render it in Storybook as a Story.
 * The `name` key allows you to assign a name to each story of the component.
 * For example: `Title`, `Title dark`, `Title light`, etc.
 */
export const TitleElement = {
  name: 'Title',
};

/**
 * Finally export the default object, `component`. Storybook/React requires this step.
 */
export default component;
  • If Storybook is running, you should see the title story
  • Otherwise start Storybook by running:
npm run storybook

With Storybook running, the title component renders in the preview pane. The controls at the bottom of the story allow you to change the values of each of the title's fields.

I wanted to start with the simplest of components, the title, to show how Storybook, with help from the extensions we installed, understands Twig. The good news is that the same approach we took with the title component works on even more complex components. Even the React code we wrote does not change much on large components.

In the next blog post, we will build more components that nest smaller components, and we will also add Drupal related parts and configuration to our theme so we can begin using the theme in a Drupal site. Finally, we will integrate the components we built in Storybook with Drupal so our content can be rendered using the component we're building. Stay tuned. For now, if you want to grab a copy of all the code in this post, you can do so below.

Download the code

Resources

In closing

Getting to this point was a team effort and I'd like to thank Chaz Chumley, a Senior Software Engineer, who did a lot of the configuration discussed in this post. In addition, I am thankful to the Emulsify and Gesso teams for letting us pick their brains during our research. Their help was critical in this process.

I hope this was helpful and if there is anything I can help you with in your journey of a Storybook-friendly Drupal theme, feel free to reach out.

Categories: FLOSS Project Planets

Mario Hernandez: Responsive images in Drupal - a series

Planet Drupal - Thu, 2024-09-12 00:48

Images are an essential part of a website. They enhance the appeal of the site and make the user experience a more pleasant one. The challenge is finding the balance between enhancing the look of your website through the use of images and not jeopardizing performance. In this guide, we'll dig deep into how to find that balance by going over knowledge, techniques and practices that will provide you with a solid understanding of the best way to serve images to your visitors using the latest technologies and taking advantage of the advances of web browsers in recent years.

Hi, I hope you are ready to dig into responsive images. This is a seven-part guide that will cover everything you need to know about responsive images and how to manage them in a Drupal site. Although the exercises in this guide are Drupal-specific, the core principles of responsive images apply to any platform you use to build your sites.

Where do we start?

Choosing Drupal as your CMS is a great place to start. Drupal has always been ahead of the game when it comes to managing images by providing features such as image compression, image styles, responsive image styles and media library, to mention a few. All these features, and more, come out of the box in Drupal. In fact, most of what we will cover in this guide will be solely out-of-the-box Drupal features. We may touch on third-party or contrib techniques or tools, but only to let you know what's available, not as a hard requirement for managing images in Drupal.

It is important to become well-versed with the tools available in Drupal for managing images. Only then will you be able to make the most of those tools. Don't worry though, this guide will provide you with a lot of knowledge about all the pieces that take part in building a solid system for managing and serving responsive images.

Let's start by breaking down the topics this guide will cover:

  1. What are responsive images?
  2. Art Direction using the <picture> HTML element
  3. Image resolution switching using srcset and sizes attributes
  4. Image styles and Responsive image styles in Drupal
  5. Responsive images and Media
  6. Responsive images, wrapping up
What are responsive images?

A responsive image is one whose dimensions adjust to changes in screen resolutions. The concept of responsive images is one that developers and designers have been struggling with ever since Ethan Marcotte published his famous blog post, Responsive Web Design, back in 2010, followed by his book of the same title. The concept itself is pretty straightforward: serve the right image to any device type based on various factors such as screen resolution, internet speed, device orientation, viewport size, and others. The technique for achieving this concept is not as easy. I can honestly say that over 10 years after responsive images were introduced, we are still trying to figure out the best way to render images that are responsive. Read more about responsive images.
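To make the techniques this guide will cover concrete, here is the general shape of a resolution-switching image in plain HTML; the file names, widths, and breakpoints are purely illustrative:

```html
<!-- The browser picks the best candidate from srcset based on the
     rendered size described by sizes and the device's pixel density. -->
<img
  src="/images/hero-800.jpg"
  srcset="/images/hero-400.jpg 400w,
          /images/hero-800.jpg 800w,
          /images/hero-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="Hero image">
```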

So if the concept of responsive images is so simple, why don't we have one standard for effectively implementing it? Well, images are complicated. They bring with them all sorts of issues that can negatively impact a website if not properly handled. Some of these issues include: Resolution, file size or weight, file type, bandwidth demands, browser support, and more.

Some of these issues have been resolved by the fast internet speeds available nowadays, better browser support for file types such as webp, as well as excellent image compression technologies. However, there are still some issues that will probably never go away, and that's what makes this topic so complicated. One issue in particular is using poorly compressed images that are extremely big in file size. Unfortunately, this is often at the hands of people who lack the knowledge to create images that are light in weight and properly compressed. So it's up to us, developers, to anticipate the problems and proactively address them.

Ways to improve image files for your website

If you are responsible for creating or working with images in an image editor such as Photoshop, Illustrator, GIMP, and others, you have great tools at your disposal to ensure your images are optimized and sized properly. You can play around with the image quality scale as you export your images and ensure they are not bigger than they need to be. There are many other tools that can help you with compression. One tool I've been using for years is a little app called ImageOptim, which allows you to drop your images into it and compresses them, saving you some file size and improving compression.

Depending on your requirements and environment, you could also look at using different file types for your images. One highly recommended image type is webp. With the ability to do lossless and lossy compression, webp provides significant improvements in file sizes while still maintaining your images' high quality. The browser support for webp is excellent as it is supported by all major browsers, but do some research before you start using it, as there are some hosting platforms that do not support webp.

To give you an example of how good webp is, the image in the header of this blog post was originally exported from Photoshop as a .JPG, which resulted in a 317KB file size. This is not bad at all, but then I ran the image through the ImageOptim app and the file size was reduced to 120KB. That's a 62% file size reduction. Then I exported the same image from Photoshop, but this time in .webp format, and the file size came out to 93KB. That's a 71% file size reduction compared to the original JPG version.
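For the curious, the reduction percentages quoted above are simple arithmetic; a tiny Python helper reproduces them:

```python
def percent_reduction(original_kb: float, optimized_kb: float) -> int:
    """Return the file-size reduction as a whole percentage."""
    return round((original_kb - optimized_kb) / original_kb * 100)

print(percent_reduction(317, 120))  # JPG after ImageOptim: 62
print(percent_reduction(317, 93))   # webp export: 71
```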

A must-have CSS rule in your project

By now it should be clear that the goal for serving images on any website is doing it by using the responsive images approach. The way you implement responsive images on your site may vary depending on your platform, available tools, and skillset. Regardless, the following CSS rule should always be available within your project base CSS styles and should apply to all images on your site:

img { display: block; max-width: 100%; }

Easy right? That's it, we're done 😃

The CSS rule above will in fact make your images responsive (images will automatically adapt to the width of their container/viewport). This rule should be added to your website's base styles so that every image on your site becomes responsive by default. However, this should not be the extent of your responsive images solution. Although your images will be responsive with the CSS rule above, it addresses neither image compression nor optimization, and that will result in performance issues if you are dealing with extremely large file sizes. Take a look at this example where the rule above is being used. Resize your browser to any width, including super small, to simulate a mobile device, and notice how the image automatically adapts to the width of the browser. Here's the problem though: the image in this example measures 5760x3840 pixels and weighs 6.7 MB. This means that even when your browser window is very narrow and the image is rendered at a very small visual size, you are still downloading all 6.7 MB. No good 👎
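To put a rough number on that waste, assume (as a simplification, since compression efficiency varies) that bytes transferred scale with pixel count, which scales with the square of the image width:

```python
def wasted_ratio(image_width: int, display_width: int) -> float:
    """Rough ratio of pixels downloaded to pixels actually needed when an
    image is scaled uniformly down to the display width."""
    return (image_width / display_width) ** 2

# The 5760px-wide example image on a 360px-wide phone viewport:
print(wasted_ratio(5760, 360))  # 256.0: roughly 256x more pixels than needed
```

Even allowing for the simplification, downloading hundreds of times more pixels than a small screen can show is exactly the problem responsive image techniques exist to solve.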

In the next post of this series, we will begin the process of implementing a solution for handling responsive images the right way.

Navigate posts within this series

Categories: FLOSS Project Planets

Oliver Davies' daily list: When did you last deploy to production?

Planet Drupal - Wed, 2024-09-11 20:00

If you've experienced issues or are worried about deploying changes to production, on a Friday or another day, when did you last deploy something?

Can you make deployments smaller and more frequent?

Deploying regularly makes each deployment less risky and having a smaller changeset makes it easier to find and fix any issues that arise.

I'm much happier deploying to production if I've already done so that day, or at least that week.

Any time more than that, or if the changeset is large, the more likely there will be issues and the longer it will take to resolve them.

Categories: FLOSS Project Planets

Python⇒Speed: It's time to stop using Python 3.8

Planet Python - Wed, 2024-09-11 20:00

Upgrading to new software versions is work, and work that doesn’t benefit your software’s users. Users care about features and bug fixes, not how up-to-date you are.

So it’s perhaps not surprising how many people still use Python 3.8. As of September 2024, about 14% of packages downloaded from PyPI were for Python 3.8. This includes automated downloads as part of CI runs, so it doesn’t mean 3.8 is used in 14% of applications, but that’s still 250 million packages installed in a single day!

Still, there is only so much time you can delay upgrading, and for Python 3.8, the time to upgrade is as soon as possible. Python 3.8 is reaching its end of life at the end of October 2024.

No more bug fixes.

No more security fixes.

“He’s dead, Jim.”
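If you maintain an application, a minimal guard that refuses to run on an end-of-life interpreter might look like this (the 3.9 floor reflects this article's advice; adjust it to your own support policy):

```python
import sys

MINIMUM_SUPPORTED = (3, 9)  # Python 3.8 reaches end of life in October 2024

def is_supported(version: tuple = sys.version_info[:2]) -> bool:
    """Return True when the given (major, minor) meets our minimum version."""
    return version >= MINIMUM_SUPPORTED

if not is_supported():
    raise SystemExit("This application requires Python 3.9 or newer.")
```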

Still not convinced? Let’s see why you want to upgrade.

Read more...
Categories: FLOSS Project Planets

Python⇒Speed: When should you upgrade to Python 3.13?

Planet Python - Wed, 2024-09-11 20:00

Python 3.13 will be out October 1, 2024—but should you switch to it immediately? And if you shouldn’t upgrade just yet, when should you?

Immediately after the release, you probably won't want to upgrade just yet. But from December 2024 onwards, upgrading is definitely worth trying, though it may not succeed. To understand why, we need to consider Python packaging and the software development process, and take a look at the history of past releases.

Read more...
Categories: FLOSS Project Planets

Plasma Wayland Protocols 1.14.0

Planet KDE - Wed, 2024-09-11 20:00

Plasma Wayland Protocols 1.14.0 is now available for packaging.

This adds features needed for the Plasma 6.2 beta.

URL: https://download.kde.org/stable/plasma-wayland-protocols/
SHA256: 1a4385ecfc79f7589f07381cab11c3ff51f6e2fa4b73b78600d6ad096394bf81
Signed by: E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Riddell jr@jriddell.org
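Packagers verifying the download can compare the published SHA256 above against a locally computed digest; a minimal sketch using Python's standard hashlib (the tarball filename is an assumption based on the version number):

```python
import hashlib

EXPECTED = "1a4385ecfc79f7589f07381cab11c3ff51f6e2fa4b73b78600d6ad096394bf81"

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical tarball name for this release:
# assert sha256_of("plasma-wayland-protocols-1.14.0.tar.xz") == EXPECTED
```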

Full changelog:

  • add a protocol for externally controlled display brightness
  • output device: add support for brightness in SDR mode
  • plasma-window: add client geometry + bump to v18
  • Add warnings discouraging third party clients using internal desktop environment protocols
Categories: FLOSS Project Planets

Akademy 2024 - The Akademy of Many Changes

Planet KDE - Wed, 2024-09-11 20:00
  Akademy 2024 group photo.  

This year's Akademy in Würzburg, Germany was all about resetting priorities, refocusing goals, and combining individual projects into a final result greater than the sum of its parts.

A shift — or, more accurately, a broadening of interest — in the KDE community has been gradually emerging over the past few years, and it reached a new peak at this year's event. The conference largely focused on delivering high-quality, cutting-edge Free Software to end users. However, the keynote "Only Hackers will Survive" by tech activist and environmentalist Joanna Murzyn, and Joseph De Veaugh-Geiss' "Opt In? Opt Out? Opt Green!" talk, took attendees in a different direction, addressing growing concerns about the impact of IT on the environment and discussing how Free Software can help curb CO2 emissions.

  Joanna Murzyn explains the impact irresponsible IT developments have on the environment.  

KDE has always had a social conscience. The community's vision and mission of providing users with the tools to control their digital lives and protect their privacy is now combined with concern for the preservation of the planet we live on.

But KDE can do more than one thing at a time, and our software is also undergoing profound changes under the hood. Joshua Goins, for example, is working on ways to enable Frameworks, Plasma, and application development in Rust. According to Joshua, Rust has many advantages, such as its memory-safety guarantees, while adding a Qt front-end to Rust projects is much easier than many people believe.

This is in line with one of the new goals adopted by the community during this Akademy: Nicolas Fella, Plasma contributor and key developer of KDE's all-important frameworks, will champion "Streamlined Application Development Experience", a goal aimed at making it easier for new developers to access KDE technologies.

And onboarding new developers is what the "KDE Needs You! 🫵" goal is about. While the growing popularity of KDE software is great, growth puts more stress on the teams that create the applications, environments, and underlying engines and frameworks. Add to that the fact that the veteran developers are getting gray around the temples, and you need a constant influx of new contributors to keep the community and its projects running. The "KDE Needs You! 🫵" goal aims to address this challenge and is being spearheaded by members of the Promo and Mentoring teams. The champions aim to formalize and strengthen KDE's processes for recruiting active contributors, and to make recruiting active contributors to projects a priority and an ongoing task for the community.

These two goals aim to benefit the community and projects in general. KDE's third goal, "We care about your input", will build on their work: proposed by Jakob Petsovits and Gernot Schiller, "input" in this case refers to "input from devices". The goal addresses the fact that there are still a lot of languages and hardware that are not optimally supported by Plasma and KDE applications, and with the move to Wayland, some devices have even temporarily lost a measure of support they enjoyed on X11. Jakob, Gernot, and their team of supporters plan to solve this problem methodically, working to make KDE's software work smoothly and effortlessly on drawing tablets, accessibility devices, and game controllers, as well as software virtual keyboards and input methods for users of the Chinese, Japanese, and Korean languages.

One way to achieve the desired level of integration is to control the entire software stack, right down to the operating system. This is also in the works for KDE, as Harald Sitter is working on a new technologically advanced operating system tentatively named "KDE Linux". This operating system aims to break free of the constraints currently limiting KDE's existing Neon OS and offer a superior experience for KDE's developers, enthusiast users, and everyday users.

KDE Linux's base system will be immutable, meaning that nothing will be able to change critical parts of the system, such as the /etc, /usr, and /bin directories. User applications will be installed via self-contained packages, such as Flatpaks and Snaps. Adventurous users and developers will be able to overlay anything they want on top of the base system in a non-destructive and reversible way, without ever having to touch the core and risk not-easily-fixable breakage.

This will help provide users with a solid, stable, and secure environment, without sacrificing the ability to run the latest and greatest KDE software.

As the proof of the pudding is in the eating, Harald surprised the audience when he revealed towards the end of his talk that his entire presentation had been delivered using KDE Linux!

The user-facing side of KDE is also changing with the work of Arjen Hiemstra and Andy Betts — and the Visual Design Group at large. Arjen is working on Union, a new theming engine that will eventually replace the different ways of styling in KDE. Up until now, developers and designers have had to deal with multiple ways of doing styling, some of which are quite difficult to use. Union, as the name implies, will end the fragmentation and provide something that is both easier for developers to maintain and more flexible for designers to work with.

And then Andy Betts told us about Plasma Next, introducing the audience to the concept of Design Systems: management systems that allow all aspects of large collections of icons, components, and other graphical assets to be controlled. Using design systems, the VDG is developing a new look for Plasma and KDE applications that may very well replace KDE's current Breeze theme, and be consistently applied across all KDE apps and Plasma, to boot!

  A first look at Dolphin rocking Plasma Next icons.  

This means that in the not too distant future, KDE users will be able to enjoy a rock-solid and elegant operating system with a brilliant look to match!

In other Akademy news...
  • Kevin Ottens added another KDE success story to the list, telling us how a wine producing company in Australia has been using hundreds of desktops running KDE Plasma for more than 10 years.
  • Natalie Clarius explained how she is working on adapting Plasma to work better on our vendor partners' hardware products.
  • In the "Openwashing" panel moderated by Markus Felner, Cornelius Schumacher of KDE, Holger Dyroff of OwnCloud, Richard Heigl of HalloWelt! and Leonhard Kugler of OpenCode took on companies that call themselves "open source" but are anything but.
  • David Schlanger shared the harsh truth about AI, his wishful positive vision for the technology, and then faced questions and comments from Lydia Pintscher and Eike Hein in the keynote session on day 2.
  • Ben Cooksley, Volker Kraus, Hannah von Reth, and Julius Künzel took a deep dive into the frameworks, services, and utilities that help KDE projects get their products to users quickly and on a wide variety of platforms.
Akademy Awards

The prestigious KDE Awards were given to:

  • Friedrich W.H. Kossebau for his work on the Okteta hex editor.
  • Albert Astals Cid for his work on the Qt patch collection, KDE Gear release maintenance, i18n, and many, many other things.
  • Nicolas Fella for his work on KDE Frameworks and Plasma.
  • As is traditional, the Akademy organizing team was awarded for putting on a fun, interesting and safe event for the entire KDE community.

If you would like to see the talks as they happened, the unedited videos are currently available on YouTube. Cut and edited versions with slides will be available soon on both YouTube and PeerTube.

Categories: FLOSS Project Planets

KDE Gear 24.08.1

Planet KDE - Wed, 2024-09-11 20:00

Over 180 individual programs plus dozens of programmer libraries and feature plugins are released simultaneously as part of KDE Gear.

Today they all get new bugfix source releases with updated translations, including:

  • filelight: Update the sidebar when deleting something from the context menu (Commit, fixes bug #492155)
  • kclock: Fix timer circle alignment (Commit, fixes bug #481170)
  • kwalletmanager: Start properly when invoked from the menu on wayland (Commit, fixes bug #492138)

Distro and app store packagers should update their application packages.

Categories: FLOSS Project Planets

Copyright law makes a case for requiring data information rather than open datasets for Open Source AI

Open Source Initiative - Wed, 2024-09-11 16:04

The Open Source Initiative (OSI) is running a blog series to introduce some of the people who have been actively involved in the Open Source AI Definition (OSAID) co-design process. The co-design methodology allows for the integration of diverging perspectives into one just, cohesive and feasible standard. Support and contribution from a significant and broad group of stakeholders is imperative to the Open Source process and is proven to bring diverse issues to light, deliver swift outputs and garner community buy-in.

This series features the voices of the volunteers who have helped shape and are shaping the Definition.

Meet Felix Reda

Photo credit: Volker Conradus, volkerconradus.com (CC-BY 4.0 International).

Felix Reda (he/they) has been an active contributor to the Open Source AI Definition (OSAID) co-design process, bringing his personal interest and expertise in copyright reform to the online forums. Working in digital policy for over ten years, including serving as a member of the European Parliament from 2014 to 2019 and working with the strategic litigation NGO Gesellschaft für Freiheitsrechte (GFF), Felix is currently the director of developer policy at GitHub. He is also an affiliate of the Berkman Klein Center for Internet and Society at Harvard and serves on the board of the Open Knowledge Foundation Germany. He holds an M.A. in political science and communications science from the University of Mainz, Germany.

Data information as a viable alternative

Note: The original text was contributed by Felix Reda to the discussions happening on the Open Source AI forum as a response to Stefano Maffulli’s post on how the draft Open Source AI Definition arrived at its current state, the design principles behind the data information concept and the constraints (legal and technical) it operates under.

When we look at applying Open Source principles to the subject of AI, copyright law comes into play, especially for the topic of training data access. Open datasets have been a continuous discussion point in the collaborative process of writing the Open Source AI Definition. I would like to explain why the concept of data information is a viable alternative for the purposes of the OSAID.

The definition of Open Source software has an access element and a legal element – the access element being the availability of the source code and the legal element being a license rooted in the copyright-protection given to software. The underlying assumption is that the entity making software available as Open Source is the rights holder in the software and is therefore entitled to make the source code available without infringing the copyright of a third party, and to license it for re-use. To the extent that third-party copyright-protected material is incorporated into the Open Source software, it must itself be released under a compatible Open Source license that also allows the redistribution.

When it comes to AI, the situation is fundamentally different: The assumption that an Open Source AI model will only be trained on copyright-protected material that the developer is entitled to redistribute does not hold. Different copyright regimes around the world, including the EU, Japan and Singapore, have statutory exceptions that explicitly allow text and data mining for the purposes of AI training. The EU text and data mining exceptions, which I know best, were introduced with the objective of facilitating the development of AI and other automated analytical techniques. However, they only allow the reproduction of copyright-protected works (aka copying), but not the making available of those works (aka posting them on the internet).

That means that an Open Source AI definition that would require the republication of the complete dataset in order for an AI model to qualify as Open Source would categorically exclude Open Source AI models from the ability to rely on the text and data mining exceptions in copyright – that is despite the fact that the legislator explicitly decided that under certain circumstances (for example allowing rights holders to declare a machine-readable opt-out from training outside of the context of scientific research) the use of copyright-protected material for the purposes of training AI models should be legal. This result would be particularly counterproductive because it would even render Open Source AI models illegal in situations where the reproducibility of the dataset would be complete by the standards discussed on the OSAID forum.

Examples

Imagine an AI model that was trained on publicly accessible text on the internet that was version-controlled, for which the rights holder had not declared an opt-out, but which the rights holder had also not put under a permissive license (all rights reserved). Using this text as training data for an AI model would be legal under copyright law, but re-publishing the training dataset would be illegal. Publishing information about the training dataset that included the version of the data that was used, when and how it was retrieved from which website, and how it was tokenized would meet the requirements of the OSAID v 0.0.8 if (and only if) it put a skilled person in the position to build their own dataset to recreate an equivalent system. 

Neither the developer of the original Open Source AI model nor the skilled person recreating it would violate copyright law in the process, unlike the scenario that required publication of the dataset. Including a requirement in the OSAID to publish the data, in which the AI developer typically does not hold the copyright, would have little added benefit but would drastically reduce the material that could be used for training, despite the existence of explicit legal permissions to use that content for AI training. I don’t think that would be wise.

The international concern of public domain

While I support the creation of public domain datasets that can be republished without restrictions, I would like to caution against pointing to these efforts as a solution to the problem of copyright in training datasets. Public domain status is not harmonized internationally – what is in the public domain in one jurisdiction is routinely protected by copyright in other parts of the world. For example, in US discourse it is often assumed that works generated by US government employees are in the public domain. They are not: they are only in the public domain in the US, and remain copyright-protected in other jurisdictions.

The same goes for works in which copyright has expired: Although the Berne Convention allows signatory countries to limit the copyright term on works until protection in the work’s country of origin has expired, exceptions to this rule are permitted. For example, although the first incarnation of Mickey Mouse has recently entered the public domain in the US, it is still protected by copyright in Germany due to an obscure bilateral copyright treaty between the US and Germany from 1892. Copyright protection is not conditional on registration of a work, and no even remotely comprehensive, reliable rights information on the copyright status of works exists. Good luck to an Open Source AI developer who tried to stay on top of all of these legal pitfalls.

Bottom line

There are solid legal permissions for using copyright-protected works for AI training (reproductions). There are no equivalent legal permissions for incorporating copyright-protected works into publishable datasets (making available). What an Open Source AI developer thinks is in the public domain and therefore publishable in an open dataset regularly turns out to be copyright-protected after all, at least in some jurisdictions. 

Unlike reproductions, which only need to follow the copyright law of the country in which the reproduction takes place, making content available online needs to be legal in all jurisdictions from which the content can be accessed. If the OSAID required the publication of the dataset, this would routinely lead to situations where Open Source AI models could not be made accessible across national borders, thus impeding their collaborative improvement, one of the great strengths of Open Source. I doubt that with such a restrictive definition, Open Source AI would gain any practical significance. Tragically, the text and data mining exceptions that were designed to facilitate research collaboration and innovation across borders, would only support proprietary AI models, while excluding Open Source AI. The concept of data information will help us avoid that pitfall while staying true to Open Source principles.

How to get involved

The OSAID co-design process is open to everyone interested in collaborating. There are many ways to get involved:

  • Join the forum: share your comment on the drafts.
  • Leave a comment on the latest draft: provide precise feedback on the text of the latest draft.
  • Follow the weekly recaps: subscribe to our monthly newsletter and blog to be kept up-to-date.
  • Join the town hall meetings: we’re increasing the frequency to weekly meetings where you can learn more, ask questions and share your thoughts.
  • Join the workshops and scheduled conferences: meet the OSI and other participants at in-person events around the world.
Categories: FLOSS Research

Glyph Lefkowitz: Python macOS Framework Builds

Planet Python - Wed, 2024-09-11 15:43

When you build Python, you can pass various options to ./configure that change aspects of how it is built. There is documentation for all of these options, and they are things like --prefix to tell the build where to install itself, --without-pymalloc if you have some esoteric need for everything to go through a custom memory allocator, or --with-pydebug.

One of these options only matters on macOS, and its effects are generally poorly understood. The official documentation just says “Create a Python.framework rather than a traditional Unix install.” But… do you need a Python.framework? If you’re used to running Python on Linux, then a “traditional Unix install” might sound pretty good; more consistent with what you are used to.

If you use a non-Framework build, most stuff seems to work, so why should anyone care? I have mentioned it as a detail in my previous post about Python on macOS, but even I didn’t really explain why you’d want it, just that it was generally desirable.

The traditional answer to this question is that you need a Framework build “if you want to use a GUI”, but this is demonstrably not true. At first it might not seem so, since the go-to Python GUI test is “run IDLE”; many non-Framework builds also omit Tkinter because they don’t ship a Tk dependency, so IDLE won’t start. But other GUI libraries work fine. For example, uv tool install runsnakerun / runsnake will happily pop open a GUI window, Framework build or not. So it bears some explaining.

Wait, what is a “Framework” anyway?

Let’s back up and review an important detail of the mac platform.

On macOS, GUI applications are not just an executable file, they are organized into a bundle, which is a directory with a particular layout, that includes metadata, that launches an executable. A thing that, on Linux, might live in a combination of /bin/foo for its executable and /share/foo/ for its associated data files, is instead on macOS bundled together into Foo.app, and those components live in specified locations within that directory.

A framework is also a bundle, but one that contains a library. Since they are directories, Applications can contain their own Frameworks and Frameworks can contain helper Applications. If /Applications is roughly equivalent to the Unix /bin, then /Library/Frameworks is roughly equivalent to the Unix /lib.

App bundles are contained in a directory with a .app suffix, and frameworks are a directory with a .framework suffix.
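The metadata that makes a directory a bundle lives in an Info.plist file inside it, which Python's standard plistlib can read. A hedged sketch (the candidate paths cover the common app-bundle and framework layouts, and the example identifier is hypothetical):

```python
import plistlib
from pathlib import Path
from typing import Optional

def bundle_identifier(bundle_path: str) -> Optional[str]:
    """Read CFBundleIdentifier from a bundle's Info.plist, if one exists.

    App bundles keep Info.plist under Contents/, while frameworks
    typically keep it under Resources/.
    """
    for candidate in ("Contents/Info.plist", "Resources/Info.plist"):
        plist_path = Path(bundle_path) / candidate
        if plist_path.exists():
            with plist_path.open("rb") as f:
                return plistlib.load(f).get("CFBundleIdentifier")
    return None

# On a framework install, something like this might return "org.python.python":
# bundle_identifier("/Library/Frameworks/Python.framework/Versions/3.12")
```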

So what do you need a Framework for in Python?

The truth about Framework builds is that there is not really one specific thing that you can point to that works or doesn’t work, where you “need” or “don’t need” a Framework build. I was not able to quickly construct an example that trivially fails in a non-framework context for this post, but I didn’t try that many different things, and there are a lot of different things that might fail.

The biggest issue is not actually the Python.framework itself. The metadata on the framework is not used for much outside of a build or linker context. However, Python’s Framework builds also ship with a stub application bundle, which places your Python process into a normal application(-ish) execution context all the time, which allows for various platform APIs like [NSBundle mainBundle] to behave in the normal, predictable ways that all of the numerous, various frameworks included on Apple platforms expect.

Various Apple platform features might want to ask a process questions like “what is your unique bundle identifier?” or “what entitlements are you authorized to access” and even beginning to answer those questions requires information stored in the application’s bundle.

Python does not ship with a wrapper around the core macOS “cocoa” API itself, but we can use pyobjc to interrogate this. After installing pyobjc-framework-cocoa, I can do this:

>>> import AppKit
>>> AppKit.NSBundle.mainBundle()

On a non-Framework build, it might look like this:

NSBundle </Users/glyph/example/.venv/bin> (loaded)

But on a Framework build (even in a venv in a similar location), it might look like this:

NSBundle </Library/Frameworks/Python.framework/Versions/3.12/Resources/Python.app> (loaded)

This is why, at various points in the past, GUI access required a framework build, since connections to the window server would just be rejected for Unix-style executables. But that was an annoying restriction, so it was removed at some point, or at least the behavior was changed. As far as I can tell, this change was not documented. But other things like user notifications or geolocation might need to identify an application for preferences or permissions purposes, respectively. Even something as basic as “what is your app icon” for what to show in alert dialogs is information contained in the bundle. So if you use a library that wants to make use of any of these features, it might work, or it might behave oddly, or it might silently fail in an undocumented way.
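If you need to know at runtime which kind of build you are on, the interpreter's build configuration records it; a small sketch using the standard sysconfig module, where PYTHONFRAMEWORK is the framework name on framework builds and empty otherwise:

```python
import sysconfig

def is_framework_build() -> bool:
    """True when this interpreter was built with --enable-framework on macOS."""
    return bool(sysconfig.get_config_var("PYTHONFRAMEWORK"))

print(is_framework_build())  # False on Linux or on a non-framework mac build
```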

This might seem like undocumented, unnecessary cruft, but it is that way because it’s just basic stuff the platform expects to be there for a lot of different features of the platform.

/etc/ builds

Still, this might seem like a strangely vague description of this feature, so it might be helpful to examine it by way of a metaphor to something you are more familiar with. If you’re familiar with more Unix-style application development, consider a junior developer — let’s call him Jim — asking you whether he should use an “/etc build” or not as a basis for his Docker containers.

What is an “/etc build”? Well, base images like ubuntu come with a bunch of files in /etc, and Jim just doesn’t see the point of any of them, so he likes to delete everything in /etc just to make things simpler. It seems to work so far. More experienced Unix engineers that he has asked react negatively and make a face when he tells them this, and seem to think that things will break. But their app seems to work fine, and none of these engineers can demonstrate some simple function breaking, so what’s the problem?

Off the top of your head, can you list all the features that the files in /etc are needed for? Why not? Jim thinks it’s weird that all this stuff is undocumented, and it must just be unnecessary cruft.

If Jim were to come back to you later with a problem like “it seems like hostname resolution doesn’t work sometimes” or “ls says all my files are owned by 1001 rather than the user name I specified in my Dockerfile” you’d probably say “please, put /etc back, I don’t know exactly what file you need but lots of things just expect it to be there”.

This is what a framework vs. a non-Framework build is like. A Framework build just includes all the pieces of the build that the macOS platform expects to be there. What pieces do what features need? It depends. It changes over time. And the stub that Python’s Framework builds include may not be sufficient for some more esoteric stuff anyway. For example, if you want to use a feature that needs a bundle that has been signed with custom entitlements to access something specific, like the virtualization API, you might need to build your own app bundle. To extend our analogy with Jim, the fact that /etc exists and has the default files in it won’t always be sufficient; sometimes you have to add more files to /etc, with quite specific contents, for some features to work properly. But “don’t get rid of /etc (or your application bundle)” is pretty good advice.

Do you ever want a non-Framework build?

macOS does have a Unix subsystem, and many Unix-y things work, for Unix-y tasks. If you are developing a web application that mostly runs on Linux anyway and never care about using any features that touch the macOS-specific parts of your mac, then you probably don’t have to care all that much about Framework builds. You’re not going to be surprised one day by non-framework builds suddenly being unable to use some basic Unix facility like sockets or files. As long as you are aware of these limitations, it’s fine to install non-Framework builds. I have a dozen or so Pythons on my computer at any given time, and many of them are not Framework builds.

Framework builds do have some small drawbacks. They tend to be larger, they can be a bit more annoying to relocate, and they typically want to live in a location like /Library or ~/Library. You can move Python.framework into an application bundle according to certain rules, as any bundling tool for macOS will have to do, but it might not work in random filesystem locations. This may make managing really large numbers of Python versions more annoying.

Most of all, the reason to use a non-Framework build is if you are building a tool that manages a fleet of Python installations to perform some automation that needs to know about Python installs, and you want to write one simple tool that does stuff on Linux and on macOS. If you know you don’t need any platform-specific features, don’t want to spend the (not insignificant!) effort to cover those edge cases, and you get a lot of value from that level of consistency (for example, in a teaching environment or on an interdisciplinary development team with a lot of platform diversity), then a non-Framework build might be a better option.

Why do I care?

Personally, I think it’s important for Framework builds to be the default for most users, because I think that as much stuff should work out of the box as possible. Any user who sees a neat library that lets them get control of some chunk of data stored on their mac - map data, health data, game center high scores, whatever it is - should be empowered to call into those APIs and deal with that data for themselves.

Apple already makes it hard enough with their thicket of code-signing and notarization requirements for distributing software, aggressive privacy restrictions which prevent API access to some of this data in the first place, all these weird Unix-but-not-Unix filesystem layout idioms, sandboxing that restricts access to various features, and the use of esoteric abstractions like mach ports for communications behind the scenes. We don't need to make it even harder by making the way that you install your Python be a surprise gotcha variable that determines whether or not you can use an API like “show me a user notification when my data analysis is done” or “don’t do a power-hungry data analysis when I’m on battery power”, especially if it kinda-sorta works most of the time, but only fails on certain patch-releases of certain versions of the operating system, because an implementation detail of a proprietary framework changed in the meantime to require an application bundle where it didn’t before, or vice versa.
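To make that gotcha concrete, here is a hedged sketch of the “show me a user notification” case. It assumes the third-party `pyobjc` bridge is installed, and uses `NSUserNotification` (deprecated in recent macOS, but the simplest illustration). The empty-center check in the middle is roughly where an unbundled, non-Framework Python falls over: there’s no app bundle for the notification center to attach to.

```python
import sys

def notify(title: str, text: str) -> bool:
    """Try to post a macOS user notification; return whether it was sent.

    Degrades gracefully: returns False off macOS, when pyobjc is not
    installed, or when the process has no usable notification center
    (the non-Framework-build failure mode discussed above).
    """
    if sys.platform != "darwin":
        return False
    try:
        from Foundation import NSUserNotification, NSUserNotificationCenter
    except ImportError:
        return False  # pyobjc not installed
    try:
        center = NSUserNotificationCenter.defaultUserNotificationCenter()
        if center is None:  # no app bundle behind this process
            return False
        note = NSUserNotification.alloc().init()
        note.setTitle_(title)
        note.setInformativeText_(text)
        center.deliverNotification_(note)
    except Exception:
        return False  # unbundled interpreters can fail here instead
    return True
```

The code itself is identical either way; whether it works depends on how your Python was built and packaged, which is exactly the kind of spooky action-at-a-distance the post is complaining about.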

More generally, I think that we should care about empowering users with local computation and platform access on all platforms, Linux and Windows included. This just happens to be one particular quirk of how native platform integration works on macOS specifically.

Acknowledgments

Thank you to my patrons who are supporting my writing on this blog. For this one, thanks especially to long-time patron Hynek who requested it specifically. If you like what you’ve read here and you’d like to read more of it, or you’d like to support my various open-source endeavors, you can support my work as a sponsor! I am also available for consulting work if you think your organization could benefit from expertise on topics like “how can we set up our Mac developers’ laptops with Python”.

Categories: FLOSS Project Planets

ImageX: DrupalCon Barcelona 2024: Top Session Picks from Our Team

Planet Drupal - Wed, 2024-09-11 15:24

Authored by Nadiia Nykolaichuk.

DrupalCon 2024 is returning to one of the world’s most enchanting cities — Barcelona! As the event draws near, Drupal enthusiasts from around the globe are tapping out the rhythm of Spanish flamenco with their feet in anticipation. Now is the perfect time to explore the conference’s program and select the sessions that will inspire and invigorate you.

