Feeds

Mario Hernandez: Building an automated DDEV-based Drupal environment

Planet Drupal - Sat, 2024-06-15 22:24

A successful training experience begins before we set foot in the training room. Or, in these days of distance learning, it begins before students log in to your training platform. The challenge is having an environment that is easy for students to set up and that provides all the tooling required for the training. Configuring a native development environment is no easy task. Web development tools have gotten more complex over the years, and that complexity is a huge barrier even for experienced developers. My goal in setting up this new environment was to have everything completely configured and automated so students only needed to run one command. Very ambitious.

WARNING: The codebase shared in this post is only intended for local development. DO NOT use this project in a production website.

About DDEV: The official name is DDEV-Local. For simplicity I use DDEV in this tutorial.

Here are the tools and configuration requirements for this environment:

  • Docker
  • DDEV
  • Composer
  • Drush
  • Drupal 8 and contrib modules
  • Pre-built Drupal entities (content types, paragraph types, views, view modes, taxonomy, image styles, and more)
  • Custom Drupal 8 theme
  • Twig debugging enabled by default and Drupal cache disabled by default
  • NodeJS, NPM, and NVM
  • Pattern Lab
  • Gulp, ESLint, Sass Lint, BrowserSync, Autoprefixer, and many many more node dependencies
Desired behavior

My main objective was to simplify building and interacting with the environment by:

  • Requiring only Docker and DDEV to be installed on the host computer
  • Reducing the number of steps for building the environment to one command, ddev start
  • Executing most if not all commands in the containers, not the host machine (Drush, Composer, Pattern Lab, etc.)
  • Accessing Pattern Lab, running in the web container, from the host machine

Yes, it is crazy, but let's see how this turned out.

First things first. As I said before, installing Docker and DDEV is all students will need to do. There is no way around this. Luckily there are a lot of resources to help you with this. The DDEV docs are a great place to start.
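For reference, on macOS with Homebrew the installs can be as short as the sketch below. This is only a sketch: the formula name and whether mkcert ships alongside DDEV can vary by DDEV version, so defer to the DDEV docs for your platform.

# Install Docker Desktop first, then DDEV:
brew install ddev/ddev/ddev

# Set up a local certificate authority so DDEV sites can use https:
mkcert -install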

Setting up a Drupal site

Before we start, I'd like to clarify that this post focuses on outlining the process and steps for automating a local environment. However, before we can automate, we need to build all the pieces. If you just want to grab the final product, here's the repo; otherwise, read on.

As of Drupal 8.8.0, Composer project templates are now available as part of Drupal core. These project templates are recommended for building new Drupal sites as they serve as a starting point for creating a Composer-managed Drupal site.

Two things we will be doing with our Drupal project:

  1. Drupal dependencies and modules will not be committed to the git repo. They will be downloaded with Composer when the project is being built.
  2. Composer will not be installed on the host system; instead, we will use the Composer version that comes with DDEV's web container.
Let's start
  1. Create a new directory for your project. The directory name should be lowercase and alphanumeric characters only. For this example I will name it drupaltraining and will create it in my /Sites directory. Using your command line tool, create the new directory:

    mkdir drupaltraining
  2. Now navigate into the newly created directory:

    cd drupaltraining
  3. Run the DDEV command below to set up a new Drupal project:

    ddev config --project-type=drupal8 --docroot=web --create-docroot
    • This will create a .ddev directory with basic Drupal configuration.
    • It will also create settings.php and settings.ddev.php files inside web/sites/default.
    • Finally, it creates a Drush directory.
    • Keep in mind, Drupal is not in place yet, this simply sets the environment for it.
  4. Now start DDEV to create the containers.

    ddev start
    • After the project has been built, you will see a link to open Drupal in the browser. For this project the link should be https://drupaltraining.ddev.site (ignore for now). If you have not installed mkcert (mkcert -install), the link you see may use http instead of https. That's fine, but I'd recommend always using https by installing mkcert.
  5. Now we are going to create the codebase for Drupal by using the official Drupal community Composer template.

    ddev composer create "drupal/recommended-project:^8"

    Respond Yes when asked if it's okay to override everything in the existing directory.

    • This will setup the codebase for Drupal. This could take a while depending on your connection.
    • composer.json will be updated so Drupal is added as a project dependency.
    • New settings.php and settings.ddev.php files are created.
    • Using ddev composer create uses the version of Composer that comes as part of DDEV's web container. This means Composer is not required to be installed on the host computer.
    • One thing I love about using the new Composer template is the nice list of next steps you get after downloading Drupal's codebase. So useful!
  6. Now let's grab some modules

    ddev composer require drupal/devel drupal/admin_toolbar drupal/paragraphs drupal/components drupal/viewsreference drupal/entity_reference_revisions drupal/twig_field_value
    • This will download the modules and update composer.json to set them as dependencies along with Drupal. The modules and Drupal's codebase will not be committed to our repo.
  7. Let's add Drush to the project

    ddev composer require drush/drush
    • Installing Drush as part of the project means it runs in the web container, which makes it possible to run Drush commands (ddev drush <command>) even if the host does not have Drush installed.
  8. Now launch Drupal in the browser to complete the installation

    ddev launch
    • This will launch Drupal's install page. Drupal's URL will be https://drupaltraining.ddev.site. Complete the installation using the Standard profile. Since the settings.ddev.php file already exists, the database configuration screen will be skipped during installation.
  9. Make a copy of example.settings.local.php into web/sites/default

    cp web/sites/example.settings.local.php web/sites/default/settings.local.php
    • This is recommended to override or add new configuration to your Drupal site.
  10. Update settings.php to include settings.local.php.

    if (file_exists($app_root . '/' . $site_path . '/settings.local.php')) {
      include $app_root . '/' . $site_path . '/settings.local.php';
    }
    • I'd suggest adding the above code right after the settings.ddev.php include already in settings.php. This will allow us to override configuration found in settings.ddev.php.
  11. Let's change Drupal's default config directory. Doing this will store any exported configuration outside the web directory. This is a good security measure. Let's also ignore a couple of folders to avoid Drupal errors.

    if (empty($settings['config_sync_directory'])) {
      $settings['config_sync_directory'] = '../config/sync';
    }

    $settings['file_scan_ignore_directories'] = [
      'node_modules',
      'bower_components',
    ];
    • The first block changes the default sync directory to drupaltraining/config/sync. If you look inside settings.ddev.php you will see that this directory is inside web. We want to store any configuration changes outside the web directory.
    • The second block tells Drupal to ignore node_modules and bower_components. Drupal may look for Twig templates inside these directories, which can cause it to crash. Ignoring them avoids the problem.
  12. Since we've made changes to the configuration, restart DDEV:

    ddev restart

That's quite the process, isn't it? The good news is this entire process will be eliminated when we finish automating the environment.

Drupal 8 custom theme

The project's theme is called training_theme. This is a node-based theme, and will be built with Mediacurrent's theme generator, which will provide:

  • A best-practices Drupal 8 theme
  • Pattern Lab integration
  • Automated Front-End workflow
  • Component-based-ready environment
  • Production-ready theme

The final DDEV project will include the new Drupal 8 theme so there is no need to create it now, but if you want to see how the Theme Generator works, watch the video tutorial I recorded.

Automating our environment

Now that Drupal has been set up, let's begin the automation process.

Dockerfile

DDEV's web container comes with Node and NPM installed. This will work in most cases, but the Drupal theme may use a version of Node not currently available in the container. In addition, the web container does not include Node Version Manager (NVM) to manage multiple Node versions. If the tools we need are not available in the web or db images/containers, there are ways to modify them to include the required tools. One of those ways is an add-on Dockerfile in your project's .ddev/web-build or .ddev/db-build, depending on which container you are trying to modify.

Inside .ddev/web-build create a file called Dockerfile (case sensitive), and in it, add the following code:

ARG BASE_IMAGE
FROM $BASE_IMAGE

ENV NVM_DIR=/usr/local/nvm
ENV NODE_DEFAULT_VERSION=v14.2.0

RUN curl -sL https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh -o install_nvm.sh
RUN mkdir -p $NVM_DIR && bash install_nvm.sh
RUN echo "source $NVM_DIR/nvm.sh" >>/etc/profile
RUN bash -ic "nvm install $NODE_DEFAULT_VERSION && nvm use $NODE_DEFAULT_VERSION"
RUN chmod -R ugo+w $NVM_DIR

The code above sets an environment variable for the default Node version (v14.2.0), which is the version the theme uses at the time of this setup. It installs and configures NVM, and makes NVM available by updating the container's bash profile. A Dockerfile runs while the image is being built, altering the default configuration with the code it contains.
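Once the image is rebuilt (ddev restart), you can sanity-check the Node install from the host. A quick sketch, assuming the profile sourcing above (a login shell is needed so /etc/profile gets read):

# Run a login shell in the web container so NVM is sourced:
ddev exec bash -lc "nvm --version && node --version"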

Custom DDEV commands

The Drupal 8 theme for this project uses Node for most of its tasks. To compile CSS, JavaScript, and Twig we need to run commands such as npm install, npm run build, npm run watch, and others. Our goal is to be able to run these commands in the web container, not the host computer. Doing this eliminates the need for students to install any node-related tools which can get really complicated. While we could achieve this by asking students to first SSH into the web container (ddev ssh), then navigate into /web/themes/custom/training_theme/, then run the commands, I want to make it even easier for them. I want them to be able to run the commands from any directory within the project and to not have to ssh into the web container. To achieve this we need to create a couple of custom commands.

Custom commands can be created to run in containers as well as the host machine. Since we want to run these commands in the web container, we are going to create the commands inside .ddev/commands/web/. Custom commands are bash script files.

NVM custom command
  • Inside .ddev/commands/web/ create a new file called nvm
  • Add the following code in the file:
#!/bin/bash

## Description: Run any nvm command.
## Usage: nvm [flags] [args]
## Example: "nvm use or nvm install"

source /etc/profile && cd /var/www/html/web/themes/custom/training_theme && nvm $@
  • Since this is a bash script, #!/bin/bash is required as the first line in the file.
  • A description of the script is a good practice to explain what the script does.
  • Pay close attention to ## Usage: nvm [flags] [args]. This is what makes the commands work. The [flags] and [args] are ways to pass arguments to the command. For example, nvm on its own won't do much, but use or install can be passed as parameters to complete the commands nvm use or nvm install. Being able to run these commands will allow us to install new versions of Node later on if needed.
  • Next we are adding examples of potential commands that can be run.
  • Finally, you see the actual commands. source /etc/profile loads the container's bash profile so NVM becomes available. Then we navigate into the training_theme directory within the container, where the nvm commands will be executed. So technically in the script above we are running three commands in one; using && between each command lets us chain them. The $@ after nvm represents the flags or arguments we can pass (i.e. use or install).

Running the new commands: Every custom command needs to be executed by adding ddev before the command. For example: ddev nvm use. Using ddev in front of the command instructs the system to run the command in the containers rather than the host computer.

NPM custom commands

We will also create a custom script to run NPM commands. This will be similar to the NVM script. This new script will be executed as ddev npm install or ddev npm run build, etc. The npm commands will allow us to install the node dependencies required by the theme as well as execute tasks like compiling code, linting code, compressing assets, and more.

  • Create a new file inside .ddev/commands/web and call it npm
  • Add the following code in the file:
#!/bin/bash

## Description: Run npm commands inside theme.
## Usage: npm [flags] [args]
## Example: "npm install or npm rebuild node-sass or npm run build or npm run watch"

cd /var/www/html/web/themes/custom/training_theme && npm $@
  • Most of the code here is similar to the previous script, except instead of nvm we will run npm.
  • Notice we are again navigating into the theme directory before running the command. This makes it possible for the custom commands we are creating to be run from any directory within our project while still being executed inside the training_theme directory in the container.
Drush custom command

Let's create one last custom command to run Drush commands within the DDEV containers.

  • Create a new file inside .ddev/commands/web and call it drush
  • Add the following code in the file:
#!/bin/bash

## Description: Run drush inside the web container
## Usage: drush [flags] [args]
## Example: "ddev drush uli" or "ddev drush sql-cli" or "ddev drush --version"

drush $@
  • This will allow us to run drush commands in the container but using ddev drush <command> (i.e. ddev drush cr, ddev drush updb -y, etc.).

So that's it for custom commands. By having custom commands for nvm and npm, we can now run any theming-related tasks.
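For example, a first-time theme setup now looks like this, run from anywhere in the project (all commands come from the scripts above; whether nvm use needs an explicit version depends on whether the theme ships an .nvmrc file):

ddev nvm use v14.2.0   # switch to the Node version the theme expects
ddev npm install       # install the theme's node dependencies
ddev npm run build     # compile the theme's assets
ddev npm run watch     # recompile on changes and serve Pattern Lab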

Automating Drupal's setup

We want to streamline the Drupal installation process. In addition, we want to be able to import a custom database file to have access to all the infrastructure needed during training. This includes content types, paragraph types, views, view modes, image styles, and more. Enter DDEV hooks.

DDEV Hooks

Hooks are a great way to perform tasks before or after DDEV starts. There are tasks that need to happen in a specific sequence and hooks allow us to do just that. So what's the difference between custom commands and hooks? Technically hooks can be considered custom commands, but the difference is that they are executed automatically before or after DDEV starts, whereas custom commands are run on demand at any time. DDEV needs to be running if custom commands are intended to run in containers. Back to hooks: let's build a post-start hook.

Hooks can be run as pre-start, post-start, and after-db-import. Also, hooks can be executed inside containers and/or on the host machine. In our case, all the tasks we outlined above will run after DDEV starts, most of them in Docker containers.

  • Open .ddev/config.yaml and add the following code at the bottom of the file. There may already be a hooks section in your file. Be sure indentation in the file is correct.
hooks:
  post-start:
    - composer: install
    - exec: /var/www/html/db/import-db.sh
    - exec: drush updb -y
    - exec: drush cim -y
    - exec-host: cp -rf web/sites/example.development.services.yml web/sites/development.services.yml
    - exec-host: cp -rf web/assets/images/* web/sites/default/files/images/
    - exec: drush cr
    - exec-host: ddev launch /user
  • post-start: indicates the declared tasks will run after DDEV's containers have started. We want the web and db containers available before running any of the tasks.
  • composer: install will download all dependencies found in composer.json (Drupal core, modules, Drush, and others).
  • exec: /var/www/html/db/import-db.sh is a custom script which imports a custom database file (provided in this project). We will go over this script shortly. Importing a custom database builds the Drupal site without having to install Drupal, as long as Drupal's codebase exists.
  • exec: drush cim -y, exec: drush updb -y, and exec: drush cr are basic Drush commands.
  • exec-host: cp -rf web/sites/example.development.services.yml web/sites/development.services.yml will create a new development.services.yml. This is needed because when Drupal is set up, development.services.yml is overwritten, and since we are using custom configuration in that file to enable Twig debugging, we need to restore the configuration by replacing the file with a copy of our own.
  • exec-host: cp -rf web/assets/images/* web/sites/default/files/images/ copies a collection of images used in the demo content added to the site.
  • Finally, exec-host: ddev launch /user will open a fully configured Drupal website in the browser.

Login to Drupal: Username: admin, password: admin

Import database script

Now let's write the script to import the database we are using in the hook above.

  • In your project's root, create a new directory called db
  • In the new directory create a new file called import-db.sh and add the following code:
#!/bin/bash

# Use a table that should exist in your database.
if ! mysql -e 'SELECT * FROM node__field_hero;' db 2>/dev/null; then
  echo 'Importing the database'
  # Provide path to custom database.
  gzip -dc /var/www/html/db/drupaltraining.sql.gz | mysql db
fi
  • This is another bash script, which performs the database import only if the database is empty. We do this by checking whether one of the tables we expect in the database exists (node__field_hero). If it does, the database is not imported; if it doesn't, the database is imported. This can be any table you know should exist.

  • Notice the script calls for a database file named drupaltraining.sql.gz. This means the database file should exist inside the db directory alongside the import-db.sh script. This database file was created/exported after Drupal was configured with all the infrastructure and settings required for training (see the export sketch after this list).

  • Make the script executable

chmod +x db/import-db.sh
  • This will ensure the script can be executed by DDEV, otherwise we would get a permission denied error.
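For reference, the database file itself can be produced with DDEV once the site is fully configured. A sketch of one way to do it (flag syntax may vary across DDEV versions):

ddev export-db --file=db/drupaltraining.sql.gz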

That does it for automation. The next few tasks are things that improve the development environment. Some of these tasks are optional.

Exposing Pattern Lab's port in host computer

Since our goal is to not have to install any Front-End tools in the host computer, running Pattern Lab has to be done in the web container. The problem is we can't open Pattern Lab in the host's browser if Pattern Lab is running in the container. For this to work we need to expose the port in which Pattern Lab runs to the host machine. In this environment, that port is 3000 (this port number may vary). We identify this port by running npm run watch inside the training_theme directory. This will provide a series of links to access Pattern Lab in the browser.

Under the hood, DDEV uses docker-compose to define and run the multiple containers that make up the local environment for a project. docker-compose supports defining multiple compose files to facilitate sharing Compose configurations between files and projects, and DDEV is designed to leverage this ability.

Creating a new docker-compose.*.yaml

A docker-compose file allows us to do many things, including exposing ports from the containers to the host computer.

On a typical Pattern Lab project, if you run npm start you will see Pattern Lab running on http://localhost:3000. In this environment the equivalent command is ddev npm run watch. Since this environment runs Pattern Lab in the web container, the only way to access Pattern Lab in the browser is by having access to the container's port 3000. Exposing the port via the http or https protocols makes it possible to access Pattern Lab's UI page from the host machine.

  1. Inside .ddev/, create a new file called docker-compose.patternlab.yaml

  2. In the new file add the following code:

    # Override the web container's standard HTTP_EXPOSE and HTTPS_EXPOSE
    # to access Pattern Lab on https://drupaltraining.ddev.site:3000,
    # or http://drupaltraining.ddev.site:3001.
    version: '3.6'
    services:
      web:
        # ports are a list of exposed *container* ports
        ports:
          - "3000"
        environment:
          - HTTP_EXPOSE=${DDEV_ROUTER_HTTP_PORT}:80,${DDEV_MAILHOG_PORT}:8025,3001:3000
          - HTTPS_EXPOSE=${DDEV_ROUTER_HTTPS_PORT}:80,${DDEV_MAILHOG_HTTPS_PORT}:8025,3000:3000
    • The name of the file is completely optional. It makes sense to use a name that is related to the action, app, or service you are trying to implement. In this example the name docker-compose.patternlab.yaml made sense.
    • How did we arrive at the content above for this file? DDEV comes with two files that can be used as templates for new configuration, one of those files is called .ddev-docker-compose-base.yaml. You can find all the code we added above in this file.
  3. Restart DDEV to allow for the new changes to take effect:

    ddev restart
    • The gist of the code above is that it exposes the web container's port 3000 to the host machine over both the http and https protocols.
  4. For the above to work Pattern Lab needs to be running in the container:

    ddev npm run watch

Viewing Pattern Lab in the host machine

With Pattern Lab running in the container, you can now open it from the host machine at https://drupaltraining.ddev.site:3000 (or http://drupaltraining.ddev.site:3001), the ports exposed in the docker-compose file above.

Using NFS (optional)

Last but not least, enabling NFS in DDEV can help with the performance of your application. NFS (Network File System) is a classic, mature Unix technique to mount a filesystem from one device to another. It provides significantly improved webserver performance on macOS and Windows. This is completely optional and in most cases you may not even need to do it. The steps below are for macOS only. Learn more about NFS and how to enable it in other operating systems like Windows.

  • In your command line run
id
  • Make note of uid (user id) and gid (group id).
  • Open /etc/exports in your code editor, and add the following code, preferably at the top of the file:
/System/Volumes/Data/Users/xxxx/Sites/Docker -alldirs -mapall=502:20 localhost
  • Replace xxxx with your username. The full path shown above is required if you are using macOS Catalina.

  • Replace Sites/Docker (this is my personal projects directory) with the directory name where your DDEV projects are created. Most people would mount the entire user's home directory, but I think mounting only the directory where your projects live is good enough (/Sites/Docker/).

  • Replace 502 and 20 with the values you got when you ran the id command above.

  • Update DDEV's config.yaml to enable NFS:

nfs_mount_enabled: true
  • Restart DDEV
ddev restart

You should notice an improvement in performance in your Drupal website.

In closing

I realize there is a lot here, but I am pretty happy with how this turned out. Thanks to all the work in this article, when one of the students wants to set up their training environment, all they have to do is run ddev start. I think that's pretty sweet! 🙌 Happy DDEVing!

Giving credit

Before I started on this journey, I knew very little about DDEV. Thanks to the amazing help from Randy Fay from Drud.com and Michael Anello from DrupalEasy.com, I have learned a lot in the past weeks and wanted to share it with other community members. You can find both Randy and Michael on Drupal's DDEV Slack channel. They are extremely helpful and responsive.

Categories: FLOSS Project Planets

Mario Hernandez: Building a Drupal Theme with the Theme Generator

Planet Drupal - Sat, 2024-06-15 22:24

On one of my last training workshops I took a chance and decided to let students pick their environment of choice to use during training. As always I hosted an online call prior to training to assist anyone who needed help setting up their environment. About five of the students showed up to the call. This is a first. In the past when I've used a preconfigured training environment typically no one shows up to this prep call because the environment I've put together for them has been fully tested and any potential issues have been addressed.

Although I was able to help everyone get ready for training, and no big issues were encountered during training, I learned that perhaps having a preconfigured training environment is the best way to go. Having done this in the past, I found that a preconfigured environment not only provides a consistent experience but also makes things more predictable for everyone.

I'm going to show you the latest setup I am using when training people. This is a new setup I put together using DDEV with a host of other tools, including Drupal.

See the Theme Generator's project page on GitHub.

Watch the full tutorial below:

Categories: FLOSS Project Planets

Mario Hernandez: Our best training yet

Planet Drupal - Sat, 2024-06-15 22:24

For the past few years we've been working on a training curriculum around Component Based Development, and every time we finish updating it for an upcoming event we feel really good about it and think we finally got it right. In reality, a training curriculum for a topic as complex as Component Development is probably never finished.

After BADCamp's training workshop last year we thought we were done updating the training material, and we felt it was so good that it would carry us all the way to DrupalCon Seattle 2019. We could not have been more wrong. Soon after BADCamp we made the decision to move from KSS Node to Pattern Lab for handling our patterns workflow and living style guide. This meant our entire training material needed to be updated in time for DrupalCon. Needless to say, we spent months ensuring the material was current, relevant and effective.

DrupalCon Seattle was a total success. Our training sold out in about two weeks and ended up with almost 50 people in attendance, which is probably a record for us. But with more students, the chances of issues or something going wrong increase. We make a big effort to provide an automated environment for students to use. This allows us to provide all the tools needed during training pre-configured and tested, ensuring more predictable and consistent behavior from students' systems. However, there are always cases in which a student decides not to use our environment and, either due to technical skills or strict restrictions on their systems, needs to use a workflow we have not tested. This was certainly the case at DrupalCon, and we had to find ways to help those students follow along during training.

One way to help people with their local setup is by conducting what we call "Office Hours". We reserve an hour or two in the weeks before the training where anyone who needs assistance with their local environment can join us online and we attempt to get them up and running. We helped several individuals this way. Although it is more work for us, it allows us to address any issues off hours rather than spending valuable training time fixing computer problems.

The training was engaging and we received great feedback. Our curriculum, although not perfect, is relevant and well received. Having done this a few times at small and large events gives us confidence and makes it easier next time we do it.

Our team is dedicated and passionate about training, and this would not be possible without them. Special thanks go to Eric Huffman, who has done most of the heavy lifting with the local environment automation. Kelly Dassing, who as a Director of Project Management provides a unique perspective to those in attendance who may not be developers. Tobias Williams, who is an all-around great front-end developer and engineer. And Mark Casias, who is able to answer the hard back-end questions that the rest of the team is not as proficient with.

Should you have training needs please reach out and we would be happy to put together a curriculum that fits your specific needs or that of your team.

For a more in-depth look at how we prepare for a training workshop, read my blog post on Planning for an Effective Training Workshop.

Until next time ... Cheers!

Categories: FLOSS Project Planets

Mario Hernandez: Handling Drupal attributes in components

Planet Drupal - Sat, 2024-06-15 22:24

In Drupal's Twig templates you'll often see an attributes variable being output within the template. This variable is how core and contrib modules inject their CSS classes, an ID, or data attributes onto template markup. You'll also find title_prefix and title_suffix variables. These are used by core and contrib modules to inject markup into Twig templates. A good example of this is the core Contextual Links module. If you were to remove the attributes, title_prefix, and title_suffix variables from a node template, for example, then the Contextual Links module would no longer have a way to add its drop-down to the display of nodes.

In some cases this may not be an issue for you, but in general it's best to plan to accommodate those Drupal-specific variables in your component markup so that when you integrate Drupal content into your components, other features can be available too.

Since the attributes variable can include class, id, and data attributes in one variable, we need to make sure we only combine Drupal’s classes with ours, and let the other attributes render without Drupal classes. This can be accomplished on the main wrapper of the component template.

<article class="card{{ attributes ? ' ' ~ attributes.class }}"
  {{ attributes ? attributes|without('class') }}>
  {{ title_prefix }}
  {{ title_suffix }}
  {% if image %}
    <div class="card__image">
      {{ image }}
    </div>
  {% endif %}
  <div class="card__content">
    {% if heading %}
      {% include '@components/heading/heading.twig' with {
        "heading": {
          "title": heading.title,
          "url": heading.url,
          "heading_level": heading.heading_level,
          "classes": 'card__heading'
        }
      } only %}
    {% endif %}
  </div>
</article>

Note that the without Twig filter in this example is a Drupal-specific filter, so for the component we'll want to make sure we're using a tool that supports Drupal's custom filters (most design systems, such as KSS Node and Pattern Lab, have configuration options that support Drupal Twig filters).

Now if we integrate our card component with Drupal (i.e. node--card.html.twig), we can ensure Drupal's attributes and contextual links will be available when the component is rendered. The node template, also known as presenter template, would look something like this:

{% set rendered_content = content|render %}

{% set heading = {
  title: label,
  url: url,
  heading_level: '4',
  attributes: title_attributes
} %}

{% embed '@components/card/card.twig' with {
  attributes: attributes,
  title_prefix: title_prefix,
  title_suffix: title_suffix,
  heading: heading,
  image: content.field_image is not empty ? content.field_image,
} only %}
{% endembed %}
  • First we're triggering a full render of the content variable.
  • Then we set up a variable for the Heading field,
  • Finally we are using an embed twig statement to integrate the Card component. In the embed we are mapping all the Card fields with Drupal's data. We also pass in Drupal-specific items such as title_prefix, title_suffix, attributes, etc.

Our card component will be rendered with all Drupal's attributes and the ability to be edited inline thanks to the contextual links.

Hope this helps you in your component development journey.

Categories: FLOSS Project Planets

Mario Hernandez: Surviving the fire

Planet Drupal - Sat, 2024-06-15 22:24

We are used to watching the news where somewhere in California a wildfire is destroying nature and people's homes. In the 30 years I have lived in the Los Angeles area I had never experienced this firsthand. I have been a victim of earthquakes, but this was my first time as a fire victim.

We first heard about the fires earlier in the week, and the night of November 8th we knew the fires were escalating, but they were nowhere near our neighborhood. All this changed when we went to bed Thursday night and noticed the winds were so strong that they made loud and weird noises as they blew through the trees and the roof of our place.

We went to bed without a single concern. At 3:30 a.m. on the morning of Friday, November 9th, we woke up to loud screams and knocks on our door from the Fire Department. As I opened the door, still half asleep, they told me we had 15 minutes to get out and to head to the nearest shelter. They had to repeat this information about 3 times as I was still in disbelief and completely confused. As I looked out our door, this was the image I saw:

You probably can't tell from the picture, but that fire is only feet away from our house. Those are the same hills I hike on, and now they are completely burned.

It gets worse

My wife and I got the kids ready and were only able to pick up passports and personal documents. We didn't have enough time to grab anything else, except to change into clothes more practical for the weather.

We arrived at the shelter at around 4:00 a.m. and it was packed with people and pets. We had a pet of our own as well. After registering, we sat there for a while waiting to hear news, but nothing was coming in. At around 5:00 a.m. we decided it would be best to leave for my parents' house, about 28 miles away. As we were getting ready to leave, we saw one of our neighbors, who proceeded to give us the worst news of the morning. He informed us our building complex had completely burned and our house was gone. We didn't know how to respond or react to this news. The thought that we no longer had a house was unreal. I recall telling my wife I didn't believe it, but I had no way to prove otherwise. I was probably in denial, or perhaps hoping it wasn't true.

As we left the shelter at around 5:30 a.m., my wife asked me to drive by our house to see for ourselves what it looked like. To our surprise, there was zero damage to our building or the buildings in our neighborhood. I was both relieved and extremely upset at my neighbor for giving us such bad and inaccurate information. I hope I can put this behind us, but for now I am still bothered by it.

The fire stopped short of our place and we could not be more fortunate. I was relieved and felt like we had made it through the worst when we saw our place still standing.

Then it gets better

While at the shelter, I notified several people at my job, Mediacurrent.com, that I was being evacuated and would not be able to work that day. The response from everyone was unbelievable. Mediacurrent partner Paul Chason responded within seconds of my email to let me know that they were behind us and to let him know of anything we needed. This was the same message I received from many colleagues and friends. I have always felt fortunate to be part of a great team like Mediacurrent's, but they go above and beyond to support their employees. This is not the first time they have shown me this kind of support. The first time was when my wife was diagnosed with chronic kidney disease, and Mediacurrent's response was unbelievable.

As other friends on social media learned about our situation, the flood of support was so great that it gave us strength and hope that we were going to be okay.

As we started communicating with other neighbors we began to feel more confident that we would be able to return home soon.

We came back home on Friday evening to pick up some stuff, including my wife's dialysis machine, which she uses every night as treatment for her chronic kidney disease, and also to clean up and change clothes. Things by then looked pretty normal in our neighborhood, but the air quality was still pretty bad, so we decided to go back to my parents' for one more night. On Saturday evening we returned home for good and began the cleaning process, which was not as bad as it could have been.

In closing

I hope we never have to go through something like this again, and I really hope our friends and family never have to go through it, but having the support of our family, friends and coworkers made this horrible experience more manageable. We want to thank our family for always being there, and for providing a place to stay during this emergency. Our Mediacurrent family for showing us that we are more than coworkers; we care for each other. Thank you to our friends who continuously checked on us and offered help. Finally, our deepest and most special thanks to the everyday heroes who risk their lives to help others: the first responders, firefighters and law enforcement officers who worked and continue working tirelessly to protect us.

Thank you - The Hernandezes. 🙏 ❤️

Categories: FLOSS Project Planets

Mario Hernandez: Adding Social Share Links to Gatsby

Planet Drupal - Sat, 2024-06-15 22:24
Sharing is caring.

I've been working on my personal blog (this site) for a while. I built it with Gatsby and little by little I have been adding extra functionality. Today I'm going to show you how I added social sharing links to allow visitors to share my posts with others on Twitter, Facebook, LinkedIn, and other channels.

For an example of the Sharing links, look at the icons above the hero image on this and every post on this site.

There are many ways to accomplish this, but from the beginning I wanted to use something that was simple and did not require too much overhead to run. There are solutions out there that require third-party libraries and scripts, and I wanted to avoid that. A while back I was introduced to Responsible Social Share Links. The beauty of Responsible Social Share Links is that they do not need any JavaScript to work. They use the sharing URLs available for most social media channels.

Let's take a look at some examples of what these links look like:

Facebook

https://www.facebook.com/sharer/sharer.php?u=URL_TO_SHARE

Twitter

<a href="https://twitter.com/intent/tweet/?text=Check this out&url=https://mariohernandez.io&via=imariohernandez" target="_blank">Share on Twitter</a>

Most of these links accept several parameters. You can see these parameters in more detail at the Responsible Social Share Links page. In addition, some systems may require you to encode the links, but luckily for us React does this automatically.

Using the links in a Gatsby site (or React for that matter)

You may think: that's so easy, just modify each of the links with my personal information and done. That's true to an extent. However, the tricky part is dynamically passing the current page's URL and post title to your sharing link. So here's how I did it:

  1. Edit your blog post template. In my case my blog post template is /src/templates/blog-post.js This is based on the Gatsby starter I used. Your mileage may vary.

  2. Add the following code where you wish to display the sharing links to generate a twitter share link:

<Share>
  <ShareLink href={`https://twitter.com/intent/tweet/?text=${post.frontmatter.title}&url=https://mariohernandez.io${post.frontmatter.path}%2F&via=imariohernandez`}>
    {/* Optional icon */}
    <LinkLabel>Share on Twitter</LinkLabel>
  </ShareLink>
</Share>

The example above creates a twitter share link and uses the data variables I am already using to print the blog post content. As you know, Gatsby uses GraphQL to query the posts and by doing this you have access to each of the fields in your post (i.e. title, path, tags, date, etc.).

In the example above, I am passing ${post.frontmatter.title} so when the post is shared the title of the post is included as your tweet text. In addition, I am linking to the current post by passing ${post.frontmatter.path}. Finally I am passing my twitter handle.

There are other parameters you can pass to your share links. Things like hashtags, mentions, and more. Following the same pattern you can do the same for Facebook, LinkedIn and others.
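For context, the post.frontmatter values used in these links come from the template's page query. The exact query depends on your starter, so treat this as a minimal sketch that assumes posts are markdown nodes whose frontmatter carries the title and path fields referenced above:

query BlogPostByPath($path: String!) {
  markdownRemark(frontmatter: { path: { eq: $path } }) {
    frontmatter {
      title
      path
    }
  }
}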

A much cleaner approach

You may have noticed that I created the sharing snippet directly in the blog-post.js template. A much cleaner approach would be to create a new React component for all your sharing links and include that component in blog-post.js.

Here's the full snippet for all the social channels I am using:

<Share>
  <ShareLabel>Share this post</ShareLabel>
  <ShareSocial>
    <ShareItem>
      <ShareLink
        href={`https://twitter.com/intent/tweet/?text=${
          post.frontmatter.title
        }&url=https://mariohernandez.io${post.frontmatter.path}%2F&via=imariohernandez`}
      >
        <span>
          <svg
            xmlns="http://www.w3.org/2000/svg"
            width="24"
            height="24"
            viewBox="0 0 24 24"
            fill="none"
            stroke="currentColor"
            strokeWidth="2"
            strokeLinecap="round"
            strokeLinejoin="round"
            className="icon icon-twitter"
          >
            <path d="M23 3a10.9 10.9 0 0 1-3.14 1.53 4.48 4.48 0 0 0-7.86 3v1A10.66 10.66 0 0 1 3 4s-4 9 5 13a11.64 11.64 0 0 1-7 2c9 5 20 0 20-11.5a4.5 4.5 0 0 0-.08-.83A7.72 7.72 0 0 0 23 3z" />
          </svg>
        </span>
        <LinkLabel>Share on Twitter</LinkLabel>
      </ShareLink>
    </ShareItem>
    <ShareItem>
      <ShareLink
        href={`https://www.facebook.com/sharer/sharer.php?u=https://mariohernandez.io${
          post.frontmatter.path
        }`}
        target="_blank"
      >
        <span>
          <svg
            xmlns="http://www.w3.org/2000/svg"
            width="24"
            height="24"
            viewBox="0 0 24 24"
            fill="none"
            stroke="currentColor"
            strokeWidth="2"
            strokeLinecap="round"
            strokeLinejoin="round"
            className="icon icon-facebook"
          >
            <path d="M18 2h-3a5 5 0 0 0-5 5v3H7v4h3v8h4v-8h3l1-4h-4V7a1 1 0 0 1 1-1h3z" />
          </svg>
        </span>
        <LinkLabel>Share on Facebook</LinkLabel>
      </ShareLink>
    </ShareItem>
    <ShareItem>
      <ShareLink
        href={`https://www.linkedin.com/shareArticle?mini=true&url=https://mariohernandez.io${
          post.frontmatter.path
        }&title=${post.frontmatter.title}&source=${post.frontmatter.title}`}
        target="_blank"
      >
        <span>
          <svg
            xmlns="http://www.w3.org/2000/svg"
            width="24"
            height="24"
            viewBox="0 0 24 24"
            fill="none"
            stroke="currentColor"
            strokeWidth="2"
            strokeLinecap="round"
            strokeLinejoin="round"
            className="icon icon-linkedin"
          >
            <path d="M16 8a6 6 0 0 1 6 6v7h-4v-7a2 2 0 0 0-2-2 2 2 0 0 0-2 2v7h-4v-7a6 6 0 0 1 6-6z" />
            <rect x="2" y="9" width="4" height="12" />
            <circle cx="4" cy="4" r="2" />
          </svg>
        </span>
        <LinkLabel>Share on LinkedIn</LinkLabel>
      </ShareLink>
    </ShareItem>
  </ShareSocial>
</Share>

In Closing

If you want a clean and lightweight way to share your content with others, Responsible Social Share Links may be just what you need.

Categories: FLOSS Project Planets

Mario Hernandez: How to build a card component

Planet Drupal - Sat, 2024-06-15 22:24

One of the most popular components on any website I've worked on is the "Card" component. Depending on the website, the card component is used to group various pieces of content into a container which can be used throughout your website.

Read the full article on how to build a card component.

Categories: FLOSS Project Planets

Mario Hernandez: Flexible Headings with Twig

Planet Drupal - Sat, 2024-06-15 22:24

Proper use of headings h1-h6 in your project presents many advantages, including semantic markup, better SEO ranking, and better accessibility.

Updated April 3, 2020

Building websites using the component-based approach presents all kinds of advantages over the traditional page-building approach. Today I'm going to show how to create what would normally be an Atom, if we use the atomic design approach for building components. We are going to take this simple component to a whole new level by providing a way to dynamically control how it is rendered.

The heading component

Headings are normally used for page or section titles and are a big part of making your website SEO friendly. As simple as this may sound, headings need to be carefully planned. A typical heading would look like this:

<h1>This is a Heading 1</h1>

The idea of components is that they are reusable, but how can we possibly turn what already looks like a bare-bones component into one that provides options and flexibility? What if we wanted to use an h2 or h3? Or what if the title field is a link to another page? The heading component as it stands would not work because we have no way of changing the heading level from h1 to any other level, or of adding a URL. Let's improve the heading component to make it more dynamic.

Enter Twig and JSON

Twig offers many advantages over plain HTML and today we will use some logic to transform the static heading component into a more dynamic one.

Let’s start by creating a simple JSON object which we will use as data for Twig to consume. We will build some logic around this data to make the heading component more dynamic. This is typically how I build components on projects I work on.

  1. In your project, typically within the components/patterns directory create a new folder called heading
  2. Inside the heading folder create a new file called heading.json
  3. Inside the new file paste the code snippet below
{ "heading": { "heading_level": "", "modifier": "", "title": "This is the best heading I've seen!", "url": "" } }

So we created a simple JSON object with 4 keys: heading_level, modifier, title, and url.

  • The heading_level is something we can use to change the headings from say, h1 to h2 or h3 if we need to.
  • The modifier key allows us to pass a modifier CSS class when we make use of this component. The modifier class will make it possible for us to style the heading differently than other headings, if needed.
  • The title key is the title's string of text that will become the title of a page or a component.
  • ... and finally, the url key, if present, will allow us to wrap the title in an <a> tag, to make it a link.
  1. Inside the heading folder create a new file called heading.twig
  2. Inside the new file paste the code snippet below
<h{{ heading.heading_level|default('2') }} class="heading{{ heading.modifier ? ' ' ~ heading.modifier }}">
  {% if heading.url %}
    <a href="{{ heading.url }}" class="heading__link">
      {{ heading.title }}
    </a>
  {% else %}
    {{ heading.title }}
  {% endif %}
</h{{ heading.heading_level|default('2') }}>

Wow! What's all this? 😮

Let's break things down to explain what's happening here since the twig code has changed significantly:

  • First we make use of heading.heading_level to complete the number part of the heading. If a value is not provided for heading_level in the JSON file, we set a default of 2. This will ensure that by default we have an <h2> as the title, much better than the <h1> we saw before. This value can be changed every time the heading is used. The same approach is taken to close the heading tag on the last line of code.
  • Also, in addition to adding a class of heading, we check whether there is a value for the modifier key in JSON. If there is, we pass it to the heading as a CSS class. If no value is provided, nothing is added.
  • In the next line, we check whether a URL was provided in the JSON file, and if so, we wrap the heading.title variable in an <a> tag to turn the title into a link. The href value for the link is {{ heading.url }}. If no URL is provided in the JSON file, we simply print the value of heading.title as plain text.
Now what?

Well, our heading component is ready but unfortunately the component on its own does not do any good. The best way to take advantage of our super smart component is to start using it within other components.

Putting the heading component to use

As previously indicated, the idea of components is that they can be reused, which eliminates code duplication. Now that we have the heading component ready, we can reuse it in other templates by taking advantage of Twig's include statements. That will look like this:

<article class="card"> {% include '@components/heading/heading.twig' with { "heading": heading } only %} </article>

The example above shows how we can reuse the heading component in the card component by using a Twig include statement.

NOTE: For this to work, the same data structure for the heading needs to exist in the card's JSON file; a sketch of that JSON follows below. Or, you could alter the heading's values in Twig, like this:

<article class="card"> {% include '@components/heading/heading.twig' with { "heading": { "heading_level": 3, "modifier": 'card__title', "title": "This is a super flexible and smart heading", "url": "https://mariohernandez.io" } } only %} </article>

Did you notice the @components part? This is only an example of a namespace. If you are not familiar with the Component Libraries Drupal module, it allows you to create namespaces for your theme which you can use to nest or include components as we see above.
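And if you go the JSON route instead, the card's data file would carry the same heading structure. A minimal sketch, following the heading.json convention from earlier (the exact file name and values are assumptions for illustration):

{
  "heading": {
    "heading_level": "3",
    "modifier": "card__title",
    "title": "This is a super flexible and smart heading",
    "url": "https://mariohernandez.io"
  }
}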

End result

The heading component we built above would look like this when it is rendered:

<h3 class="heading card__title"> <a href="https://mariohernandez.io" class="heading__link"> This is a super flexible and smart heading </a> </h3> In closing

The main goal of this post is to shed light on how important it is to build components that are not restricted and can be used throughout the site in a way that does not feel like you are repeating yourself.

Additional Resources:

Managing heading levels in design systems.

Categories: FLOSS Project Planets

Mario Hernandez: Getting started with Gatsby

Planet Drupal - Sat, 2024-06-15 22:24

Like many developers, when I hear the words "static website" I immediately think of creating flat HTML pages and editing them by hand. Times have changed. As you will see, Static Site Generators (SSGs) offer some of the most advanced features and make use of the latest technologies available on the web.

Static Site Generators are nothing new. If you search for SSGs you will find many. One of the most popular ones is Jekyll, which I have personally worked with, and it's a really good one. However, this post focuses on Gatsby, probably one of the hottest systems for creating static sites.

What is Gatsby?

Gatsby's primary objective is to build static sites, but as you will learn, that's just the tip of the iceberg.

Gatsby is a blazing-fast static site generator for React.

How does Gatsby work?

While other SSGs use templating languages like Mustache, Handlebars, and others, Gatsby uses React. This not only allows for building modern component-driven websites, it also provides incredibly fast page rendering. Like, mind-blowingly fast.

Extending Gatsby

One of the most powerful features of Gatsby is its growing number of "Plugins". Plugins are the building blocks of Gatsby. They allow you to implement new features and functionality by running a couple of commands and making some configuration changes. Anything from adding Sass to your React project, to creating a blog, to configuring Google Analytics, and many more.

Plugins are contributed code kindly provided by the generous Open Source community which totally rocks. Anyone is able to write plugins and make them available to the world to consume and use.

Check out their Plugins page for a full list of ways you can take your static site to the next level.

Editorial Process

So we are building static sites, and you may be wondering: how do I create content for my site? There are several ways in which you can create a content editing workflow for your site. Probably the easiest way is to use static Markdown files. Markdown is a lightweight markup language with plain text formatting syntax. It is designed so that it can be converted to HTML and many other formats. Markdown is often used to format readme files, for writing messages in online discussion forums, and to create rich text using a plain text editor. This blog uses Markdown. Since I am the only one creating content, I don't need a fancy administrative interface to create it.
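As a sketch, a post in this kind of setup is typically a Markdown file with a frontmatter block of metadata followed by the body. The exact frontmatter fields (title, date, path, tags) depend on how your site queries them, so treat these as assumptions:

---
title: "Getting started with Gatsby"
date: "2019-03-01"
path: "/blog/getting-started-with-gatsby"
tags: ["gatsby", "static-sites"]
---

The body of the post goes here, written in plain Markdown...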

Markdown is only one of the ways you can create content for your static sites. Others include more advanced methods, such as plugging Gatsby into your Content Management System (CMS) of choice. This includes Wordpress, Drupal, Netlify, ContentaCMS, Contentful, and others. This means if you currently use any of those CMSs, you can continue to use them to retain a familiar workflow while moving your front-end workflow to a simpler and easier-to-manage process. This method is usually referred to as decoupled or headless, as your back-end is independent of your front-end.

Querying Data

As previously mentioned, Gatsby and the power of React create the perfect system for building robust, flexible and super fast static sites. However, there is a third component that takes that power to a whole new level, and that is GraphQL.

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

Deploying Gatsby

Hosting for a Gatsby site can be done anywhere React apps can run, and nowadays that's pretty much anywhere. However, before investing in an expensive and highly complicated hosting environment, take a look at some of the simpler and less expensive options on this page. You will see that for a basic website you can use several of the free options, such as GitHub Pages, Netlify and others, which already include advanced continuous integration workflows. For more advanced sites where a CMS may be involved, you can also find options for deployment that will simplify your DevOps process.

My own blog runs out of a GitHub repo that automatically gets deployed when I push new updates. This happens via Netlify, which to me is probably the easiest way I have ever deployed a website.

In closing

Don't worry if you are a little skeptical about static site generators. I was too. However, I gave Gatsby a try and I see myself building more Gatsby sites in the future. Before Gatsby I worked with Jekyll, which is also a great static site generator, but what sets Gatsby apart is its seamless integration with React and GraphQL. The combination of those three provides endless possibilities in your web building process. Check it out.

Categories: FLOSS Project Planets

Mario Hernandez: Running a training workshop

Planet Drupal - Sat, 2024-06-15 22:24

Update 1-10-19
I wrote an extended version of this post at Mediacurrent's blog, check it out.

As long as I can remember I've enjoyed public speaking. This doesn't mean I am good at it, it simply means I enjoy it. School events, class president, my jobs, etc., they all taught me great lessons about public speaking. So when I started as a developer, sharing my knowledge with others at conferences or meetups came pretty natural.

I'd like to clarify that after years of doing talks and other methods of public speaking, I am still terrified. I get nervous, my hands sweat, my legs shake, and my voice gets weird. Basically, what I am trying to say is that I'm not an expert by any means, but I overcome the phobia of public speaking by doing it frequently.

For many years I have been speaking at conferences, but in the past few years I started conducting longer workshops. I first started doing online workshops, which have their pros and cons. While they don't put you face to face with your audience, they also don't give you a good sense of how effective your training is because you can't see people's reactions. For this reason I prefer to do face-to-face training.
As part of my job I conduct periodic front-end training workshops for clients, and recently I started conducting all-day training workshops at conferences. I really enjoy it and I'd like to share some of the lessons learned.

Picking a topic

Ideally you want to pick a topic you feel 100% confident about. I have learned that people attending your training or talks welcome any information you can share no matter how simple or elementary it may feel to you. Don't ever think what you know may not be of interest to others because you would be wrong.

Lately I have been challenging myself a little more when picking a topic to train on. While it is good to know the topic well, it is also extremely rewarding to pick a topic you'd like to learn more about. This may feel contrary to what I said earlier, but hear me out. When you decide to train on a topic, you will spend a lot of time preparing, training, testing and rehearsing. This is exactly how you learn a new skill. I can't tell you how many times I have come out of a training knowing more about the topic than before, having also learned from the people who attended. If you want to learn a new skill, teaching others about it could be the best way to learn it.

Preparing for the training

Everyone has their own style for teaching or doing a presentation. Some people like to use slides and screenshots, others show recordings of their project or code. My personal preference is to build a working prototype. This to me presents many advantages, but it also means you will spend more time getting ready.
My training workshops usually include very few slides because the majority of the training is spent writing actual code and building the prototype.

Here's my typical process for preparing for a training workshop:

  • Identify a prototype that serves the purpose of the training. If I am teaching a workshop about component-based development, I would normally pick something that involves the different aspects of component-based development (atoms, nested components, reusable components, etc.)

  • Build the prototype upfront to ensure you have a working model to demo and go by.

  • Once the prototype is built, create a public repo so you can share the working prototype.

  • Write step-by-step instructions for building the prototype. Normally I break the prototype down into small components, or atoms.

  • Test, test and test. You want to make sure your audience will not run into unexpected issues while following your instructions. For this reason you need to make sure you test your instructions. Ask a friend or colleague to go through each of your exercises to ensure things work as expected.

  • Provide a pre-training evaluation: a quick set of questions that will give you an idea of people's skill levels as well as their environments (Linux, Windows, OSX). This will help you plan ahead of time.

  • Build a simple slide deck for introductions and agenda purposes. I move away from slides as soon as the introductions and agenda are done; the rest of the training is all hands-on.

Communicate with your audience ahead of time

As you will learn, one thing that can really eat up a lot of time during training is assisting people with their local environment setup. I have conducted training workshops where I've spent half the time helping people with their environments. For this reason, nowadays I communicate with attendees ahead of time to ensure everyone's local environment is ready to go.

I normally make myself available once or twice in an evening through a Google Hangout to assist anyone who may need help. I also provide detailed instructions on how to get their local environment ready. This can save you a lot of time during training. In addition, it's not fair for those who did spend the time getting their environment ready to be held back because someone else did not make the effort to set up theirs.
I make myself available ahead of the training, but if someone is still having issues because of neglect, I don't hold the rest of the class back. I try to help them, but at some point I move on.

During training

If possible, get help from someone who is also well-versed in the topic so they can assist people who get stuck. Nothing is more frustrating than having to break the flow of the training to troubleshoot individual issues. Having someone else handle this allows you to continue with the training without everyone losing momentum.

Finally

Enjoy yourself. Make sure you and your audience have fun. If you show excitement in what you are doing people will get excited as well.

Categories: FLOSS Project Planets

GNU Taler news: GNU Taler plugin for Adobe Commerce (Magento) now available

GNU Planet! - Sat, 2024-06-15 18:00
This project implemented the GNU Taler payment system in Adobe Commerce (formerly Magento). An extension was developed that can now be included in all Adobe Commerce online shops.
Categories: FLOSS Project Planets

This week in KDE: Final Plasma 6.1 polishing and new features for 6.2

Planet KDE - Sat, 2024-06-15 01:11

Plasma 6.1 is due to be released in three days, and lots of attention went into final release readiness activities: QA, bug-fixing, performance profiling, auto-testing, stuff like that. Boring but important! And happily, reviews of the 6.1 beta are, like, really good. So we want to make sure that the final release doesn’t disappoint!

In addition, we’re hard at work on Plasma 6.2, which is now beginning to accumulate features. Major areas of focus are some of the remaining Wayland pain points, including tablet and artist workflows. You can see some progress on that already:

New Features

Added an additional option for how to map the area of your drawing tablet to the area of your screen. It can be useful to have multiple options here, since drawing tablets are as diverse as screens, and their sizes and aspect ratios are unlikely to be identical (Joshua Goins, Plasma 6.2.0. Link):

Added a test mode feature for drawing tablets so you can test your settings (Joshua Goins, Plasma 6.2.0. Link):

Ported the Weather widget to the new API offered by the NOAA weather provider, which unlocked night forecasts in addition to the existing day forecasts (Ismael Asensio, Plasma 6.2.0. Link 1 and link 2)

This is new, so please excuse the UI glitches that are visible here. They'll be cleaned up before the final release of Plasma 6.2 in four months.

UI Improvements

You can now wake up a sleeping screen using a stylus (David Redondo, Plasma 6.1.1. Link)

KWin’s Morphing Popups effect has been deleted. While it added a measure of fanciness, unfortunately the way it worked introduced unfixable visual glitches. After multiple attempts to fix them, with a heavy heart we had to admit defeat. Removing the effect fixed six open Bugzilla tickets, one of which had 20 (!) duplicates. Stability beats fanciness. (David Edmundson, Plasma 6.2.0. Link)

Bug Fixes

Scrolling in Elisa’s lyrics view once again works with a clicky scroll wheel mouse (Jack Hill, Elisa 24.05.1. Link)

Fixed that annoying bug that affected people not using Systemd (or Plasma’s Systemd integration) which would make some but not all Plasma settings fail to get saved properly (David Edmundson, Plasma 6.1.0. Link)

Fixed a bug that could cause Plasma’s “Unify Outputs” screen action to actually disable certain screens instead of making them mirror the existing one (Xaver Hugl, Plasma 6.1.0. Link)

Quickly adapted to the NOAA weather provider’s continued API changes by implementing a quick fix to keep it working for the Weather widget in Plasma 6.1. (Ismael Asensio, Plasma 6.1.0. Link)

Fixed multiple issues causing System Monitor widgets displaying certain types of data to be visually squished when displayed on Plasma panels (Arjen Hiemstra, Plasma 6.1.0. Link)

Fixed a visual glitch that caused flickering when canceling a quick-tile action initiated by dragging a window (Xaver Hugl, Plasma 6.1.0. Link)

When you change the system volume using the slider in Plasma’s Audio widget, the maximum volume in overdrive mode is once again 150%, not 153% (Ismael Asensio, Plasma 6.1.0. Link)

Fixed an issue that could cause XWayland-using apps to freeze for a short period of time after slowly resizing their windows, especially when using screen scaling (Vlad Zahorodnii, Plasma 6.1.1. Link)

In System Monitor pie charts (both in the app and the widgets), text in the center no longer sometimes overflows when it’s very long (Arjen Hiemstra, Plasma 6.1.1. Link)


Performance & Technical

Improved the memory efficiency of Plasma notifications that display images (Fushan Wen, Plasma 6.2. Link)

Automation & Systematization

Added a test to ensure that Plasma panel focus works as expected (Niccolò Venerandi, link)

Added a test to ensure that images in Plasma notifications appear properly (Fushan Wen, link)

Added a test for some more combinations of settings in Plasma’s Clipboard widget, since there are rather a lot of them (Fushan Wen, link)

Added a test to ensure that the camera usage monitor is working (Fushan Wen, link)

Added a test to make sure the keyboard brightness is settable as expected (Fushan Wen, link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

The KDE organization has become important in the world, and your time and labor have helped to bring it there! But as we grow, it’s going to be equally important that this stream of labor be made sustainable, which primarily means paying for it. Right now the vast majority of KDE runs on labor not paid for by KDE e.V. (the nonprofit foundation behind KDE, of which I am a board member), and that’s a problem. We’ve taken steps to change this with paid technical contractors — but those steps are small due to growing but still limited financial resources. If you’d like to help change that, consider donating today!

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

GNU Taler news: Real-time GNU Taler auditor

GNU Planet! - Fri, 2024-06-14 18:00
This bachelor thesis puts its focus on the GNU Taler auditor. Cedric Zwahlen and Nicola Eigel made it real-time and added a single-page application.
Categories: FLOSS Project Planets

GSOC Week 1 Week 2

Planet KDE - Fri, 2024-06-14 14:37
This is the first blog post of my GSoC journey. I will be sharing my work and experiences here. Stay tuned for more updates. In this blog, I’ll be sharing my experiences of the first two weeks of GSoC: the work I did, the challenges I faced, and how I overcame them (did I really overcome them? :P). In my first week I tried to understand the codebase of Discover by making small changes.
Categories: FLOSS Project Planets

KDE Gear 24.08 release schedule

Planet KDE - Fri, 2024-06-14 13:38

 

This is the release schedule the release team agreed on

  https://community.kde.org/Schedules/KDE_Gear_24.08_Schedule

Dependency freeze is in around 4 weeks (July 18) and the feature freeze one
week after that. Get your stuff ready!
 

Categories: FLOSS Project Planets

mark.ie: My Drupal Core Contributions for week-ending June 14th, 2024

Planet Drupal - Fri, 2024-06-14 12:00

Here's what I've been working on for my Drupal contributions this week. Thanks to Code Enigma for sponsoring the time to work on these.

Categories: FLOSS Project Planets

The Drop Times: Shawn Perritt on Reintroducing One of the Pioneers of the Internet to the World

Planet Drupal - Fri, 2024-06-14 10:36
Discover the transformative journey of Drupal's rebranding in an exclusive interview with Shawn Perritt, Brand and Creative Director at Acquia. During DrupalCon Portland, Shawn unveiled the refreshed Drupal brand, highlighting the collaborative efforts to modernize its identity while preserving its core values. In conversation with Alka Elizabeth, Sub-editor at The DropTimes, Shawn shares insights into the strategic thinking behind the rebrand, the unique challenges faced, and the aspirations driving this significant change. Join us for an in-depth look at how Drupal is poised to reintroduce itself to the world.
Categories: FLOSS Project Planets

Explaining the concept of Data information

Open Source Initiative - Fri, 2024-06-14 09:53

There seems to be some confusion caused by the concept of Data information included in the draft v0.0.8 of the Open Source AI Definition. Some readers may have seen the original dataset included in the list of optional components and quickly jumped to the wrong conclusions. This post clarifies how the draft arrived at its current state, the design principles behind the Data information concept and the constraints (legal and technical) it operates under.

The objective of the Open Source AI Definition

The objective of the Open Source AI Definition is to replicate in the context of artificial intelligence (AI) the principles of autonomy, transparency, frictionless reuse, and collaborative improvement for end users and developers of AI systems. These are described in the preamble.

Following the preamble is the definition of Open Source AI, an adaptation of the definition of Free Software (also known as “the four freedoms”) to AI nomenclature. The preamble and the four freedoms have been co-designed over several meetings and public discussions, online and in-person, and have not recently received significant comments. 

The Free Software definition specifies that a precondition to the freedom to study and modify a program is to have access to the source code. Source code is defined as “the preferred form of the program for making changes in.” Draft v0.0.8 contains a description of what’s necessary to enjoy the freedoms to study and modify an AI system. This new section titled Preferred form to make modifications to machine-learning systems has generated a heated debate. 

What is the preferred form to make modifications

The concept of “preferred form to make modifications” focuses on machine learning systems because these systems require data and training to produce a working system. Other AI systems are more easily classifiable as software and don’t require a special definition. 

The system analysis phase of the co-design process revealed that studying and modifying machine learning systems requires data, code for training and inference and model parameters. For the parameters, there’s no ambiguity: an Open Source AI must make them available under terms that respect the Open Source principles (no field-of-use restrictions, no discrimination against people, etc). For the data and code requirements, the text in the “preferred form to make modifications” section is longer and harder to parse, generating some confusion. 

The intent of the code and data requirements is to ensure that end users, deployers and developers of an Open Source AI system have all the tools and instructions to recreate that AI system from scratch, satisfying the freedoms to study and modify the system. At a high level, it makes sense to suggest that training datasets should be mandatorily released with permissive licenses in order to qualify as Open Source AI.

However on close examination, it became clear that sharing the original datasets is full of traps. It actually puts Open Source at a disadvantage compared to opaque and proprietary AI systems.

The issue with data

Data is not software: The legal landscape for data is much wider than copyright. Aggregating large datasets and distributing them internationally is an endless nightmare that includes privacy laws, copyright, sui-generis rights, patents, secrets and more. Without diving deeper into legal issues, let’s focus on practical examples to clarify why the distribution of the training dataset is not spelled out as a requirement in the concept of Data information.

  • The Pile, the open dataset used to train the very open Pythia models, was taken down after an alleged copyright infringement, currently being litigated in the United States. However, the Pile appears to be legal to share in Japan. It’s also unclear whether it can be legally shared in the European Union. 
  • DOLMA, the open dataset used to train the very open OLMo models, was initially released with a restrictive license. It later switched to a permissive one. On further inspection, DOLMA appears to suffer from the same legal uncertainties of the Pile, however the Allen Institute has not been sued yet.
  • Training techniques that preserve privacy like federated learning don’t create datasets. 

All these cases show that requiring the original datasets creates vagueness and uncertainty in applying the Open Source AI Definition:

  • If a dataset is only legal in Japan, is that AI Open Source only in Japan?
  • If a dataset is initially legally available but later retracted, does the AI go from being Open Source to not?
    • If so, what happens to the applications that use such AI?
  • If no dataset is created, then will any AI trained with such techniques ever be Open Source?

Additionally, there are reasons to believe that OpenAI, Anthropic and other proprietary systems have been trained on the same questionable data inside The Pile and DOLMA: proving that's the case is a lot harder and more expensive, though. This is clearly a disincentive to be open and transparent about data sources, adding a burden to the organizations that try to do the right thing.

As a solution to these questions, draft v0.0.8 contains the concept of Data information, coupled with code requirements, to obtain the expected result: for end users, developers and deployers of AI systems to be able to reproduce an Open Source AI.

Understanding the concept of Data Information

Data information, in the draft Open Source AI Definition, is defined as: 

Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data.

Read that from the end: The intention of Data information is to allow developers to recreate a substantially equivalent system using the same or similar data. That means that an Open Source AI must disclose all the ingredients, where they’ve been bought and all the instructions to prepare the dish.  

This is a solution that came out of the co-design process, where reviewers didn’t rank the training datasets as high as they ranked the training code and data transparency requirements. 

Data information and the code requirements also address all of the questions around the legality of distributing data and datasets, or their absence.

If a dataset is only legal in Japan or becomes illegal later, one should still be able to recreate a dataset suitable to train an equivalent system replacing the illegal or unavailable pieces with similar ones.

AI systems trained with federated learning (where a dataset isn’t created) can still be Open Source AI if all instructions and code are released so that a new training with different data can generate an equivalent system.

The Data information concept also solves an example (raised on the forum) of an AI system trained on data licensed directly from Reddit. In this case, if the original developers released enough information to allow another AI developer to recreate a substantially equivalent system with Reddit data taken from an existing dataset, like CommonCrawl, it would be considered Open Source AI.

The proposed alternatives

While generally well received, draft v0.0.8 has been criticized by a few people on the forum for putting the training dataset in the “optional requirements”. Some suggestions and pushback we’ve received:

  • Require the use of synthetic data when the training dataset cannot be legally shared: This technique may work in some corner cases, if the technology evolves to be reliable enough. It’s expensive and untested at scale.
  • Classify as Open Source AI systems where all their components are “open source”: This approach is not rooted in the longstanding practice of the GNU project to accept system library exceptions and other compromises in exchange for more Open Source tools.
  • Datasets built by crawling the internet are the equivalent of theft, they shouldn’t be allowed at all, let alone allowed in Open Source AI: This pushback ignores the reality that large data aggregators have already legally acquired the rights to accumulate that same data (through scraping and terms of use) and are trading it, exclusively capturing the economic value of what should be in the commons. Read Towards a Books Data Commons for AI Training for more details. There is no general agreement that text and data mining is equivalent to theft.

These demands and suggestions are hard to accept. We need an Open Source AI Definition that can effectively guide users and developers to make the right choice. We need one that doesn’t put developers of Open Source AI at a disadvantage compared to proprietary ones. We need a Definition that contains positive examples from the start so we can practically demonstrate positive qualities to policymakers. 

The discussion about data, how to generate incentives to create datasets that can be distributed internationally, safely, preserving privacy, is extremely complex. It can be addressed separately from the Open Source AI Definition. In collaboration with Open Future Foundation and others, OSI is designing a series of conferences to tackle the data governance issue. We’ll make an announcement soon.

Have your say now

The concept of Data information and code requirements is hard to grasp at first. But the preliminary results of the validation phase confirm that the draft v0.0.8 works as expected: Pythia and OLMo both would be Open Source AI, while Falcon, Grok, Llama, Mistral would not (even if they used OSD-compatible licenses) because they don’t share Data information. BLOOM and StarCoder would fail because of field-of-use restrictions in their models.

Data information can be improved but it’s better than other solutions proposed so far. As we get closer to the release of the stable version of the Open Source AI Definition, we need to hear from you: If you support this concept please comment on the forum today. If you don’t support it, please try to propose an alternative that at least covers the practical examples of Pile, DOLMA and federated learning above. Help the community move the conversation forward.

Continue the conversation in the forum

Categories: FLOSS Research

Web Review, Week 2024-24

Planet KDE - Fri, 2024-06-14 09:14

Let’s go for my web review for the week 2024-24.

Microsoft Will Switch Off Recall by Default After Security Backlash

Tags: tech, microsoft, privacy

Unsurprisingly, they had to adjust under the pressure. The most blatant issues might be gone, but it is still a bad idea at its core.

https://www.wired.com/story/microsoft-recall-off-default-security-concerns/


AI chatbots are intruding into online communities where people are trying to connect with other humans

Tags: tech, ai, machine-learning, gpt, criticism, ethics

Chatbots can be useful in some cases… but definitely not when people expect to connect with other humans.

https://theconversation.com/ai-chatbots-are-intruding-into-online-communities-where-people-are-trying-to-connect-with-other-humans-229473


Malicious VSCode extensions with millions of installs discovered

Tags: tech, vscode, security, ide

How trustworthy are the extensions you get in your editor or IDE? I’d expect most marketplaces to not be well armed against such attacks.

https://www.bleepingcomputer.com/news/security/malicious-vscode-extensions-with-millions-of-installs-discovered/


HTTP/3 needs us (and other people) to make firewall changes

Tags: tech, http, quic, firewall

Good reminder that firewalls need to be adjusted for proper HTTP/3 support.

https://utcc.utoronto.ca/~cks/space/blog/sysadmin/HTTP3AndOurFirewalls
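
As a rough illustration of the kind of change involved: HTTP/3 runs QUIC over UDP, so a firewall that only allows TCP on port 443 will silently block it. A hedged nftables sketch (the table and chain names are assumptions; match them to your ruleset):

    # Allow inbound QUIC/HTTP3 traffic (UDP on port 443).
    nft add rule inet filter input udp dport 443 accept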


HTTP/3 in curl mid 2024 | daniel.haxx.se

Tags: tech, http, quic

Interesting status report about HTTP/3 support in curl. Shows quite well the various alternatives and how special HTTP/3 can be.

https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-2024/
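
If you want to try it yourself, curl exposes this behind the --http3 flag (it needs a build with HTTP/3 enabled; check curl --version for the HTTP3 feature). A quick sketch:

    # Ask curl to try HTTP/3; the status line shows the negotiated version.
    curl --http3 -sI https://example.com/ | head -n 1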


What is PID 0? · blog.dave.tf

Tags: tech, unix, linux, kernel, system, processes

Interesting deep dive into where the PIDs seen in user space come from. And also yes, there is something matching PID 0, which can be traced back to early UNIX systems.

https://blog.dave.tf/post/linux-pid0/


Scan HTML faster with SIMD instructions: Chrome edition – Daniel Lemire’s blog

Tags: tech, cpu, performance, SIMD

SIMD keeps providing interesting performance boosts for parsing work loads.

https://lemire.me/blog/2024/06/08/scan-html-faster-with-simd-instructions-chrome-edition/


Rolling your own fast matrix multiplication: loop order and vectorization – Daniel Lemire’s blog

Tags: tech, c++, compiler, performance, matrix

The loop ordering used for matrix multiplication definitely matters.

https://lemire.me/blog/2024/06/13/rolling-your-own-fast-matrix-multiplication-loop-order-and-vectorization/
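
To make the point concrete, here is the classic reordering sketched in Python (illustrative only; the cache effects measured in the post show up most dramatically in compiled code like the C++ it discusses):

    # Loop-order sketch: both compute C = A @ B for n x n lists of lists.
    def matmul_ijk(A, B, n):
        C = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    C[i][j] += A[i][k] * B[k][j]  # hops down a column of B
        return C

    def matmul_ikj(A, B, n):
        C = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for k in range(n):
                a_ik = A[i][k]
                for j in range(n):
                    C[i][j] += a_ik * B[k][j]     # streams along rows of B and C
        return C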


You’ll regret using natural keys

Tags: tech, databases, design

Good advice on designing your database tables. The comments are good too; they help complete the picture.

https://blog.ploeh.dk/2024/06/03/youll-regret-using-natural-keys/
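
A tiny sketch of the advice using Python's built-in sqlite3 (the schema is invented for illustration): the natural candidate key stays UNIQUE, but foreign keys point at a surrogate that never changes.

    import sqlite3

    con = sqlite3.connect(":memory:")
    # The "natural" candidate key (email) stays UNIQUE but is not the
    # primary key, so it can change without rippling through foreign keys.
    con.execute("""
        CREATE TABLE users (
            id    INTEGER PRIMARY KEY,   -- surrogate key, never changes
            email TEXT NOT NULL UNIQUE   -- natural candidate key, may change
        )
    """)
    con.execute("""
        CREATE TABLE orders (
            id      INTEGER PRIMARY KEY,
            user_id INTEGER NOT NULL REFERENCES users(id)  -- stable reference
        )
    """)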


Brain dump – Pagination for database objects

Tags: tech, backend, databases

The right and wrong approaches for paginating results coming from a database.

https://www.n16f.net/blog/pagination-for-database-objects/
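
For a concrete flavor of the trade-off, a hedged Python/SQLite sketch contrasting OFFSET pagination with keyset (seek) pagination (the schema is invented for illustration):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
    con.executemany("INSERT INTO items (name) VALUES (?)",
                    [(f"item-{i}",) for i in range(100)])

    # OFFSET pagination: the database scans and discards skipped rows,
    # so deep pages get slower and can shift when rows are inserted.
    page3_offset = con.execute(
        "SELECT id, name FROM items ORDER BY id LIMIT 10 OFFSET 20").fetchall()

    # Keyset (seek) pagination: remember the last id seen and seek past it.
    last_seen_id = 20
    page3_keyset = con.execute(
        "SELECT id, name FROM items WHERE id > ? ORDER BY id LIMIT 10",
        (last_seen_id,)).fetchall()

    assert page3_offset == page3_keyset  # same page, very different cost profile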


Optimal SQLite settings for Django

Tags: tech, django, databases, sqlite

A short, to-the-point reference on safer SQLite use. I should check whether some of this applies to, or is already used by, Akonadi as well.

https://gcollazo.com/optimal-sqlite-settings-for-django/
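
One common way to apply such settings, sketched with Django's connection_created signal (the PRAGMA values follow widely repeated recommendations rather than the article's exact list; the receiver must be registered somewhere that runs at startup, such as an AppConfig.ready()):

    # Sketch: apply safer SQLite PRAGMAs on every new Django connection.
    from django.db.backends.signals import connection_created

    def set_sqlite_pragmas(sender, connection, **kwargs):
        if connection.vendor == "sqlite":
            with connection.cursor() as cursor:
                cursor.execute("PRAGMA journal_mode = WAL;")    # readers don't block the writer
                cursor.execute("PRAGMA synchronous = NORMAL;")  # sane durability/speed trade-off with WAL
                cursor.execute("PRAGMA busy_timeout = 5000;")   # wait up to 5s instead of failing on locks

    connection_created.connect(set_sqlite_pragmas)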


the Gilbert–Johnson–Keerthi algorithm explained as simply as possible

Tags: tech, geometry, mathematics, algorithm

Need to know if two shapes overlap? Good explanation of an elegant algorithm to do it.

https://computerwebsite.net/writing/gjk
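
For the curious, here is a compact, hedged Python sketch of 2D GJK for convex point sets (it follows the common triple-product formulation rather than the linked article's code, and degenerate touching cases are not handled):

    def _dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    def _support(pts_a, pts_b, d):
        # Farthest point of A along d minus farthest point of B along -d:
        # a vertex of the Minkowski difference A - B in direction d.
        pa = max(pts_a, key=lambda p: _dot(p, d))
        pb = max(pts_b, key=lambda p: -_dot(p, d))
        return (pa[0] - pb[0], pa[1] - pb[1])

    def _triple(a, b, c):
        # Vector triple product (a x b) x c = b*(a.c) - a*(b.c), done in 2D.
        ac, bc = _dot(a, c), _dot(b, c)
        return (b[0] * ac - a[0] * bc, b[1] * ac - a[1] * bc)

    def gjk_intersect(pts_a, pts_b):
        """True if the convex hulls of two 2D point sets overlap."""
        d = (1.0, 0.0)
        simplex = [_support(pts_a, pts_b, d)]
        d = (-simplex[0][0], -simplex[0][1])
        while True:
            p = _support(pts_a, pts_b, d)
            if _dot(p, d) <= 0:
                return False  # cannot reach the origin in this direction
            simplex.append(p)
            if len(simplex) == 2:  # line case: aim the search at the origin
                a, b = simplex[1], simplex[0]
                ab = (b[0] - a[0], b[1] - a[1])
                ao = (-a[0], -a[1])
                d = _triple(ab, ao, ab)  # perpendicular to AB, toward origin
            else:  # triangle case: keep the edge region containing the origin
                a, b, c = simplex[2], simplex[1], simplex[0]
                ab = (b[0] - a[0], b[1] - a[1])
                ac = (c[0] - a[0], c[1] - a[1])
                ao = (-a[0], -a[1])
                ab_perp = _triple(ac, ab, ab)
                ac_perp = _triple(ab, ac, ac)
                if _dot(ab_perp, ao) > 0:
                    simplex.pop(0)  # drop C, search beyond edge AB
                    d = ab_perp
                elif _dot(ac_perp, ao) > 0:
                    simplex.pop(1)  # drop B, search beyond edge AC
                    d = ac_perp
                else:
                    return True  # the triangle encloses the origin

    # Example: two overlapping triangles.
    print(gjk_intersect([(0, 0), (2, 0), (1, 2)], [(1, 1), (3, 1), (2, 3)]))  # True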


Feynman’s Razor - by Defender of the Basic

Tags: tech, documentation, communication, gui

Nice reminder that even though we try to make things simpler for people to understand, there is a point where we can go too far.

https://defenderofthebasic.substack.com/p/feynmans-razor


Foreword for Fuzz Testing Book

Tags: tech, fuzzing, tests, history

Ever wondered where fuzz testing came from? This is an important bit of history.

https://pages.cs.wisc.edu/~bart/fuzz/Foreword1.html


Post-Architecture: An Open Approach to Software Engineering

Tags: tech, software, architecture

Indeed, this is not for every environment or project, so take it with a grain of salt. That said, I think this piece has a core truth to it which is more general: software architectures shouldn't be considered fixed as soon as they are planned; they need to be validated through use and be prepared to evolve over time as needed.

https://arendjr.nl/blog/2024/06/post-architecture/


Bye for now!

Categories: FLOSS Project Planets

EuroPython: Humble Data workshop for beginners - Pythonistas and data scientists

Planet Python - Fri, 2024-06-14 08:38

Among the many wonderful workshops at EuroPython this year, we are pleased to announce we will be running the Humble Data workshop in person on Tuesday 9th July 2024, at the Prague Congress Centre (PCC). This is following successful deliveries of this workshop at PyCon US, Ghana, Namibia, Africa, Germany, Italy, PyData Global and of course, EuroPython 2022 and 2023!

What is the workshop like?

Curious about the event? Read on.

Humble Data workshops are designed to get those from underrepresented groups started in both Python and data science, in an inclusive, laid-back and empathic environment. The workshops are designed to help people with zero experience with coding to learn some of the most fundamental operations in Python, and in turn, use these to get started with reading, transforming and visualizing data.

Humble Data at EuroPython 2023

The workshop will happen on 9th July 2024 for 6 hours, from 09:30 to 17:00 at the Prague Congress Centre (PCC), Room Club C

As part of the workshop, participants will work through a series of approachable tutorials with the help of a mentor. For 3 hours (breaks included) we will have teams that will work together with mentors to do plenty of exercises, quizzes and games, to go from Zero to Hero in Python data science. All that participants will need to bring is a laptop with internet access - we will help them get started with the rest!
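
To give a flavor of the kind of exercise such a workshop might build up to (an invented example, not the actual workshop material):

    import pandas as pd

    # A first "read, transform, visualize" exercise.
    df = pd.DataFrame({
        "language": ["Python", "R", "Julia", "Python", "R"],
        "years":    [3, 1, 2, 5, 4],
    })
    experienced = df[df["years"] >= 2]                # transform: filter rows
    counts = experienced["language"].value_counts()   # aggregate
    counts.plot(kind="bar", title="Languages")        # visualize (needs matplotlib)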

Let us help you get started on your Python data science journey. You can read more about the workshop here.

How can I get involved?

Like the idea? Join us as a mentor or mentee!

If you’re new to coding or data science, and want to learn more in a supportive environment, apply to join us at the Humble Data workshop by filling in this form (attendees) or this form (mentors). Participation is free for anyone with a EuroPython Conference Ticket or Combined Ticket. Please note that you will need a conference ticket to participate in this workshop - we thank you for your understanding!

If you’re interested in mentoring, we would love to have your help! It is no issue if you’re not the most experienced programmer or data scientist: rather, we are looking for people who are respectful, patient, friendly, curious, and able to explain technical concepts in a way that is approachable for beginners. In return, you will receive the eternal gratitude of the organisers and attendees, the chance to meet people outside of your bubble, and in turn show that you don’t need to fit a certain mold to “look like” a developer or data scientist.

Our wonderful Humble Data mentor at EuroPython 2023

Finally, if you know anyone attending EuroPython this year who you think would like to either attend or mentor Humble Data, please encourage them to apply!

If you’re interested in attending as a mentee, please fill in this form, or as a mentor, please fill in this form, by July 1st, 2024.

We can’t wait to see you all in Prague this July!

Categories: FLOSS Project Planets
