FLOSS Project Planets

Specbee: Getting started with integrating Drupal and Tailwind CSS

Planet Drupal - Tue, 2024-06-18 02:57
If you’re tired of spending hours tweaking CSS stylesheets to get your website looking just right, then you should consider a new way of CSS’ing (yeah, we just made it up). We’re talking about Tailwind CSS - a different way of doing CSS. It uses a utility-first approach, which means that instead of creating a class for a component, let’s say a button, you style it with pre-defined utility classes like “font-bold” or “text-white”. All of this right from the comfort of your HTML!

Couple Tailwind CSS, a sleek, utility-based open-source CSS framework, with a modular, highly customizable, and modern CMS like Drupal 10, and you have a powerhouse combination! In this article, we’ll be talking about Drupal and Tailwind CSS, and how you can integrate them seamlessly to build modern, responsive, and visually stunning websites.

Why Drupal is a good choice for web development

Drupal is an open-source content management system (CMS) that has been gaining popularity among web developers for its versatility and powerful features. The latest version, Drupal 10, brings a host of new features and improvements meant to simplify development and enhance user experience.

1. Highly customizable design: One of the biggest advantages of using Drupal is its highly customizable design. With a wide range of themes and templates available, developers have the freedom to create unique and visually appealing websites that cater to their clients' specific needs.

2. Flexibility and scalability: Drupal's modular architecture makes it incredibly flexible and scalable, suitable for building websites of any size or complexity. It offers a vast library of modules that can be easily integrated into your website, providing additional functionality such as e-commerce capabilities, social media integration, multilingual support, and more.

3. Robust security measures: Security is a major concern these days, so it is crucial to choose a CMS that prioritizes it. Drupal has long been recognized as one of the most secure CMS platforms thanks to its stringent security standards and frequent updates that address potential vulnerabilities promptly.

4. SEO-friendly: Search Engine Optimization (SEO) is vital for every website's success as it determines its visibility on search engines like Google. Drupal's built-in SEO tools assist developers in optimizing their websites by generating search-engine-friendly URLs, meta tags, and sitemaps, and by providing customizable page titles and descriptions.

5. Active community support: Drupal benefits from a substantial and engaged group of developers who contribute to its ongoing evolution and offer support to fellow users. This community-driven approach ensures that issues and bugs are addressed promptly, and that new updates and features are released regularly.

Why Tailwind CSS is great for web developers

Tailwind CSS has gained immense popularity in the web development community due to its numerous benefits and advantages.

1. Highly customizable: One of the biggest advantages of Tailwind CSS is its high level of customization. Unlike other popular front-end frameworks like Bootstrap or Foundation, Tailwind does not come with predefined styles and components. Instead, it provides a comprehensive set of utility classes that can be used to create custom designs according to your specific project needs. This gives developers complete control over their designs and enables them to create unique and personalized user experiences.
2. Lightweight: Another major benefit of using Tailwind CSS is its lightweight nature. Since it only generates the classes your design actually uses, it keeps the file size small and improves website performance. This is especially beneficial for Drupal websites, which often carry heavy content and functionality that can slow down page loading.

3. Mobile-responsive: With more people accessing websites through mobile devices, a mobile-responsive design has become crucial for any website's success. Tailwind CSS makes it easier to create responsive designs without adding extra code or hand-written media queries. Its mobile-first approach ensures that your website looks great on all screen sizes without compromising performance.

4. Faster development process: By eliminating the need to manually write complex CSS rules, Tailwind speeds up the development process significantly. It also reduces the chance of errors, as developers can reuse existing utility classes instead of creating new ones from scratch every time they need a similar style element.

5. Scalable designs: With Tailwind CSS, you can create scalable designs that are easy to maintain and modify in the long run. As your website grows and evolves, making changes to its design remains manageable with Tailwind's utility classes, as opposed to traditional frameworks where modifying styles can be a tedious process.

Why Integrate Drupal with Tailwind CSS?

One of the main reasons to integrate Drupal with Tailwind CSS is its ability to simplify web development. With Tailwind CSS, developers can avoid writing repetitive and lengthy styling code. Instead, they can use pre-built classes that provide consistent and reusable design patterns. This not only saves time but also results in cleaner code that is easier to maintain.

Another benefit of integrating Drupal with Tailwind CSS is its responsive design capabilities. With the increasing use of mobile devices, it has become essential for websites to be optimized for all screen sizes. Thanks to Tailwind's built-in responsive utilities, designers can easily build responsive layouts without having to write separate media queries for different devices.

Step-by-Step Guide to Integrating Drupal with Tailwind CSS

Step 1: Get your prerequisites ready

  • A local Drupal setup
  • The NPM and Yarn package managers
  • Drush
  • Lando

Step 2: Create a custom theme

We will begin by creating a Drupal custom theme and then integrate Tailwind CSS into it. In this tutorial, I am using the starterkit theme command to generate my custom theme. Go to the web directory and type the following command in the terminal:

php core/scripts/drupal generate-theme drupal_tailwind_theme

After pressing the enter key, our custom theme is generated.

Next, we will generate the package.json file, which we need to initialize a new Node.js project. In the terminal, type:

npm init

It prompts us with a series of questions about the project; for this tutorial I keep all the defaults by pressing the enter key until the package.json file is created.

Step 3: Install and set up Tailwind CSS

Now let's install Tailwind and its dependencies, and configure it. In the terminal, run the following command:

npm install -D tailwindcss postcss autoprefixer

Next, let's generate a Tailwind CSS config file using the following command:

npx tailwindcss init -p

Go to the tailwind.config.js file and set the paths for Tailwind to scan. Since our Twig templates live in the templates directory, we will use that path.
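For reference, here is a minimal sketch of the files involved in this step and the ones that follow. The theme and folder names match this tutorial, but the exact content glob and script wording are assumptions you should adapt to your setup.

tailwind.config.js:

/** @type {import('tailwindcss').Config} */
module.exports = {
  // Scan the theme's Twig templates for utility classes (glob is an assumption).
  content: ['./templates/**/*.html.twig'],
  theme: {
    extend: {},
  },
  plugins: [],
};

The dev and watch scripts referenced later in this guide, in package.json:

"scripts": {
  "dev": "npx tailwindcss --input src/index.css --output dist/index.css",
  "watch": "npx tailwindcss --input src/index.css --output dist/index.css --watch"
}

And, for Step 4 below, the library definition in drupal_tailwind_theme.libraries.yml:

tailwind:
  css:
    theme:
      dist/index.css: {}

attached globally via drupal_tailwind_theme.info.yml:

libraries:
  - drupal_tailwind_theme/tailwind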
Now we need to create a CSS file to put the Tailwind directives in. Go to the src folder, create a file named index.css, and paste in the following directives:

@tailwind base;
@tailwind components;
@tailwind utilities;

Create a folder where the Tailwind CSS will be compiled. On the same level as the src folder, create a folder named dist (or any name you prefer). We now run the following command:

npx tailwindcss --input src/index.css --output dist/index.css

After executing this command, the compiled version of the Tailwind CSS appears in the dist folder.

In the package.json file, we add the scripts dev and watch (sketched above), which can be executed using npm run dev and npm run watch, or yarn dev and yarn watch. The dev script compiles the code once per invocation, while watch keeps watching for changes and recompiles.

Step 4: Adding Tailwind CSS to our Drupal theme

Now we will add Tailwind to our Drupal site. For that, we add a library named tailwind and give it the CSS path dist/index.css, that is, the location of our compiled Tailwind output (see the sketch above). Next, we can attach that library either globally or in specific templates. For the purposes of this tutorial, I am attaching it globally.

Using Tailwind CSS in our Drupal theme

For this tutorial, I am using the page.html.twig file and adding a red background color by including the class bg-red-500 in the markup. Clear the cache and check the output: our Tailwind CSS class has applied the desired background color.

Final Thoughts

Tailwind CSS is a great choice when you want to build websites quickly and easily. It integrates well with Drupal, and the combination of these two tools creates a powerful synergy for web development. Reach out to us if you would like to learn more about how Specbee's Drupal experts can help you create stunning web experiences with Drupal and Tailwind CSS.
Categories: FLOSS Project Planets

Week 3 recap - points on a canvas

Planet KDE - Tue, 2024-06-18 00:56
Hey welcome to the week 3 recap blog. So after some testing, using the pixel art brush preset and clicking on individual pixels on the canvas will trigger functions like kis_paintop:paintAt and kis_brushop:paintAt. KisBrushOp represents a specific pa...
Categories: FLOSS Project Planets

Plasma 6.1

Planet KDE - Mon, 2024-06-17 20:00

Plasma 6 hits its stride with version 6.1. While Plasma 6.0 was all about getting the migration to the underlying Qt 6 frameworks correct (and what a massive job that was), 6.1 is where developers start implementing the features that will take your desktop to a new level.

In this release, you will find features that go far beyond subtle changes to themes and tweaks to animations (although there are plenty of those too), as you delve into interacting with desktops on remote machines, become more productive with usability and accessibility enhancements galore, and discover customizations that will even affect the hardware of your computer.

These features and more are being built directly into Plasma's Wayland version natively, avoiding the need for third party software and hacky extensions required by similar solutions implemented in X.

Things will only get more interesting from here. But meanwhile enjoy what will land on your desktop with your next update.

Note: Due to unforeseen circumstances, we have been unable to ship the new wallpaper, "Reef", with this version of Plasma. However, there will be a new wallpaper coming soon in the next 6.2 version.

If you can't wait, you can download "Reef" here.

We apologise for this delay and the inconvenience this may cause.

What's New

Access Remote Plasma Desktops

One of the more spectacular (and useful) features added in Plasma 6.1 is that you can now start up a remote desktop directly from the System Settings app. This means that if you are a sysadmin who needs to troubleshoot users' machines, or simply need to work on a Plasma-enabled computer that is out of reach, setting up a connection is just a few clicks away.

Once enabled, you can connect to the remote desktop using a client such as KRDC. You will see the remote machine's Plasma desktop in a window and be able to interact with it from your own computer.

Customization made (more) Visual

We all love customizing our Plasma desktops, don't we? One of the quickest ways to do this is by entering Plasma's Edit Mode (right-click anywhere on the desktop background and select Enter Edit Mode from the menu).

In version 6.1, the visual aspect of Edit Mode has been overhauled and you will now see a slick animation when you activate it. The entire desktop zooms out smoothly, giving you a better overview of what is going on and allowing you to make your changes with ease.

Persistent Apps

Plasma 6.1 on Wayland now has a feature that "remembers" what you were doing in your last session, like it did under X11. Although this is still a work in progress, if you log off and shut down your computer with a dozen open windows, Plasma will now open them for you the next time you power up your desktop, making it faster and easier to get back to what you were doing.

Sync your Keyboard's Colored LEDs

It wouldn't be a new Plasma release without at least one fancy aesthetic customization feature. This time, however, we give you the power to reach beyond the screen, all the way onto your keyboard, as you can now synchronize the LED colours of your keys to match the accent colour on your desktop.

The ultimate enhancement for the fashion-conscious user.

Please note that this feature only works on supported keyboards for now; support for additional keyboards is on the way!

And all this too...
  • We have simplified the choices you see when you try to exit Plasma by reducing the number of confusing options. For example, when you press Shutdown, Plasma will only list Shutdown and Cancel, not every single power option.

  • Screen Locking gives you the option of behaving like a traditional screensaver: you can configure it not to ask for a password when unlocking.

  • Two visual accessibility changes make it easier to use the cursor in Plasma 6.1:

    • Shake Cursor makes the cursor grow when you "shake" it. This helps you locate that tiny little arrow on your large, cluttered screens when you lose it among all those windows.

    • Edge Barrier is useful if you have a multi-monitor setup and want to access things on the very edge of your monitor. The "barrier" is a sticky area for your cursor near the edge between screens, and it makes it easier to click on things (if that is what you want to do), rather than having the cursor scooting across to the next display.

  • Two major Wayland breakthroughs will greatly improve your Plasma experience:

    • Explicit Sync eliminates flickering and glitches traditionally experienced by NVIDIA users.

    • Triple Buffering support in Wayland makes animations and screen rendering smoother.

…and there's much more going on. To see the full list of changes, check out the changelog for Plasma 6.1.
Categories: FLOSS Project Planets

Talking Drupal: Talking Drupal #455 - Top 5 uses of AI for Drupal

Planet Drupal - Mon, 2024-06-17 16:08

Today we are talking about AI Tips for Drupal Devs, AI Best Practices, and Drupal Droid with guest Mike Miles. We’ll also cover AI interpolator as our module of the week.

For show notes visit: www.talkingDrupal.com/455

Topics
  • Top 5 tips
    • Idea Generation (Ideation)
    • Code Generation
    • Debugging
    • Content Generation
    • Technical Explanations
  • How do you suggest people use AI for Ideation
  • Is MIT Sloan using AI to help with Drupal Development
  • Does that code get directly inserted into your sites
  • What are some common pitfalls
  • Is your team using AI for debugging
  • Any best practices you have found helping when working with AI
  • Is MIT Sloan using AI for content generation
  • What is an example of how you use AI for technical explanations
  • What is your view on the future of AI in Drupal, do you think AI will replace Drupal developers
Resources

Guests

Michael Miles - mike-miles.com mikemiles86

Hosts

Nic Laflin - nLighteneddevelopment.com nicxvan John Picozzi - epam.com johnpicozzi Randy Fay - rfay

MOTW Correspondent

Martin Anderson-Clutz - mandclu.com mandclu

  • Brief description:
    • Have you ever wanted to use AI to help populate entity fields that were left blank by Drupal content authors? There’s a module for that.
  • Module name/project name: AI Interpolator
  • Brief history
    • How old: created in Sep 2023 by Marcus Johansson of FreelyGive
    • Versions available: 1.0.0-rc4
  • Maintainership
    • Actively maintained, recent release in the past month
    • Security coverage - opted in, needs stable release
    • Test coverage
    • Documentation - User guide
    • Number of open issues: 18 open issues, none of which are bugs
  • Usage stats:
    • 94 sites
  • Module features and usage
    • In scientific fields, interpolation is the process of using known data points to estimate unknown values. In a similar way, this module helps your Drupal site provide values for fields that didn’t receive input, based on the information that was provided.
    • Fundamentally, AI Interpolator provides a framework and an API, and then relies on companion modules for processing, either by leveraging third-party services like AI LLMs, or PHP-based scripting.
    • There are existing integrations with a variety of AI services, including OpenAI, Dreamstudio, Hugging Face, and more.
    • You can add retrievers to help extract and normalize the content you’re processing, for example photos from an external site, and other tools to help normalize and optimize content and media, and optimize any prompts you will be using with AI services.
    • You can also extend the workflow capabilities of AI Interpolator, for example using the popular and powerful ECA module that we’ve talked about before on this show.
Categories: FLOSS Project Planets

PyBites: Deploying a FastAPI App as an Azure Function: A Step-by-Step Guide

Planet Python - Mon, 2024-06-17 14:07

In this article I will show you how to deploy a FastAPI app as a function in Azure. Prerequisites are that you have an Azure account and have the Azure CLI installed (see here).

Setup Azure

First you need to login to your Azure account:

az login

It should show your subscriptions and you select the one you want to use.

Create a resource group

Like this:

az group create --name myResourceGroup --location eastus

A resource group is a container that holds related resources for an Azure solution (of course use your own names throughout this guide).

It comes back with the details of the resource group you just created.

Create a storage account

Using this command:

az storage account create --name storage4pybites --location eastus --resource-group myResourceGroup --sku Standard_LRS

Note that for me, to get this working, I had to first make an MS Storage account, which I did in the Azure portal.

The storage account name can only contain lowercase letters and numbers. Also make sure it’s unique enough (e.g. mystorageaccount was already taken).

Upon success it comes back with the details of the storage account you just created.

Create a function app

Now, create a function app.

az functionapp create --resource-group myResourceGroup --consumption-plan-location eastus --runtime python --runtime-version 3.11 --functions-version 4 --name pybitesWebhookSample --storage-account storage4pybites --os-type Linux

Your Linux function app 'pybitesWebhookSample', that uses a consumption plan has been successfully created but is not active until content is published using Azure Portal or the Functions Core Tools.
Application Insights "pybitesWebhookSample" was created for this Function App. You can visit <URL> to view your Application Insights component.
App settings have been redacted. Use `az webapp/logicapp/functionapp config appsettings list` to view.
{ ...JSON response... }

A bit more on what these switches mean:

  • --consumption-plan-location is the location of the consumption plan.
  • --runtime is the runtime stack of the function app, here Python.
  • --runtime-version is the version of the runtime stack, here 3.11 (3.12 was not available at the time of writing).
  • --functions-version is the version of the Azure Functions runtime, here 4.
  • --name is the name of the function app (same here: make it something unique / descriptive).
  • --storage-account is the name of the storage account we created.
  • --os-type is the operating system of the function app, here Linux (Windows is the default but did not have the Python runtime, at least for me).
Create your FastAPI app

If you already have a FastAPI app, go to its directory and create a requirements.txt file with your dependencies if you haven't done so already. To demo this from scratch, though, let's create a simple FastAPI webhook app (the associated repo is here):

mkdir fastapi_webhook
cd fastapi_webhook
mkdir webhook

First, create a requirements.txt file:

fastapi
azure-functions

The azure-functions package is required to run FastAPI on Azure Functions.

Then create an __init__.py file inside the webhook folder with the FastAPI app code:

import azure.functions as func
from fastapi import FastAPI

app = FastAPI()


@app.post("/webhook")
async def webhook(payload: dict) -> dict:
    """
    Simple webhook that just returns the payload.

    Normally you would do something with the payload here.
    And add some security like secret key validation (HMAC for example).
    """
    return {"status": "ok", "payload": payload}


async def main(req: func.HttpRequest, context: func.Context) -> func.HttpResponse:
    return await func.AsgiMiddleware(app).handle_async(req, context)

This took a bit of trial and error to get working, because it was not evident that I had to use AsgiMiddleware to run FastAPI on Azure Functions. This article helped with that.

Next, create a host.json file in the root of the project for the Azure Functions host:

{ "version": "2.0", "extensions": { "http": { "routePrefix": "" } } }

A local.settings.json file for local development:

{ "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "UseDevelopmentStorage=true", "FUNCTIONS_WORKER_RUNTIME": "python" } }

And you’ll need a function.json file in the webhook directory that defines the configuration for the Azure Function:

{ "scriptFile": "__init__.py", "bindings": [ { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["post"] }, { "type": "http", "direction": "out", "name": "$return" } ] }

I also included a quick test script in the root of the project:

import sys

import requests

url = "http://localhost:7071/webhook" if len(sys.argv) < 2 else sys.argv[1]
data = {"key": "value"}

resp = requests.get(url)
print(resp)

resp = requests.post(url, json=data)
print(resp)
print(resp.text)

If you run this script, you should see a 404 for the GET request (not implemented) and a 200 for the POST request. For remote testing I can run the script with the live webhook URL (see below).

Run the function locally

To verify that the function works locally, run it with:

func start

Then run the test script:

$ python test.py
<Response [404]>
<Response [200]>
{"status":"ok","payload":{"key":"value"}}

Awesome!

Deploy it to Azure

Time to deploy the function to Azure:

(venv) √ fastapi_webhook (main) $ func azure functionapp publish pybitesWebhookSample
Getting site publishing info...
[2024-06-17T16:06:08.032Z] Starting the function app deployment...
Creating archive for current directory...
...
...
Remote build succeeded!
[2024-06-17T16:07:35.491Z] Syncing triggers...
Functions in pybitesWebhookSample:
    webhook - [httpTrigger]
        Invoke url: https://pybiteswebhooksample.azurewebsites.net/webhook

Let’s see if it also works on Azure:

$ python test.py https://pybiteswebhooksample.azurewebsites.net/webhook
<Response [404]>
<Response [200]>
{"status":"ok","payload":{"key":"value"}}

Cool, it works!

If something goes wrong remotely, you can check the logs:

az webapp log tail --name pybitesWebhookSample --resource-group myResourceGroup

And when you make changes, you can redeploy with:

func azure functionapp publish pybitesWebhookSample

That’s it, a FastAPI app running as an Azure Function! I hope this article helps you if you want to do the same. If you have any questions, feel free to reach out to me on X | Fosstodon | LinkedIn -> @bbelderbos.

Categories: FLOSS Project Planets

ADCI Solutions: How to add a live chat to a website using Mercure

Planet Drupal - Mon, 2024-06-17 11:58
We are currently working on a platform for contacting influencers and media to promote products, services, and personal brands. As the platform was developing, the client felt the need to add live chat functionality (https://www.adcisolutions.com/work/mercure-chat) so that platform clients could communicate with its administrators and copywriters. But it had to store its history on the client server.
Categories: FLOSS Project Planets

Real Python: Ruff: A Modern Python Linter for Error-Free and Maintainable Code

Planet Python - Mon, 2024-06-17 10:00

Linting is essential to writing clean and readable code that you can share with others. A linter, like Ruff, is a tool that analyzes your code and looks for errors, stylistic issues, and suspicious constructs. Linting allows you to address issues and improve your code quality before you commit your code and share it with others.

Ruff is a modern linter that’s extremely fast and has a simple interface, making it straightforward to use. It also aims to be a drop-in replacement for many other linting and formatting tools, such as Flake8, isort, and Black. It’s quickly becoming one of the most popular Python linters.

In this tutorial, you’ll learn how to:

  • Install Ruff
  • Check your Python code for errors
  • Automatically fix your linting errors
  • Use Ruff to format your code
  • Add optional configurations to supercharge your linting

To get the most from this tutorial, you should be familiar with virtual environments, installing third-party modules, and be comfortable with using the terminal.

Ruff cheat sheet: Click here to get access to a free Ruff cheat sheet that summarizes the main Ruff commands you’ll use in this tutorial.

Take the Quiz: Test your knowledge with our interactive “Ruff: A Modern Python Linter” quiz. You’ll receive a score upon completion to help you track your learning progress.

Installing Ruff

Now that you know why linting your code is important and how Ruff is a powerful tool for the job, it’s time to install it. Thankfully, Ruff works out of the box, so no complicated installation instructions or configurations are needed to start using it.

Assuming your project is already set up with a virtual environment, you can install Ruff in the following ways:

$ python -m pip install ruff

In addition to pip, you can also install Ruff with Homebrew if you’re on macOS or Linux:

$ brew install ruff

Conda users can install Ruff using conda-forge:

$ conda install -c conda-forge ruff

If you use Arch, Alpine, or openSUSE Linux, you can also use the official distribution repositories. You’ll find specific instructions on the Ruff installation page of the official documentation.

Additionally, if you’d like Ruff to be available for all your projects, you might want to install Ruff with pipx.

You can check that Ruff installed correctly by using the ruff version command:

$ ruff version
ruff 0.4.7

For the ruff command to appear in your PATH, you may need to close and reopen your terminal application or start a new terminal session.

Linting Your Python Code

While linting helps keep your code consistent and error-free, it doesn’t guarantee that your code will be bug-free. Finding the bugs in your code is best handled with a debugger and adequate testing, which won’t be covered in this tutorial. Coming up in the next sections, you’ll learn how to use Ruff to check for errors and speed up your workflow.

Checking for Errors

The code below is a simple script called one_ring.py. When you run it, it gets a random Lord of the Rings character name from a tuple and lets you know if that character bore the burden of the One Ring. This code has no real practical use and is just a bit of fun. Regardless of the size of your code base, the steps are going to be the same:
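The excerpt ends before the script itself, so as a stand-in, here is a hypothetical sketch of what a one_ring.py along those lines might look like (the character names and structure are assumptions, not Real Python's actual code), plus the command you would run to lint it:

import random

# Hypothetical data: a tuple of characters and the set of known Ring-bearers.
CHARACTERS = ("Frodo", "Sam", "Gandalf", "Aragorn", "Bilbo", "Gollum")
RING_BEARERS = {"Frodo", "Bilbo", "Gollum", "Sam"}


def main() -> None:
    # Pick a random character and report whether they bore the One Ring.
    character = random.choice(CHARACTERS)
    if character in RING_BEARERS:
        print(f"{character} bore the burden of the One Ring.")
    else:
        print(f"{character} never bore the One Ring.")


if __name__ == "__main__":
    main()

Linting it is then a single command:

$ ruff check one_ring.py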

Read the full article at https://realpython.com/ruff-python/ »


Categories: FLOSS Project Planets

Real Python: Quiz: Ruff: A Modern Python Linter

Planet Python - Mon, 2024-06-17 08:00

In this quiz, you’ll test your understanding of Ruff, a modern linter for Python.

By working through this quiz, you’ll revisit why you’d want to use Ruff to check your Python code and how it automatically fixes errors, formats your code, and provides optional configurations to enhance your linting.


Categories: FLOSS Project Planets

Robin Wilson: Accessing Planetary Computer STAC files in DuckDB

Planet Python - Mon, 2024-06-17 05:36

Microsoft Planetary Computer is a wonderful archive of geospatial datasets (primarily raster images of various types), provided with a STAC catalog to enable them to be easily searched through an API. That’s fine for normal usage where you want to find a selection of images and access the images themselves, but less useful when you want to do some analysis of the metadata itself.

For that use case, Planetary Computer provide the STAC metadata in bulk, stored as GeoParquet files. Their documentation page explains how to use this with geopandas and do some large-ish scale processing with Dask. I wanted to try this with DuckDB – a newer tool that is excellent at accessing and processing Parquet files, including efficiently accessing files available via HTTP, and only downloading the relevant parts of the file. So, this post explains how I managed to do this – showing various different approaches I tried and how (or if!) each of them worked.

Getting URLs

Planetary Computer URLs are basically just URLs to files on Azure Blob Storage. However, the URLs require signing with a Shared Access Signature (SAS) key. Luckily, there is a Planetary Computer Python module that will use an API to generate the correct keys for us.

In the docs for using the STAC GeoParquet files, they give this code:

import planetary_computer
import pystac_client

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1/",
    modifier=planetary_computer.sign_inplace,
)
asset = catalog.get_collection("io-lulc-9-class").assets["geoparquet-items"]

This gets a collection, and the geoparquet-items collection-level asset, having used a ‘modifier’ to sign the URLs as they are acquired from the STAC API. The URL is then stored in asset.href.

Using the DuckDB CLI – with HTTP URLs

When using GeoPandas, the docs show that you can pass a storage_options parameter to the read_parquet method and it will ‘just work’:

df = geopandas.read_parquet(
    asset.href,
    storage_options=asset.extra_fields["table:storage_options"]
)

However, to use the file in DuckDB we need a full URL. The SQL code we’re going to use in DuckDB for a quick test looks like this:

SELECT * FROM <URL> LIMIT 1

This just gets the first row of whatever URL is given.

Unfortunately, though, the URL provided by the STAC catalog looks like this:

abfs://items/io-lulc-9-class.parquet

If we try that with DuckDB, we find it doesn’t know how to deal with it. So, we need to convert this to a standard HTTP URL that can be downloaded as if it were just typed into a web browser (or used with cURL or wget). After a bit of playing around, I found I could create a full URL with this Python code:

url = (
    f'https://{asset.extra_fields["table:storage_options"]["account_name"]}'
    f'.dfs.core.windows.net/{asset.href[7:]}'
    f'?{asset.extra_fields["table:storage_options"]["credential"]}'
)

That’s taking the account_name and sticking it in as the host part of the URL, then extracting the path from the original URL and adding that, and finally finishing with the SAS token (stored as credential in the storage options) as a URL query parameter.

That results in a URL that looks like this (split over multiple lines for readability):

https://pcstacitems.dfs.core.windows.net/items/io-lulc-9-class.parquet ?st=2024-06-14T19%3A04%3A20Z&se=2024-06-15T19%3A49%3A20Z&sp=rl&sv=2024-05-04& sr=c&skoid=9c8ff44a-6a2c-4dfb-b298-1c9212f64d9a &sktid=72f988bf-86f1-41af-91ab-2d7cd011db47&skt=2024-06-15T09%3A00%3A14Z &ske=2024-06-22T09%3A00%3A14Z&sks=b &skv=2024-05-04 &sig=A0e%2BihbAagHmIZ%2Bg8gzH71TavRYQMZiHWJ/Uk9j0Who%3D

(note: that specific URL won’t work for you, as the SAS tokens are time-limited).

If we insert that into our DuckDB SQL statement, then we find it works.

Great! Now let’s try with a different collection on Planetary Computer – in this case the Sentinel 2 collection. We can make the URL in the same way as before, by changing how we define asset:

asset = catalog.get_collection("sentinel-2-l2a").assets["geoparquet-items"]

and then using the same query in DuckDB with the new URL. Unfortunately, this time we get an error:

Invalid Input Error: File '<URL>' too small to be a Parquet file

The problem here is that for larger collections, the STAC metadata is spread out over a series of GeoParquet files inside a folder, and the URL we’ve been given is just to the folder (even though it ends in .parquet). As we’re just using HTTP, there is no way to do things like list the files in a folder, so DuckDB has no way to find out what files are within the folder and start reading them. We need to find another way.

Using the DuckDB CLI – with the Azure extension

Conveniently, there is an Azure extension for DuckDB, and it lets you use URLs like 'abfss://⟨my_filesystem⟩/⟨path⟩/⟨my_file⟩.⟨parquet_or_csv⟩'. That’s a slightly different URL scheme to the one we’ve been given (abfss as opposed to abfs), but we can easily sort that.

Looking at the authentication docs though, it seems to require you to specify either a connection string, a service principal or using the ‘Azure Credential Chain’ (which, I think, works with the az command line and various environment variables that you may have set up). We don’t have any of those – they’re all a far broader scope than what we’ve got, which is just a SAS token for a specific file or folder. It looks like the Azure extension doesn’t support this, so we’ll have to look for another way.
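For completeness, the Azure extension route would look roughly like this if you did have a connection string. Treat it as a sketch: the setting name follows the extension's documentation at the time of writing and should be verified against your DuckDB version:

-- Hypothetical: requires a full connection string, which we don't have here.
INSTALL azure;
LOAD azure;
SET azure_storage_connection_string = '<your connection string>';
SELECT * FROM 'abfss://items/io-lulc-9-class.parquet' LIMIT 1;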

Using the DuckDB Python library – with an Azure connector

As well as the command-line interface, DuckDB can also be used from Python. To set this up, just run pip install duckdb. While you’re at it, you might want to install pandas and the adlfs library for connecting to Azure Storage.

Using this is actually quite simple, first import the libraries:

import duckdb
from adlfs.spec import AzureBlobFileSystem

then set up the Azure connection:

a = AzureBlobFileSystem(
    account_name='pcstacitems',
    sas_token=asset.extra_fields["table:storage_options"]['credential']
)

Note how here we’re using the sas_token parameter to provide a SAS token. We could use a different parameter here to provide a connection string or some other credential type. I couldn’t find much real-world use of this sas_token parameter when looking online – so this is probably the key ‘less well documented’ bit to take away from this article.

Continuing, we then connect to DuckDB and ‘register’ this Azure connection:

connection = duckdb.connect()
connection.register_filesystem(a)

From there, we can run a SQL query like this:

query = connection.sql(f"""
    SELECT * FROM 'abfs://items/sentinel-2-l2a.parquet/*.parquet' LIMIT 1;
""")
query.to_df()

The call to to_df at the end converts the result into a Pandas DataFrame for easy viewing and manipulation. The results are shown below.

Note that we’re passing a URL of abfs://items/sentinel-2-l2a.parquet/*.parquet – this is the URL from the STAC catalog with /*.parquet added to the end to ensure DuckDB picks up the large number of Parquet files stored there. I’d have thought this would have worked without that, but if I miss that out I get an error saying:

TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'

I suspect this is something to do with how things are passed to the Azure connection that we registered with DuckDB, but I’m not entirely sure. If you do know, then please leave a comment!

A few fun queries

So, now we’ve got this working, what can we do? Well, we can – fairly efficiently – run queries across all the STAC metadata for Sentinel 2 images. I’ll just give a few examples of queries I’ve run below.

We can find out how many images there are in the collection in total:

SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*.parquet'

We can do the same for just the images acquired in 2020 – by using a fact we know about how the individual parquet files are named (from the Planetary Computer docs):

SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'

And we can start looking at how many scenes were captured with different amounts of cloud cover:

SELECT
    COUNT(*) / (SELECT COUNT(*) FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'),
    CASE
        WHEN "eo:cloud_cover" BETWEEN 0 AND 5 THEN '0-5%'
        WHEN "eo:cloud_cover" BETWEEN 5 AND 10 THEN '5-10%'
        WHEN "eo:cloud_cover" BETWEEN 10 AND 15 THEN '10-15%'
    END AS category
FROM 'abfs://items/sentinel-2-l2a.parquet/*2020*.parquet'
GROUP BY category

This tells us that 20% of scenes had between 0 and 5% cloud cover (quite a high number, I thought – but then again, I am used to living in the UK!), and around 4-5% in each of the 5-10% and 10-15% categories.

There are plenty of other analyses that you could do with these Parquet files, of course. At some point I might even get around to the task which initially made me look into this: that is, trying to find Landsat and Sentinel 2 scenes that were acquired over the same location at very similar times. I think I’ll leave that for another day though…

Categories: FLOSS Project Planets

Greg Casamento: Keysight laid me off in January!

GNU Planet! - Mon, 2024-06-17 01:24
A little history first. Keysight is a large company that, primarily, makes testing equipment such as oscilloscopes and other electronics. They bought a company a few years back named TestPlant. Prior to that, TestPlant bought a company by the name of Redstone that produced a product known as Eggplant. Recently, I was laid off for economic reasons (at least that's what they said). It occurs to me that nothing in this world lasts forever. I was so depressed when I was let go because Keysight was the perfect home for me... they used GNUstep deeply. So, as you can imagine, I was deeply upset when things ended... but all things do. 
 I think it happened for several reasons: 
  • Economic - This is what was explained to me, but I am not sure I believe it 
  • Politics - I think this part is because I expressed my opinions HONESTLY about the direction of the company given that they wanted to make the application into a VSCode plugin.
  • Perception - I am 54 years old... so I think that they believed that Objective-C was my one and only talent, it's not... I know many other languages and have many other skills. 
Unfortunately, in the US, any employer can let go of any employee or contractor for ANY reason. This is known as at-will employment, making it very hard to take any action against any employer (not that this is something I considered).
Keysight is and will remain a major contributor to GNUstep.
That being said, I recently ran into something rather disturbing at another company.   I have been working with a company based out of New Mexico that is interested in space applications.  They have been using GNUstep and have been awaiting funding.
The lead of this effort expressed something during a meeting saying "We will work on the GNUstep side of this because there is no reason we should have to pay for any of this."   This hit a sour note with me to say the very least.   As it turns out he was under the mistaken impression that, because the work was on GNUstep, it was for free... which is WRONG.
I wonder if the same impression was present at Keysight or if other companies believe this.  The saying, according to RMS, is "Free as in freedom, not as in beer."   If you are a manager at a company who is under the mistaken impression that work on any Free Software or Open Source project is free when your product depends on it, please correct your thinking.   Just because it is someone's passion project does NOT mean that they are going to do that work for free and prioritize the things that need to be done for your organization.
All of that being said the positive sides are this:
  1. More time to code on GNUstep without interruption
  2. More time to work on my own projects
  3. Time to rest and relax
So, as much as I hate being unemployed there ARE some upsides to it.  Here's to hoping something works out soon.   I literally loved my job at Keysight and, honestly, hope to return.   I have my eye on their changes as well as those of others just like any other member of the community.  Yours, GC
Categories: FLOSS Project Planets

The Drop Times: Tech Talks, Brand Strategies, and the Future of Drupal: Highlights from The DropTimes

Planet Drupal - Mon, 2024-06-17 01:16

An enterprise web solution is inherently complex. Its endless integrations with various applications that are added or loaded off as per the changing requirements, multiple front-ends that provide varied and personalised digital experiences, the cultural nuances catered to in its l10n editions, the time-zone differences that are to be taken care of, etc., etc., are the most basic parts that add to the complexity.

Continuous investment, development and deployment are part and parcel of such a solution. Even so, it becomes amiable only if the complexity is masked out in the simplicity of the consumer touch points.

Two separate sets of consumers should be considered here. One is the enterprise consumers, the content creators and marketers, who are the immediate beneficiaries of the solution. The second, a more prominent set, is the customers to whom the enterprises cater.

Drupal, the platform for building a digital experience, invested heavily in satisfying the end customers. That was what Drupal agencies were doing all these years. They got an exoskeleton on which they built the end-user experiences. But what about the enterprise consumers themselves? Weren’t they the ones who ultimately decided to ditch or continue using the services? How good were their experiences sans the involvement of a hired developer?

The decision to simplify the touch points of enterprise consumers is a development and design decision. These content creators and marketers will benefit the most from the Starshot Initiative. It supports the marketing team by reducing the go-to-market time and eases the content creators by providing the most valuable tools out of the box. A complex solution is now draped in a simple cassock.

The first story I’m sharing today concerns Drupal’s new brand strategy and future directions. The Starshot Initiative manifests this strategy and the direction the project is taking. An interview with Shawn Perritt, Senior Director of Brand & Creative at Acquia, shines a light on this brand refresh. Read Shawn’s conversation with Alka Elizabeth, our sub-editor.

Montreal became the hub for tech talks in the past week. We conducted one-question interviews with three featured speakers at Evolve Drupal Montreal. Kazima Abbas, our sub-editor, spoke with Josh Koenig, co-founder of Pantheon, Joe Kwan, Manager of Web Development at the University of Waterloo, and the formidable Mike Herchel, Senior Front-end Developer at Agileana. Their answers form part of the feature published here.

Kazima also conducted an exclusive interview with Baddý Sonja Breidert, CEO and Co-Founder of 1xINTERNET. The interview focused on the agency’s support for the Starshot Initiative based on its experience with ‘Try Drupal’, which implements a default Drupal deployment for enterprises. It explains why they did not wait or hesitate to offer all-out support for the new initiative.

Alka Elizabeth and Ben Peter Mathew, our community manager, jointly covered the Starshot Session on ‘Strategic Milestones in Product Definition’ led by Dries Buytaert and Cristina Chumillas on June 7. The report can be read here.

The next two stories are our signature stories, written by our founder and lead, Anoop John, CTO of Zyxware Technologies. From the start, Anoop was vocal about ‘DrupalCollab’, an idea he had been nurturing for years. As part of the initiative, he has published a list of the largest 500+ cities in the world with a sizable population of people tagged ‘Drupal developers’ on LinkedIn. You can read the analysis here. The second story published with this list is, On Using LinkedIn to Analyze the Size of the Drupal Community, an article on the methodology for this analysis using LinkedIn, associated errors and justifications. Based on these stories, Anoop is trying to build momentum to restart local Drupal meetups in as many cities as possible. Also, read his LinkedIn article on the initiative here.

Mautic 5.1 Andromeda Edition was released last week with major updates. Read our report here. We published a quick review of the Drupal Quick Exit Module developed by Oomph Inc. Ben Peter Mathew spoke with Alyssa Varsanyi of Oomph to produce this report.

To ensure brevity, I shall list the rest of the important stories below without much ado.

That is for the week, dear readers. I wish you all a very sacred Eid Ul Adha. Let the festival of sacrifice be a time to ponder the things we are ready to give up for the better common good.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thanks and regards.

Sincerely,
Sebin A. Jacob,
Editor-in-Chief, The DropTimes

Categories: FLOSS Project Planets

GSoC'24 Okular | Week 2-3 Recap

Planet KDE - Mon, 2024-06-17 01:00
People of the Internet,

While working on keystroke events, I realized my improvements to the event.change property were still inconsistent for certain Unicode characters. This led me to delve into code units, code points, graphemes, and other cool Unicode concepts. I found this blog post to be very enlightening.

Here’s an update on my progress over the past two weeks:

MRs merged:
  • event.change : The change property of the event object now correctly handles Unicode, with adjustments to selStart and selEnd calculations. !MR998
  • cursor position and undo/redo fix : Fixed cursor position calculations to account for rejected input text and resolved merging issues with undo/redo commands in text fields. !MR1011
  • DocOpen Event implementation : Enabled document-level scripts to access the event object by implementing the DocOpen event. !MR1003
  • Executing validation events correctly : Fixed a bug where validation scripts wouldn’t run after KeystrokeCommit scripts in Okular. !MR999
  • Widget refresh functions for RadioButton, ListEdit and ComboEdit : Added refresh functions as slots for RadioButton, ListEdit, and ComboEdit widgets, aiding in reset functionality and script updates. !MR1012
  • Additional document actions in Poppler : Implemented reading additional document actions (CloseDocument, SaveDocumentStart, SaveDocumentFinish, PrintDocumentStart, PrintDocumentFinish) in the qt5 and qt6 frontends for Poppler. !MR1561 (in Poppler)
MRs currently under review:
  • Reset form implementation in qt6 frontend for Okular : Working on the reset form functionality in Okular, currently focusing on qt6 frontend details. !MR1564 (in Poppler)
  • Reset form in Okular : Using the Poppler API to reset forms within Okular. !MR1007
  • Fixing order of execution of events for text form fields : Addressing the incorrect execution order of certain events (e.g., calculation scripts) and ensuring keystroke commit, validation, and format scripts are evaluated correctly when setting fields via JavaScript. !MR1002

For the coming weeks, my focus will be on implementing reset forms, enhancing keystroking and formatting scripts, and possibly starting on submit forms. Let’s see how it goes.

See you next time. Cheers!

Categories: FLOSS Project Planets

Quansight Labs Blog: How Narwhals and scikit-lego came together to achieve dataframe-agnosticism

Planet Python - Sun, 2024-06-16 20:00
And how your Python library can become dataframe-agnostic too
Categories: FLOSS Project Planets

Report From KDEPIM Spring Sprint 2024

Planet KDE - Sun, 2024-06-16 10:04

Like last year, the KDEPIM team convened in Toulouse to hold its traditional spring sprint. Unlike last year, this time it was late spring, almost making it into summer. This time we were hosted by Étincelle Coworking in one of their spaces. It was fairly nice: lots of space, comfortable, well situated in the center. I definitely recommend them.

The Warm-up

Some of the team (namely Carl and Volker) arrived early before the official start. I’m pretty sure Carl made it early to have lunch in the infamous Cake Place™ (at last). Of course a stupid amount of cake slices was involved. And after a short stroll we got to the meeting space we would use the rest of the time.

We started preparing the work board and finished collecting ideas as Dan arrived.

Day 1

Once again, we had two types of tasks: discussions and actual hacking. During that first day we mostly focused on the conversations but did some hacking on the side. Since attendance was a bit low this year, we didn't go for breakout groups and held all the discussions as a single group.

Said discussions covered mostly the following topics:

  • Retiring some old software or now unused resources
  • Trying to devise plans to remove some duplication of efforts (did you know we have several copies of the time tree parser?)
  • Some minor stability issues were discussed to have a plan to solve them
  • How to improve the usage of Akonadi on mobile which revolved mostly around some better partial syncing and some tight coupling with widgets at places
  • Devising a strategy to have some end to end testing of the supported resources
  • How to have better tag support and indexing in Akonadi (which opens the door to really cool features so stay tuned)
  • And finally, we spent quite some time discussing how to improve our little PR efforts and how to recruit new contributors

Of course, like last year we're constrained by the size of the team and so need to use everyone's time smartly. Again, no grand plans, but some of the items we plan to deliver during the year could have very nice consequences. We'll focus on a better quality of life both for users and contributors.

Day 2

We already started some of the technical tasks the previous day, but Sunday was mostly devoted to them. We had a break for a nice brunch of course but still people worked on their branches around it.

As far as technical tasks, we got the following things done already:

  • Some obscure and broken feature around notes in KMail has been removed
  • KJots notes can now be imported into MarkNotes giving an upgrade path
  • Akonadi tag discovery in caldav resources is a thing
  • The Akonadi brand should finally not appear anywhere in the applications anymore
  • A way to clean up the data coming from retired resources is now in place

But there’s more in the works, namely:

  • the completion of the infrastructure for QtQuick based config dialogs
  • the switch to KConfigDialogManager where it makes sense (instead of using an old PIM-specific legacy system)
  • the rewrite of the indexing system (which should improve performance and have instant indexing of payloads)
  • the migration agent to apply tag discovery also to old and already existing items
  • the move of KMime to KDE Frameworks is getting prepared
  • the IMAP resource configuration system is in the works to be able to jump on the QtQuick infrastructure (it’s one of the more complex ones with lots of QtWidgets dependencies)
  • the KDAV test suite is being completed for better coverage

The mentioned tasks and all the ones created during the conversations have been added to our Technical Roadmap on Gitlab. The work is still on going for some of them.

What now?

The most noticeable change is probably one we acted on immediately after our discussions: we added milestones in the PIM group. Indeed, it's probably not always easy for people wanting to join to know what the bigger ongoing goals of the KDEPIM team are and what to work on to help.

This is now encoded through those milestones. It’s the early days for them so they need to be improved and more tasks added, but at least this gives an idea of what has higher priority for now.

We plan to have a BoF at Akademy 2024 to be announced later when the schedule for the BoFs actually opens. There we’ll plan to focus at discussing the experience of contributors and see how it can be improved. We’ll also present our milestones. We might sign you up on tasks there, don’t hesitate to show up, we want to hear from you.

Last but not least

If you want to know more or engage with us, please join the KDEPIM matrix channel! Let’s chat further.

Also, I’d like to thank again Étincelle Coworking and KDE e.V. to make this event possible. This wouldn’t be possible without a venue and without at least partial support in the travel expenses.

Finally, if you like such meetings to happen in the future so that we can push forward your favorite software, please consider making a tax-deductible donation to the KDE e.V. foundation.

Categories: FLOSS Project Planets

KDE PIM Sprint 2024 edition

Planet KDE - Sun, 2024-06-16 06:10

This year again I participated to the KDE PIM Sprint in Toulouse. As always it was really great to meet other KDE contributors and to work together for one weekend. And as you might have seen on my Mastodon account, a lot of food was also involved.

Day 1 (Friday Afternoon)

We started our sprint on Thursday with a lunch at the legendary cake place, which I missed last year due to my late arrival.

Picture of some delicious cakes: a piece of cheesecake raspberry and basil, a piece of lemon tart with meringue and a piece of carrot cake)

We then went to the coworking space where we would spend the remainder of this sprint and started defining tasks to work on and putting them on a real kanban board.

A kanban board with tasks to discuss and to implement

To get a good summary of the specific topics we discussed, I invite you to consult the blog of Kevin.

That day, aside from the high-level discussions, I ported the IMAP authentication mechanism for Outlook accounts away from the KWallet API to the more generic QtKeychain API. I also removed a large dependency from libkleo (the KDE library to interact with GPG).

Day 2 (Saturday)

On the second day, we were greeted by a wonderful breakfast (thanks Kevin).

Picture of croissant, brioche and chocolatine

I worked on moving EventViews (the library that renders the calendar in KOrganizer) and IncidenceEditor (the library that provides the event/todo editor in KOrganizer) into KOrganizer itself. This will reduce the number of libraries in PIM.

For lunch, we ended up eating at the excellent Mexican restaurant next to the location of the previous sprint.

Mexican food

I also worked on removing the “Add note” functionality in KMail. This feature allowed storing notes attached to emails following RFC 5257. Unfortunately, this RFC never left the EXPERIMENTAL state, so these notes were only stored in Akonadi and not synchronized with any services.

This allowed removing the relevant widget from the pimcommon library as well as the Akonadi attribute.

I also started removing another type of notes: the KNotes app, which provided sticky notes. This application was not maintained anymore and didn't work so well with Wayland. If you were using KNotes, I added support in Marknote to import notes from KNotes, to make sure you don't lose them.

Marknote with the context menu to import notes

Finally, I worked on removing visible Akonadi branding from some KDE PIM applications. The branding was usually only visible when an issue occurred, which didn't help Akonadi's reputation.

We ended up working quite late and ordering pizzas. I personally got one with a lot of cheese (but no photo this time).

Day 3 (Sunday)

On the final day, we didn’t have any breakfast :( but instead had a wonderful brunch.

Picture of the brunch

Aside from eating, I started writing a plugin system for the MimeTreeParser, which powers the email viewer in Merkuro and Kleopatra. In the short term, I want to add Itinerary integration to Merkuro, but in the longer term the goal is to bring this email viewer to feature parity with KMail’s, and then replace the KMail email viewer with the one from Merkuro. Aside from removing duplicate code, this will improve security, since the individual email parts are isolated from each other, and it will make it easier for the email view to follow KDE styling, as it is just normal QML instead of fake HTML components.

I also merged and rebased some WIP merge requests in Marknote in preparation for an upcoming release, and reviewed merge requests from the others.

Last but not least

If you want to know more or engage with us, please join the KDE PIM and the Merkuro matrix channels! Let’s chat further.

Also, I’d like to thank Étincelle Coworking and KDE e.V. again for making this event possible. It wouldn’t have happened without a venue and without at least partial support for travel expenses.

Finally, if you would like such meetings to happen in the future so that we can keep pushing your favorite software forward, please consider making a tax-deductible donation to the KDE e.V. foundation.

Categories: FLOSS Project Planets

Ned Batchelder: Math factoid of the day: 62

Planet Python - Sun, 2024-06-16 06:00

There are two Archimedean solids with 62 faces:

rhomb­icosi­dodeca­hedron truncated icosi­dodeca­hedron

They both have 62 faces because of their roots in the dodecahedron and icosahedron. They have a face for each of the faces, vertices, and edges of either of those polyhedra: 12 + 20 + 30 = 62. In the rhombicosidodecahedron, for example, the 12 pentagons correspond to the dodecahedron’s faces, the 20 triangles to its vertices, and the 30 squares to its edges.

The rhomb­icosi­dodeca­hedron shows up in more places than you might expect for a complex polyhedron. The Zometool building kit uses it for hubs, though the squares are adjusted to rectangles:

Recently one also showed up on Kickstarter: Glint, a rhomb­icosi­dodeca­hedron machined in solid brass. I have one. It’s satisfyingly heavy and precise. Here it is with its little Zome sister:

The rhomb­icosi­dodeca­hedron is very pleasing: nearly round, but with simple polygonal faces roughly equal in size. Who’d have thought a number as seemingly uninteresting as 62 could appear in such serendipitous places?

Categories: FLOSS Project Planets

Zato Blog: Using OAuth in API Integrations

Planet Python - Sun, 2024-06-16 04:00
Using OAuth in API Integrations 2024-06-16, by Dariusz Suchojad

OAuth is often employed in processes requiring permissions to be granted to frontend applications and end users. Yet, what we typically need in API systems integration is a way to secure connections between the integration middleware and backend systems without any need for ongoing human interaction.

OAuth can be a good choice for that scenario and this article shows how it can be achieved in Python, with backend systems using REST and HL7 FHIR.

What we would like to have

Let's say we have a typical integration scenario as in the diagram below:

  • External systems and applications invoke the interoperability layer (Zato), which is expected to further invoke a few backend systems, e.g. a REST one and an HL7 FHIR one, so as to return a combined result of the backend API invocations. It does not matter what technology the client systems use, i.e. whether they are REST ones or not.

  • The interoperability layer needs to identify itself with the backend systems before it is allowed to invoke them - they need to make sure that it really is Zato and that it accesses only the resources allowed.

  • An OAuth server issues time-based access tokens, which are simple strings, like web browser session cookies, confirming that the bearer of a given token is allowed to make certain requests. Note that the tokens have an explicit expiration time, e.g. they will become invalid after one hour. Also observe that Zato stores the tokens as-is; they are genuinely opaque strings.

  • If a client system invokes the interoperability layer, the layer will obtain a token from the OAuth server and keep it in an internal cache. Next, Zato will invoke the backend systems, bearing the token among other HTTP headers. Each invoked backend system will extract the token from the incoming request and validate it.

What the validation looks like in practice is something Zato is not aware of, because it treats the token as an opaque string. If the token is self-contained (e.g. JWT data), the backend system may validate it on its own; if it is not self-contained, the system may invoke an introspection endpoint on the OAuth server to validate the access token received from Zato.
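
For illustration, here is a minimal sketch, outside of Zato, of what a backend system's introspection call could look like in Python. The endpoint URL and credentials are hypothetical placeholders, not anything prescribed by Zato or a particular OAuth server:

import requests

# Hypothetical introspection endpoint; in reality this is published
# by your OAuth server, e.g. in its metadata document.
INTROSPECTION_URL = 'https://oauth.example.com/v1/introspect'

def is_token_active(token: str) -> bool:
    response = requests.post(
        INTROSPECTION_URL,
        auth=('backend-client-id', 'backend-client-secret'),  # placeholder credentials
        data={'token': token},
    )
    response.raise_for_status()

    # Per RFC 7662, the introspection response is a JSON document
    # with an "active" flag indicating whether the token is valid.
    return bool(response.json().get('active', False))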

Once the validation succeeds, the backend system will reply with the business data and the interoperability layer will combine the results for the calling application's benefit.

In subsequent requests, the same access token will be reused by Zato with the same flow of messages as previously. However, if the cached token expires, Zato will request a new one from the OAuth server - this will be transparent to the calling application - and the flow will resume.

In OAuth terminology, what is described above has specific names, the overall flow of messages between Zato and the OAuth server is called a "Client Credential Flow" and Zato is then considered a "client" from the OAuth server's perspective.
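
To make the flow more concrete, below is a minimal, standalone Python sketch of a Client Credential Flow with simple token caching, roughly the kind of work Zato automates for you. The token endpoint, client credentials and scope are placeholders:

import time

import requests

TOKEN_URL = 'https://oauth.example.com/v1/token'  # placeholder endpoint
CLIENT_ID = 'zato-client'                         # placeholder credentials
CLIENT_SECRET = 'my-secret'

_cache = {'token': None, 'expires_at': 0.0}

def get_access_token() -> str:

    # Reuse the cached token until shortly before it expires
    if _cache['token'] and time.time() < _cache['expires_at'] - 30:
        return _cache['token']

    # Otherwise, request a new token using the Client Credential Flow
    response = requests.post(
        TOKEN_URL,
        auth=(CLIENT_ID, CLIENT_SECRET),
        data={'grant_type': 'client_credentials', 'scope': 'api.read'},
    )
    response.raise_for_status()
    payload = response.json()

    _cache['token'] = payload['access_token']
    _cache['expires_at'] = time.time() + payload.get('expires_in', 3600)

    return _cache['token']

# The token is then sent to backend systems as a bearer token, e.g.:
# headers = {'Authorization': 'Bearer ' + get_access_token()}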

Configuring OAuth

First, we need to create an OAuth security definition that contains the OAuth server's connection details. In this case, the server is Okta. Note the scopes field - it is a list of permissions ("scopes") that Zato will be able to make use of.

What exactly the list of scopes should look like is something to be coordinated with the people who are responsible for the configuration of the OAuth server. If that is you personally, simply ensure that what is in the OAuth server and in Zato is in sync.

Calling REST

To invoke REST services, fill out a form as below, pointing the "Security" field to the newly created OAuth definition. This suffices for Zato to understand when and how to obtain new tokens from the underlying OAuth server.

Here is sample code to invoke a backend REST system - note that we merely refer to a connection by its name, without having to think about security at all. It is Zato that knows how to get and use OAuth tokens as required.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class GetClientBillingPlan(Service):
    """ Returns a billing plan for the input client.
    """
    def handle(self):

        # In a real service, this would be read from input
        payload = {'client_id': 123}

        # Get a connection to the server ..
        conn = self.out.rest['REST Server'].conn

        # .. invoke it ..
        response = conn.get(self.cid, payload)

        # .. and handle the response here.
        ...

Calling HL7 FHIR

Similarly to REST endpoints, to invoke HL7 FHIR servers, fill out a form as below and let the "Security" field point to the OAuth definition just created. This will suffice for Zato to know when and how to use tokens received from the underlying OAuth server.

Here is sample code to invoke a FHIR server system - as with REST servers above, observe that we only refer to a connection by its name and Zato takes care of OAuth.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class GetPractitioner(Service):
    """ Returns a practitioner matching input data.
    """
    def handle(self) -> 'None':

        # Connection to use
        conn_name = 'My EHR'

        # In a real service, this would be read from input
        practitioner_id = 456

        # Get a connection to the server ..
        with self.out.hl7.fhir[conn_name].conn.client() as client:

            # Get a reference to a FHIR resource ..
            practitioners = client.resources('Practitioner')

            # .. look up the practitioner ..
            result = practitioners.search(active=True, _id=practitioner_id).get()

            # .. and handle the response here.
            ...

What about the API clients?

One aspect omitted above is the initial API clients, and this is on purpose. How they invoke Zato, using what protocols, with what security mechanisms, and how responses are built based on their input data: all of this is completely independent of how Zato uses OAuth in its own communication with backend systems.

All of these aspects can and will be independent in practice, e.g. clients will use Basic Auth rather than OAuth. Or perhaps the clients will use AMQP, Odoo, SAP, or IBM MQ, without any HTTP, or maybe there will be no explicit API invocations at all and what we call "clients" will actually be CSV files in a shared directory that your services will be scheduled to periodically pick up. Yet, once more, regardless of what makes the input data available, the backend OAuth mechanism will work independently of it all.



Next steps

API programming screenshots
➤ Python API integration tutorial
➤ More API programming examples in Python

More blog posts
Categories: FLOSS Project Planets

Mario Hernandez: Automating your Drupal Front-end with ViteJS

Planet Drupal - Sat, 2024-06-15 22:24

Modern web development relies heavily on automation to stay productive, validate code, and perform repetitive tasks that could slow developers down. Front-end development in particular has evolved, and it can be a daunting task to configure effective automation. In this post, I'll try to walk you through basic automation for your Drupal theme, which uses Storybook as its design system.

Recently I worked on a large Drupal project that needed to migrate its design system from Patternlab to Storybook. I knew switching design systems also meant switching front-end build tools. The obvious choice seemed to be Webpack, but as I looked deeper into build tools, I discovered ViteJS.

Vite bills itself as the Next Generation Frontend Tooling, and when we tested it, we were extremely impressed not only with how fast Vite is, but also with its plugin ecosystem and community support. Vite is relatively new, but it is solid and very well maintained. Learn more about Vite.

The topics covered in this post can be broken down into two categories:

  1. Preparing the Front-end environment

  2. Automating the environment

1. Build the front-end environment with Vite & Storybook

In a previous post, I wrote in detail how to build a front-end environment with Vite and Storybook; I am going to spare you those details here, but you can reference them from the original post.

  1. In your command line, navigate to the directory where you wish to build your environment. If you're building a new Drupal theme, navigate to your site's web/themes/custom/
  2. Run the following commands (Storybook should launch at the end):
npm create vite@latest storybook
cd storybook
npx storybook@latest init --type react

Fig. 1: The first command builds the Vite project, and the last one integrates Storybook into it.

Reviewing Vite's and Storybook's out of the box build scripts

Vite and Storybook ship with a handful of useful scripts. We may find some of them already do what we want or may only need minor tweaks to make them our own.

  • In your code editor, open package.json from the root of your newly built project.
  • Look in the scripts section and you should see something like this:
"scripts": { "dev": "vite", "build": "vite build", "lint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0", "preview": "vite preview", "storybook": "storybook dev -p 6006", "build-storybook": "storybook build" },

Fig. 2: Example of default Vite and Storybook scripts out of the box.

To run any of those scripts, prefix them with npm run. For example: npm run build, npm run lint, etc. Let's review the scripts.

  • dev: This is a Vite-specific command which runs the Vite app we just built for local development
  • build: This is the "do it all" command. Running npm run build on a project runs every task defined in the build configuration we will set up later. CI/CD runners use this command to build your app for production.
  • lint: Will lint your JavaScript code inside .js or .jsx files.
  • preview: This is another Vite-specific command which runs your app in preview mode.
  • storybook: This is the command you run to launch and keep Storybook running while you code.
  • build-storybook: Builds a static version of Storybook you can package, share, or host as a static version of your project.
Building your app for the first time

Getting a consistent environment

In front-end development, it is important that everyone on your team uses the same version of NodeJS while working on the same project. This ensures the project behaves consistently for everyone. Differences in the node version used by your team can lead to inconsistencies when the project is built. One way to ensure your team uses the same node version on a given project is by adding a .nvmrc file to the root of the project. This file specifies the node version the project uses. The node version is unique to each project, which means different projects can use different node versions.

  • In the root of your theme, create a file called .nvmrc (mind the dot)
  • Inside .nvmrc add the following: v20.14.0
  • Stop Storybook by pressing Ctrl + C on your keyboard
  • Build the app:
nvm install
npm install
npm run build

Fig. 3: Installs the node version defined in .nvmrc, then installs node packages, and finally builds the app.

NOTE: You need to have NVM installed in your system to execute nvm commands.
You only need to run nvm install once per project unless the node version changes. If you switch to a project that uses a different node version, when you return to this project, run nvm use to set your environment back to the right node version.

The output in the command line should look like this:

Fig. 4: Screenshot of files compiled by the build command.

By default, Vite names the compiled files by appending a random 8-character string to the original file name (e.g. styles.css becomes something like styles-a1b2c3d4.css). This works fine for Vite apps, but for Drupal, the libraries we'll create expect CSS and JS file names to stay consistent and not change. Let's change this default behavior.

  • First, install the glob package. We'll use this shortly to import multiple CSS files with a single import statement.
npm i -D glob
  • Then, open vite.config.js in your code editor. This is Vite's main configuration file.
  • Add these two imports around line 3 or directly after the last import in the file
import path from 'path';
import { glob } from 'glob';
  • Still in vite.config.js, replace the export default... with the following snippet which adds new settings for file names:
export default defineConfig({
  plugins: [
  ],
  build: {
    emptyOutDir: true,
    outDir: 'dist',
    rollupOptions: {
      input: glob.sync(path.resolve(__dirname, './src/**/*.{css,js}')),
      output: {
        assetFileNames: 'css/[name].css',
        entryFileNames: 'js/[name].js',
      },
    },
  },
})

Fig. 5: Build object to modify where files are compiled as well as their name preferences.

  • First we imported path and { glob }. path is a NodeJS built-in module and glob comes from the package we installed earlier.
  • Then we added a build configuration object in which we defined several settings:
    • emptyOutDir: When the build job runs, the dist directory will be emptied before the new compiled code is added.
    • outDir: Defines the App's output directory.
    • rollupOptions: This is Vite's system for bundling code and within it we can include neat configurations:
      • input: The files we want Vite to look for and process. Here's where the path and glob imports we added earlier are being used. By using ./src/**/*.{css,js}, we are instructing Vite to look anywhere inside the src directory, at any depth, and find any file that ends with .css or .js.
      • output: The destinations CSS and JS are compiled into (dist/css and dist/js, respectively). By setting assetFileNames: 'css/[name].css' and entryFileNames: 'js/[name].js', CSS and JS files will retain their original names.

Now if we run npm run build again, the output should be like this:

Fig. 6: Screenshot of compiled code using the original file names.

The random 8-character string is gone, and notice that this time the build command is pulling in more CSS files. Since we configured the input to match files anywhere under src, the src/stories directory was included as part of the input path.

2. Restructure the project

The out of the box Vite project structure is a good start for us. However, we need to make some adjustments so we can adopt the Atomic Design methodology. This is today's standard and will work well with our Component-driven Development workflow. At a high level, this is the current project structure:

> .storybook/
> dist/
> public/
> src/
  |- stories/
package.json
vite.config.js

Fig. 7: Basic structure of a Vite project listing only the most important parts.

  • > .storybook is the main location for Storybook's configuration.
  • > dist is where all compiled code is copied into and where the production app looks for all code.
  • > public is where we can store images and other static assets we need to reference from our site. Equivalent to Drupal's /sites/default/files/.
  • > src is the directory we work out of. We will update the structure of this directory next.
  • package.json tracks all the different node packages we install for our app as well as the scripts we can run in our app.
  • vite.config.js is Vite's main configuration file. This is probably where we will spend most of our time.
Adopting the Atomic Design methodology

The Atomic Design methodology was first introduced by Brad Frost a little over ten years ago. Since then it has become the standard for building web projects. Our environment needs updating to reflect the structure expected by this methodology.

  • First, stop Storybook from running by pressing Ctrl + C on your keyboard.
  • Next, inside src, create these directories: base, components, and utilities.
  • Inside components, create these directories: 01-atoms, 02-molecules, 03-organisms, 04-layouts, and 05-pages.
  • While we're at it, delete the stories directory inside src, since we won't be using it.
NOTE: You don't need to use the same nomenclature Atomic Design suggests. I am using it here for simplicity.

Update Storybook's stories with new paths

Since the project structure has changed, we need to make Storybook aware of these changes:

  • Open .storybook/main.js in your code editor
  • Update the stories: [] array as follows:
stories: [
  "../src/components/**/*.mdx",
  "../src/components/**/*.stories.@(js|jsx|mjs|ts|tsx)",
],

Fig. 8: Updating stories' path after project restructure.

The stories array above is where we tell Storybook where to find our stories and story docs, if any. In Storybook, stories are the components and their variations.

Add pre-built components

As our environment grows, we will add components inside the new directories, but for the purpose of testing our environment's automation, I have created demo components.

  • Download the demo components (button, title, card) from src/components/, and save them in their corresponding directories in your project.
  • Feel free to add any other components you may have built yourself. We'll come back to the components shortly.
3. Configure TwigJS

Before we can see the newly added components, we need to configure Storybook to understand the Twig and YML code we are about to introduce within the demo components. To do this we need to install several node packages.

  • In your command line run:
npm i -D vite-plugin-twig-drupal @modyfi/vite-plugin-yaml twig twig-drupal-filters html-react-parser
  • Next, update vite.config.js with the following configuration. Add the snippet below at around line 5:
import twig from 'vite-plugin-twig-drupal';
import yml from '@modyfi/vite-plugin-yaml';
import { join } from 'node:path';

Fig. 9: Imports for the TwigJS-related packages.

The configuration above is critical for Storybook to understand the code in our components:

  • vite-plugin-twig-drupal is the main TwigJS extension for our project.
  • Added two new imports which are used by Storybook to understand Twig:
    • vite-plugin-twig-drupal handles transforming Twig files into JavaScript functions.
    • @modyfi/vite-plugin-yaml lets us pass data and variables through YML to our Twig components.
Creating Twig namespaces
  • Still in vite.config.js, add the twig and yml() plugins to add Twig namespaces for Storybook.
plugins: [
  twig({
    namespaces: {
      atoms: join(__dirname, './src/components/01-atoms'),
      molecules: join(__dirname, './src/components/02-molecules'),
      organisms: join(__dirname, './src/components/03-organisms'),
      layouts: join(__dirname, './src/components/04-layouts'),
      pages: join(__dirname, './src/components/05-pages'),
    },
  }),
  yml(),
],

Fig. 10: Twig namespaces reflecting project restructure.

Since we removed the react() function by using the snippet above, we can remove import react from '@vitejs/plugin-react' from the imports list, as it is no longer needed.

With all the configuration updates we just made, we need to rebuild the project for all the changes to take effect. Run the following commands:

npm run build
npm run storybook

The components are available, but as you can see, they are not styled even though each component contains a CSS stylesheet in its directory. The reason is that Storybook has not been configured to find the components' CSS. We'll address this shortly.

4. Configure postCSS

What is PostCSS? It is a JavaScript tool or transpiler that turns a special PostCSS plugin syntax into Vanilla CSS.

As we start interacting with CSS, we need to install several node packages to enable functionality we would not have otherwise. Native CSS has come a long way to the point that I no longer use Sass as a CSS preprocessor.

  • Stop Storybook by pressing Ctrl + C on your keyboard
  • In your command line run this command:
npm i -D postcss postcss-import postcss-import-ext-glob postcss-nested postcss-preset-env
  • At the root of your theme, create a new file called postcss.config.js, and in it, add the following:
import postcssImport from 'postcss-import';
import postcssImportExtGlob from 'postcss-import-ext-glob';
import postcssNested from 'postcss-nested';
import postcssPresetEnv from 'postcss-preset-env';

export default {
  plugins: [
    postcssImportExtGlob(),
    postcssImport(),
    postcssNested(),
    postcssPresetEnv({
      stage: 4,
    }),
  ],
};

Fig. 11: Base configuration for postCSS.

One cool thing about Vite is that it comes with postCSS functionality built in. The only requirement is that you have a postcss.config.js file in the project's root. Notice how we are not doing much configuration for those plugins except for defining them. Let's review the code above:

  • postcss-import the base for importing CSS stylesheets.
  • postcss-import-ext-glob to do bulk @import of all CSS content in a directory.
  • postcss-nested to unwrap nested rules to make its syntax closer to Sass.
  • postcss-preset-env defines the CSS browser support level we need. Stage 4 means we want the "web standards" level of support.
5. CSS and JavaScript configuration

The goal here is to ensure that every time a new CSS stylesheet or JS file is added to the project, Storybook will automatically become aware of it and begin consuming its code.

NOTE: This workflow is only for Storybook. In Drupal we will use Drupal libraries in which we will include any CSS and JS required for each component.

There are two types of styles to configure in most projects: global styles, which apply site-wide, and component styles, which are unique to each component added to the project.

Global styles
  • Inside src/base, add two stylesheets: reset.css and base.css.
  • Copy and paste the styles for reset.css and base.css.
  • Inside src/utilities create utilities.css and in it paste these styles.
  • Inside src/, create a new stylesheet called styles.css.
  • Inside styles.css, add the following imports:
@import './base/reset.css';
@import './base/base.css';
@import './utilities/utilities.css';

Fig. 12: Imports to gather all global styles.

The order in which we have imported our stylesheets is important as the cascading order in which they load makes a difference. We start from reset to base, to utilities.

  • reset.css: A reset stylesheet (or CSS reset) is a collection of CSS rules used to clear the browser's default formatting of HTML elements, removing potential inconsistencies between different browsers before any of our styles are applied.
  • base.css: Applies a consistent style foundation for HTML elements, covering baseline styles such as typography, branding and colors, font sizes, etc.
  • utilities.css: A collection of pre-defined CSS rules we can apply to any HTML element, such as variables for colors, font sizes and font colors, as well as margins, sizes, z-index, animations, etc. (see the sketch after this list).
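
If you are not using the styles linked above, a minimal utilities.css could look like the sketch below; the variable names and values are purely illustrative:

:root {
  /* Illustrative design tokens; adjust them to your project */
  --color-primary: #0550e6;
  --color-text: #1b1b1b;
  --font-size-base: 1rem;
  --space-md: 1.5rem;
}

/* Example utility classes */
.u-text-center {
  text-align: center;
}

.u-margin-block {
  margin-block: var(--space-md);
}
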
Component styles

Before our components can receive their own individual styles, we need to make sure all our global styles are loaded, so the components can inherit them.

  • Inside src/components create a new stylesheet, components.css. This is where we are going to gather all component styles.
  • Inside components.css add glob imports for each of the component's categories:
@import-glob './01-atoms/**/*.css';
@import-glob './02-molecules/**/*.css';

Fig. 13: Glob import for all components of all categories.

NOTE: Since we only have Atoms and Molecules to work with, we are omitting imports for 03-organisms, 04-layouts, and 05-pages. Feel free to add them if you have those kinds of components.

Updating Storybook's Preview

There are several ways in which we can make Storybook aware of our styles and JavaScript. We could import each component's stylesheet and JavaScript into each *.stories.js file, but this could result in components with multiple sub-components having several CSS and JS imports. In addition, this is not an automated system, which means we would need to add imports manually as they become available. The approach we are going to take is importing the stylesheets we created above into Storybook's preview system. This provides a couple of advantages:

  • The components' *.stories.js files stay clean, without any CSS imports, as all CSS will already be available to Storybook.
  • As we add new components with individual stylesheets, these stylesheets will automatically be recognized by Storybook.

Remember, the order in which we import the styles makes a difference. We want all global and base styles to be imported first, before we import component styles.

  • In .storybook/preview.js add these imports at the top of the page around line 2.
import Twig from 'twig';
import drupalFilters from 'twig-drupal-filters';

import '../src/styles.css'; /* Contains reset, base, and utilities styles. */
import '../src/components/components.css'; /* Contains all components CSS. */

function setupFilters(twig) {
  twig.cache();
  drupalFilters(twig);
  return twig;
}

setupFilters(Twig);

Fig. 14: Importing all styles, global and components.

In addition to importing two new extensions, twig and twig-drupal-filters, we set up a setupFilters function so Storybook can read Drupal filters we may use in our components. We are also importing two of the stylesheets we created earlier:

  • styles.css contains all the CSS code from reset.css, base.css, and utilities.css (in that order)
  • components.css contains all the CSS from all components. As new components are added and they have their own stylesheets, they will automatically be included in this import.
IMPORTANT: For Storybook to immediately display changes you make in your CSS, the imports above need to be from the src directory and not dist. I learned this the hard way.

JavaScript compiling

On a typical project, you will find that the majority of your components don't use JavaScript, and for this reason, we don't need such an elaborate system for JS code. Importing the JS files in the component's *.stories.js should work just fine. Since the demo components don't use JS, I have commented near the top of card.stories.js how the component's JS file would be imported if JS was needed.

If the need for a more automated JavaScript processing workflow arose, we could easily repeat the same CSS workflow but for JS.

Build the project again

Now that our system for CSS and JS is in place, let's build the project to ensure everything is working as we expect it.

npm run build
npm run storybook

You may notice that now the components in Storybook look styled. This tells us our new system is working as expected. However, the Card component, if you used the demo components, is missing an image. We will address this issue in the next section.

This concludes the preparation part of this post. The remainder will focus on creating automation tasks for compiling, minifying, and linting code, copying static assets such as images, and finally, watching for code changes as we code.

6. Copying images and other assets

Copying static assets like images, icons, JS, and other files from src into dist is a common practice in front-end projects. Vite comes with built-in functionality to do this: assets placed in the public directory are automatically copied on build. However, sometimes we may have those assets alongside our components or in other directories within our project.

In Vite, there are many ways to accomplish any task; in this case, we will use a nice plugin called vite-plugin-static-copy. Let's set it up.

  • If Storybook is running, kill it with Ctrl + C on your keyboard
  • Next, install the extension by running:
npm i -D vite-plugin-static-copy
  • Next, right after all the existing imports in vite.config.js, import one more extension:
import { viteStaticCopy } from 'vite-plugin-static-copy';
  • Lastly, still in vite.config.js, add the viteStaticCopy function configuration inside the plugins:[] array:
viteStaticCopy({
  targets: [
    {
      src: './src/components/**/*.{png,jpg,jpeg,svg,webp,mp4}',
      dest: 'images',
    },
  ],
}),

Fig. 15: Adds a task for copying images and other static assets from src to dist.

The viteStaticCopy function we added allows us to copy any type of static asset from anywhere within your project. We added a targets array in which we included src and dest for the images we want copied. Every time we run npm run build, any images inside any of the components will be copied into dist/images.
If you need to copy other static assets, simply create new targets for each, as sketched below.
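
For example, a hypothetical fonts directory could be copied with an extra target; the font paths below are an assumption, adjust them to match your project:

viteStaticCopy({
  targets: [
    {
      src: './src/components/**/*.{png,jpg,jpeg,svg,webp,mp4}',
      dest: 'images',
    },
    {
      // Hypothetical location of font files within the project
      src: './src/fonts/**/*.{woff,woff2}',
      dest: 'fonts',
    },
  ],
}),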

  • Build the project again:
npm run build
npm run storybook

The missing image for the Card component should now be visible. Pretty sweet! 🍰

7. The Watch task

A watch task makes it possible for developers to see the changes they are making as they code, without being interrupted by running commands. Depending on your configuration, a watch task watches for any changes you make to CSS, JavaScript and other file types, and upon saving those changes, code is automatically compiled and a Hot Module Replacement (HMR) is triggered, making the changes visible in Storybook.

Although there are extensions to create watch tasks, we will stick with Storybook's out of the box watch functionality because it does everything we need. In fact, I have used this very approach on a project that supports over one hundred sites.

I actually learned this the hard way: I was originally importing the key stylesheets in .storybook/preview.js using the files from dist. This works to an extent, because the code is compiled upon changes, but Storybook is not aware of the changes unless we restart it. I spent hours debugging this issue and tried many other options, but in the end, the simple solution was to import CSS and JS into Storybook's preview using the source files. For example, if you look in .storybook/preview.js, you will see we are importing two CSS files which contain all of the CSS code our project needs:

import '../src/styles.css';
import '../src/components/components.css';

Fig. 16: Importing source assets into Storybook's preview.

Importing source CSS or JS files into Storybook's preview allows Storybook to become aware immediately of any code changes.

Much the same applies to JavaScript. The difference is that for JS, we import the JS file in the component's *.stories.js, which in turn has the same effect as what we've done above for CSS. The reason is that typically not every component we build needs JS.

A real watch task

Currently we are running npm run storybook as a watch task. Nothing wrong with this. However, to keep up with standards and best practices, we could rename the storybook command to watch, so we can run npm run watch. Something to consider.

You could also make a copy of the storybook command, name it watch, and add any additional commands you wish to run with watch, while leaving the original storybook command intact. Choices, choices.
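
As a sketch, and assuming Storybook's default port, the scripts section of package.json could gain a watch entry like this, leaving storybook untouched:

"scripts": {
  "storybook": "storybook dev -p 6006",
  "watch": "storybook dev -p 6006"
}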

8. Linting CSS and JavaScript

Our workflow is coming along nicely. There are many other things we can do but for now, we will end with one last task: CSS and JS linting.

  • Install the required packages. There are several of them.
npm i -D eslint stylelint vite-plugin-checker stylelint-config-standard stylelint-order stylelint-selector-pseudo-class-lvhfa
  • Next, after the last import in vite.config.js, add one more:
import checker from 'vite-plugin-checker';
  • Then, let's add one more plugin in the plugins:[] array:
checker({
  eslint: {
    lintCommand: 'eslint "./src/components/**/*.{js,jsx}"',
  },
  stylelint: {
    lintCommand: 'stylelint "./src/components/**/*.css"',
  },
}),

Fig. 17: Checks for linting CSS and JavaScript.

So that we can execute the above checks on demand, let's add them as commands to our app.

  • In package.json, within the scripts section, add the following commands:
"eslint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0", "stylelint": "stylelint './src/components/**/*.css'",

Fig. 18: Two new npm commands to lint CSS and JavaScript.

  • We installed a series of packages related to ESLint and Stylelint.
  • vite-plugin-checker is a plugin that can run TypeScript, VLS, vue-tsc, ESLint, and Stylelint in a worker thread.
  • We imported vite-plugin-checker and created a new plugin with two checks, one for ESLint and the other for Stylelint.
  • By default, the new checks will run when we execute npm run build, but we also added them as individual commands so we can run them on demand.
Configure rules for ESLint and Stylelint

Both ESLint and Stylelint use configuration files where we can define the various rules we want to enforce when writing code. The files they use are eslint.config.js and .stylelintrc.yml, respectively. For the purpose of this post, we are only going to add .stylelintrc.yml, in which we have defined basic CSS linting rules.
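
If you do want ESLint rules as well, a minimal eslint.config.js using ESLint's flat config format might look like the sketch below; the rules listed are illustrative assumptions, not this project's actual configuration:

export default [
  {
    files: ['src/**/*.{js,jsx}'],
    ignores: ['dist/**'],
    rules: {
      // Illustrative starting rules; tailor these to your project
      'no-unused-vars': 'error',
      'prefer-const': 'error',
    },
  },
];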

  • In the root of your theme, create a new file called .stylelintrc.yml (mind the dot)
  • Inside .stylelintrc.yml, add the following code:
extends:
  - stylelint-config-standard
plugins:
  - stylelint-order
  - stylelint-selector-pseudo-class-lvhfa
ignoreFiles:
  - './dist/**'
rules:
  at-rule-no-unknown: null
  alpha-value-notation: number
  color-function-notation: null
  declaration-empty-line-before: never
  declaration-block-no-redundant-longhand-properties: null
  hue-degree-notation: number
  import-notation: string
  no-descending-specificity: null
  no-duplicate-selectors: true
  order/order:
    - - type: at-rule
        hasBlock: false
      - custom-properties
      - declarations
    - unspecified: ignore
      disableFix: true
  order/properties-alphabetical-order: error
  plugin/selector-pseudo-class-lvhfa: true
  property-no-vendor-prefix: null
  selector-class-pattern: null
  value-keyword-case:
    - lower
    - camelCaseSvgKeywords: true
      ignoreProperties:
        - /^--font/

Fig. 19: Basic CSS Stylelint rules.

The CSS rules above are only a starting point, but should be able to check for the most common CSS errors.

Test the rules we've defined by running either npm run build or npm run stylelint. Either command will alert you of a couple of errors our current code contains. This tells us the linting process is working as expected. You could test JS linting by creating a dummy JS file inside a component and writing bad JS in it.

9. One last thing

It goes without saying that we need to add storybook.info.yml and storybook.libraries.yml files for this to be a true Drupal theme. In addition, we need to create the templates directory somewhere within our theme.

storybook.info.yml

The same way we did for Storybook, we need to create namespaces for Drupal. This requires the Components module, and the storybook.info.yml configuration looks like this:

components:
  namespaces:
    atoms:
      - src/components/01-atoms
    molecules:
      - src/components/02-molecules
    organisms:
      - src/components/03-organisms
    layouts:
      - src/components/04-layouts
    pages:
      - src/components/05-pages
    templates:
      - src/templates

Fig. 20: Drupal namespaces for nesting components.

storybook.libraries.yml

The recommended method for adding CSS and JS to components or a theme in Drupal is by using Drupal libraries. In our project we would create a library for each component in which we will include any CSS or JS the component needs. In addition, we need to create a global library which includes all the global and utilities styles. Here are examples of libraries we can add in storybook.libraries.yml.

global:
  version: VERSION
  css:
    base:
      dist/css/reset.css: {}
      dist/css/base.css: {}
      dist/css/utilities.css: {}

button:
  css:
    component:
      dist/css/button.css: {}

card:
  css:
    component:
      dist/css/card.css: {}

title:
  css:
    component:
      dist/css/title.css: {}

Fig. 21: Drupal libraries for global styles and component's styles.

/templates

Drupal's templates directory can be created anywhere within the theme. I typically like to create it inside the src directory. Go ahead and create it now.

  • Inside storybook.info.yml, add a new Twig namespace for the templates directory. See example above. Update your path accordingly based on where you created your templates directory.

P.S.: When the Vite project was originally created at the beginning of the post, Vite created files such as App.css, App.js, main.js, and index.html. All these files are in the root of the project and can be deleted. Deleting them won't affect any of the work we've done, but Vite will no longer run on its own, which we don't need it to anyway.

In closing

I realize this is a very long post, but there is really no way around it when covering this many topics in a single post. I hope you found the content useful and can apply it to your next Drupal project. There are different ways to do what I've covered in this post, and I challenge you to find better and more efficient ways. For now, thanks for visiting.

Download the theme

A full version of the Drupal theme built with this post can be downloaded.

Download the theme

Make sure you are using the theme branch from the repo.

Categories: FLOSS Project Planets

Mario Hernandez: Integrating Drupal with Storybook components

Planet Drupal - Sat, 2024-06-15 22:24

Hey you're back! 🙂 In the previous post we talked about how to build a custom Drupal theme using Storybook as the design system. We also built a simple component to demonstrate how Storybook, using custom extensions, can understand Twig. In this post, the focus will be on making Drupal aware of those components by connecting Drupal to Storybook.
If you are following along, we will continue where we left off to take advantage of all the prep work we did in the previous post. Topics we will cover in this post include:

  1. What is Drupal integration
  2. Installing and preparing Drupal for integration
  3. Building components in Storybook
  4. Building a basic front-end workflow
  5. Integrating Drupal with Storybook components
What is Drupal integration?

In the context of Drupal development using the component-driven methodology, Drupal integration means connecting Drupal presenter templates such as node.html.twig, block.html.twig, paragraph.html.twig, etc. to Storybook by mapping Drupal fields to component fields in Storybook. This in turn allows your Drupal content to be rendered wrapped in the Storybook components.

The advantage of using a design system like Storybook is that you are in full control of the markup when building components; as a result, your website is more semantic, accessible, and easier to maintain.

Building more components in Storybook

The title component we built in the previous post may not be enough to demonstrate some of the advanced techniques when integrating components. We will build a larger component to put these techniques in practice. The component we will build is called Card and it looks like this:

When building components, I like to take inventory of the different parts that make up the components I'm building. The card image above shows three parts: An image, a title, and teaser text. Each of these parts translates into fields when I am defining the data structure for the component or building the entity in Drupal.

Building the Card component
  • Open the Drupal site in your code editor and navigate to the storybook theme (web/themes/custom/storybook)
  • Create two new directories inside components called 01-atoms and 02-molecules
  • Inside 02-molecules create a new directory called card
  • Inside the card directory add the following four files:
    • card.css: component's styles
    • card.twig: component's markup and logic
    • card.stories.jsx: Storybook's story
    • card.yml: component's demo data
  • Add the following code snippet to card.yml:
---
modifier: ''
image: <img src="https://source.unsplash.com/cHRDevKFDBw/640x360" alt="Palm trees near city buildings" />
title:
  level: 2
  modifier: ''
  text: 'Tours & Experiences'
  url: 'https://mariohernandez.io'
teaser: 'Step inside for a tour. We offer a variety of tours and experiences to explore the building’s architecture, take you backstage, and uncover the best food and drink. Tours are offered in different languages and for different levels of mobility.'
  • Add the following to card.twig to provide the markup and logic for the card:
{{ attach_library('storybook/card') }}

<article class="card{{ modifier ? ' ' ~ modifier }}{{- attributes ? ' ' ~ attributes.class -}}"
  {{- attributes ? attributes|without('class') -}}>
  {% if image %}
    <div class="card__image">
      <figure>
        {{ image }}
      </figure>
    </div>
  {% endif %}
  <div class="card__content">
    {% if title %}
      {% include "@atoms/title/title.twig" with {
        'level': title.level,
        'modifier': title.modifier,
        'text': title.text,
        'url': title.url,
      } only %}
    {% endif %}
    {% if teaser %}
      <p class="card__teaser">{{ teaser }}</p>
    {% endif %}
  </div>
</article>

Code snippet for building card

  • Copy and paste these styles into card.css.

  • Finally, let's create the Storybook card story by adding the following to card.stories.jsx:

import parse from 'html-react-parser';
import card from './card.twig';
import data from './card.yml';
import './card.css';

const component = {
  title: 'Molecules/Card',
};

export const Card = {
  render: (args) => parse(card(args)),
  args: { ...data },
};

export default component;

Let's go over a few things regarding the code above:

  • The data structure in card.yml reflects the data structure and type we will use in Drupal.
    • The image field uses the entire <img> element rather than just the image src and alt attributes. The reason is that when we get to Drupal, we can use Drupal's full image entity. This is a good practice for caching purposes.
  • card.twig reuses the title component we created in the previous post. Rather than build a title from scratch for the Card and repeat the code we already wrote, reusing the existing components keeps us DRY.
  • card.stories.jsx is the Storybook story for the Card; notice how the code in this file is very similar to the code in title.stories.jsx. Even with complex components, when we port them into Storybook as stories, most of the time the code will be similar to what you see above, because Storybook is simply parsing whatever is in the .twig and .yml files. There are exceptions where the React code may have extra parameters or logic, which typically happens when we're building story variations. Maybe a topic for a different blog post. 😉
Before we preview the Card, some updates are needed

You may have noticed in card.twig we used the namespace @atoms when nesting the title component. This namespace does not exist yet, and we need to create it now. In addition, we need to move the title component into the 01-atoms directory:

  • In your code editor or command line (whichever is easier), move the title directory into the 01-atoms directory
  • In your editor, open title.stories.jsx and change the line
    title: 'Components/Title' to title: 'Atoms/Title'. This will display the title component within the Atoms category in Storybook's sidebar.
  • Rather than have you make individual changes to vite.config.js, let's replace/overwrite all its content with the following:
/* eslint-disable */
import { defineConfig } from 'vite'
import yml from '@modyfi/vite-plugin-yaml';
import twig from 'vite-plugin-twig-drupal';
import { join } from 'node:path'

export default defineConfig({
  root: 'src',
  publicDir: 'public',
  build: {
    emptyOutDir: true,
    outDir: '../dist',
    rollupOptions: {
      input: {
        'reset': './src/css/reset.css',
        'styles': './src/css/styles.css',
        'card': './src/components/02-molecules/card/card.css',
      },
      output: {
        assetFileNames: 'css/[name].css',
      },
    },
    sourcemap: true,
  },
  plugins: [
    twig({
      namespaces: {
        atoms: join(__dirname, './src/components/01-atoms'),
        molecules: join(__dirname, './src/components/02-molecules'),
      },
    }),
    // Allows Storybook to read data from YAML files.
    yml(),
  ],
})

Let's go over some of the most noticeable updates inside vite.config.js:

  • We have defined a few things to improve the functionality of our Vite project, starting with using src as our app root directory and public for publicDir. This helps the app understand the project structure in a relative manner.

  • Next, we defined a build task which provides the app with defaults for things like where it should compile code to (i.e. /dist), and rollupOptions for instructing the app which stylesheets to compile and what to call them.

  • As part of the rollupOptions we also defined two stylesheets for global styles (reset.css and styles.css). We'll create these next.

    Important: This is as basic as it gets for a build workflow, and in no way would I recommend this be your front-end build workflow. When working on bigger projects with more components, it is best to define a more robust and dynamic workflow that automates all the repetitive tasks performed on a typical front-end project.
  • Under the Plugins section, we have defined two new namespaces, @atoms and @molecules, each of which points to a specific path within our components directory. These are the namespaces Storybook understands when nesting components. You can have as many namespaces as needed.

Adding global styles
  • Inside storybook/src, create a new directory called css
  • Inside the css directory, add two new files, reset.css and styles.css
  • Here are the styles for reset.css and styles.css. Please copy them and paste them into each of the stylesheets.
  • Now for Storybook to use reset.css and styles.css, we need to update /.storybook/preview.js by adding these two imports directly after the current imports, around line 4.
import '../dist/css/reset.css';
import '../dist/css/styles.css';

Previewing the Card in Storybook

Remember, you need NodeJS v20 or higher as well as NVM installed on your machine.
  • In your command line, navigate to the storybook directory and run:
nvm install
npm install
npm run build
npm run storybook

A quick note about the commands above:

  • nvm install and npm install are typically only run once in your app. These commands will first install and use the node version specified in .nvmrc, and then install all the required node packages found in package.json. If you happen to be working on another project that uses a different version of node, when you come back to the Storybook project you will need to run nvm use in order to resume using the right node version.
  • npm run build is usually only run when you have made configuration changes to the project or are introducing new files.
  • npm run storybook is the command you will use all the time when you want to run Storybook.

After Storybook launches, you should see two story categories in Storybook's sidebar, Atoms and Molecules. The title component should be under Atoms and the Card under Molecules. See below:

Installing Drupal and setting up the Storybook theme

We have completed all the prep work in Storybook and our attention now will be all in Drupal. In the previous post all the work we did was in a standalone project which did not require Drupal to run. In this post, we need a Drupal site to be able to do the integration with Storybook. If you are following along and already have a Drupal 10 site ready, you can skip the first step below.

  1. Build a basic Drupal 10 website (I recommend using DDEV).
  2. Add the storybook theme to your website. If you completed the exercise in the previous post, you can copy the theme you built into your site's /themes/custom/ directory. Otherwise, you can clone the previous post's repo into the same location so it becomes your theme. After this, your theme's path should be themes/custom/storybook.
  3. No need to enable the theme just yet; we'll come back to it shortly.
  4. Finally, create a new Article post that includes a title, body content and an image. We'll use this article later in the process.
Creating Drupal namespaces and adding Libraries

Earlier we created namespaces for Storybook; now we will do the same for Drupal. It is best if the namespace names in Storybook and Drupal match, for consistency. In addition, we will create Drupal libraries to allow Drupal to use the CSS we've written.

  • Install and enable the Components module
  • Add the following namespaces at the end of storybook.info.yml (mind your indentation):
components:
  namespaces:
    atoms: src/components/01-atoms
    molecules: src/components/02-molecules
  • Replace all content in storybook.libraries.yml with the following:
global:
  version: VERSION
  css:
    base:
      dist/css/reset.css: {}
      dist/css/styles.css: {}

card:
  css:
    component:
      dist/css/card.css: {}
  • Let's go over the changes to both the storybook.info.yml and storybook.libraries.yml files:

    • Using the Components module, we created two namespaces: @atoms and @molecules. Each namespace is associated with a specific path to the corresponding components. This is important because Drupal, by default, only looks for Twig templates inside the /templates directory; without the Components module and the namespaces, it would not know to look for our components' Twig templates inside the components directory.
    • Then we created two Drupal libraries: global and card. The global library includes two CSS stylesheets (reset.css and styles.css), which handle base styles in our theme. The card library includes the styles we wrote for the Card component. As you may have noticed when we created the Card component, the first line inside card.twig is an attach_library statement. Basically, card.twig expects a Drupal library called card.
Turn Twig debugging on

All the pieces are in place to integrate the Card component so Drupal can use it to render article nodes when viewed in teaser view mode.

  • The first thing we need to do to begin the integration process is to determine which Twig template Drupal uses to render article nodes in teaser view mode. One easy way to do this is by turning Twig debugging on. This used to be a complex configuration but starting with Drupal 10.1 you can now do it directly in Drupal's UI:

    • While logged in with admin access, navigate to /admin/config/development/settings on your browser. This will bring up the Development settings page.
    • Check all the boxes on this page and click Save settings. This will enable Twig debugging and disable caching.
    • Now navigate to /admin/config/development/performance so we can turn CSS and JS aggregation off.
    • Under Bandwidth optimization, clear the two boxes for CSS and JavaScript aggregation, then click Save configuration.
    • Lastly, click the Clear all caches button. This will ensure any CSS or JS we write will be available without having to clear caches.
  • With Twig debugging on, go to the homepage where the Article we created should be displayed in teaser mode. If you right-click on any part of the article and select inspect from the context menu, you will see in detail all the templates Drupal is using to render the content on the current page. See example below.

    Note I am using a new basic Drupal site with Olivero as the default theme. If your homepage does not display Article nodes in teaser view mode, you could create a simple Drupal view to list Article nodes in teaser view mode to follow along.

In the example above, we see a list of templates that start with node...*. These are called template suggestions and are the names Drupal is suggesting we can assign our custom templates. The higher the template appears on the list, the more specific it is to the piece of content being rendered. For example, changes made to node.html.twig would affect ALL nodes throughout the site, whereas changes made to node--1--teaser.html.twig will only affect the first node created on the site but only when it's viewed in teaser view mode.

Notice I marked the template name Drupal is using to render the Article node. We know this is the template because it has an X before the template name.

In addition, I also marked the template path. As you can see the current template is located in core/themes/olivero/templates/content/node--teaser.html.twig.

And finally, I marked examples of attributes Drupal is injecting in the markup. These attributes may not always be useful but it is a good practice to ensure they are available even when we are writing custom markup for our components.

Create a template suggestion

By looking at the path of the template in the code inspector, we can see that the original template being used is located inside the Olivero core theme. The debugging screenshot above shows a pretty extensive list of template suggestions, and based on our requirements, copying the file node--teaser.html.twig makes sense since we are going to be working with a node in teaser view mode.

  • Copy /core/themes/olivero/templates/content/node--teaser.html.twig into your theme's /storybook/templates/content/. Create the directory if it does not exist.
  • Now rename the newly copied template to node--article--teaser.html.twig.
  • Clear Drupal's cache since we are introducing a new Twig template.

As you can see, by renaming the template node--article--teaser (one of the names listed as a suggestion), we are indicating that any changes we make to this template will only affect nodes of type Article which are displayed in Teaser view mode. So whenever an Article node is displayed, if it is in teaser view mode, it will use the Card component to render it.

The template has a lot of information that may or may not be needed when integrating it with Storybook. If you recall, the Card component we built was made up of three parts: an image, a title, and teaser text. Each of those is a Drupal field, and these are the only fields we care about when integrating. Whenever I copy a template from Drupal core or a module into my theme, I like to keep the comments in the template untouched. This is helpful in case I need to reference any variables or elements of the template.

The actual integration ...Finally
  1. Delete everything from the newly copied template except the comments and the classes array variable
  2. At the bottom of what is left in the template add the following code snippet:
{% set render_content = content|render %}

{% set article_title = {
  'level': 2,
  'modifier': 'card__title',
  'text': label,
  'url': url,
} %}

{% include '@molecules/card/card.twig' with {
  'attributes': attributes.addClass(classes),
  'image': content.field_image,
  'title': article_title,
  'teaser': content.body,
} only %}
  • We set a variable with content|render as its value. The only purpose for this variable is to make Drupal aware of the entire content array for caching purposes. More info here.
  • Next, we set up a variable called article_title, structured the same way as the data inside card.yml. Having similar data structures between Drupal and our components provides many advantages during the integration process.
    • Notice how for the text and url properties we are using Drupal-specific variables (label and url), respectively. If you look at the comments in node--article--teaser.html.twig you will see these two variables documented.
  • We are using a Twig include statement with the @molecules namespace to nest the Card component into the node template, the same way we nested the Title component into the Card. (See the note after this list on how that namespace gets registered on the Drupal side.)
  • We mapped Drupal's attributes into the component's attributes placeholder so Drupal can inject any attributes such as CSS classes, IDs, Data attributes, etc. into the component.
  • Finally, we mapped the image, title and teaser fields from Drupal to the component's equivalent fields.
  • Save the changes to the template and clear Drupal's cache.
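For the @molecules namespace to resolve on the Drupal side as well, it has to be registered somewhere in your setup. How you do this depends on your environment; as a purely hypothetical sketch, the contrib Components module lets you declare Twig namespaces in the theme's info file (the path and naming below are assumptions, not necessarily the author's actual setup):

# storybook.info.yml (excerpt)
components:
  namespaces:
    molecules: src/components/molecules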
Enable the Storybook theme

Before we forget, let's enable the Storybook theme and also make it your default theme, otherwise none of the work we are doing will be visible, since Olivero is currently the default theme. Clear caches after this is done.
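If you prefer the command line over the Appearance page, here is a quick sketch using Drush (assuming the theme's machine name is storybook):

drush theme:enable storybook
drush config:set system.theme default storybook -y
drush cache:rebuild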

Previewing the Article node as a Card

Integration is done and we switched our default theme to Storybook. After clearing caches, if you reload the homepage you should be able to see the Article node you wrote, but this time displayed as a card. See below:

  • If you right-click on the article and select Inspect, you will notice the following:
    • Drupal is now using node--article--teaser.html.twig. This is the template we created.
    • The template path is now themes/custom/storybook/src/templates/content/.
    • You will also notice that the article is using the custom markup we wrote for the Card component, which is more semantic and accessible. In addition, the <article> tag is also inheriting several other attributes that were provided by Drupal through its attributes variable. See below:
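As a rough sketch (your exact classes and attributes will differ, and the inner markup depends on how the Card component was built), the inspected markup might look something like this:

<article data-history-node-id="1" role="article" about="/node/1" class="card">
  <!-- Card component markup: image, title, and teaser fields -->
</article>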

If your card's image size or aspect ratio does not match the one in Storybook, this is probably due to the image style being used in the Article's Teaser view mode. You can address this by:

  • Going to the Manage display tab of the Article's Teaser view mode (/admin/structure/types/manage/article/display/teaser).
  • Changing the image style of the Image field to one that works better for your image.
  • Previewing the article again on the homepage to confirm it looks better.
In closing

This is only a small example of how to build a simple component in Storybook using Twig and then integrate it with Drupal, so content is rendered in a more semantic and accessible manner. There are many more advantages to implementing a system like this. I hope this was helpful and that you can see the potential of a component-driven environment using Storybook. Thanks for visiting.

Download the code

For a full copy of the code base, which includes the work in this and the previous post, clone or download the repo and switch to the card branch. The main branch only includes the previous post's code.

Download the code

Categories: FLOSS Project Planets

Mario Hernandez: Migrating from Patternlab to Storybook

Planet Drupal - Sat, 2024-06-15 22:24

Building a custom Drupal theme nowadays is a more complex process than it used to be. Most themes require some kind of build tool such as Gulp, Grunt, Webpack, or others to automate many of the repetitive tasks we perform when working on the front end: tasks like compiling and minifying code, compressing images, linting code, and many more. As Atomic Web Design became a thing, things got more complicated, because if you are building components you also need a styleguide or design system to showcase and maintain those components. One of those design systems for me has been Patternlab. I started using Patternlab in all my Drupal projects almost ten years ago with great success. In addition, Patternlab has been the design system of choice at my place of work, but one of my immediate tasks was to migrate to a different design system. We have a small team, but we were very excited about the challenge of finding and adopting a more modern and robust design system for our large multi-site Drupal environment.

Enter Storybook

After looking at various options for a design system, Storybook seemed to be the right choice for us for a couple of reasons: one, it has been around for about 10 years and has matured significantly during that time; and two, it has become a very popular option in the Drupal ecosystem. In some ways, Storybook follows the same model as Drupal: it has a pretty active community and a very healthy ecosystem of plugins to extend its core functionality.

Storybook looks very promising as a design system for Drupal projects and with the recent release of Single Directory Components or SDC, and the new Storybook module, we think things can only get better for Drupal front-end development. Unfortunately for us, technical limitations in combination with our specific requirements, prevented us from using SDC or the Storybook module. Instead, we built our environment from scratch with a stand-alone integration of Storybook 8.

INFO: At the time of our implementation, TwigJS did not have the capability to resolve SDC's namespace. It appears this has been addressed and using SDC should now be possible with this custom setup. I haven't personally tried it and therefore I can't confirm.

Our process and requirements

In choosing Storybook, we went through a rigorous research and testing process to ensure it would not only solve our immediate problems with our current environment, but would also be around as a long-term solution. As part of this process, we also tested several available options like Emulsify and Gesso, which would be great options for anyone looking for a ready-to-go system out of the box. Some of our requirements included:

1. No components refactoring

The first and non-negotiable requirement was to be able to migrate components from Patternlab to a new design system with as little refactoring as possible. We have a decent number of components that were built within the last year, and the last thing we wanted was to rebuild them because we were switching design systems.

2. A new Front-end build workflow

I personally have been faithful to Gulp as a front-end build tool for as long as I can remember, because it did everything I needed in a very efficient manner. The Drupal project we maintain also used Gulp, but as part of this migration we wanted to see what other options were out there that could improve our workflow. The obvious choice seemed to be Webpack, but as we looked closer into this we learned about ViteJS, "Next Generation Frontend Tooling". Vite delivers on its promise of being "blazing fast", and its ecosystem is great and growing, so we went with it.

3. No more Sass in favor of PostCSS

CSS has drastically improved in recent years. It is now possible, with plain CSS, to do many of the things you previously could only do with Sass or a similar CSS preprocessor. Eliminating Sass from our workflow meant we would also be able to get rid of many other node dependencies related to Sass. The goal for this project was to use plain CSS in combination with PostCSS, and one bonus of using Vite is that it offers PostCSS processing out of the box without additional plugins or dependencies. Of course, if you want to do more advanced PostCSS processing, you will probably need some external dependencies.
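As a minimal sketch of what that might look like: Vite automatically picks up a postcss.config.js at the project root, and postcss-preset-env (an assumption here, not a requirement of this setup) is one popular plugin for processing modern CSS:

// postcss.config.js
export default {
  plugins: {
    // Compile modern CSS features (nesting, custom media, etc.)
    // down to widely supported CSS.
    'postcss-preset-env': {
      stage: 2,
    },
  },
};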

Building a new Drupal theme with Storybook

Let's go over the steps to building the base of your new Drupal theme with ViteJS and Storybook. This will be at a high-level to callout only the most important and Drupal-related parts. This process will create a brand new theme. If you already have a theme you would like to use, make the appropriate changes to the instructions.

1. Set up Storybook with ViteJS

ViteJS
  • In your Drupal project, navigate to the theme's directory (i.e. /web/themes/custom/)
  • Run the following command:
npm create vite@latest storybook
  • When prompted, select the framework of your choice; for us the framework is React.
  • When prompted, select the variant for your project; for us this is JavaScript.

After the setup finishes you will have a basic Vite project running.
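You can verify the scaffold works before moving on (assuming the default scripts Vite generates):

cd storybook
npm install
npm run dev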

Storybook
  • Be sure your system is running NodeJS version 18 or higher
  • Inside the newly created theme, run this command:
npx storybook@latest init --type react
  • After installation completes, you will have a new Storybook instance running
  • If Storybook didn't start on its own, start it by running:
npm run storybook

TwigJS

Twig templates are server-side templates that Drupal normally renders to HTML with TwigPHP, but Storybook is a JavaScript tool. TwigJS is the JavaScript equivalent of TwigPHP, which lets Storybook understand Twig. Let's install all the dependencies needed for Storybook to work with Twig.

  • If Storybook is still running, press Ctrl + C to stop it
  • Then run the following command:
npm i -D vite-plugin-twig-drupal html-react-parser twig-drupal-filters @modyfi/vite-plugin-yaml
  • vite-plugin-twig-drupal: If you are using Vite like we are, this is a Vite plugin that handles transforming twig files into a Javascript function that can be used with Storybook. This plugin includes the following:
    • Twig or TwigJS: This is the JavaScript implementation of the Twig PHP templating language. This allows Storybook to understand Twig.
      Note: TwigJS may not always be in sync with the version of Twig PHP in Drupal, and you may run into issues when using certain Twig functions or filters; however, we are adding other extensions that may help with the incompatibility issues.
    • drupal attribute: Adds the ability to work with Drupal attributes.
  • twig-drupal-filters: TwigJS implementation of Drupal's Twig functions and filters.
  • html-react-parser: This extension is key for Storybook to parse HTML code into react elements.
  • @modyfi/vite-plugin-yaml: Transforms a YAML file into a JS object. This is useful for passing the component's data to React as args.
ViteJS configuration

Update your vite.config.js so it makes use of the new extensions we just installed, as well as configuring the namespaces for our components.

import { defineConfig } from "vite"
import yml from '@modyfi/vite-plugin-yaml';
import twig from 'vite-plugin-twig-drupal';
import { join } from "node:path"

export default defineConfig({
  plugins: [
    twig({
      namespaces: {
        components: join(__dirname, "./src/components"),
        // Other namespaces may be added.
      },
    }),
    // Allows Storybook to read data from YAML files.
    yml(),
  ],
})

Storybook configuration

Out of the box, Storybook comes with main.js and preview.js inside the .storybook directory. These two files are where a lot of Storybook's configuration is done. We are going to define the location of our components, the same location we used in vite.config.js above (we'll create this directory shortly). We are also going to add a quick config inside preview.js for handling Drupal filters.

  • Inside .storybook/main.js file, update the stories array as follows:
stories: [
  "../src/components/**/*.mdx",
  "../src/components/**/*.stories.@(js|jsx|mjs|ts|tsx)",
],
  • Inside .storybook/preview.js, update it as follows:
/** @type { import('@storybook/react').Preview } */
import Twig from 'twig';
import drupalFilters from 'twig-drupal-filters';

function setupFilters(twig) {
  twig.cache();
  drupalFilters(twig);
  return twig;
}

setupFilters(Twig);

const preview = {
  parameters: {
    controls: {
      matchers: {
        color: /(background|color)$/i,
        date: /Date$/i,
      },
    },
  },
};

export default preview;

Creating the components directory
  • If Storybook is still running, press Ctrl + C to stop it
  • Inside the src directory, create the components directory, as sketched below. Alternatively, you could rename the existing stories directory to components.
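For example (paths are assumptions based on the setup above):

cd themes/custom/storybook
mkdir -p src/components
# or, to reuse the scaffolded examples instead:
# mv src/stories src/components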
Creating your first component

With the current system in place we can start building components. We'll start with a very simple component to try things out first.

  • Inside src/components, create a new directory called title
  • Inside the title directory, create the following files: title.yml and title.twig
Writing the code
  • Inside title.yml, add the following:
---
level: 2
modifier: 'title'
text: 'Welcome to your new Drupal theme with Storybook!'
url: 'https://mariohernandez.io'
  • Inside title.twig, add the following:
<h{{ level|default(2) }}{% if modifier %} class="{{ modifier }}"{% endif %}>
  {% if url %}
    <a href="{{ url }}">{{ text }}</a>
  {% else %}
    <span>{{ text }}</span>
  {% endif %}
</h{{ level|default(2) }}>

We have a simple title component that will print a title for anything you want. The level key allows us to change the heading level of the title (i.e. h1, h2, h3, etc.), the modifier key allows us to pass a modifier class to the component, and the url key is helpful when our title needs to be a link to another page or component.
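To illustrate how this component would be consumed, here is a hypothetical include from another Twig template, using the components namespace we configured in vite.config.js (the values below are made up for the example):

{% include '@components/title/title.twig' with {
  'level': 3,
  'modifier': 'section__title',
  'text': 'Latest articles',
  'url': 'https://example.com/articles',
} only %}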

Currently the title component is not available in Storybook. Storybook uses a special file to display each component as a story; the file name follows the pattern component-name.stories.jsx.

  • Inside title create a file called title.stories.jsx
  • Inside the stories file, add the following:
/**
 * First we import the `html-react-parser` extension to be able to
 * parse HTML into react.
 */
import parse from 'html-react-parser';

/**
 * Next we import the component's markup and logic (twig), data schema (yml),
 * as well as any styles or JS the component may use.
 */
import title from './title.twig';
import data from './title.yml';

/**
 * Next we define a default configuration for the component to use.
 * These settings will be inherited by all stories of the component,
 * should the component have multiple variations.
 * `component` is an arbitrary name assigned to the default configuration.
 * `title` determines the location and name of the story in Storybook's sidebar.
 * `render` uses the parser extension to render the component's html to react.
 * `args` uses the variables defined in title.yml as react arguments.
 */
const component = {
  title: 'Components/Title',
  render: (args) => parse(title(args)),
  args: { ...data },
};

/**
 * Export the Title and render it in Storybook as a Story.
 * The `name` key allows you to assign a name to each story of the component.
 * For example: `Title`, `Title dark`, `Title light`, etc.
 */
export const TitleElement = {
  name: 'Title',
};

/**
 * Finally export the default object, `component`. Storybook/React requires this step.
 */
export default component;
  • If Storybook is running you should see the title story. See example below:
  • Otherwise start Storybook by running:
npm run storybook

With Storybook running, the title component should look like the image below:


The controls highlighted at the bottom of the title allow you to change the values of each of the fields for the title.

I wanted to start with the simplest of components, the title, to show how Storybook, with help from the extensions we installed, understands Twig. The good news is that the same approach we took with the title component works on even more complex components. Even the React code we wrote does not change much on large components.

In the next blog post, we will build more components that nest smaller components, and we will also add Drupal related parts and configuration to our theme so we can begin using the theme in a Drupal site. Finally, we will integrate the components we built in Storybook with Drupal so our content can be rendered using the component we're building. Stay tuned. For now, if you want to grab a copy of all the code in this post, you can do so below.

Download the code

In closing

Getting to this point was a team effort and I'd like to thank Chaz Chumley, a Senior Software Engineer, who did a lot of the configuration discussed in this post. In addition, I am thankful to the Emulsify and Gesso teams for letting us pick their brains during our research. Their help was critical in this process.

I hope this was helpful and if there is anything I can help you with in your journey of a Storybook-friendly Drupal theme, feel free to reach out.

Categories: FLOSS Project Planets
