Feeds

The Drop Times: Blueprint to the Stars!

Planet Drupal - Mon, 2024-06-03 12:25

Dear Readers,

Since my younger days, I have been fascinated by a quote, and it has kept me in check in all my endeavours.

"If you fail to plan, you plan to fail!"

Achieving milestones in any initiative requires meticulous planning and strategic execution, and the Drupal Starshot Initiative is no exception. As the guiding star for Drupal's future, the Starshot Initiative embodies a collaborative effort to propel the platform to unprecedented heights. The process behind this ambitious endeavour is as critical as the vision itself.

Central to the success of the Starshot Initiative is the detailed roadmap that outlines the project's phases, from ideation to implementation. This roadmap is not just a timeline of events but a dynamic blueprint that evolves with contributions from the Drupal community. Regular interactive sessions, such as the one headed by Dries Buytaert last week and the upcoming sessions headed by Drupal stalwarts, play a crucial role in this process. These sessions are designed to provide updates, gather feedback, and refine strategies to ensure that every step is aligned with the overarching goals.

The interactive nature of these sessions fosters a sense of unity and shared purpose, encouraging everyone to participate actively. By joining the #starshot channel on Drupal Slack, community members can stay informed and contribute meaningfully to the initiative. Through this collective effort, the Drupal community is working together to make the Starshot Initiative a beacon of innovation and excellence in the digital landscape.

With that, let's move on to last week's important news.

In a stimulating conversation with our sub-editor, Kazima Abbas, Brian Perry discusses the latest updates and future vision for the API Client Initiative. Dive into the in-depth interview to learn how the Drupal API Client is revolutionizing interaction with Drupal APIs and explore the exciting opportunities it presents for web development enthusiasts.

DrupalJam 2024, set to take place on June 12th at the Fabrique in Utrecht, marks a significant milestone as it celebrates its 20th edition. Organized by a team of dedicated volunteers, DrupalJam has earned international recognition for its professional quality and community-driven ethos. Kazima Abbas brings you exclusive insights from the organizers into the event's schedule, speakers, and opportunities for engagement.

The Drupal community is agile, with a great many events. DrupalCon Portland 2024 concluded with the promise of two more Drupal conferences this year: DrupalCon Barcelona and DrupalCon Singapore 2024. The program for DrupalCon Barcelona is now live, and DrupalCon Singapore has opened up sponsorship opportunities. As for regional events, Drupal Developer Days Bulgaria 2024 is set to take place in Burgas from June 26 to 28, and tickets are now available!

Excitement is afoot with announcements of events happening next year. Save the date for Florida DrupalCamp 2025, slated to convene at Florida Technical College from February 21 to 23, 2025. The 5th edition of Drupal MountainCamp is scheduled for March 11 to 13, 2025, in Davos, Switzerland.

The DropTimes has compiled notable Drupal events happening throughout the week of June 3rd to June 9th. This curated list offers a glimpse into the varied activities taking place within the Drupal community, catering to enthusiasts of all skill levels. Read here.

On May 31, 2024, Dries Buytaert led the first interactive Zoom session of the Drupal Starshot series, focusing on participation, funding, and governance. This session covered various topics, including the sentiment around DrupalCon pledges and blog posts, ways for the community to get involved, and innovative funding ideas such as Drupal Certified Partners. 

Wim Leers announced the official opening of the 0.x branch for the Experience Builder initiative. Sponsored full-time by Acquia, Dries Buytaert formally introduced the initiative at DrupalCon Portland 2024, following extensive research conducted by Drupal core product manager Lauri Eskola.

The Indian Space Research Organisation (ISRO) is modernizing its grant management with the I-GRASP initiative, partnering with Quilltez and leveraging Drupal to streamline proposal submissions and reviews. The new online platform enhances efficiency, transparency, and security, reducing processing time and fostering stronger research collaborations. This technological advancement marks a significant milestone in ISRO's mission to drive innovation in space exploration.

The Drupal Association has announced the launch of a new initiative to empower local Drupal communities worldwide. Led by Programs Manager Joi Garrett, the Local Associations Initiative is designed to support the success of Drupal Local Associations by engaging directly with community leaders who promote the Drupal project in their regions. 

Jürgen Haas has announced the release of ECA 2.0.0-beta1 for Drupal, marking a milestone in the lead-up to the final ECA 2 release. This beta version introduces several major improvements and new features designed to enhance the functionality and performance of Drupal sites. Additionally, the latest version of the Smart Date module, 4.1, has been officially released, marking a significant milestone exactly one year after the first stable release of version 4.0. Led by Martin Anderson-Clutz, this update brings a host of improvements and new functionalities, making it ready for Drupal 11.

Ines Wallon, a Drupal Practice Leader and advocate for FLOSS, has launched Drupal GitLab Toolbox, a new project designed to enhance the continuous integration (CI) process for Drupal developers. The project offers a versatile GitLab CI pipeline specifically tailored for Drupal projects.

We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.

To get timely updates, follow us on LinkedIn, Twitter and Facebook. Also, join us on Drupal Slack at #thedroptimes.

Thank you,
Sincerely
Alka Elizabeth
Sub-editor, The DropTimes.

Categories: FLOSS Project Planets

Real Python: String Interpolation in Python: Exploring Available Tools

Planet Python - Mon, 2024-06-03 10:00

String interpolation allows you to create strings by inserting objects into specific places in a target string template. Python has several tools for string interpolation, including f-strings, the str.format() method, and the modulo operator (%). Python’s string module also provides the Template class, which you can use for string interpolation.

In this tutorial, you’ll:

  • Learn how to use f-strings for eager string interpolation
  • Perform lazy string interpolation using the str.format() method
  • Learn the basics of using the modulo operator (%) for string interpolation
  • Decide whether to use f-strings or the str.format() method for interpolation
  • Create templates for string interpolation with string.Template

To get the most out of this tutorial, you should be familiar with Python strings, which are represented by the str class.

Get Your Code: Click here to download the free sample code you’ll use to explore string interpolation tools in Python.

Take the Quiz: Test your knowledge with our interactive “String Interpolation in Python: Exploring Available Tools” quiz. You’ll receive a score upon completion to help you track your learning progress:

Take this quiz to test your understanding of the available tools for string interpolation in Python, as well as their strengths and weaknesses. These tools include f-strings, the .format() method, and the modulo operator.

String Interpolation in Python

Sometimes, when working with strings, you need to build new strings out of multiple different string values. Initially, you could use the plus operator (+) to concatenate strings in Python. However, this approach results in code with many quotes and pluses:

>>> name = "Pythonista"
>>> day = "Friday"  # Of course 😃
>>> "Hello, " + name + "! Today is " + day + "."
'Hello, Pythonista! Today is Friday.'

In this example, you build a string using some text and a couple of variables that hold string values. The many plus signs make the code hard to read and write. Python must have a better and cleaner way.

Note: To learn more about string concatenation in Python, check out the Efficient String Concatenation in Python tutorial.

The modulo operator (%) came to make the syntax a bit better:

Python >>> "Hello, %s! Today is %s." % (name, day) 'Hello, Pythonista! Today is Friday.' Copied!

In this example, you use the modulo operator to insert the name and day variables into the string literals. The process of creating strings by inserting other strings into them, as you did here, is known as string interpolation.

Note: Formatting with the modulo operator is inspired by printf() formatting used in C and many other programming languages.

The %s combination of characters is known as a conversion specifier. They work as replacement fields. The % operator marks the start of the specifier, while the s letter is the conversion type and tells the operator that you want to convert the input object into a string. You’ll learn more about conversion specifiers in the section about the modulo operator.

Note: In this tutorial, you’ll learn about two different types of string interpolation:

  1. Eager interpolation
  2. Lazy interpolation

In eager interpolation, Python inserts the values into the string at execution time in the same place where you define the string. In lazy interpolation, Python delays the insertion until the string is actually needed. In this latter case, you create string templates at one point in your code and fill the template with values at another point.

But the story doesn’t end with the modulo operator. Later, Python introduced the str.format() method:

Python >>> "Hello, {}! Today is {}.".format(name, day) 'Hello, Pythonista! Today is Friday.' Copied!

The method interpolates its arguments into the target string using replacement fields limited by curly brackets. Even though this method can produce hard-to-read code, it represents a significant advance over the modulo operator: it supports the string formatting mini-language.

Note: String formatting is a fundamental topic in Python, and sometimes, people think that formatting and interpolation are the same. However, they’re not. In this tutorial, you’ll only learn about interpolation. To learn about string formatting and the formatting mini-language, check out the Python’s Format Mini-Language for Tidy Strings tutorial.

Python continues to evolve, and every new version brings new, exciting features. Python 3.6 introduced formatted string literals, or f-strings for short:

Python >>> f"Hello, {name}! Today is {day}." 'Hello, Pythonista! Today is Friday.' Copied!

F-strings offer a more readable and clean way to create strings that include other strings. To make an f-string, you must prefix it with an f or F. Again, curly brackets delimit the replacement fields.

Note: To learn more about f-strings, check out the Python’s F-String for String Interpolation and Formatting tutorial.
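The excerpt stops before reaching string.Template, but as a quick, minimal sketch of that last tool (standard library only, with values chosen to match the examples above), the template is defined first and filled in later:

from string import Template

greeting = Template("Hello, $name! Today is $day.")

# Substitution is deferred until .substitute() is called (lazy interpolation).
print(greeting.substitute(name="Pythonista", day="Friday"))
# Hello, Pythonista! Today is Friday.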

Read the full article at https://realpython.com/python-string-interpolation/ »


Categories: FLOSS Project Planets

Mike Driscoll: Python Logging Book Released!

Planet Python - Mon, 2024-06-03 09:09

The latest Python book from Michael Driscoll is now out. You can get Python Logging today on any of your favorite platforms!  The Kindle version of the book is only 99 cents for a limited time!

What does every new developer do when they are first learning to program? They print out strings to their terminal. It’s how we learn! But printing out to the terminal isn’t what you do with most professional applications.

In those cases, you log. Sometimes, you log to multiple locations at once. These logs may serve as an audit trail for compliance purposes or help the engineers debug what went wrong.

Python Logging teaches you how to do logging in the Python programming language. Python is one of the most popular programming languages in the world. Python comes with a logging module that makes logging easy.
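For anyone who has only ever used print(), here is a minimal sketch of the standard library's logging module (the file name and messages are just examples):

import logging

# Configure the root logger once, near program startup.
logging.basicConfig(
    filename="app.log",  # example path; omit this to log to stderr instead
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

log = logging.getLogger(__name__)
log.info("Application started")
log.warning("Disk space is getting low")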

What You’ll Learn

In this book, you will learn about the following:

  • Logger objects
  • Log levels
  • Log handlers
  • Formatting your logs
  • Log configuration
  • Logging decorators
  • Rotating logs
  • Logging and concurrency
  • and more!
Where to Purchase

You can get Python Logging at the following websites:

Download the Code

You can get the code for the book from GitHub:

 

The post Python Logging Book Released! appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

PyCharm: The State of Django 2024

Planet Python - Mon, 2024-06-03 08:19

Are you curious to discover the latest trends in Django development?

In collaboration with the Django Foundation, PyCharm surveyed more than 4,000 Django developers from around the globe and analyzed the trends in framework usage based on their answers.

In this blog post, we share the following key findings with you:

  • Every third Django developer also uses Flask or FastAPI.
  • Most developers use Django for both full-stack and API development.
  • 61% of Django developers use asynchronous technologies.
  • And many more insights!

Dive in to learn about these findings in more detail and discover other trends in Django development, while also benefiting from illustrative infographics.

Backend: Every third Django developer also uses Flask or FastAPI

Django remains the go-to framework for 74% of developers, though this does represent a minor dip from last year when this figure stood at 83%. FastAPI has managed to maintain its popularity, as 25% of respondents reported using it. Meanwhile, Flask saw a slight decline in popularity (29% in 2022 to 26% in 2023).

33% of web developers who work primarily with Django also use Flask or FastAPI, showing diverse backend skills.

Taking into account that the majority of fully employed developers (49%) report working on several projects at the same time, this may indicate that they choose different tools for different purposes:

  • Django – for larger, more complex web apps due to its “batteries included” approach.
  • Flask – for simpler applications (especially static sites) or microservices.
  • FastAPI – to create API endpoints, especially if your application includes a lot of IO calls (especially for real-time web applications).

The fact that only 11% of all Django developers use all three frameworks might mean that most of them use Flask and FastAPI for similar purposes, shifting to FastAPI due to its async capabilities.

Want to learn how Django is different from Flask and FastAPI? Check out our detailed comparisons between Django and Flask, and Django and FastAPI to see which framework will best suit your needs.

Developing APIs: Most developers use Django for both full-stack and API development

This year’s survey revealed Django is popular for both full-stack (74%) and API development (60%), with a trend towards API work among fully employed devs. Fully employed developers are more likely to use Django for REST API development (65% vs. 60% on average), but less likely to use it for full-stack development (68% vs. 74% on average).

With the rising popularity of htmx, the trend might change in favor of using Django more for full-stack development. 

Interestingly, while DRF has retained the pole position among third-party packages, its popularity dipped as Django Ninja, known for its speed and typing capabilities, continues to gain ground. Django Ninja offers high performance and asynchronous capabilities, similar to another very popular choice for creating APIs, FastAPI, but within the Django ecosystem, which makes the learning curve shorter.
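For context (this is not from the survey itself), a minimal Django Ninja endpoint looks roughly like the sketch below; the route is made up for illustration, and wiring api.urls into urls.py is omitted:

from ninja import NinjaAPI

api = NinjaAPI()

@api.get("/users/{user_id}")
def get_user(request, user_id: int):
    # The type hint on user_id drives validation and the generated OpenAPI schema.
    return {"id": user_id, "name": "example"}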

Work with APIs? Read this tutorial to learn how to build APIs with the Django REST framework.

Async: 61% of Django developers use async

There’s a clear transition towards using asynchronous technologies among Django developers, with 61% now incorporating async in their projects (up from 53% last year).

FastAPI, which was built with async programming in mind, is now used by 21% of all Django developers who use async technologies. Django async views are also being used by more respondents (14%), though FastAPI is still more popular for async tasks. With more async support planned for upcoming Django 5 releases, the interest in using async in Django might increase even more.
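As a rough illustration (again, not part of the survey data), a Django async view lets independent I/O waits overlap; the fetch_part helper below is a stand-in for real async work such as an HTTP call or an async ORM query:

import asyncio

from django.http import JsonResponse

async def fetch_part(name: str) -> dict:
    # Stand-in for an I/O-bound call.
    await asyncio.sleep(0.1)
    return {"source": name}

async def dashboard(request):
    # Both awaits run concurrently instead of back to back.
    news, weather = await asyncio.gather(fetch_part("news"), fetch_part("weather"))
    return JsonResponse({"news": news, "weather": weather})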

Frontend: a shift in Django developers’ preferences towards htmx, Alpine.js, and Tailwind CSS

When it comes to frontend, JavaScript is still the most popular language, with 68% of developers stating that they use it. That said, it’s gradually conceding its comfortable lead (75% in 2021 and 2022) to TypeScript, which has grown significantly from 19% in 2021 to 28% in 2023. This increase in popularity is probably attributable to its static typing features, which help catch errors early in the development process and thus make code more robust and maintainable.

Professional Django developers still clearly favor JavaScript frameworks over their competitors, with usage rates of 26% for Vue, 35% for jQuery, and 42% for React, though their overall use is declining year over year.

Newer frameworks like htmx (which grew from 16% in 2022 to 23% in 2023) and Alpine.js (which grew from 6% to 10%) are rapidly gaining traction, suggesting a shift towards simpler tools for modern user interfaces. There is a dedicated django-htmx package developed by Adam Johnson.

Dennis Ivy on htmx:

“Happy to see HTMX here. I’m not the biggest fan (I default to React) but I think it’s suitable for many Django projects that need a little flexibility without having to move to a full fledged JS framework/library.”

We continue to see a downward trend for Bootstrap and significant growth for Tailwind CSS, whose popularity has doubled in the last two years. The increasing preference for Tailwind CSS over Bootstrap suggests a desire for more customizable and less prescriptive approaches to styling in web projects. Read this article from package creator Tim Kamanin for a guided introduction to using Tailwind CSS in Django.

Dennis Ivy on TailwindCSS:

“Love to see the rise of Tailwind-Django usage. Hope to see more native integration and educational content in this area.”

Databases: 75% of Django developers favor PostgreSQL, and 50% rely on Redis for caching

In the Django ecosystem, PostgreSQL leads the pack as the primary database choice (76%) among developers, highlighting the preference for robust, SQL-based systems for web applications. The interest in alternatives such as MariaDB (10%) and the NoSQL database MongoDB (8%) is also notable, reflecting a diversifying database landscape.

The inclusion of the schema-less and scalable MongoDB among the top database choices, despite its lack of official Django support, reflects developers’ willingness to integrate more flexible, document-oriented databases.

Do you work with MongoDB? Read this step-by-step guide on how to connect Django with MongoDB.

Dennis Ivy on MongoDB:

“This number surprises me considering the different approach Mongo takes and the incompatibility with Django. Curious to know the experience level of those 8%. I would suspect that those are newer devs and / or devs running experimental projects. 

After speaking to several MongoDB team members I was able to conclude my thoughts on this integration in an article.”

On the caching front, Redis remains the go-to solution (54%) for enhancing web app responsiveness, with Memcached (20%) also gaining traction.

Orchestration: More than 50% of Django developers use container orchestration 

Among the favored services, Amazon ECS/Fargate (19%) leads by virtue of its ease of use and integration with AWS, making it a natural choice for developers in the AWS ecosystem. 

Self-managed Kubernetes (14%) appeals to those seeking flexibility and control over their infrastructure, as well as the ability to easily migrate and share between private and public clouds. The popularity of Amazon EKS (12%) and Docker Swarm (12%) might be attributable to the balance they offer between manageability and scalability, catering to various deployment needs.

Deal with Kubernetes infrastructure? Read this guide on how to deploy Django apps in Kubernetes.

CI systems: GitHub Actions is leading the industry

GitHub Actions’ growth (45% in 2023 compared to 35% in 2021) in the CI field highlights its convenience for developers who already use GitHub for source code management. Its simplicity, leveraging straightforward YAML files for pipeline management, makes it an accessible and efficient tool for automating software workflows directly within GitHub’s ecosystem. Moreover, it offers the flexibility of using custom hardware configurations that meet your needs with enough processing power or memory to run larger jobs.

IaC: Infrastructure as code is used by 39% of Django developers

The use of infrastructure as code solutions by 39% of respondents underscores a growing trend towards automation and infrastructure management through code. For larger projects, IaC can ensure more reliable, repeatable, and scalable infrastructure setups. Terraform is the most commonly used infrastructure as code provisioning engine, with 20% of all respondents preferring it over the other options.

Interestingly, an open-source solution, Pulumi, was chosen by 5% of the respondents. The reason for this could be the fact that, right from its inception, Pulumi has offered the flexibility to use any programming language for managing infrastructure. This makes Pulumi widely accessible to developers and DevOps engineers from any background. Terraform began offering a similar option via a CDK in 2022.

Insights based on Django developers’ jobs and experience

Django learning resources

Fully employed developers watch less YouTube to learn Django (32% vs. 39% on average) and use fewer AI tools for this purpose (22% vs. 25% on average).

Among team leads, the Django News newsletter, HackerNews, Reddit, and even X (formerly Twitter) are more popular methods of staying up to date with Django developments. They even report learning more often from their friends (16% vs. 11% on average). 

Junior professionals use YouTube and StackOverflow much more often for both educational purposes and to keep abreast of developments in the Django ecosystem. When it comes to their preferred learning method, they tend to use new AI tools more often than their senior colleagues (38% vs. 25% on average).

Miscellaneous
  • Fully employed Django developers are more likely to use Django only for work (23% vs. 17% on average). 
  • The core component that team leads and fully employed developers as a group favor most is migrations, while their less preferred components include authentication, templates, and even class-based views.
  • As their main editor, team leads tend to prefer PyCharm (31% vs. 29% on average) and Vim (12% vs. 7% on average) to VS Code (31% vs. 29% on average).
Start developing Django apps with PyCharm

Do you work with Django? PyCharm is the best-in-class IDE for Django. Code faster with Django-specific code insights, code completion, and highlighting. Navigate across your project easily. Connect to your database in a single click, and work on TypeScript, JavaScript, and other frontend frameworks. PyCharm also supports Flask and FastAPI out of the box.

Try PyCharm for free

Survey demographics

After filtering out duplicate and unreliable responses, the data set includes around 4,000 responses collected between September and October 2023.

Regional distribution

44% of respondents are based in Europe, 19% in North America, and 17% in Asia.

Age distribution

Most of the respondents are in the 21–49 age range. 38% of all respondents are aged 31–39, while 30% are aged 21–29.

Professional coding experience

The majority of respondents have been coding professionally for 11+ years. 24% of the survey participants have 3–5 years of professional coding experience, and 19% have worked as professional developers for 6–10 years. Those who have been developing professionally for less than two years represent another 25% of the survey respondents.

Job roles

79% of respondents stated that their job roles include development/programming or software engineering. 16% of respondents are team leads. 10% of the respondents stated that their job includes data analysis, data engineering, or data science. 

The data set includes responses only from official Django Software Foundation channels. The responses were collected through the promotion of the survey on official Django channels, such as djangoproject.com and the DSF’s X (formerly Twitter) account, without the involvement of any JetBrains channels. In order to prevent the survey from being slanted in favor of any specific tool or technology, no product-, service-, or vendor-related channels were used to collect responses.

Want to learn more? Explore the Django Developers Survey 2023 to see the full survey data.

Categories: FLOSS Project Planets

Real Python: Quiz: String Interpolation in Python: Exploring Available Tools

Planet Python - Mon, 2024-06-03 08:00

Test your understanding of Python’s tools for string interpolation, including f-strings, the .format() method, and the modulo operator.

Take this quiz after reading our String Interpolation in Python: Exploring Available Tools tutorial.


Categories: FLOSS Project Planets

Recent side projects

Planet KDE - Mon, 2024-06-03 07:55

In addition to my larger projects in KDE and elsewhere, I’ve been working on a number of small projects over the years.

Since these naturally are hard to find, I want to present each one here briefly. Maybe you’ll find some of them useful.

MatePay

MatePay is a small payment system developed for the student hackerspace I’m part of, Spline.

It has a small built-in shop that we have been mostly using for the beverages that we provide in the space. However it also features an API that applications can use to process payments with. We use that to run public printers in the University.

MatePay is based on a simple SQLite database, so it only makes sense to use it when you trust the party hosting it.

Mateprint

Mateprint is a printing web interface with a payment feature. It can also print multiple copies and double-sided pages.

It is just a simple static executable written in Rust. All the hard work is done by CUPS.

Rawqueued

Originally, the public printers were all connected over the network using HP network add-on cards. Unfortunately, just like the printers, these cards are getting really old (~30 years) and have lately started dying regularly.

Since for some reason these cards are expensive I instead decided to replace them with Raspberry Pis. I wrote a small IPP server that just forwards the payload it receives to the printer, which is pretty much what the network cards did as well.

This setup is a bit simpler than maintaining cups filters on multiple devices, so the more complicated parts can all run on a central virtual machine.

You can find the repository here.

Wasfaehrt

In the University we have a departure board in one of the student managed rooms. It is powered by the same code that also powers KDE Itinerary (KPublicTransport).

You can find it on Codeberg.

SpaceAPI

For the Spline room, we of course needed a SpaceAPI endpoint. There is a small server that provides the SpaceAPI endpoint and an API to update whether the door is open or not. The updates are sent by a daemon running on a Raspberry Pi.

Categories: FLOSS Project Planets

Robin Wilson: Introducing pyAURN – a Python package for accessing UK air quality data

Planet Python - Mon, 2024-06-03 06:04

I realised recently that I’d never actually blogged about my pyAURN package – so it’s about time that I did.

When doing some freelance work on air quality a while back, I wanted an easy way to access UK air quality from the Automatic Urban and Rural Network (AURN). Unfortunately, there isn’t a nice API for accessing the data. Strangely, though, they do provide the data in a series of RData files for use by the openair R package. I wanted to use the data from Python though – but conveniently there is a Python package for reading RData files.

So, I put all these together into a simple Python package called pyAURN. It is strongly based upon openair, but is a lot more limited in functionality – it basically only covers importing the data, and doesn’t have many plotting or analysis functions.

Here’s an example of how to use it:

from pyaurn import importAURN, importMeta, timeAverage

# Download metadata of site IDs, names, locations etc
metadata = importMeta()

# Download 4 years of data for the Marylebone Road site
# (MY1 is the site ID for this site)
# Note: range(2016, 2022) will produce a list of six years: 2016, 2017, 2018, 2019, 2020, and 2021.
# Alternatively define a list of years to use eg. [2016,2017,2018,2019,2020,2021]
data = importAURN("MY1", range(2016, 2022))

# Group the DataFrame by a frequency of monthly, and the statistic mean().
data_monthly = timeAverage(data, avg_time="month", statistic="mean")

I found this really useful for my air quality analysis work – and I hope you do too. The package is on PyPI, so you can run pip install pyaurn or view the project on Github.

Categories: FLOSS Project Planets

The Drop Times: First Drupal Starshot Session Engages Over 200 Participants; Outlines Vision and Next Steps

Planet Drupal - Mon, 2024-06-03 01:38
Following its announcement at DrupalCon Portland 2024, the first Drupal Starshot session led by Dries Buytaert on May 31, 2024, engaged over 200 participants. The session discussed community participation, funding strategies, and governance plans, highlighting the project's goal to simplify Drupal site building through enhanced features and accessibility. The next update meeting is scheduled for June 6, 2024, focusing on the Experience Builder initiative.
Categories: FLOSS Project Planets

GSoC'24 Okular | An amazing start

Planet KDE - Mon, 2024-06-03 00:30
People of the Internet,

While I have mostly been silent in my blog posts until now, I’d like to put it out there that GSoC’24 at KDE has been going strong for me.

The Project : Forms/Javascript support improvement for Okular

Okular, the cross-platform universal document viewer developed by KDE, supports PDFs with forms. These forms often use JavaScript to make them more convenient for users. However, as of today, the support for JavaScript within Okular is lacking. During this Google Summer of Code timeline, I’ll be working on improving this form and JavaScript support in Okular.

Okular uses the QJSEngine provided by the Qt framework to run JavaScript in a sandboxed environment. While QJSEngine provides the engine, all the JS objects required by the PDF specification need to be implemented by us. Along with this, many Acrobat-specific pre-defined scripts need to be implemented in order to allow PDFs to work consistently with other PDF readers like Adobe Reader, PDF.js, etc.

My project involves providing support for as many features as possible in Okular forms. I’d like to thank Albert Astals Cid for mentoring me for this project.

Week-1 Recap

The coding period officially started on the 27th of May, and in week 1 itself, I had the following merge requests merged (Yayy!!).

  • event.selStart && event.selEnd : These event properties allow script writers to correctly use the part of the text that was selected during Keystroke events using event.selStart and event.selEnd. These properties would also help me further implement the Keystroke event pre-defined methods. !MR981
  • AFPercent_Format : This pre-defined method allows for correct formatting of data entered in percentage form fields. !MR982

Along with this, I have some more MRs under review right now.

  • AFTime_Keystroke : This pre-defined method ensures that only acceptable input is allowed in form fields intended for time data. !MR987
  • event.change : While event.change already has an implementation, it is inconsistent with the other PDF Readers. It currently evaluates the change from the first point of difference to the end of the string. In fact, it should only reflect the incoming changes. !MR998

While the first week was off to a great start, there are many more features to implement here. For the next weeks I’m planning on getting the above MRs merged and then working on the other keystroke pre-defined methods.

So that was it for this blog, see you next time. Cheers!

Categories: FLOSS Project Planets

Week 1 recap- research and prep

Planet KDE - Sun, 2024-06-02 20:57
Hi, welcome to the blog. It's Week 1, and most of this time is allocated to researching and learning how the code base works. The main point is that I am trying to understand how exactly Krita "draws", aka puts polygons onto the canvas. Right now there ...
Categories: FLOSS Project Planets

Gizra.com: Private Composer Repos Using DDEV

Planet Drupal - Sun, 2024-06-02 20:00

We do not usually make use of private composer repos. The reason is simple: all our private code lives inside a single repo.

But sometimes, we need to re-use a project for multiple sites, and we still want to keep the code private. In those cases, a private composer repo makes sense.

Categories: FLOSS Project Planets

Amarok 3.0.1 released

Planet KDE - Sun, 2024-06-02 16:00

The Amarok Development Squad is happy to announce the immediate availability of Amarok 3.0.1, the first bugfix release for Amarok 3.0 "Castaway"

3.0.1 features a number of small improvements and bug fixes; this time, the oldest fulfilled feature request dates back to 2010. The Wikipedia applet, UI strings, and playlist generation and collection filtering are among the components that have received multiple improvements in this release. The efforts to both further polish the Qt5/KF5 version and keep doing clean-up and preparations that bring a Qt6/KF6 version closer have been ongoing and will continue.

Changes since 3.0.0 FEATURES:
  • Added an option to copy image to clipboard in Wikipedia applet, and a clickable notification if a non-Wikipedia link was clicked.
  • Added an option to select if track's artist is shown for entries under various artists / different album artist in context browser (BR 276039, BR 248101)
  • Indicate which search option is active in Wikipedia applet (BR 332010)
CHANGES:
  • Amarok now depends on KDE Frameworks 5.78.
  • Improve strings in user interface (incl. BR 343896, BR 234854)
  • Reduce CPU usage by minimized/hidden analyzer (BR 390063) and other components.
BUGFIXES:

Getting Amarok

In addition to source code, Amarok is available for installation from many distributions' package repositories, which are likely to update to 3.0.1 soon. A flatpak is currently available on flathub-beta.

Packager section

You can find the tarball package on download.kde.org and it has been signed with Tuomas Nurmi's GPG key.

Categories: FLOSS Project Planets

Colin Watson: Free software activity in May 2024

Planet Debian - Sun, 2024-06-02 06:53

My Debian contributions this month were all sponsored by Freexian.

The bulk of my Debian time this month went towards trying to haul more Python packages up to current versions, but I got a few other bits and pieces done as well.

  • I did a little work on improving debbugs’ autopkgtest status.
  • openssh:
    • I fixed an OpenSSL version mismatch error in openssh-ssh1.
    • I finally tracked down a baffling CI issue in openssh, unblocking several contributed merge requests that I’d been sitting on until I could get CI to pass for them. (Special thanks to Andreas Hasenack; GSS-API integration tests will make my life much easier.)
    • I removed the user_readenv=1 option from openssh’s PAM configuration, and did some work on release notes to document this change for affected users.
    • I started work on the first stage of my plan to split out GSS-API key exchange support to separate packages.
  • Python team:
    • I upgraded bitstruct, flufl.enum, flufl.testing, gunicorn, langtable, psycopg3, pygresql, pylint-flask, python-click-didyoumean, python-gssapi, python-httplib2, python-json-log-formatter, python-persistent, python-pgspecial, python-pyld, python-repoze.tm2, python-serializable, python-tenacity, python-typing-extensions, python-unidiff, responses, shortuuid (including an upstream packaging tweak), sqlparse, vulture, zc.lockfile, and zope.interface to new upstream versions.
    • I cherry-picked an upstream PR to fix a pytest 8 incompatibility in ipywidgets.
  • I decided that fixing my old troffcvt package to support groff 1.23.0 wasn’t worth the time investment, and filed a removal request instead.
  • I NMUed bidentd and linuxtv-dvb-apps to declare Architecture: linux-any (and in the latter case also to fix a build failure due to 64-bit time), and worked with the buildd team to remove several of the other remaining entries from Packages-arch-specific.

You can support my work directly via Liberapay.

Categories: FLOSS Project Planets

Steinar H. Gunderson: SIMD detection of nested quotes

Planet Debian - Sun, 2024-06-02 06:10

I recently spent some time thinking about the problem of detecting quoted strings using SIMD, and I've come annoyingly close without actually having a practical solution; yet, I thought I would share my half-solution because I think it's fairly interesting in its own right.

To give a brief recap, here's an idealized version of the problem:

  • You have 16 (or whatever) ASCII bytes in a single SIMD register (SSE2, NEON, etc.).
  • Strings are delineated with "double quotes" or 'single quotes'.
  • Your task is to, as efficiently as possible, make a value that is all-ones for each character within quotes and all-zeros everywhere else. (We don't care about what the mask says about the quotes themselves; that's easy to fix up afterwards anyway.)

The use, of course, is to be able to mask out special characters within strings. In my case, we were looking for the first right brace (}) in a byte stream, except that one wouldn't count those that were within quoted strings.

I'm going to recap first how to do this with only one kind of quotes; it's widely known, but I think it's useful to think about why it works. (There's also a variant based on carryless multiplication, which is now often efficient because CPUs implement them for cryptographic purposes, but I'm not going to go into it.) We'll then try to make an analogous solution that supports the harder case where quote markers can be within other quotes and themselves ignored. (For instance, in the string "quo'ted" not quoted "'", the “not quoted” part is between single quotes, but nevertheless, it should not be considered as quoted, because the single quote itself was quoted and thus ignored.)

If you only have one kind of quote (say, "), then the question for any given character is simply whether an odd number of quotes was before it. " turns on quoting, " turns off quoting again. This is computed most easily by comparing each character (in parallel) with ", and then XOR-ing the booleans as we go. This is called a prefix sum (just with XOR instead of add), and I'm going to write it out in full for eight bytes:

r7 = x7 ^ x6 ^ x5 ^ x4 ^ x3 ^ x2 ^ x1 ^ x0;
r6 = x6 ^ x5 ^ x4 ^ x3 ^ x2 ^ x1 ^ x0;
r5 = x5 ^ x4 ^ x3 ^ x2 ^ x1 ^ x0;
r4 = x4 ^ x3 ^ x2 ^ x1 ^ x0;
r3 = x3 ^ x2 ^ x1 ^ x0;
r2 = x2 ^ x1 ^ x0;
r1 = x1 ^ x0;
r0 = x0;

So x0..x7 is whether each byte was a quote or not, and r0..r7 is whether we consider that byte quoted or not. (In the full problem, we'll need to consider whether we started our byte group in a quoted state or not, so technically, this only says whether each byte should invert the initial state or not. But for XOR, this is easy; just XOR in the initial value somewhere, either into x0 or into all of the r0..r7 as a post-processing step.)

For scalar code, there's a very simple and efficient way to compute this:

r0 = x0;
r1 = r0 ^ x1;
r2 = r1 ^ x2;
r3 = r2 ^ x3;
r4 = r3 ^ x4;
r5 = r4 ^ x5;
r6 = r5 ^ x6;
r7 = r6 ^ x7;

However, this doesn't really work well for SIMD; we get one long dependency chain, with zero parallelism. We want to use those fancy ALUs for something. So instead, we can look at the expression and regroup it a bit:

r7 = (x7 ^ x6) ^ (x5 ^ x4) ^ (x3 ^ x2) ^ (x1 ^ x0);
r6 = (x6 ^ x5) ^ (x4 ^ x3) ^ (x2 ^ x1) ^ x0;
r5 = (x5 ^ x4) ^ (x3 ^ x2) ^ (x1 ^ x0);
r4 = (x4 ^ x3) ^ (x2 ^ x1) ^ x0;
r3 = (x3 ^ x2) ^ (x1 ^ x0);
r2 = (x2 ^ x1) ^ x0;
r1 = (x1 ^ x0);
r0 = x0;

This hints that making some temporaries would be a useful step. Let's compute all of those pairs that we just created:

y7 = x7 ^ x6;
y6 = x6 ^ x5;
y5 = x5 ^ x4;
y4 = x4 ^ x3;
y3 = x3 ^ x2;
y2 = x2 ^ x1;
y1 = x1 ^ x0;
y0 = x0;

This is very efficient to do in basically any SIMD instruction set; just shift the x0..x7 vector by 8 bits, and then do a XOR between the two vectors. (In some instruction sets, you may not have full 128-bit shifts available. It is fairly easy to work around at a slight loss in efficiency, so let's not bother ourselves with it.)

This allows us to rewrite our expression as:

r7 = y7 ^ y5 ^ y3 ^ y1;
r6 = y6 ^ y4 ^ y2 ^ y0;
r5 = y5 ^ y3 ^ y1;
r4 = y4 ^ y2 ^ y0;
r3 = y3 ^ y1;
r2 = y2 ^ y0;
r1 = y1;
r0 = y0;

Hopefully, after staring a bit at this, you can see where we're going even though it's slightly different from before. We can make new parentheses and new temporaries and rewrite again:

z7 = y7 ^ y5;
z6 = y6 ^ y4;
z5 = y5 ^ y3;
z4 = y4 ^ y2;
z3 = y3 ^ y1;
z2 = y2 ^ y0;
z1 = y1;
z0 = y0;

r7 = z7 ^ z3;
r6 = z6 ^ z2;
r5 = z5 ^ z1;
r4 = z4 ^ z0;
r3 = z3;
r2 = z2;
r1 = z1;
r0 = z0;

This is exactly the same, just shift by 16 bits instead of 8. And the last step is similarly easy.
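The whole cascade is just a handful of shift-and-XOR steps. As a scalar reference (a sketch in plain Python, not the SIMD code), the same log-step prefix XOR can be run on an integer bitmask:

def quote_parity_mask(chunk: bytes, quote: int = ord('"')) -> int:
    """Bit i of the result is set iff byte i is inside quotes
    (counting the opening quote itself), assuming we start unquoted."""
    n = len(chunk)
    # x: bit i set where chunk[i] is the quote character.
    x = 0
    for i, b in enumerate(chunk):
        if b == quote:
            x |= 1 << i
    # Log-step prefix XOR: the integer analogue of the vector shifts above.
    r = x
    shift = 1
    while shift < n:
        r ^= r << shift
        shift <<= 1
    return r & ((1 << n) - 1)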

Now, it's useful to think about what properties of our XOR operation we actually needed. Notably, we didn't need the property that XOR-ing something twice cancels out. And we didn't need that a ^ b = b ^ a (commutativity). Pretty much the only thing we needed was that our operation was associative ((a ^ b) ^ c = a ^ (b ^ c)) and that we could combine our elements before we knew the quoted state at that point in time, i.e. our values could be looked upon as actions on the state, not as manipulating the state directly until later.

Now let's look at the more complicated example where we don't have just quoted/non-quoted, but three states for any byte. Let's number them:

  • 0: We are not within quotes.
  • 1: We are within double quotes.
  • 2: We are within single quotes.

For the purposes of the final answer, we don't really care about the difference between 1 and 2, but for figuring out the effect of any given input character, we definitely need to understand it. Let's look at a trivial character; call it c. It is not a quote, so what does it do in each of our states?

  • 0: Not within quotes, so we stay outside quotes (stay in 0).
  • 1: In double quotes, so we are still within double quotes (stay in 1).
  • 2: In single quotes, so we are still within single quotes (stay in 2).

OK, so 0 → 0, 1 → 1, 2 → 2. Let's call that 012 for convenience of notation. What about the double-quote character "?

  • 0: Not within quotes, but after this, we are (move to 1).
  • 1: Within double quotes, so we're ending that (move to 0).
  • 2: Within single quotes, so the character is to be ignored (move to 2).

So double quotes gave rise to the mapping 0 → 1, 1 → 0, 2 → 2, or 102 for short. And we'll similarly find out that a single quote character ' will give the mapping 0 → 2, 1 → 1, 2 → 0, or 210 for short.

Now, the crucial insight comes: This is exactly like our XOR case! We can combine these without knowing what state we are in; just keep following the effect for all three states. For instance, the character sequence "x will trivially become 102 just like the double quote itself did. "x" will trivially become 012 (no effect). And we have some more interesting examples like "'; if we follow both maps in turn, we'll have 0 → 1 → 1, 1 → 0 → 2, 2 → 2 → 0. So this combination of signs will give us 120, or written out:

  • If we started with no quotes (0), we are now within double quotes (1).
  • If we started in double quotes (1), we are now in single quotes (2).
  • If we started in single quotes (2), we are now not in quotes (0).

Can we always do this? Yes! Group theory tells us that this is the symmetric group S₃. And groups are always closed (always return a new element of the group), and always associative. We know that S₃ has 3! = 6 elements, and it's not hard to construct examples of how to get into all 6 of them. We can label them 012 021 102 120 201 210, or we can give them easier-for-a-computer names like 0 1 2 3 4 5.

So that's the general battle plan. For each character, find out which state permutation it belongs to (one out of three). Then run the same cascade as before, just with the group product instead of XOR, given by the following table (trivially calculated by just following the states for each case; note that it is not symmetric, i.e., our group is nonabelian, but remember, we never required commutativity):

    | 012 021 102 120 210 201
----+------------------------
012 | 012 021 102 120 210 201
021 | 021 012 201 210 120 102
102 | 102 120 012 021 201 210
120 | 120 102 210 201 021 012
210 | 210 201 120 102 012 021
201 | 201 210 021 012 102 120

or, equivalently with shorter names:

  | 0 1 2 3 4 5
--+------------
0 | 0 1 2 3 4 5
1 | 1 0 4 5 2 3
2 | 2 3 0 1 5 4
3 | 3 2 5 4 0 1
4 | 4 5 1 0 3 2
5 | 5 4 3 2 1 0

and then finally combine with the status from the previous case. If we're in the 012 or 021 state (0 or 1), we're not within quotes for that character; otherwise, we are.
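Written out as plain (scalar) Python, the idea looks like the sketch below; a SIMD version would compute the same group products, only pairwise in log-steps instead of strictly left to right:

# States: 0 = not in quotes, 1 = in double quotes, 2 = in single quotes.
# Each character maps to a permutation p, where p[s] is the next state.
IDENTITY = (0, 1, 2)   # ordinary character: 012
DQUOTE   = (1, 0, 2)   # '"': 102
SQUOTE   = (2, 1, 0)   # "'": 210

def char_perm(c):
    return DQUOTE if c == '"' else SQUOTE if c == "'" else IDENTITY

def compose(a, b):
    # Group product "a, then b"; associative but not commutative.
    return tuple(b[a[s]] for s in range(3))

def quoted_flags(text, start_state=0):
    # For each character, report whether it lies inside quotes
    # (counting the quote characters themselves, which we don't care about anyway).
    flags = []
    acc = IDENTITY
    for c in text:
        acc = compose(acc, char_perm(c))
        flags.append(acc[start_state] != 0)
    return flags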

Now, of course, here's the problem: It's not obvious how to do this group product efficiently without a table. If we had only 16 elements, we could have used PSHUFB, but we have 36. When looking at the numeric variant, there are some values we could special-case (e.g. the entire first row and column), but e.g. the last row is only nearly trivial to calculate but not quite. So we'd be at an annoying amount of table lookups to get this to work.

So, that's where I stand. Faster than linear in theory, but probably less efficient in practice unless you had absolutely huge vectors and/or unusually efficient gather instructions. (I guess for an FPGA solution, this method wouldn't be so bad?) I didn't really bother trying to write up the actual code; I added some code to detect the difficult cases (they happen if any character thinks it's both within single quotes and double quotes at the same time) and error out if it were found, and then mostly called it a day. I guess the most efficient solution, if you really need to handle this case branch-free, would be a character-by-character one, but based on something like Sheng.

Edit: I noticed that there is actually a little respite; the group product table in this formulation is vertically antisymmetric (rows 0/5, 1/4 and 2/3 form pairs where one is 5-x of the other; some are also reverses of other rows, but that's perhaps less interesting), which makes it possible to do only two shuffles and then a lot of comparing and blending and fixup. I implemented it and it actually works, but is definitely too slow to be useful in practice. It seems there's also an AVX512 instruction that actually gives a 64-entry LUT, which would be really nice if it's fast, and well, if AVX512 actually existed everywhere. :-)

Categories: FLOSS Project Planets

Zato Blog: New API Integration Tutorial in Python

Planet Python - Sun, 2024-06-02 04:00
New API Integration Tutorial in Python

2024-06-02, by Dariusz Suchojad

Do you know what airports, telecom operators, defense forces and health care organizations have in common?

They all rely heavily on deep-backend software systems which are integrated and automated using principled methodologies, innovative techniques and well-defined implementation frameworks.

If you'd like to learn how to integrate and automate such complex systems correctly, head over to the new API integration tutorial that will show you how to do it in Python too.

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

# ##############################################################################

class MyService(Service):
    """ Returns user details by the person's name.
    """
    name = 'api.my-service'

    # I/O definition
    input = '-name'
    output = 'user_type', 'account_no', 'account_balance'

    def handle(self):

        # For later use
        name = self.request.input.name or 'partner'

        # REST connections
        crm_conn = self.out.rest['CRM'].conn
        billing_conn = self.out.rest['Billing'].conn

        # Prepare requests
        crm_request = {'UserName':name}
        billing_params = {'USER':name}

        # Get data from CRM
        crm_data = crm_conn.get(self.cid, crm_request).data

        # Get data from Billing
        billing_data = billing_conn.post(self.cid, params=billing_params).data

        # Extract the business information from both systems
        user_type = crm_data['UserType']
        account_no = crm_data['AccountNumber']
        account_balance = billing_data['ACC_BALANCE']

        self.logger.info(f'cid:{self.cid} Returning user details for {name}')

        # Now, produce the response for our caller
        self.response.payload = {
            'user_type': user_type,
            'account_no': account_no,
            'account_balance': account_balance,
        }

# ##############################################################################



API programming screenshots
➤ Here's the API integration tutorial again
➤ More API programming examples in Python

More blog posts
Categories: FLOSS Project Planets

Jacob Adams: What to Do When You Forget Your Root Password

Planet Debian - Sat, 2024-06-01 20:00

Forgetting your root password would initially seem like a problem requiring a full re-install, one that you can’t easily recover from without wiping everything away.

Forgetting your user password can of course be solved by changing it as root, as in the following, which changes the password for user jacob:

# passwd jacob

but only the root user can change their own password, so you need to somehow get root access in order to do so.

Changing Root’s Password with Sudo

This one is probably obvious, but if you have a user with the ability to use sudo, then you can change root’s password without access to the root account by running:

$ sudo passwd

which will reset the password for the root account without requiring the existing password.

Boot Directly to a Shell

Getting root access to any Linux machine you have physical access to is surprisingly simple. You can just boot the machine directly into a root shell without any access control, i.e. passwords.

Why You Should Always Encrypt Your Storage1

To boot directly to a shell you need to append the following text to the kernel command line:

init=/bin/sh

(You could use pretty much any program here, but you’re putting your system into a weird state doing this, and so I’d recommend the simplest approach.)

GRUB

GRUB will allow you to edit boot parameters on startup using the e key. You’ll then be presented with an editor2 that you can use to change the kernel command line by appending to the linux line.

E.g. If your editor looks like this:

load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set=root abcd1234-5678-0910-1112-abcd12345678
echo 'Loading Linux 6.1.0-21-amd64 ...'
linux /boot/vmlinuz-6.1.0-21-amd64 root=UUID=abcd1234-5678-0910-1112-abcd12345678 ro quiet
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-6.1.0-21-amd64

Then you would add init=/bin/sh like so:

load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set=root abcd1234-5678-0910-1112-abcd12345678
echo 'Loading Linux 6.1.0-21-amd64 ...'
linux /boot/vmlinuz-6.1.0-21-amd64 root=UUID=abcd1234-5678-0910-1112-abcd12345678 ro quiet init=/bin/sh
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-6.1.0-21-amd64

Once you’ve edited it you can start your machine with Ctrl+x, as you can see from the prompt text under the editor.

Raspberry Pi cmdline.txt

On a Raspberry Pi, you’ll want to append the above to the only line of the cmdline.txt file on the boot partition of the SD card. This is the first partition of the disk, the one that is FAT32.

You’ll need to do this on another machine, since if you had root access to edit cmdline.txt you could also just change your password.

As it is a FAT32 partition on an SD card, it should be editable on any other machine that supports SD cards.

E.g. If your cmdline.txt looks like this

console=serial0,115200 console=tty1 root=PARTUUID=fb33757d-02 rootfstype=ext4 fsck.repair=yes rootwait quiet

Then you would add init=/bin/sh like so:

console=serial0,115200 console=tty1 root=PARTUUID=fb33757d-02 rootfstype=ext4 fsck.repair=yes rootwait quiet init=/bin/sh

Mount Read / Write

Since you’re replacing the init process of the machine with a shell, no other processes will be running.

Also, your root filesystem will be mounted read-only, as init is expected to remount it read-write as needed.

# mount -o remount,rw /

Change Root Password

Once you’ve remounted the root filesystem, all that’s needed is to run the passwd command.

# passwd

Since you’re running the command as root you won’t need to provide your existing password, and will only need to type a new password twice.

Now of course you simply need to remember that password in order to ensure you don’t need to do this again.

Reboot Safely

You now cannot follow the standard reboot process here, as you’re only running one process.

Therefore it’s important to put your root filesystem back into read-only before powering off your machine:

# mount -o remount,ro /

Once you’ve done that you just need to hold down the power button until the machine completely powers off or pull the plug.

And then you’re done! Boot the computer again and you’ll have everything working as normal, with a root password you remember.

  1. Not that this is the only reason, anyone with physical access to your machine could also boot it into another operating system they control, or just remove your storage device and put it into another computer, or probably other things I’m not thinking of now. You should always encrypt your devices. 

  2. The editor uses emacs-like keybindings. The manual includes a list of all the options available. 

Categories: FLOSS Project Planets

Mario Hernandez: Automating your Drupal Front-end with ViteJS

Planet Drupal - Sat, 2024-06-01 20:00

Modern web development relies heavily on automation to stay productive, validate code, and perform repetitive tasks that could slow developers down. Front-end development in particular has evolved, and it can be a daunting task to configure effective automation. In this post, I'll try to walk you through basic automation for your Drupal theme, which uses Storybook as its design system.

Recently I worked on a large Drupal project that needed to migrate its design system from Patternlab to Storybook. I knew switching design systems also meant switching front-end build tools. The obvious choice seemed to be Webpack, but as I looked deeper into build tools, I discovered ViteJS.

Vite bills itself as the "Next Generation Frontend Tooling", and when we tested it, we were extremely impressed not only with how fast Vite is, but also with its plugin ecosystem and its community support. Vite is relatively new, but it is solid and very well maintained. Learn more about Vite.

The topics covered in this post can be broken down in two categories:

  1. Preparing the Front-end environment

  2. Automating the environment

NOTE: The project covered in this post does not use Single Directory Components nor the Storybook module.

1. Build the front-end environment with Vite & Storybook

In a previous post, I wrote in detail how to build a front-end environment with Vite and Storybook, I am going to spare you those details here but you can reference them from the original post.

  1. In your command line, navigate to the directory where you wish to build your environment. If you're building a new Drupal theme, navigate to your site's web/themes/custom/
  2. Run the following commands (Storybook should launch at the end):
npm create vite@latest storybook
cd storybook
npx storybook@latest init --type react

Fig. 1: The first command builds the Vite project, and the last one integrates Storybook into it.

Reviewing Vite's and Storybook's out of the box build scripts

Vite and Storybook ship with a handful of useful scripts. We may find some of them already do what we want or may only need minor tweaks to make them our own.

  • In your code editor, open package.json from the root of your newly built project.
  • Look in the scripts section and you should see something like this:
"scripts": { "dev": "vite", "build": "vite build", "lint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0", "preview": "vite preview", "storybook": "storybook dev -p 6006", "build-storybook": "storybook build" },

Fig. 2: Example of default Vite and Storybook scripts out of the box.

To run any of those scripts, prefix them with npm run. For example: npm run build, npm run lint, etc. Let's review the scripts.

  • dev: A Vite-specific command that runs the Vite app we just built for local development.
  • build: The "do it all" command. Running npm run build on a project runs every task defined in the build configuration we will set up later. CI/CD runners run this command to build your app for production.
  • lint: Lints your JavaScript code inside .js or .jsx files.
  • preview: Another Vite-specific command that runs your app in preview mode.
  • storybook: The command you run to launch and keep Storybook running while you code.
  • build-storybook: Builds a static version of Storybook that you can package, share, or run as a static version of your project.
Building your app for the first time

Getting a consistent environment

In front-end development, it is important that everyone on your team uses the same version of NodeJS while working on the same project. Differences in Node versions can lead to inconsistencies when the project is built. One way to keep everyone on the same version is to add a .nvmrc file to the root of your project. This file specifies the Node version your project uses; since the version is set per project, different projects can use different Node versions.

  • In the root of your theme, create a file called .nvmrc (mind the dot)
  • Inside .nvmrc add the following: v20.14.0
  • Stop Storybook by pressing Ctrl + C on your keyboard
  • Build the app:
nvm install
npm install
npm run build

Fig. 3: Installs the node version defined in .nvmrc, then installs node packages, and finally builds the app.

NOTE: You need to have NVM installed on your system to execute nvm commands.
You only need to run nvm install once per project unless the Node version changes. If you switch to a project that uses a different Node version, run nvm use when you return to this project to set your environment back to the right version.

The output in the command line should look like this:

Fig. 4: Screenshot of files compiled by the build command.

By default, Vite names the compiled files by appending a random 8-character string to the original file name. This works fine for Vite apps, but for Drupal, the libraries we'll create expect CSS and JS file names to stay consistent and not change. Let's change this default behavior.

  • First, install the glob extension. We'll use this shortly to import multiple CSS files with a single import statement.
npm i -D glob
  • Then, open vite.config.js in your code editor. This is Vite's main configuration file.
  • Add these two imports around line 3 or directly after the last import in the file
import path from 'path';
import { glob } from 'glob';
  • Still in vite.config.js, replace the export default... with the following snippet which adds new settings for file names:
export default defineConfig({
  plugins: [
  ],
  build: {
    emptyOutDir: true,
    outDir: 'dist',
    rollupOptions: {
      input: glob.sync(path.resolve(__dirname,'./src/**/*.{css,js}')),
      output: {
        assetFileNames: 'css/[name].css',
        entryFileNames: 'js/[name].js',
      },
    },
  },
})

Fig. 5: Build object to modify where files are compiled as well as their name preferences.

  • First we imported path and { glob }. path is a Node.js built-in module, and glob comes from the package we installed earlier.
  • Then we added a build configuration object in which we defined several settings:
    • emptyOutDir: When the build job runs, the dist directory will be emptied before the new compiled code is added.
    • outDir: Defines the app's output directory.
    • rollupOptions: This is Vite's system for bundling code, and within it we can include some neat configurations:
      • input: The directory where we want Vite to look for CSS and JS files. Here's where the path and glob imports we added earlier are being used. By using ./src/**/*.{css,js}, we are instructing Vite to look anywhere inside the src directory and find any file that ends with .css or .js.
      • output: The destination where CSS and JS will be compiled into (dist/css and dist/js, respectively). By setting assetFileNames: 'css/[name].css' and entryFileNames: 'js/[name].js', CSS and JS files will retain their original names.

Now if we run npm run build again, the output should be like this:

Fig. 6: Screenshot of compiled code using the original file names.

The random 8-character string is gone, and notice that this time the build command is pulling in more CSS files. Since the glob pattern matches files at any depth inside src, the src/stories directory was included as part of the input path.
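If you ever wanted to keep the default structure but exclude the demo stories directory from the build, the glob package accepts an ignore option. This is only a sketch for reference; the post instead deletes src/stories in the next section:

input: glob.sync(path.resolve(__dirname, './src/**/*.{css,js}'), {
  // A sketch only: skip anything inside a stories directory.
  ignore: ['**/stories/**'],
}),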

2. Restructure the project

The out-of-the-box Vite project structure is a good start for us. However, we need to make some adjustments so we can adopt the Atomic Design methodology, which is widely used today and works well with our component-driven development workflow. At a high level, this is the current project structure:

> .storybook/
> dist/
> public/
> src/
  |- stories/
package.json
vite.config.js

Fig. 7: Basic structure of a Vite project listing only the most important parts.

  • > .storybook is the main location for Storybook's configuration.
  • > dist is where all compiled code is copied into and where the production app looks for all code.
  • > public is where we can store images and other static assets we need to reference from our site. Equivalent to Drupal's /sites/default/files/.
  • > src is the directory we work out of. We will update the structure of this directory next.
  • package.json tracks all the different node packages we install for our app as well as the scripts we can run in our app.
  • vite.config.js is Vite's main configuration file. This is probably where we will spend most of our time.
Adopting the Atomic Design methodology

The Atomic Design methodology was first introduced by Brad Frost a little over ten years ago. Since then it has become a standard approach for building design systems for the web. Our environment needs updating to reflect the structure this methodology expects.

  • First, stop Storybook from running by pressing Ctrl + C on your keyboard.
  • Next, inside src, create these directories: base, components, and utilities.
  • Inside components, create these directories: 01-atoms, 02-molecules, 03-organisms, 04-layouts, and 05-pages.
  • While we're at it, delete the stories directory inside src, since we won't be using it.
NOTE: You don't need to use the same nomenclature Atomic Design suggests. I am using it here for simplicity.

Update Storybook's stories with new paths

Since the project structure has changed, we need to make Storybook aware of these changes:

  • Open .storybook/main.js in your code editor
  • Update the stories: [] array as follows:
stories: [
  "../src/components/**/*.mdx",
  "../src/components/**/*.stories.@(js|jsx|mjs|ts|tsx)",
],

Fig. 8: Updating stories' path after project restructure.

The stories array above is where we tell Storybook where to find our stories and story docs, if any. In Storybook, stories are the components and their variations.

Add pre-built components

As our environment grows, we will add components inside the new directories, but for the purpose of testing our environment's automation, I have created demo components.

  • Download the demo components (button, title, card) from src/components/, and save them in their corresponding directories in your project.
  • Feel free to add any other components you may have built yourself. We'll come back to the components shortly.
3. Configure TwigJS

Before we can see the newly added components, we need to configure Storybook to understand the Twig and YML code we are about to introduce with the demo components. To do this we need to install several node packages.

  • In your command line run:
npm i -D vite-plugin-twig-drupal @modyfi/vite-plugin-yaml twig twig-drupal-filters html-react-parser
  • Next, update vite.config.js with the following configuration. Add the snippet below at around line 5:
import twig from 'vite-plugin-twig-drupal';
import yml from '@modyfi/vite-plugin-yaml';
import { join } from 'node:path';

Fig. 9: TwigJS related packages and Drupal filters function.

The configuration above is critical for Storybook to understand the code in our components:

  • vite-plugin-twig-drupal is the main TwigJS extension for our project.
  • We added new imports which are used by Storybook to understand Twig:
    • vite-plugin-twig-drupal handles transforming Twig files into JavaScript functions.
    • @modyfi/vite-plugin-yaml lets us pass data and variables through YML to our Twig components.
Creating Twig namespaces
  • Still in vite.config.js, add the twig() and yml() plugins inside the plugins: [] array to define Twig namespaces for Storybook.
plugins: [
  twig({
    namespaces: {
      atoms: join(__dirname, './src/components/01-atoms'),
      molecules: join(__dirname, './src/components/02-molecules'),
      organisms: join(__dirname, './src/components/03-organisms'),
      layouts: join(__dirname, './src/components/04-layouts'),
      pages: join(__dirname, './src/components/05-pages'),
    },
  }),
  yml(),
],

Fig. 10: Twig namespaces reflecting project restructure.

Since we removed the react() function with the snippet above, we can also remove import react from '@vitejs/plugin-react' from the imports list, as it is no longer needed.

  • Finally, since we updated our project structure earlier, let's update the rollupOptions input path within the build configuration object:
build: {
  emptyOutDir: true,
  outDir: 'dist',
  rollupOptions: {
    input: glob.sync(path.resolve(__dirname,'./src/components/**/*.{css,js}')),
    output: {
      assetFileNames: 'css/[name].css',
      entryFileNames: 'js/[name].js',
    },
  },
},

Fig. 11: Build object with updated input path.

With all the configuration updates we just made, we need to rebuild the project for all the changes to take effect. Run the following commands:

npm run build
npm run storybook

The components are now available, but as you can see, they are not styled, even though each component has a CSS stylesheet in its directory. The reason is that Storybook has not been configured to find the components' CSS. We'll address this shortly.
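To sanity-check the Twig wiring, here is a minimal sketch of what a story for one of the demo components could look like. The file names (button.twig, button.yml) and the shape of the data are assumptions for illustration, not an exact copy of the demo code:

// src/components/01-atoms/button/button.stories.js (a sketch)
import parse from 'html-react-parser';
import button from './button.twig'; // compiled into a function by vite-plugin-twig-drupal
import data from './button.yml';    // parsed into an object by @modyfi/vite-plugin-yaml

export default {
  title: 'Atoms/Button',
};

export const Button = {
  // Render the Twig output as React nodes so the React-based Storybook can display it.
  render: (args) => parse(button(args)),
  args: { ...data },
};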

4. Configure postCSS

What is PostCSS? It is a JavaScript tool that transforms CSS with plugins, turning extended plugin syntax into vanilla CSS the browser understands.

As we start interacting with CSS, we need to install several node packages to enable functionality we would not have otherwise. Native CSS has come a long way to the point that I no longer use Sass as a CSS preprocessor.

  • Stop Storybook by pressing Ctrl + C on your keyboard
  • In your command line run this command:
npm i -D postcss postcss-import postcss-import-ext-glob postcss-nested postcss-preset-env
  • At the root of your theme, create a new file called postcss.config.js, and in it, add the following:
import postcssImport from 'postcss-import';
import postcssImportExtGlob from 'postcss-import-ext-glob';
import postcssNested from 'postcss-nested';
import postcssPresetEnv from 'postcss-preset-env';

export default {
  plugins: [
    postcssImportExtGlob(),
    postcssImport(),
    postcssNested(),
    postcssPresetEnv({
      stage: 4,
    }),
  ],
};

Fig. 12: Base configuration for postCSS.

One cool thing about Vite is that it comes with postCSS functionality built in. The only requirement is that you have a postcss.config.js file in the project's root. Notice how we are not doing much configuration for those plugins except for defining them. Let's review the code above:

  • postcss-import the base for importing CSS stylesheets.
  • postcss-import-ext-glob to do bulk @import of all CSS content in a directory.
  • postcss-nested to unwrap nested rules to make its syntax closer to Sass.
  • postcss-preset-env defines the CSS browser support level we need. Stage 4 means we want the "web standards" level of support.
5. CSS and JavaScript configuration

The goal here is to ensure that every time a new CSS stylesheet or JS file is added to the project, Storybook automatically becomes aware of it and begins consuming its code.

NOTE: This workflow is only for Storybook. In Drupal we will use Drupal libraries in which we will include any CSS and JS required for each component.

There are two types of styles to configure in most projects: global styles, which apply site-wide, and component styles, which are unique to each component added to the project.

Global styles
  • Inside src/base, add two stylesheets: reset.css and base.css.
  • Copy and paste the styles for reset.css and base.css.
  • Inside src/utilities create utilities.css and in it paste these styles.
  • Inside src/, create a new stylesheet called styles.css.
  • Inside styles.css, add the following imports:
@import './base/reset.css';
@import './base/base.css';
@import './utilities/utilities.css';

Fig. 13: Imports to gather all global styles.

The order in which we import our stylesheets matters, because the cascade order in which they load makes a difference. We go from reset, to base, to utilities.

  • reset.css: A reset stylesheet (or CSS reset) is a collection of CSS rules used to clear the browser's default formatting of HTML elements, removing potential inconsistencies between browsers before any of our styles are applied.
  • base.css: A base stylesheet applies a consistent style foundation for HTML elements: baseline typography, branding and colors, font sizes, and so on.
  • utilities.css: A collection of pre-defined CSS rules and variables (colors, font sizes, font colors, margins, sizes, z-index, animations, etc.) that we can apply to any HTML element.
Component styles

Before our components get their own individual styles, we need to make sure all the global styles are loaded so the components can inherit them.

  • Inside src/components create a new stylesheet, components.css. This is where we are going to gather all components styles.
  • Inside components.css add glob imports for each of the component's categories:
@import-glob './01-atoms/**/*.css';
@import-glob './02-molecules/**/*.css';

Fig. 14: Glob import for all components of all categories.

NOTE: Since we only have atoms and molecules to work with, we are omitting imports for 03-organisms, 04-layouts, and 05-pages. Feel free to add them if you have those kinds of components.

Updating Storybook's Preview

There are several ways we can make Storybook aware of our styles and JavaScript. We could import each component's stylesheet and JavaScript into its *.stories.js file, but components with multiple sub-components could end up with several CSS and JS imports. In addition, this is not an automated system, which means we would need to add imports manually as new files appear. The approach we are going to take instead is importing the stylesheets we created above into Storybook's preview system. This provides a couple of advantages:

  • The component's *.stories.js files are clean without any css imports as all CSS will already be available to Storybook.
  • As we add new components with individual stylesheets, these stylesheets will automatically be recognized by Storybook.

Remember, the order in which we import the styles makes a difference. We want all global and base styles to be imported first, before we import component styles.

  • In .storybook/preview.js, add these imports at the top of the file, around line 2.
import Twig from 'twig';
import drupalFilters from 'twig-drupal-filters';

import '../src/styles.css'; /* Contains reset, base, and utilities styles. */
import '../src/components/components.css'; /* Contains all components CSS. */

function setupFilters(twig) {
  twig.cache();
  drupalFilters(twig);
  return twig;
}

setupFilters(Twig);

Fig. 15: Importing all styles, global and components.

In addition to importing two new extensions, twig and twig-drupal-filters, we set up a setupFilters function so Storybook can handle any Drupal filters we may use in our components. We are also importing two of the stylesheets we created earlier:

  • styles.css contains all the CSS code from reset.css, base.css, and utilities.css (in that order)
  • components.css contains all the CSS from all components. As new components are added and they have their own stylesheets, they will automatically be included in this import.
IMPORTANT: For Storybook to immediately display changes you make in your CSS, the imports above need to be from the src directory and not dist. I learned this the hard way.

JavaScript compiling

On a typical project, you will find that the majority of your components don't use JavaScript, and for this reason, we don't need such an elaborate system for JS code. Importing the JS files in the component's *.stories.js should work just fine. Since the demo components don't use JS, I have commented near the top of card.stories.js how the component's JS file would be imported if JS were needed.

If the need for a more automated JavaScript processing workflow arose, we could easily repeat the same CSS workflow but for JS.
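As a sketch (card.js here is hypothetical, since the demo components ship no JS), the only addition to a story that needs behavior is a single side-effect import near the top of its *.stories.js file:

// Near the top of card.stories.js (a sketch)
import './card.js'; // hypothetical behavior file, imported so its code runs in Storybook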

Build the project again

Now that our system for CSS and JS is in place, let's build the project to ensure everything is working as we expect it.

npm run build
npm run storybook

You may notice that now the components in Storybook look styled. This tells us our new system is working as expected. However, the Card component, if you used the demo components, is missing an image. We will address this issue in the next section.

This concludes the preparation part of this post. The remaining part will focus on creating automation tasks for compiling, minifying and linting code, copying static assets such as images, and finally, watching for code changes as we code.

6. Copying images and other assets

Copying static assets like images, icons, JS, and other files from src into dist is a common practice in front-end projects. Vite comes with built-in functionality for this: assets placed in the public directory are automatically copied on build. However, sometimes we keep those assets alongside our components or in other directories within our project.

In Vite there are many ways to accomplish any task; in this case, we will use a nice plugin called vite-plugin-static-copy. Let's set it up.

  • If Storybook is running, kill it with Ctrl + C on your keyboard
  • Next, install the extension by running:
npm i -D vite-plugin-static-copy
  • Next, right after all the existing imports in vite.config.js, import one more extension:
import { viteStaticCopy } from 'vite-plugin-static-copy';
  • Lastly, still in vite.config.js, add the viteStaticCopy function configuration inside the plugins:[] array:
viteStaticCopy({
  targets: [
    {
      src: './src/components/**/*.{png,jpg,jpeg,svg,webp,mp4}',
      dest: 'images',
    },
  ],
}),

Fig. 16: Task for copying images and other media assets from src to dist.

The viteStaticCopy function we added allows us to copy any type of static asset from anywhere within your project. We added a targets array with a src and dest for the images we want copied. Every time we run npm run build, any images inside any of the components will be copied into dist/images.
If you need to copy other static assets, simply create a new target for each.
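For example, here is a sketch of what an additional target could look like if you also kept webfonts alongside your components; the fonts destination and extensions are hypothetical:

viteStaticCopy({
  targets: [
    {
      src: './src/components/**/*.{png,jpg,jpeg,svg,webp,mp4}',
      dest: 'images',
    },
    {
      // Hypothetical: copy webfonts stored alongside components into dist/fonts.
      src: './src/components/**/*.{woff,woff2}',
      dest: 'fonts',
    },
  ],
}),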

  • Build the project again:
npm run build
npm run storybook

The missing image for the Card component should now be visible. Pretty sweet! 🍰

7. The Watch task

A watch task makes it possible for developers to see the changes they are making as they code, without being interrupted to run commands. Depending on your configuration, a watch task watches for any changes you make to CSS, JavaScript, and other file types; upon saving those changes, the code is automatically compiled and Hot Module Replacement (HMR) kicks in, making the changes visible in Storybook.

Although there are extensions to create watch tasks, we will stick with Storybook's out of the box watch functionality because it does everything we need. In fact, I have used this very approach on a project that supports over one hundred sites.

I actually learned this the hard way: I was originally importing the key stylesheets in .storybook/preview.js using the files from dist. This works to an extent, because the code is compiled upon changes, but Storybook is not aware of the changes unless we restart it. I spent hours debugging this issue and tried many other options, but in the end, the simple solution was to import CSS and JS into Storybook's preview using the source files. For example, if you look in .storybook/preview.js, you will see we are importing two CSS files which contain all of the CSS code our project needs:

import '../src/styles.css';
import '../src/components/components.css';

Fig. 17: Importing source assets into Storybook's preview.

Importing source CSS or JS files into Storybook's preview allows Storybook to become aware immediately of any code changes.

The same, or nearly the same, works for JavaScript. However, the difference is that for JS we import the JS file in the component's *.stories.js, which has the same effect as what we've done above for CSS. The reason is that typically not every component we build needs JS.

A real watch task

Currently we are running npm run storybook as a watch task. There's nothing wrong with this. However, to keep up with standards and best practices, we could rename the storybook command to watch, so we can run npm run watch. Something to consider.

You could also make a copy of the storybook command and name it watch and add additional commands you wish to run with watch, while leaving the original storybook command intact. Choices, choices.
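A sketch of what that copy could look like in package.json's scripts section, keeping the original command intact:

"storybook": "storybook dev -p 6006",
"watch": "storybook dev -p 6006",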

8. Linting CSS and JavaScript

Our workflow is coming along nicely. There are many other things we can do but for now, we will end with one last task: CSS and JS linting.

  • Install the required packages. There are several of them.
npm i -D eslint stylelint vite-plugin-checker stylelint-config-standard stylelint-order stylelint-selector-pseudo-class-lvhfa
  • Next, after the last import in vite.config.js, add one more:
import checker from 'vite-plugin-checker';
  • Then, let's add one more plugin in the plugins:[] array:
checker({
  eslint: {
    lintCommand: 'eslint "./src/components/**/*.{js,jsx}"',
  },
  stylelint: {
    lintCommand: 'stylelint "./src/components/**/*.css"',
  },
}),

Fig. 18: Checks for linting CSS and JavaScript.

So we can execute the above checks on demand, let's also add them as commands to our app.

  • In package.json, within the scripts section, add the following commands:
"eslint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0", "stylelint": "stylelint './src/components/**/*.css'",

Fig. 19: Two new npm commands to lint CSS and JavaScript.

  • We installed a series of packages related to ESLint and Stylelint.
  • vite-plugin-checker is a plugin that can run TypeScript, VLS, vue-tsc, ESLint, and Stylelint in a worker thread.
  • We imported vite-plugin-checker and created a new plugin entry with two checks, one for ESLint and the other for Stylelint.
  • By default, the new checks run when we execute npm run build, but we also added them as individual commands so we can run them on demand.
Configure rules for ESLint and Stylelint

Both ESLint and Stylelint use configuration files where we define the various rules we want to enforce when writing code. The files they use are eslint.config.js and .stylelintrc.yml, respectively. For the purpose of this post, we are only going to add the .stylelintrc.yml, in which we define basic CSS linting rules.
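For reference only, a minimal eslint.config.js (flat config) sketch could look like the following. The rules shown are placeholders, not what this post prescribes, and the exact config format depends on the ESLint version your project uses:

// eslint.config.js: a minimal sketch, not part of this post's setup.
export default [
  {
    files: ['src/components/**/*.{js,jsx}'],
    rules: {
      'no-unused-vars': 'warn',
      'no-console': 'warn',
    },
  },
];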

  • In the root of your theme, create a new file called .stylelintrc.yml (mind the dot)
  • Inside .stylelintrc.yml, add the following code:
extends:
  - stylelint-config-standard
plugins:
  - stylelint-order
  - stylelint-selector-pseudo-class-lvhfa
ignoreFiles:
  - './dist/**'
rules:
  at-rule-no-unknown: null
  alpha-value-notation: number
  color-function-notation: null
  declaration-empty-line-before: never
  declaration-block-no-redundant-longhand-properties: null
  hue-degree-notation: number
  import-notation: string
  no-descending-specificity: null
  no-duplicate-selectors: true
  order/order:
    - - type: at-rule
        hasBlock: false
      - custom-properties
      - declarations
    - unspecified: ignore
      disableFix: true
  order/properties-alphabetical-order: error
  plugin/selector-pseudo-class-lvhfa: true
  property-no-vendor-prefix: null
  selector-class-pattern: null
  value-keyword-case:
    - lower
    - camelCaseSvgKeywords: true
      ignoreProperties:
        - /^--font/

Fig. 20: Basic CSS Stylelint rules.

The CSS rules above are only a starting point, but should be able to check for the most common CSS errors.

Test the rules we've defined by running either npm run build or npm run stylelint. Either command will alert you to a couple of errors our current code contains, which tells us the linting process is working as expected. You could test JS linting by creating a dummy JS file inside a component and writing bad JS in it.
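For instance, a throwaway file like this (a sketch; the file name is made up) would be flagged once ESLint rules such as no-unused-vars or no-console are configured:

// src/components/01-atoms/button/lint-test.js (a sketch, delete after testing)
const unusedVariable = 'this should trip a no-unused-vars rule';
console.log('and this should trip a no-console rule, if enabled')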

9. One last thing

It goes without saying that we need to add storybook.info.yml and storybook.libraries.yml files for this to be a true Drupal theme. In addition, we need to create the templates directory somewhere within our theme.

storybook.info.yml

The same way we did for Storybook, we need to create namespaces for Drupal. This requires the Components module, and the storybook.info.yml configuration looks like this:

components:
  namespaces:
    atoms:
      - src/components/01-atoms
    molecules:
      - src/components/02-molecules
    organisms:
      - src/components/03-organisms
    layouts:
      - src/components/04-layouts
    pages:
      - src/components/05-pages
    templates:
      - src/templates

Fig. 21: Drupal namespaces for nesting components.

storybook.libraries.yml

The recommended method for adding CSS and JS to components or a theme in Drupal is by using Drupal libraries. In our project we would create a library for each component, in which we include any CSS or JS the component needs. In addition, we need to create a global library which includes all the global and utility styles. Here are examples of libraries we can add in storybook.libraries.yml.

global:
  version: VERSION
  css:
    base:
      dist/css/reset.css: {}
      dist/css/base.css: {}
      dist/css/utilities.css: {}

button:
  css:
    component:
      dist/css/button.css: {}

card:
  css:
    component:
      dist/css/card.css: {}

title:
  css:
    component:
      dist/css/title.css: {}

Fig. 22: Drupal libraries for global styles and component's styles.

/templates

Drupal's templates directory can be created anywhere within the theme. I typically like to create it inside the src directory. Go ahead and create it now.

  • Inside storybook.info.yml, add a new Twig namespace for the templates directory. See example above. Update your path accordingly based on where you created your templates directory.

P.S.: When the Vite project was originally created at the beginning of the post, Vite created files such as App.css, App.js, main.js, and index.html. All these files are in the root of the project and can be deleted. This won't affect any of the work we've done, but Vite will no longer run on its own, which we don't need it to anyway.

In closing

I realize this is a very long post, but there is really no way around it when covering this many topics in a single post. I hope you found the content useful and can apply it to your next Drupal project. There are different ways to do what I've covered here, and I challenge you to find better and more efficient ways. For now, thanks for visiting.

Download the theme

A full version of the Drupal theme built with this post can be downloaded.


Make sure you are using the theme branch from the repo.

Categories: FLOSS Project Planets
