
PreviousNext: Entity theming with Pinto

Planet Drupal - Wed, 2024-10-02 20:22

Learn how to make entity theming a breeze using the Pinto module. If you haven’t already, check out the first part of this series for an introduction to all things Pinto.

by adam.bramley / 3 October 2024

In our last post, we discussed Pinto concepts and how to use Theme objects to encapsulate theming logic in a central place for a component. Next, we’ll apply that knowledge to theming an entity. This will demonstrate the power of Pinto and how it will dramatically improve the velocity of delivering new components. 

One of the hardest things about theming Drupal is outputting markup that matches your design system. 

For example:

  • Removing the “div soup” of Drupal fields
  • Adding custom classes or attributes to field output
  • Wrapping fields in custom tags (e.g. an h2)

While there are plenty of modules to alleviate this, it can often mean you have a mix of YAML configuration for markup, preprocess hooks, overridden templates, etc., to pull everything together. Pinto allows you to easily render an entity while reusing your frontender’s perfect template!

We need to cover a few more concepts and set things up to pull this all together. Once set up, new bundles or entity types can be added with ease.

We'll continue our Card component example from the previous post and cover:

  1. Setting up a bundle class. In this example, we will implement it as a Block Content bundle
  2. Using a custom entity view builder
  3. Theming a Card block using Pinto

Bundle classes

In case you’re not aware, Drupal introduced the concept of Bundle classes almost three years ago. They essentially allow business logic for each bundle to be encapsulated in its own PHP class and benefit from regular PHP concepts such as code sharing via Traits, Interfaces, etc.

At PreviousNext, our go-to for implementing bundle classes is the BCA module, which allows you to define a class as a custom Bundle class via an attribute, removing the need for hook_entity_bundle_info_alter.

Our standard setup on projects is:

  • An Interface per entity type (e.g. MyProjectBlockContentInterface)
  • An abstract base class per entity type (e.g. MyProjectBlockContentBase)
  • A Bundle class per bundle
  • Traits and interfaces for any shared fields/logic (e.g. BodyTrait for all bundles that have a Body field)

My preferred approach is to have a directory structure that matches the entity type located inside the project’s profile (e.g. src/Entity/BlockContent/Card.php). Feel free to set this up however you like. For example, some people may prefer to separate entity types into different modules.

Let’s set up our Card bundle class:

namespace Drupal\my_project_profile\Entity\BlockContent;

use Drupal\bca\Attribute\Bundle;
use Drupal\my_project_profile\Traits\DescriptionTrait;
use Drupal\my_project_profile\Traits\ImageTrait;
use Drupal\my_project_profile\Traits\TitleTrait;

#[Bundle(entityType: self::ENTITY_TYPE_ID, bundle: self::BUNDLE)]
final class Card extends MyProjectBlockContentBase {

  use TitleTrait;
  use DescriptionTrait;
  use ImageTrait;

  public const string BUNDLE = 'card';

}

Here we use the Bundle attribute provided by the BCA module to automatically register this class as the bundle class for the card bundle. We’re using constants here to make it easy to reference this machine name anywhere in our codebase. The ENTITY_TYPE_ID constant comes from the parent interface.

NOTE: I won’t go into too much detail about how the interfaces, base classes, and traits are set up. There are plenty of examples of how you might write these. Check out the change record for some basic examples! 

In our case, each trait is a getter/setter pair for each of our fields required to build our Card component: 

  • Title - a plain text field
  • Description - another plain text field
  • Image - a Media reference field.

Custom entity view builder

EntityViewBuilders are PHP classes that contain logic on how to build (or render) an entity. Entity types can have custom EntityViewBuilders; for example BlockContent has its own defined in core. These are defined in the view_builder handler in an entity type's annotation and can also be overridden by using hook_entity_type_alter.

By default, the view builder class takes all of your configuration in an entity view display (i.e. field formatter settings, view modes, etc.) and renders it. We are using a custom view builder class to bypass all of that and simply return a render array via a Pinto object.

The function that drives this is getBuildDefaults so that’s all we need to override.

For this example, a custom view builder for the block content entity type can be as simple as:

namespace Drupal\my_project_profile\Handler;

use Drupal\Core\Cache\CacheableMetadata;
use Drupal\Core\Entity\EntityInterface;
use Drupal\block_content\BlockContentViewBuilder;
use Drupal\my_project_profile\Entity\Interface\BuildableEntityInterface;

class MyProjectBlockContentViewBuilder extends BlockContentViewBuilder {

  /**
   * {@inheritdoc}
   */
  public function getBuildDefaults(EntityInterface $entity, $view_mode) {
    $build = parent::getBuildDefaults($entity, $view_mode);
    if (!$entity instanceof BuildableEntityInterface || !$entity->shouldBuild($view_mode)) {
      return $build;
    }
    $cache = CacheableMetadata::createFromRenderArray($build);
    $build = $entity->build($view_mode);
    $cache->merge(CacheableMetadata::createFromRenderArray($build))
      ->applyTo($build);
    return $build;
  }

}

Here, we check for a custom BuildableEntityInterface and call a shouldBuild method. If either of those is FALSE, we fall back to Drupal’s default behaviour. Otherwise, we gather cacheable metadata from both the default build and the result of calling the build method, and then return the output. We will cover these in more detail shortly.

Now we just need an alter hook to wire things up:

use Drupal\my_project_profile\Handler\MyProjectBlockContentViewBuilder;

/**
 * Implements hook_entity_type_alter().
 */
function my_project_profile_entity_type_alter(array &$entity_types): void {
  /** @var \Drupal\Core\Entity\ContentEntityType $blockContentDefinition */
  $blockContentDefinition = $entity_types['block_content'];
  // Override view builder class.
  $blockContentDefinition->setViewBuilderClass(MyProjectBlockContentViewBuilder::class);
}

Pro tip: Use the Hux module to do this in a Hooks class.

Now, any BlockContent bundle class that implements BuildableEntityInterface and returns TRUE from its shouldBuild method will completely bypass Drupal’s standard entity rendering and instead just return whatever we want from its build method.

BuildableEntityInterface

namespace Drupal\my_project_profile\Entity\Interface;

/**
 * Interface for entities which override the view builder.
 */
interface BuildableEntityInterface {

  /**
   * Default method to build an entity.
   */
  public function build(string $viewMode): array;

  /**
   * Determine if the entity should be built for the given view mode.
   */
  public function shouldBuild(string $viewMode): bool;

}

This interface can be added to the Bundle class itself or the custom entity type interface we discussed earlier to keep all bundles consistent. This doesn’t just apply to the Block content entity type; you can use this for Paragraphs, Media, or your custom entity types. You’ll just need to override the view builder for each. 

It is generally not recommended to use this for Node since you’re more likely to get value out of something like Layout Builder for rendering nodes. Those layouts would then have block content added to them, which in turn will be rendered via this method.

Back to our Card example. It was extending a custom base class MyProjectBlockContentBase. That class may look something like this:

namespace Drupal\my_project_profile\Entity\BlockContent;

use Drupal\block_content\BlockContentTypeInterface;
use Drupal\block_content\Entity\BlockContent;

abstract class MyProjectBlockContentBase extends BlockContent implements MyProjectBlockContentInterface {

  /**
   * {@inheritdoc}
   */
  public function shouldBuild(string $viewMode): bool {
    return TRUE;
  }

}

Our base class extends core’s BlockContent class and implements our custom interface.

That custom interface can then extend BuildableEntityInterface.

The shouldBuild method is an optional implementation detail, but it is useful when a bundle has multiple view modes that need differing logic. For example, you might have a media_library view mode that you want to keep using Drupal’s standard rendering.

Now, all we need to do is implement the build method on our BlockContent bundle classes.

Let’s look at the Card example:

use Drupal\my_project_ds\ThemeObject\Card as PintoCard;

final class Card extends MyProjectBlockContentBase {

  // Trimmed for easy reading.

  /**
   * {@inheritdoc}
   */
  public function build(string $viewMode): array {
    return PintoCard::createFromCardBlock($this)();
  }

}

Here, we’re simply returning the render array that results from invoking our Card Pinto object (aliased as PintoCard via the use statement).

We have also introduced a factory method createFromCardBlock on the Pinto theme object, which takes the entity and injects its data into the object.

This is what the fully implemented Pinto object would look like:

namespace Drupal\my_project_ds\ThemeObject;

use Drupal\Core\Cache\CacheableDependencyInterface;
use Drupal\my_project_profile\Entity\BlockContent\Card as CardBlock;
use Drupal\my_project_ds\MyProjectDs\MyProjectObjectTrait;
use Pinto\Attribute\ThemeDefinition;

#[ThemeDefinition([
  'variables' => [
    'title' => '',
    'description' => '',
    'image' => '',
  ],
])]
final class Card implements CacheableDependencyInterface {

  use MyProjectObjectTrait;

  private function __construct(
    private readonly string $title,
    private readonly array $image,
    private readonly ?string $description,
  ) {}

  public static function createFromCardBlock(CardBlock $card): static {
    return new static(
      $card->getTitle(),
      $card->getImage(),
      $card->getDescription(),
    );
  }

  protected function build(mixed $build): mixed {
    return $build + [
      '#title' => $this->title,
      '#description' => $this->description,
      '#image' => $this->image,
    ];
  }

}

The build and constructor methods were covered in our previous Pinto post. All that’s new here is the createFromCardBlock method, where we use the getters from the bundle class traits to inject the entity’s data into the constructor.

We also briefly mentioned cacheable metadata in our last post. Since our Pinto object implements CacheableDependencyInterface, we can add that metadata directly to the theme object. For example, you should enhance the bundle class’ build method to add the Image media entity as a cacheable dependency. That way if the media entity is updated, the Card output is invalidated.

/**
 * {@inheritdoc}
 */
public function build(string $viewMode): array {
  $build = PintoCard::createFromCardBlock($this);
  $image = $this->image->entity;
  if ($image) {
    $build->addCacheableDependency($image);
  }
  return $build();
}

Now, we have end-to-end rendering of a Drupal entity using Pinto Theme objects to render templates defined in a Storybook design system.

New bundles are simple to implement. All that’s needed is to click together the fields in the UI to build the content model, add the new Theme object, and wire that together with a bundle class.

I can’t overstate how much this has sped up our backend development. My latest project utilised Pinto from the very beginning, and it has made theming the entire site extremely fast and even… fun! 😀

Categories: FLOSS Project Planets

Dries Buytaert: Solving the Maker-Taker problem

Planet Drupal - Wed, 2024-10-02 13:29

Recently, a public dispute has emerged between WordPress co-founder Matt Mullenweg and hosting company WP Engine. Matt has accused WP Engine of misleading users through its branding and profiting from WordPress without adequately contributing back to the project.

As the Founder and Project Lead of Drupal, another major open source Content Management System (CMS), I hesitated to weigh in on this debate, as this could be perceived as opportunistic. In the end, I decided to share my perspective because this conflict affects the broader open source community.

I've known Matt Mullenweg since the early days, and we've grown both our open source projects and companies alongside each other. With our shared interests and backgrounds, I consider Matt a good friend and can relate uniquely to him. Equally valuable to me are my relationships with WP Engine's leadership, including CEO Heather Brunner and Founder Jason Cohen, both of whom I've met several times. I have deep admiration for what they’ve achieved with WP Engine.

Although this post was prompted by the controversy between Automattic and WP Engine, it is not about them. I don't have insight into their respective contributions to WordPress, and I'm not here to judge. I've made an effort to keep this post as neutral as possible.

Instead, this post is about two key challenges that many open source projects face:

  1. The imbalance between major contributors and those who contribute minimally, and how this harms open source communities.
  2. The lack of an environment that supports the fair coexistence of open source businesses.

These issues could discourage entrepreneurs from starting open source businesses, which could harm the future of open source. My goal is to spark a constructive dialogue on creating a more equitable and sustainable open source ecosystem. By solving these challenges, we can build a stronger future for open source.

This post explores the "Maker-Taker problem" in open source, using Drupal's contribution credit system as a model for fairly incentivizing and recognizing contributors. It suggests how WordPress and other open source projects could benefit from adopting a similar system. While this is unsolicited advice, I believe this approach could help the WordPress community heal, rebuild trust, and advance open source productively for everyone.

The Maker-Taker problem

At the heart of this issue is the Maker-Taker problem, where creators of open source software ("Makers") see their work being used by others, often service providers, who profit from it without contributing back in a meaningful or fair way ("Takers").

Five years ago, I wrote a blog post called Balancing Makers and Takers to scale and sustain Open Source, where I defined these concepts:

The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the open source project. Takers are solely focused on growing their business and let others take care of the open source project they rely on.

In that post, I also explain how Takers can harm open source projects. By not contributing back meaningfully, Takers gain an unfair advantage over Makers who support the open source project. This can discourage Makers from keeping their level of contribution up, as they need to divert resources to stay competitive, which can ultimately hurt the health and growth of the project:

Takers harm open source projects. An aggressive Taker can induce Makers to behave in a more selfish manner and reduce or stop their contributions to open source altogether. Takers can turn Makers into Takers.

Solving the Maker-Taker challenge is one of the biggest remaining hurdles in open source. Successfully addressing this could lead to the creation of tens of thousands of new open source businesses while also improving the sustainability, growth, and competitiveness of open source – making a positive impact on the world.

Drupal's approach: the Contribution Credit System

In Drupal, we've adopted a positive approach to encourage organizations to become Makers rather than relying on punitive measures. Our approach stems from a key insight, also explained in my Makers and Takers blog post: customers are a "common good" for an open source project, not a "public good".

Since a customer can choose only one service provider, that choice directly impacts the health of the open source project. When a customer selects a Maker, part of their revenue is reinvested into the project. However, if they choose a Taker, the project sees little to no benefit. This means that open source projects grow faster when commercial work flows to Makers and away from Takers.

For this reason, it's crucial for an open source community to:

  1. Clearly identify the Makers and Takers within their ecosystem
  2. Actively support and promote their Makers
  3. Educate end users about the importance of choosing Makers

To address these needs and solve the Maker-Taker problem in Drupal, I proposed a contribution credit system 10 years ago. The concept was straightforward: incentivize organizations to contribute to Drupal by giving them tangible recognition for their efforts.

We've since implemented this system in partnership with the Drupal Association, our non-profit organization. The Drupal Association transparently tracks contributions from both individuals and organizations. Each contribution earns credits, and the more you contribute, the more visibility you gain on Drupal.org (visited by millions monthly) and at events like DrupalCon (attended by thousands). You can earn credits by contributing code, submitting case studies, organizing events, writing documentation, financially supporting the Drupal Association, and more.

A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

Drupal's credit system is unique and groundbreaking within the Open Source community. The Drupal contribution credit system serves two key purposes: it helps us identify who our Makers and Takers are, and it allows us to guide end users towards doing business with our Makers.

Here is how we accomplish this:

  • Certain benefits, like event sponsorships or advertising on Drupal.org, are reserved for organizations with a minimum number of credits.
  • The Drupal marketplace only lists Makers, ranking them by their contributions.
  • Top contributors appear first, and organizations that stop contributing gradually drop in rankings or are removed.
  • We encourage end users to require open source contributions from their vendors. Drupal users like Pfizer and the State of Georgia only allow Makers to apply in their vendor selection process.
A slide from my recent DrupalCon Barcelona State of Drupal keynote showcasing key contributors to Drupal Starshot. It illustrates how we recognize and celebrate Makers in our community, encouraging active participation in the project.

Governance and fairness

Fairness in the open source credit system requires oversight by an independent, neutral party. This entity must objectively assess contributions to maintain equity.

In the Drupal ecosystem, the Drupal Association fulfills this crucial role. The Drupal Association operates independently, free from control by any single company within the Drupal ecosystem. Some of the Drupal Association's responsibilities include:

  1. Organizing DrupalCons
  2. Managing Drupal.org
  3. Overseeing the contribution tracking and credit system

It's important to note that while I serve on the Drupal Association's Board, I am just one of 12 members and have not held the Chair position for several years. My company, Acquia, receives no preferential treatment in the credit system; the visibility of any organization, including Acquia, is solely determined by its contributions over the preceding twelve months. This structure ensures fairness and encourages active participation from all members of the Drupal community.

Drupal's credit system certainly isn't perfect. It is hard to accurately track and fairly value diverse contributions like code, documentation, mentorship, marketing, event organization, etc. Some organizations have tried to game the system, while others question whether the cost-benefit is worthwhile.

As a result, Drupal's credit system has evolved significantly since I first proposed it ten years ago. The Drupal Association continually works to improve the system, aiming for a credit structure that genuinely drives positive behavior.

Recommendations for WordPress

WordPress has already taken steps to address the Maker-Taker challenge through initiatives like the Five for the Future program, which encourages organizations to contribute 5% of their resources to WordPress development.

Building on this foundation, I believe WordPress could benefit from adopting a contribution credit system similar to Drupal's. This system would likely require the following steps to be taken:

  1. Expanding the current governance model to be more distributed.
  2. Providing clear definitions of Makers and Takers within the ecosystem.
  3. Implementing a fair and objective system for tracking and valuing various types of contributions.
  4. Implementing a structured system of rewards for Makers who meet specific contribution thresholds, such as priority placement in the WordPress marketplace, increased visibility on WordPress.org, opportunities to exhibit at WordPress events, or access to key services.

This approach addresses both key challenges highlighted in the introduction: it balances contributions by incentivizing major involvement, and it creates an environment where open source businesses of all sizes can compete fairly based on their contributions to the community.

Conclusion

Addressing the Maker-Taker challenge is essential for the long-term sustainability of open source projects. Drupal's approach may provide a constructive solution not just for WordPress, but for other communities facing similar issues.

By transparently rewarding contributions and fostering collaboration, we can build healthier open source ecosystems. A credit system can help make open source more sustainable and fair, driving growth, competitiveness, and potentially creating thousands of new open source businesses.

As Drupal continues to improve its credit system, we understand that no solution is perfect. We're eager to learn from the successes and challenges of other open source projects and are open to ideas and collaboration.

Categories: FLOSS Project Planets

The Open Source AI Definition RC1 is available for comments

Open Source Initiative - Wed, 2024-10-02 13:04

A little over a month after v.0.0.9, we have a Release Candidate version of the Open Source AI Definition. This was reached with lots of community feedback: 5 town hall meetings, several comments on the forum and on the draft, and in person conversations at events in Austria, China, India, Ghana, and Argentina.

There are three relevant changes to the part of the definition pertaining to the “preferred form to make modifications to a machine learning system.”

The feature that will draw most attention is the new language of Data Information. It clarifies that all the training data needs to be shared and disclosed. The updated text comes from many conversations with several individuals who engaged passionately with the design process, on the forum, in person and on hackmd. These conversations helped describe four types of data: open, public, obtainable and unshareable data, well described in the FAQ. The legal requirements are different for each. All are required to be shared in the form that the law allows them to be shared. 

Two new features are equally important. RC1 clarifies that Code must be complete, enough for downstream recipients to understand how the training was done. This was done to reinforce the importance of the training, for transparency, security, and other practical reasons. Training is where innovation is happening at the moment and that’s why you don’t see corporations releasing their training and data processing code. We believe, given the current status of knowledge and practice, that this is required to meaningfully fork (study and modify) AI systems.

Last, there is new text that is meant to explicitly acknowledge that it is admissible to require copyleft-like terms for any of the Code, Data Information and Parameters, individually or as bundled combinations. A demonstrative scenario is a consortium owning rights to training code and a dataset deciding to distribute the bundle code+data with legal terms that tie the two together, with copyleft-like provisions. This sort of legal document doesn’t exist yet but the scenario is plausible enough that it deserves consideration. This is another area that OSI will monitor carefully as we start reviewing these legal terms with the community.

A note about science and reproducibility

The aim of Open Source is not and has never been to enable reproducible software. The same is true for Open Source AI: reproducibility of AI science is not the objective. Open Source’s role is merely not to be an impediment to reproducibility. In other words, one can always add more requirements on top of Open Source, just like the Reproducible Builds effort does.

Open Source means giving anyone the ability to meaningfully “fork” (study and modify) a system, without requiring additional permissions, to make it more useful for themselves and also for everyone. This is why OSD #2 requires that the “source code” must be provided in the preferred form for making modifications. This way everyone has the same rights and ability to improve the system as the original developers, starting a virtuous cycle of innovation. Forking in the machine learning context has the same meaning as with software: having the ability and the rights to build a system that behaves differently than its original status. Things that a fork may achieve are: fixing security issues, improving behavior, removing bias. All these are possible thanks to the requirements of the Open Source AI Definition.

What’s coming next

With the release candidate cycle starting today, the drafting process will shift focus: no new features, only bug fixes. We’ll watch for new issues raised, watching for major flaws that may require significant rewrites to the text. The main focus will be on the accompanying documentation, the Checklist and the FAQ. We also realized that in our zeal to solve the problem of data that needs to be provided but cannot be supplied by the model owner for good reasons, we had failed to make clear the basic requirement that “if you can share the data you must.” We have already made adjustments in RC1 and will be seeking views on how to better express this in an RC2. 

In the next weeks until the 1.0 release of October 28, we’ll focus on:

  • Getting more endorsers to the Definition
  • Continuing to collect feedback on hackmd and forum, focusing on new, unseen-before concerns
  • Preparing the artifacts necessary for the launch at All Things Open
  • Iterating on the Checklist and FAQ, preparing them for deployment.

Link to the Open Source AI Definition Release Candidate 1

Categories: FLOSS Research

Drupal blog: State of Drupal presentation (September 2024)

Planet Drupal - Wed, 2024-10-02 12:30

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

DrupalCon Barcelona 2024 Driesnote presentation

Approximately 1,100 Drupal enthusiasts gathered in Barcelona, Spain, last week for DrupalCon Europe. As per tradition, I delivered my State of Drupal keynote, often referred to as the "DriesNote".

If you missed it, you can watch the video or download my slides (177 MB).

In my keynote, I gave an update on Drupal Starshot, an ambitious initiative we launched at DrupalCon Portland 2024. Originally called Drupal Starshot, inspired by President Kennedy's Moonshot challenge, the product is now officially named Drupal CMS.

The goal of Drupal CMS is to set the standard for no-code website building. It will allow non-technical users, like marketers, content creators, and site builders, to create digital experiences with ease, without compromising on the power and flexibility that Drupal is known for.

A four-month progress report

A preview of Drupal.org's front page with the updated Drupal brand and content.

While Kennedy gave NASA eight years, I set a goal to deliver the first version of Drupal CMS in just eight months. It's been four months since DrupalCon Portland, which means we're halfway through.

So in my keynote, I shared our progress and gave a 35-minute demo of what we've built so far. The demo highlights how a fictional marketer, Sarah, can build a powerful website in just hours with minimal help from a developer. Along her journey, I showcased the following key innovations:

  1. A new brand for a new market: A brand refresh of Drupal.org, designed to appeal to both marketers and developers. The first pages are ready and available for preview at new.drupal.org, with more pages launching in the coming months.
  2. A trial experience: A trial experience that lets you try Drupal CMS with a single click, eliminating long-standing adoption barriers for new users. Built with WebAssembly, it runs entirely in the browser – no servers to install or manage.
  3. An improved installer: An installer that lets users install recipes – pre-built features that combine modules, configuration, and default content for common website needs. Recipes bundle years of expertise into repeatable, shareable solutions.
  4. Events recipe: A simple events website that used to take an experienced developer a day to build can now be created in just a few clicks by non-developers.
  5. Project Browser support for recipes: Users can now browse the Drupal CMS recipes in the Project Browser, and install them in seconds.
  6. First page of documentation: New documentation created specifically for end users. Clear, effective documentation is key to Drupal CMS's success, so we began by writing a single page as a model for the quality and style we aim to achieve.
  7. AI for site building: AI agents capable of creating content types, configuring fields, building Views, forms, and more. These agents will transform how people build and manage websites with Drupal.
  8. Responsible AI policy: To ensure responsible AI development, we've created a Responsible AI policy. I'll share more details in an upcoming blog, but the policy focuses on four key principles: human-in-the-loop, transparency, swappable large language models (LLMs), and clear guidance.
  9. SEO Recipe: Combines and configures all the essential Drupal modules to optimize a Drupal site for search engines.
  10. 14 recipes in development: In addition to the Events and SEO recipes, 12 more are in development with the help of our Drupal Certified Partners. Each Drupal CMS recipe addresses a common marketing use case outlined in our product strategy. We showcased both the process and progress during the Initiative Lead Keynote for some of the tracks. After DrupalCon, we'll begin developing even more recipes and invite additional contributors to join the effort.
  11. AI-assisted content migration: AI will crawl your source website and handle complex tasks like mapping unstructured HTML to structured Drupal content types in your destination site, making migrations faster and easier. This could be a game-changer for website migrations.
  12. Experience Builder: An early preview of a brand new, out-of-the-box tool for content creators and designers, offering layout design, page building, basic theming and content editing tools. This is the first time I've showcased our progress on stage at a DrupalCon.
  13. Future-proof admin UI with React: Our strategy for modernizing Drupal's backend UI with React.
  14. The "Adopt-a-Document" initiative: A strategy and funding model for creating comprehensive documentation for Drupal CMS. If successful, I'm hopeful we can expand this model to other areas of Drupal. For more details, please read the announcement on drupal.org.
  15. Global Documentation Lead: The Drupal Association's commitment to hire a dedicated Documentation Lead, responsible for managing all aspects of Drupal's documentation, beyond just Drupal CMS.

The feedback on my presentation has been incredible, both online and in-person. The room was buzzing with energy and positivity! I highly recommend watching the recording.

Attendees were especially excited about the AI capabilities, Experience Builder, and recipes. I share their enthusiasm as these capabilities are transformative for Drupal.

Many of these features are designed with non-developers in mind. Our goal is to broaden Drupal's reach beyond its traditional user base and reach more people than ever before.

Release schedule

Our launch plan targets Drupal CMS's release on Drupal's upcoming birthday: January 15, 2025. It's also just a couple of weeks after the Drupal 7 End of Life, marking the end of one era and the beginning of another.

The next milestone is DrupalCon Singapore, taking place from December 9–11, 2024, less than 3 months away. We hope to have a release candidate ready by then.

Now that we're back from DrupalCon and have key milestone dates set, there is a lot to coordinate and plan in the coming weeks, so stay tuned for updates.

Call for contribution

Ambitious? Yes. But achievable if we work together. That's why I'm calling on all of you to get involved with Drupal CMS. Whether it's building recipes, enhancing the Experience Builder, creating AI agents, writing tests, improving documentation, or conducting usability testing – there are countless ways to contribute and make a difference. If you're ready to get involved, visit https://drupal.org/starshot to learn how to get started.

Thank you

This effort has involved so many people that I can't name them all, but I want to give a huge thank you to the Drupal CMS Leadership Team, who I've been working with closely every week: Cristina Chumillas (Lullabot), Gábor Hojtsy (Acquia), Lenny Moskalyk (Drupal Association), Pamela Barone (Technocrat), Suzanne Dergacheva (Evolving Web), and Tim Plunkett (Acquia).

A special shoutout goes to the demo team we assembled for my presentation: Adam Hoenich (Acquia), Amber Matz (Drupalize.me), Ash Sullivan (Acquia), Jamie Abrahams (FreelyGive), Jim Birch (Kanopi), Joe Shindelar (Drupalize.me), John Doyle (Digital Polygon), Lauri Timmanee (Acquia), Marcus Johansson (FreelyGive), Martin Anderson-Clutz (Acquia), Matt Glaman (Acquia), Matthew Grasmick (Acquia), Michael Donovan (Acquia), Tiffany Farriss (Palantir.net), and Tim Lehnen (Drupal Association).

I also want to thank the Drupal CMS track leads and contributors for their development work. Additionally, I'd like to recognize the Drupal Core Committers, Drupal Association staff, Drupal Association Board of Directors, and Certified Drupal Partners for continued support and leadership. There are so many people and organizations whose contributions deserve recognition that I can't list everyone individually, partly to avoid the risk of overlooking anyone. Please know your efforts are deeply appreciated.

Lastly, thank you to everyone who helped make DrupalCon Barcelona a success. It was excellent!

Categories: FLOSS Project Planets

FSF Blogs: September GNU spotlight with Amin Bandali

GNU Planet! - Wed, 2024-10-02 12:00
Fourteen new GNU releases in the last month (as of September 30, 2024):
Categories: FLOSS Project Planets


ComputerMinds.co.uk: Automatically generate forms from config schema

Planet Drupal - Wed, 2024-10-02 11:08

Drupal's form API has been brilliant for many years. Still, recently I found myself wondering why I needed to build a configuration form if I already had a schema for my config. Defining a schema facilitates API-first validation (including some pretty smart constraints), specific typing (e.g. actual booleans or integers instead of '0' or '1' strings), and even translation in Drupal. 

That last part got me thinking: if Drupal automatically provides translation forms for typed configuration, why must I build a form? I started diving into the code and found config_translation_config_schema_info_alter(), which maps certain config data types to element classes. The ConfigTranslationFormBase::buildForm() method fetches the schema for each config property from the 'config.typed' service (\Drupal\Core\Config\TypedConfigManager) before building the appropriate elements. So Drupal core automatically provides this translation form - notice the long textarea for the 'body' property:

Screenshot of a config translation form from Drupal core

I had built a block plugin that needed some regex-based validation on a couple of its configuration properties. Validation constraints seemed like a natural fit for these, as an inherent part of the property definitions, rather than just on the form level. Drupal has had good support for validation constraints on configuration since version 10.2. This allows forms to be simpler, and config to be fully validatable, even outside the context of a form (e.g. for setting via APIs or config synchronisation). So I defined my config schema like this:

block.settings.mymodule_myblock:
  type: block_settings
  label: 'MyBlock block settings'
  mapping:
    svcid:
      type: string
      label: 'Service ID'
      constraints:
        Regex:
          pattern: '/^[a-zA-Z0-9_\-]+$/'
          message: "The %value can only contain simple letters, numbers, underscores or hyphens."
      default: 'abcde'
      locked: true
    envid:
      type: string
      label: 'Environment ID'
      constraints:
        Regex:
          pattern: '/^[a-zA-Z0-9_\-]+$/'
          message: "The %value can only contain simple letters, numbers, underscores or hyphens."
      default: 'x-j9WsahRe_1An51DhErab-C'

Then I set myself the challenge of building a configuration form 'automatically' from this schema - without using core's config_translation module at all, as this was for a monolingual site. 

I only had two string properties, which meant two textfields, but I wrote the code to use form elements that could be appropriate for other types of property that might get added in the future. The #title of each element could come directly from each label in the schema. (Why do we usually set these in both the schema and form element?!) I added custom default and locked properties to the schema to help encapsulate everything I considered 'inherent' to each part of the config in one place. This meant the configuration form for my block could be fairly simple:

public function blockForm($form, FormStateInterface $form_state) {
  // Each config property will be returned with its schema from $this->getConfigurables().
  foreach ($this->getConfigurables() as $key => $schema_info) {
    $form[$key] = [
      '#type' => match ($schema_info['type']) {
        'string', 'label' => 'textfield',
        'text' => 'textarea',
        'boolean' => 'checkbox',
        'integer', 'float' => 'number',
        'email' => 'email',
      },
      '#title' => $schema_info['label'],
      '#default_value' => $this->configuration[$key],
      '#required' => empty($schema_info['nullable']),
      '#disabled' => $schema_info['locked'] ?? FALSE,
    ];
  }
  return $form;
}

Hopefully that gives an idea of how simple a config form could be - and this could really be reduced further by refactoring it into a generic trait. The code in core's config_translation module for mapping the type of each property to an element type could be much more useful than the fairly naïve match statement above, if it was refactored out to be available even to monolingual sites.

You can explore my full code at https://gist.github.com/anotherjames/bcb7ba55ec56359240b26d322fe2f5a5. That includes the getConfigurables() method which pulls the schema from the TypedConfigManager.

You'll see that I went a little further and picked up the regex constraints for each config property, for use in #pattern form API properties. Via the HTML5 pattern attribute, this provides quick feedback to admins about which characters are allowed.

Not all configuration constraints can be enforced at the form level. It's arguable that since the Regex constraint and the HTML pattern attribute support slightly different regular expression features, this particular one shouldn't be included in a generic trait. Then again, the Choice constraint could be especially useful to include, as it could be used to populate #options for select, radios, or checkboxes elements. We've started using backed Enums with labels for fixed sets of options. Can we wire those up to Choice constraints, I wonder?

Whereas my example was for a configurable plugin's form (which I don't believe can use #config_target), Joachim Noreiko (joachim) has submitted a feature request to Drupal core for forms extending ConfigFormBase to get automatically built from schema. This idea of generating form elements from config schema is still in its infancy, so its limits and benefits need to be explored further. Please let us know in a comment here, or in Joachim's feature request, if you have done anything similar, or have ideas or concerns to point out!

Categories: FLOSS Project Planets

Real Python: A Guide to Modern Python String Formatting Tools

Planet Python - Wed, 2024-10-02 10:00

When working with strings in Python, you may need to interpolate values into your string and format these values to create new strings dynamically. In modern Python, you have f-strings and the .format() method to approach the tasks of interpolating and formatting strings.

In this tutorial, you’ll learn how to:

  • Use f-strings and the .format() method for string interpolation
  • Format the interpolated values using replacement fields
  • Create custom format specifiers to format your strings

To get the most out of this tutorial, you should know the basics of Python programming and the string data type.

Get Your Code: Click here to download the free sample code that shows you how to use modern string formatting tools in Python.

Take the Quiz: Test your knowledge with our interactive “A Guide to Modern Python String Formatting Tools” quiz. You’ll receive a score upon completion to help you track your learning progress:

You can take this quiz to test your understanding of modern tools for string formatting in Python. These tools include f-strings and the .format() method.

Getting to Know String Interpolation and Formatting in Python

Python has developed different string interpolation and formatting tools over the years. If you’re getting started with Python and looking for a quick way to format your strings, then you should use Python’s f-strings.

Note: To learn more about string interpolation, check out the String Interpolation in Python: Exploring Available Tools tutorial.

If you need to work with older versions of Python or legacy code, then it’s a good idea to learn about the other formatting tools, such as the .format() method.

In this tutorial, you’ll learn how to format your strings using f-strings and the .format() method. You’ll start with f-strings to kick things off, which are quite popular in modern Python code.

Using F-Strings for String Interpolation

Python has a string formatting tool called f-strings, which stands for formatted string literals. F-strings are string literals that you can create by prepending an f or F to the literal. They allow you to do string interpolation and formatting by inserting variables or expressions directly into the literal.

Creating F-String Literals

Here you’ll take a look at how you can create an f-string by prepending the string literal with an f or F:

>>> f"Hello, Pythonista!"
'Hello, Pythonista!'

>>> F"Hello, Pythonista!"
'Hello, Pythonista!'

Using either f or F has the same effect. However, it’s a more common practice to use a lowercase f to create f-strings.

Just like with regular string literals, you can use single, double, or triple quotes to define an f-string:

>>> f'Single-line f-string with single quotes'
'Single-line f-string with single quotes'

>>> f"Single-line f-string with double quotes"
'Single-line f-string with double quotes'

>>> f'''Multiline triple-quoted f-string
... with single quotes'''
'Multiline triple-quoted f-string\nwith single quotes'

>>> f"""Multiline triple-quoted f-string
... with double quotes"""
'Multiline triple-quoted f-string\nwith double quotes'

Up to this point, your f-strings look pretty much the same as regular strings. However, if you create f-strings like those in the examples above, you’ll get complaints from your code linter if you have one.

The remarkable feature of f-strings is that you can embed Python variables or expressions directly inside them. To insert the variable or expression, you must use a replacement field, which you create using a pair of curly braces.

Interpolating Variables Into F-Strings

The variable that you insert in a replacement field is evaluated and converted to its string representation. The result is interpolated into the original string at the replacement field’s location:

>>> site = "Real Python"

>>> f"Welcome to {site}!"
'Welcome to Real Python!'

In this example, you’ve interpolated the site variable into your string. Note that Python treats anything outside the curly braces as a regular string.

Read the full article at https://realpython.com/python-formatted-output/ »


Categories: FLOSS Project Planets

The Drop Times: SystemSeed Explores Human-Centered Design at DrupalCon Barcelona 2024

Planet Drupal - Wed, 2024-10-02 09:50
At DrupalCon Barcelona 2024, Elise West of SystemSeed presented a session on Human-Centered Design (HCD), explaining its growing relevance in Drupal projects. The session highlighted HCD’s role in aligning development with user needs, making it essential for project managers, developers, and product leads. More insights from SystemSeed's experience will follow.
Categories: FLOSS Project Planets

Kushal Das: Thank you Gnome Nautilus scripts

Planet Python - Wed, 2024-10-02 09:33

As I upload photos to various services, I generally resize them as required based on portrait or landscape mode. I used to do that for all the photos in a directory and then pick which ones to use. But I wanted to do it selectively: open the photos in the GNOME Nautilus (Files) application, right-click, and resize only the ones I want.

This week I noticed that I can do that with scripts. Those can be in any language; the selected files will be passed as command-line arguments, or their full paths can be read from the environment variable NAUTILUS_SCRIPT_SELECTED_FILE_PATHS, joined by newline characters.

To add any script to the right-click menu, you just need to place it in the ~/.local/share/nautilus/scripts/ directory. It will then show up in the Scripts section of the right-click menu.

Below is the script I am using to reduce image sizes:

#!/usr/bin/env python3
import os
import sys
import subprocess

from PIL import Image

# paths = os.environ.get("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS", "").split("\n")
paths = sys.argv[1:]

for fpath in paths:
    if fpath.endswith(".jpg") or fpath.endswith(".jpeg"):
        # Assume that is a photo
        try:
            img = Image.open(fpath)
            # basename = os.path.basename(fpath)
            basename = fpath
            name, extension = os.path.splitext(basename)
            new_name = f"{name}_ac{extension}"
            w, h = img.size
            # If w > h then it is a landscape photo
            if w > h:
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "1024x686", new_name])
            else:
                # It is a portrait photo
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "686x1024", new_name])
        except:
            # Don't care, continue
            pass

You can see it in action (I selected the photos and right clicked, but the recording missed that part):

Categories: FLOSS Project Planets

Real Python: Quiz: A Guide to Modern Python String Formatting Tools

Planet Python - Wed, 2024-10-02 08:00

Test your understanding of Python’s tools for string formatting, including f-strings and the .format() method.

Take this quiz after reading our A Guide to Modern Python String Formatting Tools tutorial.


Categories: FLOSS Project Planets

Use `ripgrep-all` / `ripgrep` to improve search in Dolphin

Planet KDE - Wed, 2024-10-02 06:30

In the next release of Dolphin, the search backend (when Baloo indexing is disabled) will be faster and support more file types, by using external projects ripgrep-all and ripgrep to do the search. Merge Request

What are ripgrep and ripgrep-all?

ripgrep is a fast text search tool that uses various optimizations including multi-threading (compared to grep and Dolphin's internal search backend which are single-threaded).

ripgrep-all, quote its homepage, is "ripgrep, but also search in PDFs, E-Books, Office documents, zip, tar.gz, etc.".

How to enable it

Install the ripgrep-all package from your distribution's package manager (which should also install ripgrep). Then Dolphin will automatically use it for content search, when Baloo is disabled.

If your distribution doesn't provide ripgrep-all, you can also try installing ripgrep. Then Dolphin will use it for content search, but without the additional file type support.

Limitations
  • It only works in content search mode, and when Baloo content indexing is disabled. File name search still uses the internal backend.

  • It only works in local directories. When searching in remote directories (e.g. Samba, ssh), the internal search backend is used. Although we can run ripgrep in remote directories through the kio-fuse plugin, testing shows it can be 3 times slower than the internal backend, so it's not used.

  • It doesn't work on Windows. Although both ripgrep and ripgrep-all have releases for Windows, I personally don't have Windows experience to integrate them. Merge request to enable it on Windows is welcome.

Customization

You can change the command line with which Dolphin calls the external tools. Copy /usr/share/kio_filenamesearch/kio-filenamesearch-grep to ~/.local/share/kio_filenamesearch/, and modify the script there. The script contains comments on the calling convention between Dolphin and it, and explanations on the command line options.

One option you might want to remove is -j2. It limits the number of threads ripgrep (and ripgrep-all) uses to 2. Using more threads can make the search much slower in hard disks (HDD). I tried to detect HDD automatically, but it's not reliable, so I went with a conservative default. It's still faster than the internal backend, but if you have an SSD, you can remove the option to unlock the full speed of ripgrep.

You can also use a different external tool, e.g. The Silver Searcher (ag), or a full-text search engine other than Baloo. Just make sure it outputs paths separated by NUL; usually a -0 option will do that.

More customization

You can even modify the script so that you can specify different external tools in the search string. For example, you can insert the following code before the original code that calls ripgrep-all:

...(line 1-33)
    --run)
        if test "$2" = "@git"; then
            exec sh -c 'git status -s -z|cut -c 4- -z'
        fi
        ...

Then if you search for "@git" in a git directory, it will show you changed files.

Future works

There are quite a lot to improve in Dolphin's search (when not using Baloo). The content search should also search in file names. The search string is currently interpreted as a regular expression, but a fuzzy match or shell globbing seems to be a more sensible default (probably with regexp as an option). Hopefully future works will address these issues.

Categories: FLOSS Project Planets

LN Webworks: LN Webworks Amazing Experience at DrupalCon Barcelona 2024

Planet Drupal - Wed, 2024-10-02 03:56

As a Top-rated Drupal Development Company, attending DrupalCon Barcelona for the first time exceeded all of our expectations. The energy of the event was incredible, and it gave us the opportunity to connect with so many people in person. One of the standout moments was the inspiring StarShot initiative, whose marketing strategy makes a compelling case for businesses to consider Drupal as a solution.

Starshot / Drupal CMS Product Strategy

No-code website building, built on top of Drupal core itself. So it will be able to compete with other no-code solutions like Wix, Squarespace, and Shopify while still maintaining its open-source nature, where you retain full control to customize and override things on your own.

Categories: FLOSS Project Planets

Python Software Foundation: Python 3.13 and the Latest Trends: A Developer's Guide to 2025 - Live Stream Event

Planet Python - Wed, 2024-10-02 03:30

Join Tania Allard, PSF Board Member, and Łukasz Langa, CPython Developer-in-Residence, for ‘Python 3.13 and the Latest Trends: A Developer’s Guide to 2025’, a live stream event hosted by Paul Everitt from JetBrains. Thanks to JetBrains for partnering with us on the Python Developers Survey and this event to highlight the current state of Python!

The session will take place tomorrow, October 3, at 5:00 pm CEST (11:00 am EDT). Tania and Łukasz will be discussing the exciting new features in Python 3.13, plans for Python 3.15 and current Python trends gathered from the 2023 Annual Developers Survey. Don't miss this chance to hear directly from the experts behind Python’s development!

Watch the live stream event on YouTube

Don’t forget to enable YouTube notifications for the stream and mark your calendar.

Categories: FLOSS Project Planets

Chris Rose: uv, direnv, and simple .envrc files

Planet Python - Wed, 2024-10-02 03:05

I have adopted uv for a lot of Python development. I'm also a heavy user of direnv, which I like as a tool for setting up project-specific environments.

Much like Hynek describes, I've found uv sync to be fast enough to put into the chdir path for new directories. Here's how I'm doing it.

Direnv Libraries

First, it turns out you can pretty easily define custom direnv functions like the built-in ones (layout python, etc...). You do this by adding functions to ~/.config/direnv/direnvrc or in ~/.config/direnv/lib/ as shell scripts. I use this extensively to make my .envrc files easier to maintain and smaller. Now that I'm using uv here is my default for python:

function use_standard-python() {
    source_up_if_exists
    dotenv_if_exists
    source_env_if_exists .envrc.local
    use venv
    uv sync
}

What does that even mean?

Let me explain each of these commands and why they are there:

  • source_up_if_exists -- this direnv stdlib function is here because I often group my projects into directories with common configuration. For example, when working on Chicon 8, I had a top level .envrc that set up the AWS configuration to support deploying Wellington and the Chicon 8 website. This searches up til it finds a .envrc in a higher directory, and uses that. source_up is the noisier, less-adaptable sibling.

  • dotenv_if_exists -- this loads .env from the current working directory. 12-factor apps often have environment-driven configuration, and docker compose uses them relatively seamlessly as well. Doing this makes it easier to run commands from my shell that behave like my development environment.

  • source_env_if_exists .envrc.local -- sometimes you need more complex functionality in a project than just environment variables. Having this here lets me use .envrc.local for that. This comes after .env because sometimes you want to change those values.

  • use venv -- this is a function that activates the project .venv (creating it if needed); I'm old and set in my ways, and I prefer . .venv/bin/activate.fish in my shell to the more newfangled "prefix it with a runner" mode.

  • uv sync -- this is a super fast, "install my development and main dependencies" command. This was way, way too slow with pip, pip-tools, poetry, pdm, or hatch, but with uv, I don't mind having this in my .envrc

Using it in a sentence

With this set up in direnv's configuration, all I need in my .envrc file is this:

use standard-python

I've been using this pattern for a while now; it lets me upgrade how I do default Python setups, with project specific settings, easily.

Categories: FLOSS Project Planets

PyCharm: Prompt AI Directly in the Editor

Planet Python - Wed, 2024-10-02 02:19

With PyCharm, you now have the support of AI Assistant at your fingertips. You can interact with it right where you do most of your work – in the editor. 

Stuck with an error in your code? Need to add documentation or tests? Just start typing your request on a new line in the editor, just as if you were typing in the AI Assistant chat window. PyCharm will automatically recognize your natural language request and generate a response.  

PyCharm leaves a purple mark in the gutter next to lines changed by AI Assistant so you can easily see what has been updated. 

If you don’t like the initial suggestion, you can generate a new one by pressing Tab. You can also adjust the initial input by clicking on the purple block in the gutter or simply pressing Ctrl+/ or /.

Want to get assistance with a specific argument? You can narrow the context that AI Assistant uses for its response as much as you want. Just put the caret in the relevant context, type the $ or ? symbol, and start writing. PyCharm will recognize your prompt and take the current context into account for its suggestions. 

The new inline AI assistance works for Python, JavaScript, TypeScript, JSON, and YAML file formats, while the option to narrow the context works only for Python so far.

This feature is available to all AI Assistant subscribers in the second PyCharm 2024.3 EAP build. You can get a free trial version of AI Assistant straight in the IDE: to enable AI Assistant, open a project in PyCharm, click the AI icon on the right-hand toolbar, and follow the instructions that appear.

Download PyCharm 2024.3 EAP

Categories: FLOSS Project Planets

Tryton News: Security Release for issue #93

Planet Python - Wed, 2024-10-02 02:00

Cédric Krier has found that python-sql does not escape non-Expression values for unary operators (such as And and Or), which makes any system exposing those vulnerable to an SQL injection attack.

Impact

CVSS v3.0 Base Score: 9.1

  • Attack Vector: Network
  • Attack Complexity: Low
  • Privileges Required: Low
  • User Interaction: None
  • Scope: Changed
  • Confidentiality: High
  • Integrity: Low
  • Availability: Low
Workaround

There is no known workaround.

Resolution

All affected users should upgrade python-sql to the latest version.

Affected versions: <= 1.5.1
Non-affected versions: >= 1.5.2

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/python-sql with the confidential checkbox checked.

2 posts - 2 participants

Read full topic

Categories: FLOSS Project Planets

William Minchin: u202410012332

Planet Python - Wed, 2024-10-02 01:32

Microblogging v1.3.0 for Pelican released! Posts should now sort as expected. Thanks @ashwinvis. on PyPI

Categories: FLOSS Project Planets

Tales from the Akademy

Planet KDE - Tue, 2024-10-01 19:15

This being my first post in the KDE sphere (or any other sphere), it was supposed to be just a first touch of contact with the world of blogging. But since time goes by in a blast, let's just summarize how I lived my third in-person Akademy, Akademy 2024.

Würzburg. Back to Germany

This year's Akademy happily got me back to Germany, which has become like a second home and a place I like to visit at least once a year (yeah, I missed the Dürüms).

I had bought the D-Ticket, which allowed me to board any public transport imaginable (well, except for ICE trains, but I haven't heard much good about them either) for a bare 49€. It brought back some memories of my time as a student in Dresden, enjoying the same perks with the Semesterticket, just on a regional scope. Thanks to Itinerary and its route planner I was able to make it to Würzburg even an hour earlier than anticipated (minus a 20-minute train delay, which I've heard is currently quite a good metric).

After having my hotel booking cancelled at the last minute due to needed repair works, I had booked an apartment because the hotel prices were a bit over the top. I was really lucky to find that just around the corner I had a bus stop to go to the venue, and also Andy Betts and Richard Wagner as illustrious neighbors. And one of the best-rated Döner places in the city. Very lucky indeed!

The Talks

It's hard to make a better summary of the talk days than our very own Promo Team's report, which I agree with on many points.

What I particularly felt in this Akademy's talks was a strong focus on the future. Some words were thematically present throughout most of the talks: story, product, and impact.

The story we as the KDE community want to tell is not just a bunch of code packages that live in an ethereal world to be grabbed by a few enthusiasts or distros, but a full, useful product for end users, an inviting environment for fellow developers, and a reliable asset for manufacturers on their very concrete hardware.

There were many reveals and surprises towards achieving this goal. Projects that had been incubating for some time were made public at this Akademy: the KDE OS, codename Banana, by Harald Sitter, the Next Project and design system by Andy Betts, and the Union theme engine by Arjen Hiemstra.

Some talks addressed the social and environmental impact of the technology we create. The one that especially got to me was the small story Nicole Teal told in her lightning talk. Hearing how a group of kids gave many "older" PCs a new life by installing KDE, while learning new skills and building community, felt really true and was a spur to continue contributing to FOSS. It really matters.

From the technical talks, I enjoyed "What is color, anyway" by Xaver Hugl, and unfortunately had to miss some others (Python and Rust integration with Qt). This is the hardest part, where you cannot just .clone() yourself and attend two talks at the same time. Maybe I would have learnt to do that if I had attended the Rust presentation? (Yeah, sorry, bad Rust dad joke.)

It was also on Sunday that Aniqa and Carl took me by surprise to (against my will) happily answer a small video interview. Just joking, it was fun. I'm just preemptively preparing myself for when the final video comes out and I can see what words I actually babbled :D.

The BoFs

After a very intense weekend of talks, plus the social event and post-event on Sunday, I took Monday morning off to get some rest. In the evening, Andy and Manuel showed me a bit more about the design system they're using and the icon exporter Manuel has been developing to streamline the process between designers and the final product. Amazing stuff!

I also started a draft of this very blog post, which wasn't very successful, as you can imagine from its final release date.

The big BoF day for me was mostly Tuesday, where I focused on the Plasma and VDG ones, though I missed those on KWin's roadmap and window tiling due to competing schedules. During the Plasma BoF, we could experience in real time the step-by-step process of releasing Plasma 6.1.5, thanks to Jonathan, our Plasma release manager.

Finally, on Thursday I got to enjoy the brand new Sticker BoF. Despite not having any stickers of my own to share, and being mostly minimalistic when it comes to decoration, I had a great time and ended up stickering my laptop up and about, including a very limited unit of the Sticker BoF's own sticker. Thanks Kieryn for organizing it. Of course, Carl won the sticker award 😄.

On a more personal level, I regret a bit not having participated more in some of the BoFs. Most of my KDE contributions this summer have been improvements to very niche aspects: the Weather widget and the tool to preview keyboard layouts (tastenbrett), so I felt a bit "out of the loop" on the more general and pressing matters in Plasma.

The Socials

Where Akademy really shines is in putting together some hundreds of amazing people with common interests, who in the end happen to make the best software products and computing ecosystem out there.

It is a real warp of space and time. At the Welcome Event I got to meet Eva Brucherseifer, one of the attendees and founders of the very first Akademies, and also recent joiners to the community whom I only knew via chat or MR interactions.

When the Biergarten that was booked for the Sunday Social Event cancelled due to a storm warning, I could immediately check two things:

  • that the Weather Widget did correctly report the Warnung vor starkem Gewitter
  • and that the local organizing team went the extra mile to make Akademy a success, even against the elements. Beer, pizza and good people were all that was required for an enjoyable evening.

Finally, I was really happy to meet again with friends from previous Akademies and the Plasma Sprint in 2023, sharing opinions on widespread topics such as immutable OSes, ingenious ways to open a beer bottle, keyboard input methods, or the torture and punishment customs of German cities in medieval times.

Thanks to the organizing team, the speakers, the attendants, the patrons and the whole KDE Community which made possible yet another amazing Akademy!

Categories: FLOSS Project Planets

PreviousNext: Vite and Storybook frontend tooling for Drupal

Planet Drupal - Tue, 2024-10-01 18:15

We’ve just completed an extensive overhaul of our frontend tooling, with Vite and Storybook at the centre. Let’s go over it piece by piece.

by jack.taranto / 2 October 2024

The goal of the overhaul was to modernise all aspects of the build stack, remove legacy dependencies and optimise development processes.

Tooling is split into four pieces: asset building, styleguide, linting and testing.

Asset building for Drupal with Vite

We have always utilised two separate tools to build CSS and JS assets. Until now, this was PostCSS and Rollup; in the past, Sass and Webpack have been in the mix.

With Vite it’s one tool to build both types of assets. To introduce Vite to anyone not already familiar with it, I would say it’s a super fast version of Rollup without the configuration headaches. 

Moving to Vite sped up our development build times and production build times (in CI), simplified our config files and removed a huge number of NPM dependencies.

Vite library mode

A typical Vite build pipeline is most suitable for single-page apps. It involves an index.html file where Vite dynamically adds CSS and JS assets. However, with Drupal, we do not have an index.html file; we have the Drupal libraries system to load assets, with which Vite has no way of communicating.

Luckily, Vite ships with something called Library mode, which is seemingly tailor-made for Drupal assets! Library mode allows us to output all our frontend assets to a single directory, where we can include them in a libraries.yml file or via a Pinto Theme Object.

To use our config, you’ll first need a few dependencies. 

npm i -D vite postcss-preset-env tinyglobby browserslist-to-esbuild

Our vite.config.js looks like this:

import { defineConfig } from 'vite'
import { resolve } from 'path'
import { globSync } from 'tinyglobby'
import browserslist from 'browserslist-to-esbuild'
import postcssPresetEnv from 'postcss-preset-env'

const entry = globSync(['**/*.entry.js', '**/*.css'], {
  ignore: [
    '**/_*.css',
    'node_modules',
    'vendor',
    'web/sites',
    'web/core',
    'web/libraries',
    '**/contrib',
    'web/storybook',
  ],
})

export default defineConfig(({ mode }) => ({
  build: {
    lib: {
      entry,
      formats: ['es'],
    },
    target: browserslist(),
    cssCodeSplit: true,
    outDir: resolve(import.meta.dirname, './web/libraries/library-name'),
    sourcemap: mode === 'development',
  },
  css: {
    postcss: {
      plugins: [
        postcssPresetEnv(),
      ],
      map: mode === 'development',
    },
  },
}))

We define entry points as any *.css file and any *.entry.js file. We exclude certain directories, so we aren’t building assets that are included with core or contrib. Additionally, we exclude CSS partials, which use an underscore prefix. This allows us to add asset source files anywhere in our project. They could be added in the theme, a module, or (as we have been doing recently) inside a /components directory in the project root.
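To make the entry-point convention concrete, here is a rough sketch of what one of those source files might contain. The file name, selector and behaviour are hypothetical, not part of our actual codebase; a card component in the /components directory could ship a card.entry.js alongside its CSS:

// components/card/card.entry.js — a hypothetical entry point picked up by the
// globSync() pattern above. Drupal and once are globals provided by Drupal
// core, assuming the library that loads the built file depends on core/drupal
// and core/once.
(function (Drupal, once) {
  Drupal.behaviors.card = {
    attach(context) {
      once('card', '.card', context).forEach((card) => {
        // Purely illustrative behaviour: flag the card as JS-enhanced.
        card.classList.add('card--js-enabled')
      })
    },
  }
})(Drupal, once)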

The Vite config itself enables library mode using build.lib, passing all source assets through using build.lib.entry and building JS assets using the es format.

build.cssCodeSplit is required when passing CSS files through to build.lib.entry.

build.outDir specifies a folder inside the Drupal libraries directory where all built assets will be sent. Drupal libraries.yml definitions are then updated to include files from this directory.

build.sourcemap will output JS sourcemaps in development mode only.

Finally, we pass through any PostCSS plugins with css.postcss.plugins. Vite includes postcss-import by default, so you do not need to add that. It will also handle resolving to custom directories without including resolve options for postcss-import, meaning you’ll only need to add your specific plugins. In this case, we reduced ours to just postcss-preset-env. Add more as needed!
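To make "add more as needed" concrete, here is a sketch of how an extra plugin would slot in. The postcss-pxtorem plugin is an illustrative pick only, not something our setup uses; any PostCSS plugin registers the same way, assuming it has been installed and imported (e.g. import postcssPxtorem from 'postcss-pxtorem'):

// In vite.config.js
css: {
  postcss: {
    plugins: [
      postcssPresetEnv(),
      postcssPxtorem({ propList: ['*'] }), // hypothetical extra plugin
    ],
    map: mode === 'development',
  },
},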

We also enable CSS sourcemaps with css.postcss.map.

This config allowed us to completely remove the PostCSS config file, PostCSS CLI, Rollup, its config and all Rollup plugins.

The config file above is a starting point—a minimum viable setup you’ll need to build assets using Vite’s library mode. Add to it as you need to, and familiarise yourself with Vite’s documentation.

Using Browserslist with Vite

Vite uses ESBuild to determine the output feature set based on the build.target. For many years now, we have used Browserslist to determine feature sets for both PostCSS and Rollup, and it works really well. We weren’t ready to lose this functionality by moving to Vite.

This is where the browserslist-to-esbuild dependency comes in. We added the following .browserslistrc file to our project root:

> 1% in AU

By calling browserslist() in build.target we get our browser feature set provided by Browserslist instead of ESBuild.
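If you would rather not maintain a separate .browserslistrc file, the same dependency appears (per its documentation) to accept the queries inline as well, so an equivalent sketch using the same query would be:

// In vite.config.js — passing the Browserslist query directly instead of
// reading it from a .browserslistrc file in the project root.
build: {
  // ...
  target: browserslist(['> 1% in AU']),
},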

NPM scripts for development mode and production builds

We use NPM scripts for consistent usage of non-standard commands both locally and on CI for production builds.

"scripts": { "dev-vite": "vite build -w -m development", "build-vite": "vite build" },

To watch and build source assets whilst developing locally, we use npm run dev-vite. Unlike Vite’s dev command, this still uses Rollup under the hood (instead of ESBuild), so we miss out on the extreme speed of Vite’s dev mode. However, it’s still very fast—faster than default Rollup. It’s a tradeoff that provides what we need, which is building our assets while we are editing them in a way that works with Drupal. We lose hot reloading, but that’s less important when we have Storybook at our disposal.

Production builds happen on CI using npm run build-vite.

Using Storybook with Drupal

Although we had been using Storybook in our projects for some time now, we hadn’t yet standardised on it or provided a default setup. And with Vite now baked into Storybook, it seemed like an excellent time to provide this.

If you have a spare 15 minutes, I would first suggest checking out Lee Rowland’s lightning talk from Drupal South to see just how fluid a frontend development experience Storybook brings to Drupal.

Storybook is easy to set up using its wizard:

npx storybook@latest init

It will present you with a few choices; just make sure you choose HTML and Vite for your project type. When using Vite with Storybook, Storybook provides its necessary config to Vite; however, it will still read your project's vite.config.js file for any additional config. This includes the PostCSS config we set up above and any additional functionality you provide.

Now, install Lee’s Twig plugin. This plugin will allow us to write components using Twig that can be imported into our stories.js files. First, install the plugin:

npm i -D vite-plugin-twig-drupal

Then register the plugin. Import it at the top of vite.config.js and add the following lines to the default export:

import twig from 'vite-plugin-twig-drupal'

plugins: [
  twig(),
],

See the vite-plugin-twig-drupal documentation for more details, including how to set up Twig namespaces.
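Since most Drupal component libraries rely on Twig namespaces, here is a hedged sketch of what that option could look like. The namespaces option is described in the plugin's documentation; the directory path is an assumption about project layout, not a requirement:

// In vite.config.js — mapping a "components" Twig namespace to a directory in
// the project root, so templates can reference each other as
// @components/card/card.html.twig. Assumes: import { join } from 'node:path'
plugins: [
  twig({
    namespaces: {
      components: join(import.meta.dirname, 'components'),
    },
  }),
],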

Writing stories

To use Twig in Storybook, it’s quite similar to any other framework. Here’s an example story of a card component:

import Component from './card.html.twig'

const meta = {
  component: Component,
  args: {
    title: `<a href="#">Card title</a>`,
    description:
      'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam eu turpis molestie, dictum est a, mattis tellus. Sed dignissim, metus nec fringilla accumsan, risus sem sollicitudin lacus.',
  },
}

export default meta

export const Card = {}

We import the twig file as Component and then add that to the stories meta. We can pass through args, which will show up in the Twig file as variables, and we can use HTML here.
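Because args are just data, variants of the same component are cheap to add. As a sketch, a second story exported from the same file overrides only the args it cares about and inherits the rest from meta (the story name and replacement title below are hypothetical):

// Appended to card.stories.js — a variant that swaps the default linked title
// for plain text; description is inherited from the meta args above.
export const CardWithoutLink = {
  args: {
    title: 'Card title without a link',
  },
}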

Writing stories is covered in more detail in our front-end nirvana blog post.

NPM scripts for developing with Vite and Storybook at once

Our standard development practice involves building and testing components in Storybook and then integrating them with Drupal using Pinto. To do this, we need to run Storybook and our Vite tooling at once so we have both Storybook dev mode and our built frontend assets available to us.

Running two NPM scripts in parallel can be a pain, so we have implemented concurrently to streamline this approach.

npm i -D concurrently

Then we use the following in our NPM scripts:

{ "scripts": {   "dev": "concurrently -k -n \"VITE,STORYBOOK\" -c \"#636cff,#ff4785\" \"npm run dev-vite\" \"npm run dev-storybook\"",   "build": "concurrently -n \"VITE,STORYBOOK\" -c \"#636cff,#ff4785\" \"npm run build-vite\" \"npm run build-storybook\"",   "dev-storybook": "storybook dev -p 6006 --no-open",   "build-storybook": "storybook build -o web/storybook",   "dev-vite": "vite build -w -m development",   "build-vite": "vite build" },

With npm run dev we get coloured output so we can see which tool is running and what it’s doing. npm run build is used on CI.

Linting with Prettier, Stylelint and ESLint

These three tools have been a staple on our projects for a long time, but with ESLint introducing a new flat configuration method, it seemed like a good time to review the tooling.

First, we’ll need some more dependencies.

npm i -D prettier stylelint stylelint-config-standard eslint@8.57.0 @eslint/js@8.57.0 eslint-config-prettier eslint-config-drupal

Formatting source assets with Prettier

We are using Prettier to format both CSS and JS files. With PHPStorm, you can set this to happen on file save. We also have an NPM script to do this on demand and before committing. NPM commands are at the end of this section.

Reducing Stylelint configuration

Past iterations of our Stylelint tooling involved extensive configuration on each project. Stylelint's latest standard configuration sets sensible defaults, which lets us remove most config options. We're left with the following:

const config = {
  extends: ['stylelint-config-standard'],
  rules: {
    'custom-property-empty-line-before': null,
    'no-descending-specificity': null,
    'import-notation': 'string',
    'selector-class-pattern': [
      '^([a-z])([a-z0-9]+)(-[a-z0-9]+)?(((--)?(__)?)([a-z0-9]+)(-[a-z0-9]+)?)?$',
      {
        message:
          'Expected class selector to be BEM selector matching either .block__element or .block--modifier',
      },
    ],
    'selector-nested-pattern': '^&',
  },
}

export default config

We added a custom rule to ensure project BEM selectors are used.

Like Prettier, we also use a .stylelintignore file to exclude core and contrib folders.

Moving to ESLint flat config

The new config format isn’t yet supported by all plugins (there’s a compatibility tool to help with this), but where it is, it’s much simpler.

The following config can be used in conjunction with Prettier.

import js from '@eslint/js'
import globals from 'globals'
import prettier from 'eslint-config-prettier'
import drupal from 'eslint-config-drupal'

export default [
  js.configs.recommended,
  prettier,
  {
    languageOptions: {
      globals: {
        ...globals.browser,
        ...globals.node,
        ...drupal.globals,
        dataLayer: true,
        google: true,
        once: true,
      },
    },
  },
  {
    rules: {
      'no-console': 'error',
      'no-unused-expressions': [
        'error',
        {
          allowShortCircuit: true,
          allowTernary: true,
        },
      ],
      'consistent-return': 'warn',
      'no-unused-vars': 'off',
    },
  },
  {
    ignores: [
      'node_modules',
      'vendor',
      'bin',
      'web/core',
      'web/sites',
      'web/modules/contrib',
      'web/themes/contrib',
      'web/profiles/contrib',
      'web/libraries',
      'web/storybook',
    ],
  },
]

This includes linting for Storybook files and tests as well. Additionally, it ignores core and contrib files.

NPM scripts for linting

We use the following NPM scripts to run our linting commands locally and on CI.

"scripts": { "format": "prettier --write \"**/*.{css,ts,tsx,js,jsx,json}\"", "lint": "npm run lint-prettier && npm run lint-css && npm run lint-js", "lint-prettier": "prettier --check \"**/*.{css,ts,tsx,js,jsx,json}\"", "lint-css": "stylelint \"**/*.css\"", "lint-js": "eslint ." },

These commands work so well because we have excluded all Drupal core and contrib folders using ignore files. 

Testing using Storybook test runner

Storybook test runner provides the boilerplate-free ability to run automated snapshot and accessibility tests on each story in Storybook. Our previous test tooling involved using Jest and Axe to handle this, but we needed to manually write tests for each component. With Storybook test runner, this is handled automatically.

To set it up, first, install some dependencies.

npm i -D @storybook/test-runner axe-playwright

Then create the following test-runner.js file inside your .storybook directory.

import { waitForPageReady } from '@storybook/test-runner'
import { injectAxe, checkA11y } from 'axe-playwright'
import { expect } from '@storybook/test'

/*
 * See https://storybook.js.org/docs/writing-tests/test-runner#test-hook-api
 * to learn more about the test-runner hooks API.
 */
const config = {
  async preVisit(page) {
    await injectAxe(page)
  },
  async postVisit(page) {
    await waitForPageReady(page)

    // Automated snapshot testing for each story.
    const elementHandler = await page.$('#storybook-root')
    const innerHTML = await elementHandler.innerHTML()
    expect(innerHTML).toMatchSnapshot()

    // Automated accessibility testing for each story.
    await checkA11y(page, '#storybook-root', {
      detailedReport: true,
      detailedReportOptions: {
        html: true,
      },
    })
  },
}

export default config

This config will loop through all your stories, wait for them to be ready, then snapshot them and run Axe against them. You’ll get great output from the command, so you can see exactly what’s going on.
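If a particular story needs to opt out of the accessibility checks (a deliberately broken state, for example), the hooks can read story parameters and skip it. This is a sketch only, based on the test runner's getStoryContext helper; the parameters.a11y.disable flag is our own convention here, not something the runner enforces, and it assumes postVisit is declared with the extra context argument:

// .storybook/test-runner.js — alternative hooks that honour an opt-out flag.
import { getStoryContext, waitForPageReady } from '@storybook/test-runner'
import { injectAxe, checkA11y } from 'axe-playwright'

const config = {
  async preVisit(page) {
    await injectAxe(page)
  },
  async postVisit(page, context) {
    await waitForPageReady(page)
    const storyContext = await getStoryContext(page, context)
    // Stories can set parameters: { a11y: { disable: true } } to be skipped.
    if (storyContext.parameters?.a11y?.disable) {
      return
    }
    await checkA11y(page, '#storybook-root', { detailedReport: true })
  },
}

export default config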

NPM scripts for testing Storybook locally and on CI

First, install a few more dependencies:

npm i -D http-server wait-on

The following scripts will run the complete Storybook test base and update snapshots as needed.

"scripts": { "test-storybook": "test-storybook", "test-storybook:update": "test-storybook -u", "test-storybook:ci": "concurrently -k -s first -n \"SERVER,TEST\" -c \"magenta,blue\" \"npm run http-server\" \"wait-on tcp:6006 && npm run test-storybook\"", "http-server": "http-server web/storybook -p 6006 --silent" },

To run tests on CI we use http-server to serve the built version of Storybook and wait-on to delay the test run until the server is ready. The concurrently command smooths the output of both these commands.

Wrapping up

See the complete workflow, including all config and ignore files, in the pnx-frontend-build-tools-blog repository I've set up for this post.

The repository and this blog post have been designed to provide the necessary pieces so you can implement this workflow on your existing (or new) projects. However, a lot more functionality can be gained, including easily adding support for TypeScript, React and Vitest.

Tagged Storybook, Vite
Categories: FLOSS Project Planets
