FLOSS Project Planets
Opt In? Opt Out? Opt Green! KDE Eco's New Sustainability Initiative - Akademy 2024
By Joseph De Veaugh-Geiss
What consumers indicate they want, Free Software can provide, though many consumers may not know it ... yet! With the newly-funded project "Opt Green: Sustainable Software For Sustainable Hardware" KDE Eco aims to change that. A 2020 Eurobarometer poll found that 80% of European consumers believe manufacturers should make it easier to repair digital devices, while 50% indicate that they purchase a new device due to performance issues and non-functioning software. Free Software communities already understand that you don't need to buy new hardware to have an efficient, well-functioning, and up-to-date digital device; you just need the right software! Now, KDE Eco wants to make sure everyone else knows it, too.
For the next 2 years, the "Opt Green" initiative will bring KDE Eco's work on sustainable software -- and, in turn, sustainable hardware -- directly to consumers. And this is as good a time as ever. In 2025, the end of support for Windows 10 is estimated to make e-waste out of 240 million computers ineligible for Windows 11. One year later in 2026, at the earliest, macOS support for Intel-based Apple computers -- the last of which were sold in 2020 -- is predicted to end, rendering even more millions upon millions of functioning computers obsolete. Every one of these functioning, yet vendor-abandoned devices can stay out of the landfill and in use for years to come with sustainable Free Software. (Consider, by comparison, that only in 2022 did Linus Torvalds suggest ending support for 1989's Intel 486 processors. That's 33 years of Linux kernel support!)
By design, Free Software is right-to-repair software: it gives users control over their hardware by removing vendor dependencies and guaranteeing transparency and user autonomy. In this talk, I will present KDE Eco's new "Opt Green" project in terms of the whys, whats, and hows for bringing sustainable Free Software to new users. A target audience for the project is eco-consumers, those whose consumer behaviors are driven by principles related to the environment, and not necessarily convenience or cost. Through online and offline campaigns as well as installation workshops, KDE Eco will demonstrate at fair-trade, organic, and artisanal markets the power of Free Software to drive down energy consumption and keep devices in use for years beyond official vendor support. With independent, sustainable software designed for users' needs, not vendors', it is possible to run efficient, cutting-edge software on the digital devices you already have at home or in your pocket. Opt green today! The most environmentally-friendly device is the one you already own.
Openwashing - How do we handle (and enforce?) OSS policies in products? - Akademy 2024
By Markus Feilner (grommunio, Feilner-IT, Press), Holger Dyroff (ownCloud), Richard Heigl (Hallo Welt! GmbH), Leonhard Kugler (Center for digital sovereignty (ZenDiS)), and Cornelius Schumacher (KDE)
Hosted by senior journalist Markus Feilner, the panel of prominent open source players will discuss the ongoing topic of openwashing and what we can or should do about it - from cloud to AI and public administration.
Especially in these three fields the term "open source" has become a valuable asset, and more and more companies feel urged to call their solutions "Open Source". Despite the great success, e.g. in public tenders, many company owners are actually still afraid of publishing source code, not all are following the basic rules, and not everybody understands what open source actually means. Evasion strategies abound.
On the other hand, companies need to make money, even (!) with open source. How can that be accomplished in the different communities of cloud, public administration, and the world of wikis and knowledge management? How can the barely two-year-old Center for Digital Sovereignty (ZenDiS) help the OSS community and companies? Hint: ZenDiS was recently invited by the United Nations and has received worldwide acknowledgement.
We are very proud to have Holger Dyroff (COO, ownCloud), Richard Heigl (CEO, Hallo Welt!/BlueSpice MediaWiki, an OSS alternative to Atlassian Confluence), Leonhard Kugler (director of Open CoDE at ZenDiS), and Cornelius Schumacher (KDE Board) on stage.
Terri Oda: Best practices in practice: Black, the Python code formatter
I’m starting a little mini-series about some of the “best practices” I’ve tried out in my real-life open source software development. These can be specific tools, checklists, workflows, whatever. Some of these have been great, some of them have been not so great, but I’ve learned a lot. I wanted to talk a bit about the usability and assumptions made in various tools and procedures, especially relative to the wider conversations we need to have about open source maintainer burnout, mentoring new contributors, and improving the security and quality of software.
So let’s start with a tool that I love: Black.
Black’s tagline is “the uncompromising Python code formatter” and it pretty much is what it says on the tin: it can be used to automatically format Python code, and it’s reasonably opinionated about how that’s done, with very few options to change. It starts with PEP 8 compliance (that’s the Python style guide, for those of you who don’t need to memorize such things) and takes it further. I’m not going to talk about the design decisions they made, but the Black style guide is actually an interesting read if you’re into this kind of thing.
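To make that concrete, here’s a rough sketch of what Black does, using its Python API (the messy input string is just something I made up for illustration, not from Black’s docs):

```python
# A small illustration of Black's formatting, via its Python API.
import black

messy = "def greet( name,greeting = 'hello' ):\n    return f'{greeting}, {name}'"

# format_str applies Black's style to a string of source code.
formatted = black.format_str(messy, mode=black.Mode())
print(formatted)
# def greet(name, greeting="hello"):
#     return f"{greeting}, {name}"
```

Spacing is normalized, quotes become double quotes, and there’s no option to argue about either.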
I’m probably a bit more excited about style guides than the average person because I spent several years reading and marking student code, including being a teaching assistant for a course on Perl, a language that is famously hard to read. (Though I’ve got to tell you, the first year undergraduates’ Java programs were absolutely worse to read than Perl.) And then, in case mounds of beginner code weren’t enough of a challenge, I was also involved in a fairly well-known open source project (GNU Mailman) with a decade of code to its name even when I joined, so I was learning a lot about the experience of integrating code from many contributors into a single code base. Both of these are… kind of exhausting? I was young enough to not be completely set in my ways, but especially with the beginner Java code, it became really clear that debugging was harder when the formatting was adding a layer of obfuscation to the code. I’d have loved to have an autoformatter for Java, because so many students could find their bugs more easily once I showed them how to fix their indents or braces.
And then I spent years as an open source project maintainer rather than just a contributor, so it was my job to enforce style as part of code reviews. And… I kind of hated that part of it? It’s frustrating to have the same conversation with people over and over about style and be constantly leaving the same code review comments, and then on top of that sometimes people don’t *agree* with the style and want to argue about it, or people can’t be bothered to come back and fix it themselves so I either have to leave a potentially good bug fix on the floor or I have to fix it myself. Formatting code elegantly can be fun once in a while, but doing it over and over and over and over quickly got old for me.
So when I first heard about Black, I knew it was a thing I wanted for my projects.
Now when someone submits a thing to my code base, Black runs alongside the other tests, and they get feedback right away if their code doesn’t meet our coding standards. It takes hardly any time to run, so people often get feedback very fast. Many new contributors even notice the failing required test and go do some reading and fix it before I even see it, and for those that don’t fix issues before I get there, I get a much easier conversation that amounts to “run black on your files and update the pull request.” I don’t have to explain what they got wrong and why it matters — they don’t even need to understand what happens when the auto-formatter runs. It just cleans things up and we move on with life.
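The check itself doesn’t need to be anything fancy. The exact wiring in our CI is a little different, but a rough sketch of the kind of test that can run alongside the rest of the suite looks like this, with Black’s --check mode doing the heavy lifting:

```python
# A sketch of a formatting check that can run with the normal test suite.
import subprocess
import sys


def test_black_formatting():
    """Fail if Black would reformat any file in the repository."""
    result = subprocess.run(
        [sys.executable, "-m", "black", "--check", "."],
        capture_output=True,
        text=True,
    )
    # --check exits non-zero when files would be changed, without modifying them.
    assert result.returncode == 0, result.stdout + result.stderr
```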
I feel like the workflow might actually be better if Black was run in our continuous integration system and automatically updated the submitted code, but there are some challenges there around security and permissions that we haven’t gotten around to solving. And honestly, it’s kind of nice to have an easy low-stress “train the new contributors to use the tools we use” or “share a link to the contributors doc” opening conversation, so I haven’t been as motivated as I might be to fix things. I could probably have a bot leave those comments and maybe one of these days we’ll do that, but I’m going to have to look at the code for code review anyhow, so I usually just add it in to the code review comments.
The other thing that Black itself calls out in their docs is that by conforming to a standard auto-format, we really reduce the differences between existing code and new code. It’s pretty obvious when the first attempt has a pile of random extra lines and is failing the Black check. We get a number of contributors using different integrated development environments (IDEs) that are pretty opinionated themselves, and it’s been freeing not to have to deal with whitespace nonsense in pull requests or have people try to tell me about the glory of their IDE of choice when I ask them to fix it. Some Python IDEs actually support Black, so sometimes I can just tell them to flip a switch or whatever and then they never have to think about it again either. Win for us all!
So here’s the highlights about why I use Black:
As a contributor:
- Black lets me not think about style; it’s easy to fix before I put together a pull request or patch.
- It saves me from the often confusing messages you get from other style checkers.
- Because I got into the habit of running it before I even run my code or tests, it serves as a quick mistake checker.
- Some of the style choices, like forcing trailing commas in lists, make editing existing code easier and I suspect increase code quality overall because certain types of bug are more obvious.
As an open source maintainer:
- Black lets me not think about style.
- It makes basic code quality conversations easier. I used to have a *lot* of conversations about style and people get really passionate about it, but it wasted a lot of time when the end result was usually going to be “conform to our style if you want to contribute to this project.”
- Fixing bad style is fast, either for the contributor or for me as needed.
- It makes code review easier because there aren’t obfuscating style issues.
- It allows for very quick feedback for users even if all our maintainers are busy. Since I regularly work with people in other time zones, this can potentially save days of back and forth before code can be used.
- It provides a gateway for users to learn about code quality tools. I work with a lot of new contributors through Google Summer of Code and Hacktoberfest, so they may have no existing framework for professional development. But also even a lot of experienced devs haven’t used tools like Black before!
- It provides a starting point for mentoring users about pre-commit checks, continuous integration tests, and how to run things locally. We’ve got other starting points but Black is fast and easy and it helps reduce resistance to the harder ones.
- It reduces “bike shedding” about style. Bikeshedding can be a real contributor to burnout of both maintainers and contributors, and this reduces one place where I’ve seen it occur regularly.
- It decreases the cognitive overhead of reading and maintaining a full code base which includes a bunch of code from different contributors or even from the same contributor years later. If you’ve spent any time with code that’s been around for decades, you know what I’m talking about.
- In short: it helps reduce maintainer burnout for me and my co-maintainers.
So yeah, that’s Black. It improves my experience as an open source maintainer and as a mentor for new contributors. I love it, and maybe you would too? I highly recommend trying it out on your own code and new projects. (And it’s good for existing projects, even big established ones, but choosing to apply it to an existing code base gets into bikeshedding territory, so proceed with caution!)
It’s only for Python, but if you have similar auto-formatters for other languages that you love, let me know! I’d love to have some to recommend to my colleagues at work who focus on other languages.
Only hackers will survive - Akademy 2024
By Joanna Murzyn
In this talk, I'll explore how the hackers' ethos - defined by open knowledge sharing, bold experimentation, and collective problem-solving - is vital for tackling global challenges, especially those related to the lack of regenerative resource management.
I'll take you on a journey from crucial mineral mining hubs to electronic waste dumpsters, uncovering the intricate connections between code, hardware, open source principles as well as social and environmental justice.
You'll gain a new perspective, discover how the KDE community's work is shaping a more resilient, regenerative future, and explore ways to extend those principles to create positive impact beyond the tech world.
Looking back: What's next? - Akademy 2024
By Nicolas Fella
This year we completed the major milestone of releasing Plasma 6 and KDE Frameworks 6, the culmination of years of work across the community. In this talk we are going to look at how we approached this transition, what went well, what didn't, and what we can learn for the future.
We are also going to explore what might come next for KDE development.
Lint all the things! - Akademy 2024
By Albert Astals Cid and Alexander Lohnau
QML is a great language to write fast user interfaces, but its runtime nature makes it a bit fragile to refactor. This lightning talk will try to convince you to enable qmllint in your compilation steps so that QML issues are found at compile time instead of runtime.
JSON files play an important role in KDE's sources: we use them as metadata files for applets and embed them in plugin metadata.
In order to avoid runtime problems or implicit type conversions, a CI job is added to all repos. But this is only a small benefit of the improved validation and schema efforts.
This talk will highlight some benefits and show you how you can make use of them.
KWin Effects: The Next Generation - Akademy 2024
By David Edmundson
Plasma 6 saw the return of the desktop cube, but the underlying story is so much bigger. KWin gained support for an entirely new infrastructure that tightly couples QtQuick with KWin's own rendering and makes KWin's content available to it.
In this talk we go through the problems with the current approach, what we created, and look at what this enables creative people (like you!) to do next.
KDE to Make Wines — Using KDE Software on Enterprise Desktops a Return on Experience - Akademy 2024
By Kevin Ottens
What if we told you there is a company with hundreds of desktops running KDE Plasma? What if we also told you they've been using it for more than 10 years? Finally, what if we told you they're in Australia and making wine? Wouldn't you be curious to know more about them and what they think of our software?
Well, good for us they do exist: they are De Bortoli Wines, an Australian winemaking company.
It turns out they became an enioka Haute Couture customer and we developed an interesting relationship. The work they pushed our way has been interesting and challenging. As such, this gave us an interesting insight into how KDE software can be used and the constraints such enterprise desktops can encounter.
We ended up looking at everything from application code like Okular, to frameworks like KIO, and even dug deeper, exploring issues close to the kernel. This might give ideas of features to prioritize or tests to carry out to cater to such users.
If you're interested in the enterprise desktop use case, or if you'd like to hear about funny bugs and wine labels, this talk will be for you.
KDE Apps Initiative - Akademy 2024
By Carl Schwan
This talk is about my new KDE Apps initiative, which tries to get people to write more KDE applications. I will describe the current state of the KDE app ecosystem and explain why it is important to get more KDE apps and what we can do to improve the situation.
This will be a continuation of this blog post and will include the progress made since then.
KDE's CI and CD infrastructure - Akademy 2024
By Ben Cooksley, Hannah von Reth, Julius Künzel, and Volker Krause
From compiling, automated testing, linting and license verification over producing application packages for various platforms to shipping signed production releases to app stores, KDE's CI/CD system offers many ways to support you in developing software and getting it to your users.
With the migration from Jenkins to GitLab, which was concluded earlier this year, the CI/CD infrastructure not only gained new capabilities but also became more accessible for contributors to set up and customize things. In this talk, we will give an overview of the available features and how to best employ them for your application.
We will cover continuous integration (CI), that is compiling, testing and linting changes as they appear in Git or in merge requests, on all supported platforms as well as continuous delivery (CD), that is producing ready-made runnable/installable application packages in various formats, for testing individual changes or for production releases to app stores.
Finally we'll also look at current developments and future plans.
The Drop Times: The la_eu Site Project Takes a Step Further at Barcelona
The Drop Times: DrupalCon Barcelona Wrap-Up: The Third and Final Day
Real Python: The Real Python Podcast – Episode #222: Using Virtual Environments in Docker & Comparing Python Dev Tools
Should you use a Python virtual environment in a Docker container? What are the advantages of using the same development practices locally and inside a container? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
Web Review, Week 2024-39
Let’s go for my web review for the week 2024-39.
We have lift-off! Element X, Call and Server Suite are ready!
Tags: tech, matrix, ux
Definitely a big announcement for Matrix. Could it be the beginning of going mainstream? I suspect it’ll be now or never. I’m slightly concerned about the desktop support apparently being ignored; the UX there is still far from great.
https://element.io/blog/we-have-lift-off-element-x-call-and-server-suite-are-ready/
Tags: tech, mozilla, privacy, surveillance, gdpr
It was to be expected that complaints against Mozilla could happen in Europe. They’ve been asking for it lately…
https://noyb.eu/en/firefox-tracks-you-privacy-preserving-feature
Tags: tech, data, culture, history, ecology
Excellent piece, we’re a civilisation whose culture is built on shifting sands and… toy plastics. Guess what will survive us?
https://lilysthings.org/blog/no-data-lasts-forever/
Tags: tech, ai, machine-learning, gpt, criticism, law
This is clearly less high profile than the Scarlett Johansson vs OpenAI one. Still, this shows it has the potential to become a widespread (even though shady) practice. This might need some regulation fairly soon.
https://www.jeffgeerling.com/blog/2024/they-stole-my-voice-ai
Tags: tech, ai, machine-learning, gpt, science
This is indeed important to be able to run such models locally. Will still require more optimization but it’s slowly getting there. The reproducibility it brings is especially necessary for science.
https://www.nature.com/articles/d41586-024-02998-y
Tags: tech, ai, machine-learning, gpt, security, safety
People are putting LLM-related features out there too hastily for my taste. At least they should keep in mind the security and safety implications.
https://owasp.org/www-project-top-10-for-large-language-model-applications/
Tags: tech, automotive, security
Could we just stop connecting cars with web access for features we don’t really need? Please?
https://www.wired.com/story/kia-web-vulnerability-vehicle-hack-track/
Tags: tech, c++, security, safety
Lots of good stuff definitely coming. This should help make C++ more approachable to lots of people.
https://github.com/CppCon/CppCon2024/blob/main/Presentations/Peering_Forward_Cpps_Next_Decade.pdf
Tags: tech, c++, rust, security, safety
Excellent proof of why you don’t want to “rewrite it all in Rust”. It’s important to respect the old code and focus on applying safety practices to the new code. This is also why the upcoming changes to C++ are worth it; they might improve the interoperability factor almost for free.
https://security.googleblog.com/2024/09/eliminating-memory-safety-vulnerabilities-Android.html
Tags: tech, linux, kernel, rust
Despite the drama, Rust is slowly making its way into the kernel.
https://lwn.net/SubscriberLink/991062/b0df468b40b21f5d/
Tags: tech, linux, system
Wondering what io_uring is for? This is a good explanation.
https://mazzo.li/posts/uring-multiplex.html
Tags: tech, cpu, portability
Nice list of common portability issues one can encounter at the machine architecture level. But don’t be fooled, this doesn’t have implications only for C and C++; those problems leak into higher-level languages as well.
https://blogs.gentoo.org/mgorny/2024/09/23/overview-of-cross-architecture-portability-problems/
Tags: tech, python
Interesting problem I didn’t realize PyPI had. Indeed, I hope they start looking into the reproducibility issue to reduce the bandwidth and space they use.
https://kristoff.it/blog/python-training-wheels/
Tags: tech, python, refactoring
Interesting trick to help with project wide renames for Python codebases.
https://jackevans.bearblog.dev/refactoring-python-with-tree-sitter-jedi/
Tags: tech, tools
What can I say? I love Makefiles as well.
https://switowski.com/blog/i-like-makefiles/
Tags: tech, tools, version-control, git
Ooh! This looks like a really neat improvement. I wonder how reliable this is, I’ll definitely test it.
https://github.com/tummychow/git-absorb
Tags: tech, software, design
Nice short post about cohesion in software design. Also gives clue about what proxy we can use to gauge this cohesion.
https://explaining.software/archive/similar-but-different/
Tags: tech, architecture, microservices, reliability, research
I’m obviously not in love with the complexity this type of architecture brings. That being said, this thesis brings an interesting approach to better detect failure scenarios in such systems.
https://christophermeiklejohn.com/publications/cmeiklej_phd_s3d_2024.pdf
Tags: tech, architecture, organization, conway
This law is unfortunately too little known. Here is a nice and short primer. Be careful though, it’s short but packed with information, might require more reading around the concepts highlighted in this article.
https://ncatlab.org/nlab/show/Conway%27s+law
Tags: tech, project-management, quality, metrics
When I read the content of this article I wonder how useful the metrics really were. I mean clearly they helped the team realize which changes to bring… but the practice changes were all somewhat conventional in a way. You go a long way when you focus on quality and create the space for it.
https://medium.com/booking-com-development/dora-metrics-at-work-46c835a86a89
Bye for now!
Drupal Association blog: DrupalCon Barcelona 2024 brings the open source Drupal community together
DrupalCon, the main event about the digital experience platform Drupal, was held this year in Barcelona, Spain, from 24-27 September. DrupalCon unites thousands of people from around the globe who use, develop, design, and support the Drupal platform. Over 1,300 digital experts and Drupal professionals gathered to exchange ideas, work on the Drupal project, and propel Drupal innovation.
On 24 September, Founder and Project Lead Dries Buytaert gave an inspiring keynote about developing an exciting new product - Drupal CMS. Dries shared a progress update on the timeline of Drupal CMS, showing a demo of the current state, the event-building and SEO capabilities, and the acceleration of site building through AI tools. This will allow Drupal experts and potential end users to drastically change how they build websites, using prompts to create in minutes what used to take days.
A three-day program of over 100 sessions
DrupalCon Barcelona hosted over 100 workshops, presentations, and several exciting keynotes in a packed three-day program. After the Opening Ceremony, the Women in Drupal Awards were held, recognizing the incredible impact Drupal has on Diversity, Equity, and Inclusion. Winners Pamela Barone, All Petrovska, and Esmeralda Tijhoff took home the awards, acknowledging their accomplishments in the Drupal community.
For almost ten years, the results of the Drupal Business Survey have been presented at DrupalCon. The Drupal Business Survey, organized by the Drupal Business Network, has been collecting data from Drupal service providers and agencies from all over the world. On 26 September, insights from the 2024 survey were shared in an informative session. While the digital industry is not unaware of world events that impact the economy and sales, many agencies are excited about Drupal CMS. Over 70 digital agencies working with Drupal came together to review and work on their strategy for 2025 and beyond.
In the Expo Hall, DrupalCon Barcelona attendees learned more about the Ripple Makers membership and Drupal Certified Partners at the Drupal Association booth. Ripple Makers is the revamped Drupal Association Membership, supporting driving innovation and adoption of Drupal as a high-impact digital public good. The Drupal Certified Partner program recognizes and awards agencies that demonstrate significant innovation, philanthropic leadership, and contribution to the Drupal project. To learn more about the Drupal Certified Partner program, visit the Certified Partner Program page on Drupal.org.
Driving Drupal innovation through collaboration
The Drupal Association Board got together the weekend prior to DrupalCon for their bi-annual in-person board meeting. Topics of discussion included Drupal CMS, its roadmap, and the overall strategy and ambitions for Drupal. They held their public board meeting on Wednesday at DrupalCon to announce the newest Board Members: Alejandro Moreno, Sachiko Muto, and Stella Power. During the public meeting, CEO Tim Doyle also announced the launch of a new program, Adopt-a-Document.
On the last day of the conference, Drupal experts stayed for Contribution Day, where they worked on Drupal’s next exciting innovations. Drupal’s community has over 100,000 passionately committed users driving innovation of one of the world’s largest open source projects. Many social events occur in the evenings at DrupalCon, including Drupal’s pub quiz called Trivia Night on Thursday.
At the closing ceremony, the city for DrupalCon Europe 2025 was announced: Vienna.
About the Drupal Association
The Drupal Association is a nonprofit organization focused on accelerating Drupal, fostering the growth of the Drupal community, and supporting the project’s vision to create a safe, secure, and open web for everyone. The Drupal Association also administers Drupal.org on behalf of the Drupal community. You can also support the continued success of the Drupal project by getting involved. Learn more.
Python GUIs: Introduction to the QGraphics framework — Creating vector interfaces using the QGraphics View framework
The Qt Graphics View Framework allows you to develop fast and efficient 2D vector graphic scenes. Scenes can contain millions of items, each with their own features and behaviors. By using the Graphics View via PySide6 you get access to this highly performant graphics layer in Python. Whether you're integrating vector graphics views into an existing PySide6 application, or simply want a powerful vector graphics interface for Python, Qt's Graphics View is what you're looking for.
Some common uses of the Graphics View include data visualization, mapping applications, 2D design tools, modern data dashboards and even 2D games.
In this tutorial we'll take our first steps looking at the Qt Graphics View framework, building a scene with some simple vector items. This will allow us to familiarize ourselves with the API and coordinate system, which we'll use later to build more complex examples.
Table of Contents
- The Graphics View Framework
- A simple scene
- Another way to create objects.
- Building a more complex scene
- Adding graphics views to Qt layouts
The Graphics View framework consists of three main parts: QGraphicsView, QGraphicsScene, and QGraphicsItem, each with different responsibilities.
The framework can be interpreted using the Model-View paradigm, with the QGraphicsScene as the Model and the QGraphicsView as the View. Each scene can have multiple views. The QGraphicsItems within the scene can be considered as items within the model, holding the visual data that the scene combines to define the complete image.
QGraphicsScene is the central component that glues everything together. It acts as a whiteboard on which all items are drawn (circles, rectangles, lines, pixmaps, etc). The QGraphicsView has the responsibility of rendering a given scene -- or part of it, with some transformation (scaling, rotating, shearing) -- to display it to the user. The view is a standard Qt widget and can be placed inside any Qt layout.
QGraphicsScene provides some important functionality out of the box, so we can use it to develop advanced applications without struggling with low-level details. For example (a short sketch of these queries follows the list below) --
- Collision detection: detect whether a graphics item collides with another item.
- Item selection: gives us the ability to deal with multiple items at the same time. For example, the user can select multiple items and press delete; a function then asks the scene for the list of all selected items and deletes them.
- Item discovery: the scene can tell us which items are present (wholly or partly) at a specific point or inside some defined region. For example, if the user adds an item that intersects a forbidden area, the program can detect it and give it another (typically red) color.
- Event propagation: the scene receives events and then propagates them to the items.
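As a rough sketch (assuming the scene, rect and ellipse objects created in the examples later in this tutorial), those queries look something like this:

```python
from PySide6.QtCore import QPointF

# Collision detection: which items overlap the rectangle?
overlapping = scene.collidingItems(rect)

# Item discovery: which items are present at a given scene position?
items_at_point = scene.items(QPointF(75, 30))

# Item selection: everything the user currently has selected
# (requires the ItemIsSelectable flag, covered below).
selected = scene.selectedItems()
```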
To define a QGraphicsScene you set its boundaries, or sceneRect, which defines the x & y origin and dimensions of the scene. If you don't provide a sceneRect, it will default to the minimum bounding rectangle of all child items -- updating as items are added, moved, or removed. This is flexible but less efficient.
Items in the scene are represented by QGraphicsItem objects. These are the basic building blocks of any 2D scene, representing a shape, pixmap, or SVG image to be displayed in the scene. Each item has a relative position inside the sceneRect and can have different transformation effects (scale, translate, rotate, shear).
Finally, the QGraphicsView is the renderer of the scene, taking the scene and displaying it -- either wholly or in part -- to the user. The view itself can have transformations (scale, translate, rotate and shear) applied to modify the display without affecting the underlying scene. By default the view will forward mouse and keyboard events to the scene allowing for user interaction. This can be disabled by calling view.setInteractive(False).
A simple scene
Let's start by creating a simple scene. The following code creates a QGraphicsScene, defining a 400 x 200 scene, and then displays it in a QGraphicsView.
```python
import sys

from PySide6.QtWidgets import QGraphicsScene, QGraphicsView, QApplication

app = QApplication(sys.argv)

# Defining a scene rect of 400x200, with its origin at 0,0.
# If we don't set this on creation, we can set it later with .setSceneRect
scene = QGraphicsScene(0, 0, 400, 200)

view = QGraphicsView(scene)
view.show()
app.exec()
```

If you run this example you'll see an empty window.
The empty graphics scene, shown in a QGraphicsView window.
Not very exciting yet -- but this is our QGraphicsView displaying our empty scene.
As mentioned earlier, QGraphicsView is a widget. In Qt any widgets without a parent display as windows. This is why our QGraphicsView appears as a window on the desktop.
Adding items
Let's start adding some items to the scene. There are a number of built-in graphics items which you can customize and add to your scene. In the example below we use QGraphicsRectItem, which draws a rectangle. We create the item passing in its dimensions, and then set its position, pen, and brush before adding it to the scene.
```python
import sys

from PySide6.QtWidgets import QGraphicsScene, QGraphicsView, QGraphicsRectItem, QApplication
from PySide6.QtGui import QBrush, QPen
from PySide6.QtCore import Qt

app = QApplication(sys.argv)

# Defining a scene rect of 400x200, with its origin at 0,0.
# If we don't set this on creation, we can set it later with .setSceneRect
scene = QGraphicsScene(0, 0, 400, 200)

# Draw a rectangle item, setting the dimensions.
rect = QGraphicsRectItem(0, 0, 200, 50)

# Set the origin (position) of the rectangle in the scene.
rect.setPos(50, 20)

# Define the brush (fill).
brush = QBrush(Qt.red)
rect.setBrush(brush)

# Define the pen (line)
pen = QPen(Qt.cyan)
pen.setWidth(10)
rect.setPen(pen)

scene.addItem(rect)

view = QGraphicsView(scene)
view.show()
app.exec()
```

Running the above you'll see a single, rather ugly colored, rectangle in the scene.
A single rectangle in the scene
Adding more items is simply a case of creating the objects, customizing them, and then adding them to the scene. In the example below we add a circle, using QGraphicsEllipseItem -- a circle is just an ellipse with equal height and width.
```python
import sys

from PySide6.QtWidgets import QGraphicsScene, QGraphicsView, QGraphicsRectItem, QGraphicsEllipseItem, QApplication
from PySide6.QtGui import QBrush, QPen
from PySide6.QtCore import Qt

app = QApplication(sys.argv)

# Defining a scene rect of 400x200, with its origin at 0,0.
# If we don't set this on creation, we can set it later with .setSceneRect
scene = QGraphicsScene(0, 0, 400, 200)

# Draw a rectangle item, setting the dimensions.
rect = QGraphicsRectItem(0, 0, 200, 50)

# Set the origin (position) of the rectangle in the scene.
rect.setPos(50, 20)

# Define the brush (fill).
brush = QBrush(Qt.red)
rect.setBrush(brush)

# Define the pen (line)
pen = QPen(Qt.cyan)
pen.setWidth(10)
rect.setPen(pen)

ellipse = QGraphicsEllipseItem(0, 0, 100, 100)
ellipse.setPos(75, 30)

brush = QBrush(Qt.blue)
ellipse.setBrush(brush)

pen = QPen(Qt.green)
pen.setWidth(5)
ellipse.setPen(pen)

# Add the items to the scene. Items are stacked in the order they are added.
scene.addItem(ellipse)
scene.addItem(rect)

view = QGraphicsView(scene)
view.show()
app.exec()
```

The above code will give the following result.
A scene with two items
The order you add items affects the stacking order in the scene -- items added later will always appear on top of items added first. However, if you need more control you can set the stacking order using .setZValue.
```python
ellipse.setZValue(500)
rect.setZValue(200)
```

Now the circle (ellipse) appears above the rectangle.
Using Zvalue to order items in the scene
Try experimenting with setting the Z value of the two items -- you can set it before or after the items are in the scene, and can change it at any time.
Z in this context refers to the Z coordinate. The X & Y coordinates are the horizontal and vertical position in the scene respectively. The Z coordinate determines the relative position of items toward the front and back of the scene -- coming "out" of the screen towards the viewer.
There are also the convenience methods .stackBefore() and .stackAfter() which allow you to stack your QGraphicsItem behind, or in front of another item in the scene.
```python
ellipse.stackAfter(rect)
```

Making items moveable
Our two QGraphicsItem objects are currently fixed in the position where we place them, but they don't have to be! As already mentioned, Qt's Graphics View framework allows items to respond to user input, for example allowing them to be dragged and dropped around the scene at will. Simple functionality like this is actually already built in; you just need to enable it on each QGraphicsItem. To do that we need to set the flag QGraphicsItem.GraphicsItemFlag.ItemIsMovable on the item.
The full list of graphics item flags is available here.
```python
import sys

from PySide6.QtWidgets import QGraphicsScene, QGraphicsView, QGraphicsItem, QGraphicsRectItem, QGraphicsEllipseItem, QApplication
from PySide6.QtGui import QBrush, QPen
from PySide6.QtCore import Qt

app = QApplication(sys.argv)

# Defining a scene rect of 400x200, with its origin at 0,0.
# If we don't set this on creation, we can set it later with .setSceneRect
scene = QGraphicsScene(0, 0, 400, 200)

# Draw a rectangle item, setting the dimensions.
rect = QGraphicsRectItem(0, 0, 200, 50)

# Set the origin (position) of the rectangle in the scene.
rect.setPos(50, 20)

# Define the brush (fill).
brush = QBrush(Qt.red)
rect.setBrush(brush)

# Define the pen (line)
pen = QPen(Qt.cyan)
pen.setWidth(10)
rect.setPen(pen)

ellipse = QGraphicsEllipseItem(0, 0, 100, 100)
ellipse.setPos(75, 30)

brush = QBrush(Qt.blue)
ellipse.setBrush(brush)

pen = QPen(Qt.green)
pen.setWidth(5)
ellipse.setPen(pen)

# Add the items to the scene. Items are stacked in the order they are added.
scene.addItem(ellipse)
scene.addItem(rect)

ellipse.setFlag(QGraphicsItem.ItemIsMovable)

view = QGraphicsView(scene)
view.show()
app.exec()
```

In the above example we've set ItemIsMovable on the ellipse only. You can drag the ellipse around the scene -- including behind the rectangle -- but the rectangle itself will remain locked in place. Experiment with adding more items and configuring the moveable status.
If you want an item to be selectable you can enable this by setting the ItemIsSelectable flag, for example here using .setFlags() to set multiple flags at the same time.
```python
ellipse.setFlags(QGraphicsItem.ItemIsMovable | QGraphicsItem.ItemIsSelectable)
```

If you click on the ellipse you'll now see it surrounded by a dashed line to indicate that it is selected. We'll look at how to use item selection in more detail in a later tutorial.
A selected item in the scene, highlighted with dashed lines
Another way to create objects
So far we've been creating items by creating the objects and then adding them to the scene. But you can also create an object in the scene directly by calling one of the helper methods on the scene itself, e.g. scene.addEllipse(). This creates the object and returns it so you can modify it as before.
```python
import sys

from PySide6.QtWidgets import QGraphicsScene, QGraphicsView, QGraphicsRectItem, QApplication
from PySide6.QtGui import QBrush, QPen
from PySide6.QtCore import Qt

app = QApplication(sys.argv)

scene = QGraphicsScene(0, 0, 400, 200)

rect = scene.addRect(0, 0, 200, 50)
rect.setPos(50, 20)

# Define the brush (fill).
brush = QBrush(Qt.red)
rect.setBrush(brush)

# Define the pen (line)
pen = QPen(Qt.cyan)
pen.setWidth(10)
rect.setPen(pen)

view = QGraphicsView(scene)
view.show()
app.exec()
```

Feel free to use whichever form you find most comfortable in your code.
You can only use this approach for the built-in QGraphicsItem object types.
Building a more complex scene
So far we've built a simple scene using the basic QGraphicsRectItem and QGraphicsEllipseItem shapes. Now let's use some other QGraphicsItem objects to build a more complex scene, including lines, text and QPixmap (images).
```python
import sys

from PySide6.QtCore import QPointF, Qt
from PySide6.QtWidgets import QGraphicsRectItem, QGraphicsScene, QGraphicsView, QApplication
from PySide6.QtGui import QBrush, QPainter, QPen, QPixmap, QPolygonF

app = QApplication(sys.argv)

scene = QGraphicsScene(0, 0, 400, 200)

rectitem = QGraphicsRectItem(0, 0, 360, 20)
rectitem.setPos(20, 20)
rectitem.setBrush(QBrush(Qt.red))
rectitem.setPen(QPen(Qt.cyan))
scene.addItem(rectitem)

textitem = scene.addText("QGraphics is fun!")
textitem.setPos(100, 100)

scene.addPolygon(
    QPolygonF(
        [
            QPointF(30, 60),
            QPointF(270, 40),
            QPointF(400, 200),
            QPointF(20, 150),
        ]
    ),
    QPen(Qt.darkGreen),
)

pixmap = QPixmap("cat.jpg")
pixmapitem = scene.addPixmap(pixmap)
pixmapitem.setPos(250, 70)

view = QGraphicsView(scene)
view.setRenderHint(QPainter.Antialiasing)
view.show()
app.exec()
```

If you run the example above you'll see the following scene.
Scene with multiple items including a rectangle, polygon, text and a pixmap.
Let's step through the code looking at the interesting bits.
Polygons are defined using a series of QPointF objects which give the coordinates relative to the item's position. So, for example, if you create a polygon object with a point at 30, 20 and then move this polygon object to X & Y coordinates 50, 40, the point will be displayed at 80, 60 in the scene.
Points inside an item are always relative to the item itself, and item coordinates are always relative to the scene -- or the item's parent, if it has one. We'll take a closer look at the Graphics View coordinate system in the next tutorial.
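As a quick sketch of that item-versus-scene distinction (assuming the scene object from the examples above; the polygon and numbers here are just for illustration):

```python
from PySide6.QtCore import QPointF
from PySide6.QtGui import QPolygonF
from PySide6.QtWidgets import QGraphicsPolygonItem

polygon_item = QGraphicsPolygonItem(QPolygonF([QPointF(30, 20), QPointF(120, 90)]))
polygon_item.setPos(50, 40)  # position the whole item within the scene
scene.addItem(polygon_item)

# (30, 20) in item coordinates maps to (80, 60) in scene coordinates.
print(polygon_item.mapToScene(QPointF(30, 20)))
```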
To add an image to the scene we can open it from a file using QPixmap(). This creates a QPixmap object, which we can then in turn add to the scene using scene.addPixmap(pixmap). This returns a QGraphicsPixmapItem, which is the QGraphicsItem type for the pixmap -- a wrapper that handles displaying the pixmap in the scene. You can use this object to perform any changes to the item in the scene.
The multiple layers of objects can get confusing, so it's important to choose sensible variable names which make clear the distinction between, e.g. the pixmap itself and the pixmap item that contains it.
Finally, we set the render hint QPainter.Antialiasing on the view to smooth the edges of diagonal lines. You almost always want to enable this on your views, as otherwise any rotated objects will look very ugly indeed. Below is our scene without antialiasing enabled; you can see the jagged lines on the polygon.
Scene with antialiasing disabled.
Antialiasing has a (small) performance impact however, so if you are building scenes with millions of rotated items it may in some cases make sense to turn it off.
Adding graphics views to Qt layouts
The QGraphicsView is subclassed from QWidget, meaning it can be placed in layouts just like any other widget. In the following example we add the view to a simple interface, with buttons which perform a basic effect on the view -- raising and lowering the selected items' Z value. This has the effect of allowing us to move items in front of and behind other objects.
The full code is given below.
```python
import sys

from PySide6.QtCore import Qt
from PySide6.QtGui import QBrush, QPainter, QPen
from PySide6.QtWidgets import (
    QApplication,
    QGraphicsEllipseItem,
    QGraphicsItem,
    QGraphicsRectItem,
    QGraphicsScene,
    QGraphicsView,
    QHBoxLayout,
    QPushButton,
    QSlider,
    QVBoxLayout,
    QWidget,
)


class Window(QWidget):
    def __init__(self):
        super().__init__()

        # Defining a scene rect of 400x200, with its origin at 0,0.
        # If we don't set this on creation, we can set it later with .setSceneRect
        self.scene = QGraphicsScene(0, 0, 400, 200)

        # Draw a rectangle item, setting the dimensions.
        rect = QGraphicsRectItem(0, 0, 200, 50)
        rect.setPos(50, 20)
        brush = QBrush(Qt.red)
        rect.setBrush(brush)

        # Define the pen (line)
        pen = QPen(Qt.cyan)
        pen.setWidth(10)
        rect.setPen(pen)

        ellipse = QGraphicsEllipseItem(0, 0, 100, 100)
        ellipse.setPos(75, 30)

        brush = QBrush(Qt.blue)
        ellipse.setBrush(brush)

        pen = QPen(Qt.green)
        pen.setWidth(5)
        ellipse.setPen(pen)

        # Add the items to the scene. Items are stacked in the order they are added.
        self.scene.addItem(ellipse)
        self.scene.addItem(rect)

        # Set all items as moveable and selectable.
        for item in self.scene.items():
            item.setFlag(QGraphicsItem.ItemIsMovable)
            item.setFlag(QGraphicsItem.ItemIsSelectable)

        # Define our layout.
        vbox = QVBoxLayout()

        up = QPushButton("Up")
        up.clicked.connect(self.up)
        vbox.addWidget(up)

        down = QPushButton("Down")
        down.clicked.connect(self.down)
        vbox.addWidget(down)

        rotate = QSlider()
        rotate.setRange(0, 360)
        rotate.valueChanged.connect(self.rotate)
        vbox.addWidget(rotate)

        view = QGraphicsView(self.scene)
        view.setRenderHint(QPainter.Antialiasing)

        hbox = QHBoxLayout(self)
        hbox.addLayout(vbox)
        hbox.addWidget(view)

        self.setLayout(hbox)

    def up(self):
        """
        Iterate all selected items in the view, moving them forward.
        """
        items = self.scene.selectedItems()
        for item in items:
            z = item.zValue()
            item.setZValue(z + 1)

    def down(self):
        """
        Iterate all selected items in the view, moving them backward.
        """
        items = self.scene.selectedItems()
        for item in items:
            z = item.zValue()
            item.setZValue(z - 1)

    def rotate(self, value):
        """
        Rotate the object by the received number of degrees.
        """
        items = self.scene.selectedItems()
        for item in items:
            item.setRotation(value)


app = QApplication(sys.argv)

w = Window()
w.show()

app.exec()
```

If you run this, you will get a window like that shown below. By selecting an item in the graphics view and then clicking either the "Up" or "Down" button you can move items up and down within the scene -- behind and in front of one another. The items are all moveable, so you can drag them around too. Moving the slider will rotate the currently selected items by the set number of degrees.
A graphics scene with some custom controls
The raising and lowering is handled by our custom methods up and down, which work by iterating over the currently selected items in the scene -- retrieved using scene.selectedItems() -- and then getting each item's Z value and increasing or decreasing it respectively.
```python
    def up(self):
        """
        Iterate all selected items in the view, moving them forward.
        """
        items = self.scene.selectedItems()
        for item in items:
            z = item.zValue()
            item.setZValue(z + 1)
```

Rotation is handled using the item.setRotation method. This receives the current angle from the QSlider and, again, applies it to any currently selected items in the scene.
```python
    def rotate(self, value):
        """
        Rotate the object by the received number of degrees.
        """
        items = self.scene.selectedItems()
        for item in items:
            item.setRotation(value)
```

Take a look at the QGraphicsItem documentation for some other properties you can control with widgets and try extending the interface to allow you to change them dynamically.
Hopefully this quick introduction to the Qt Graphics View framework has given you some ideas of what you can do with it. In the next tutorials we'll look at how events and user interaction can be handled on items and how to create custom & compound items for your own scenes.
Matt Layman: Postgres to SQLite - Building SaaS #204
Reproducible Builds (diffoscope): diffoscope 278 released
The diffoscope maintainers are pleased to announce the release of diffoscope version 278. This version includes the following changes:
[ Chris Lamb ]
* Temporarily remove procyon-decompiler from Build-Depends as it was removed from testing (#1057532). (Closes: #1082636)
* Add a helpful contextual message to the output if comparing Debian .orig tarballs within .dsc files without the ability to "fuzzy-match" away the leading directory. (Closes: reproducible-builds/diffoscope#386)
* Correctly invert "X% similar" value and do not emit "100% similar". (Closes: reproducible-builds/diffoscope#391)
* Update copyright years.

You can find out more by visiting the project homepage.
PreviousNext: Next level theming with Pinto
Discover how to take your Drupal theming to the next level with the brand-new Pinto module.
by adam.bramley / 27 September 2024
Pinto is a new module written by Daniel Phin (dpi). At PreviousNext, all new projects use it to dramatically improve the developer experience and velocity for theming Drupal sites.
Pinto allows all theming logic to be encapsulated in reusable Theme Objects, removing the need for many traditional Drupal theming architectures such as theme hooks, preprocessing, field formatters, and even Display settings. It can easily integrate with tools such as Storybook to reuse templates—eliminating duplicated markup and making your frontenders happy!
This blog post assumes the use of a design system tool such as Storybook, which will help us paint a picture of how Pinto can level up your Drupal theming.
Issues with traditional Drupal theming
Traditional Drupal theming might look something like this:
- A Frontender builds components in Storybook, containing CSS, JS, and twig templates
- A Backender implements one or more of the following:
- Theme hooks in a module, profile, or theme
- Templates are copied from Storybook and massaged into Drupal’s theme system
- A field formatter injects field data into the theme function
- Preprocess hooks massage or stitch together other data
- Twig templates contain logic to further massage variables for output in twig
- Entity view displays wire up the formatter
This approach can lead to several maintenance issues in the long term.
Have you ever looked at a page and wondered how something got there? Twig debugging can help track down specific templates, but it can be difficult to pinpoint exactly how a particular piece of data made its way onto the page.
Pinto eliminates all of this guesswork by allowing you to break components down into theme objects (i.e. PHP classes) and encapsulate all logic in a single place to be reused anywhere that component is needed.
Pinto basics
In this section, we’ll discuss how to set Pinto up and explain some of the architecture of how the module works.
First off, we need to install Pinto:
```
composer require drupal/pinto
```

Pinto automatically discovers new theme objects inside designated namespaces. You can define any number of these, but we’ll use a single namespace called "MyProjectDs" (Ds stands for Design system) for our purposes. For Pinto to discover the objects in this namespace, you need to define the pinto.namespaces parameter in your services.yml file:
```yaml
parameters:
  pinto.namespaces:
    - 'MyProjectDs'
```

Pinto Theme objects are registered via a PHP Enum inside the namespace defined above. This Enum must implement \Pinto\List\ObjectListInterface. The Enum should also use the Pinto library’s ObjectListTrait, which contains logic for asset discovery and automatically registering theme definitions from your Theme objects.
The enum will look something like this. We recommend placing it inside a module dedicated to your design system. For this demo, we’ll call it my_project_ds.
```php
namespace Drupal\my_project_ds\MyProjectDs;

use Pinto\List\ObjectListInterface;
use Pinto\List\ObjectListTrait;

enum MyProjectList: string implements ObjectListInterface {

  use ObjectListTrait;

}
```

You’ll need to implement at least three methods on this Enum:
- templateDirectory - This tells Pinto where to look for the twig template for the specific theme object it is registering.
- cssDirectory - This tells Pinto where to find css files. For this demonstration, we’ll assume your design system outputs all CSS and JS files into a libraries directory inside the project's web root.
- jsDirectory - As above, but for JS files!
For the latter two methods, it’s as simple as defining the root directory where your compiled CSS and JS files are output. We usually just define these as:
```php
/**
 * {@inheritdoc}
 */
public function cssDirectory(): string {
  return '/libraries/my_project';
}

/**
 * {@inheritdoc}
 */
public function jsDirectory(): string {
  return '/libraries/my_project';
}
```

The templateDirectory method must return the directory in which a specific component’s template lives, which can be complicated since, in a Storybook setup, each component will most likely live in its own directory. For this logic, we can use Attributes on the Enum case to tell Pinto where the template is for each Theme object.
Before we talk about Attributes, let’s look at an example of how to define a Theme Object in the Enum itself.
Each Theme object must be defined in the Enum as a case.
NOTE: Clear the cache after adding new cases for Pinto to discover newly added theme objects.
In this example, we will define a Card Theme object:
```php
namespace Drupal\my_project_ds\MyProjectDs;

use Drupal\my_project_ds\ThemeObject\Card;
use Pinto\List\ObjectListTrait;

enum MyProjectList: string implements ObjectListInterface {

  use ObjectListTrait;

  #[Definition(Card::class)]
  case Card = 'card';

}
```

This definition uses the Pinto library’s Definition attribute to tie the "Card" case to our Card theme object. By default (via the ObjectListTrait), the backed value of the case (i.e. the string value assigned to it) is used as the template name, library name, and theme definition name.
Now, back to our templateDirectory. We can define our own Attributes to make it easy to specify which directory a component’s template is in.
In a perfect world, we want to reuse our frontender’s twig templates in the component’s Storybook directory. Storybook generally lives outside of your web root, but with a few helpers we can easily target these templates. Let’s assume your Storybook directory structure looks something like this, where components is at the root of your repository:
We can utilise the Components module to give us a handy twig namespace to use. Let’s define this in our my_project_ds.info.yml file:
```yaml
name: My Project Design System
description: 'Provides Pinto theme objects for my project.'
core_version_requirement: '>=10'
type: module
components:
  namespaces:
    my_project_components:
      - ../../../../components/src
```

This means Drupal’s theme engine will be able to use @my_project_components to target the root of your Storybook components directory.
NOTE: This assumes the module is inside app/modules/custom/my_project_ds, hence the four parent directories.
Next, we need to define an attribute that takes a component path and will be used to wire up our Theme object with the directory it lives in.
The Attribute class:
```php
namespace Drupal\my_project_ds\MyProjectDs;

#[\Attribute(flags: \Attribute::TARGET_CLASS_CONSTANT)]
class MyProjectComponent {

  public function __construct(
    public readonly string $componentPath,
  ) {
  }

}
```

Now on the Enum case, we can add the attribute:
```php
#[Definition(Card::class)]
#[MyProjectComponent('Components/Card')]
case Card = 'card';
```

Then we just need a sprinkle of reflection in the templateDirectory function, and we’re done!
```php
/**
 * {@inheritdoc}
 */
public function templateDirectory(): string {
  $reflection = new \ReflectionEnumUnitCase($this::class, $this->name);
  $definition = ($reflection->getAttributes(MyProjectComponent::class)[0] ?? NULL)?->newInstance()
    ?? throw new \LogicException('All component cases must have a `' . MyProjectComponent::class . '.');
  return \sprintf('@my_project_components/%s', $definition->componentPath);
}
```

While this may look scary, it’s not! We’re simply getting the MyProjectComponent attribute from the case and appending the componentPath (i.e. the string "Components/Card" parameter above) to our custom twig namespace.
After this setup, it becomes extremely easy to add new theme objects, vastly speeding up your theming development. Let’s look at how to do that now!
An example Pinto component
Following on from the previous example, let’s set up a Card component. A Card is a fairly universal component in almost every project. In this example, a Card has:
- A title
- An image
- An optional description
The twig template in Storybook might look something like this (reminder that this would be inside the components/src/Components/Card/card.html.twig file):
```twig
<div class="card">
  <div class="image">
    {{ image }}
  </div>
  <div class="card__content">
    <div class="card__title">{{ title }}</div>
    {% if description %}
      <div class="card__description">
        {{ description }}
      </div>
    {% endif %}
  </div>
</div>
```

At its most basic level (don’t worry, we’ll get more advanced in the next blog post!), all we need for this theme object is something like this:
```php
namespace Drupal\my_project_ds\ThemeObject;

use Drupal\Core\Cache\CacheableDependencyInterface;
use Drupal\my_project_ds\MyProjectDs\MyProjectObjectTrait;
use Pinto\Attribute\ThemeDefinition;

#[ThemeDefinition([
  'variables' => [
    'title' => '',
    'description' => '',
    'image' => '',
  ],
])]
final class Card implements CacheableDependencyInterface {

  use MyProjectObjectTrait;

  private function __construct(
    private readonly string $title,
    private readonly array $image,
    private readonly ?string $description,
  ) {}

  protected function build(mixed $build): mixed {
    return $build + [
      '#title' => $this->title,
      '#description' => $this->description,
      '#image' => $this->image,
    ];
  }

}
```

Let’s go through each part to understand what’s going on.
ThemeDefinition attribute
This is essentially Pinto's version of hook_theme. It takes whatever is inside the theme definition and uses it when registering the Theme object as a Drupal theme hook. In this example, we simply define the variables we need to pass through. Pinto will even throw exceptions when any of these variables are missing from the resulting render array!
MyProjectObjectTrait
This is another helper trait that will be used across all components. It provides convenience functions and wraps other traits to make adding new Theme objects a breeze.
```php
namespace Drupal\my_project_ds\MyProjectDs;

use Drupal\Core\Cache\CacheableDependencyInterface;
use Drupal\Core\Cache\CacheableMetadata;
use Drupal\Core\Cache\RefinableCacheableDependencyTrait;
use Drupal\pinto\Object\DrupalObjectTrait;

trait MyProjectObjectTrait {

  use DrupalObjectTrait;
  use RefinableCacheableDependencyTrait;

  public function __invoke(): mixed {
    return $this->pintoBuild(function (mixed $build): mixed {
      if (!$this instanceof CacheableDependencyInterface) {
        throw new \LogicException(static::class . ' must implement CacheableDependencyInterface');
      }
      (new CacheableMetadata())->addCacheableDependency($this)->applyTo($build);
      return $this->build($build);
    });
  }

  abstract protected function build(mixed $build): mixed;

}
```

First up is the __invoke function. This is a PHP magic method that is becoming more and more common in PHP codebases (including Drupal!). It’s run when calling an object as a function. The pattern lets us “invoke” Pinto Theme objects like this (note the brackets at the end):
```php
$card = new Card('Title', $image, 'Description');
$build = $card();
```

We then have an abstract build function, which you can see in the Card example above. We simply take the properties passed to the card and output them into the theme variables.
Our object trait also pulls in Pinto traits and core's trait for adding cacheable metadata to a Theme object—more on this in the next blog post!
Constructor
The constructor on the Theme object lets us define strictly typed properties for what we need for each theme variable. Generally speaking, we use a mix of constructors and factory methods to create Theme objects and then invoke them, as we’ve seen above.
What about CSS and JS?
Pinto has built-in attributes that automatically include CSS and JS when a theme object is invoked. This is done by adding the attributes to the Theme object class.
```php
use Pinto\Attribute\Asset\Css;

#[Css('card.css', preprocess: TRUE)]
final class Card implements CacheableDependencyInterface {
```

Based on the cssDirectory method on our MyProjectList enum, this will include the /libraries/my_project/card.css file automatically whenever a Card Theme object is rendered. There is also an equivalent Js attribute.
How to render Pinto Theme objects
It may seem like a lot of setup upfront, but hopefully you can see that once the initial enums and traits are established, it’s very easy to add new Theme objects.
Now that we’re all set up, we can invoke this Theme object anywhere on our Drupal site! We can even chain/nest these Theme Objects together!
You may have noticed that the $image parameter for the Card was an array. Invoked Theme objects output a render array, so we could do something like this:
```php
$image = new Image('path/to/image.png');
$card = new Card('My Card Title', $image(), 'My Description');
return $card();
```

Here, Image is another Theme object. Or, you can simply use Drupal’s existing render arrays to theme an image!
Next up
In this blog post, we focused on the basic principles of Pinto. We also looked at how to hit the ground running with some nice helper functions and integration with a design system such as Storybook.
In the next blog, we’ll look at integrating this with bundle classes to streamline theming and really make Pinto sing! Following that, we'll also be doing a comparison with Single Directory Components (SDC) and discuss why we prefer to use Pinto instead.