Feeds
The Drop Times: Out-of-the-Box Functionality Survey Reveals the Community's Enthusiasm for Starshot
The Drupal community has taken another step forward under the Starshot Initiative. Recently, the team concluded a survey aimed at pinpointing the most desired out-of-the-box features and contributed modules for the upcoming ‘Drupal CMS’. This survey targeted ambitious marketers as part of the broader Drupal Starshot strategy, resulting in 60 detailed submissions and over 100 feature suggestions. These insights, now available on Drupal.org thanks to Pamela Barone's announcement, will play a crucial role in shaping the platform’s future.
The feedback received from the survey highlights a strong community interest in several key areas. Among the most frequently mentioned were enhancements to page-building tools, SEO capabilities, improved form builders, and content management functionalities. The desire for better security, media management, and multilingual support also stood out as significant themes. Interestingly, while many of these suggestions align with existing development initiatives, the survey also introduced several fresh ideas that are now under consideration by the Drupal leadership team.
Particularly noteworthy are the suggestions for modules that could elevate Drupal’s out-of-the-box experience. Modules like Metatag, Webform, and Admin Toolbar were repeatedly mentioned and are now being evaluated for possible inclusion in future releases. These modules, known for their functionality and ease of use, could significantly enhance the user experience if integrated into the out-of-the-box Drupal CMS offering.
While the survey is not being treated as a direct vote, it serves as a powerful validation tool. The results ensure that the Drupal development tracks are closely aligned with the needs and expectations of its community. As the leadership team assesses these suggestions, they are keenly aware of the balance between innovation and the consistency of user experience that Drupal is known for.
Curious about the detailed findings and how they might shape the next generation of Drupal? You can dive deeper into the survey results here: Community Demands Enhanced Out-of-the-Box Features in DrupalCMS. As the Starshot Initiative continues to gather momentum, the community eagerly awaits the next steps in this exciting journey.
As we turn our attention to the latest from The Drop Times, the focus has been on the ongoing Drupal Association Board Elections. As part of their "Meet the Candidate" campaign, several candidates have shared their visions and plans for Drupal's future.
Matthew Saunders discusses his candidacy in an interview with Alka Elizabeth, a sub-editor at The Drop Times. Focusing on improving governance, fostering inclusivity, and supporting neurodiverse individuals, Matthew outlines his motivations for running for the Drupal Association Board. His ideas provide valuable insights for voters as the election progresses.
Kevin Quillen, Practice Lead at Velir, brings over 16 years of experience to his candidacy. In his interview with Alka Elizabeth, Kevin emphasizes the importance of modernizing Drupal.org, attracting new developers, and enhancing Drupal's global appeal. His vision for the future could significantly impact the platform’s evolution.
Albert Hughes, Product Owner at Stanford University, offers a unique perspective on expanding Drupal’s reach. His candidacy is grounded in his diverse experiences and a strong commitment to innovation. As the election continues, Albert’s ideas for growth and development resonate with many in the community.
In the final installment of The Drop Times' campaign, Alejandro Moreno Lopez, Partner Manager and Developer Relations at Pantheon, shares his journey within the Drupal community. Alejandro is passionate about reducing the Association's dependency on DrupalCon and fostering collaboration and innovation. His interview provides a compelling case for his candidacy as voting continues until September 5th.
Discover why Drupal's latest product will be called 'Drupal CMS' and not just 'Drupal.' An insightful article authored by Sebin A Jacob, Editor-in-Chief of The Drop Times, explores the strategic decision-making, community feedback, and future implications behind this significant naming shift that redefines the way we think about Drupal's evolution.
The Drupal Decoupled project, also known as headless Drupal, has introduced a new feature to simplify the adoption and implementation of decoupled architecture. This project, which separates the back-end content management from the front-end display, now leverages "Recipes" and can be easily adopted as a Composer Project Template. Jesús Manuel Olivas, Co-Founder and CEO of Octahedroid and Composabase, recently announced this update.
Morpht has launched its "Content Recommendation Playbook," showcasing how personalized content recommendations using Recombee's service can enhance user experiences. The playbook explains how to integrate these systems into Drupal and GovCMS to deliver tailored content based on user behavior, boosting engagement.
During DrupalCon Portland 2022, concerns over the sustainability of free software led to the conception of Drupal Forge, a platform aimed at financially supporting project maintainers. The idea, sparked by Webform developer Jacob Rockowitz, was further developed by Darren Oh, who proposed adding a launch button for trial sites on project pages to generate recurring revenue. While the initiative has garnered interest, challenges remain in implementing and scaling this solution.
Sponsorship opportunities for BADCamp 2024, set for October 24-25 in Oakland, California, are now open, offering extensive visibility to organizations within the Drupal community. With packages ranging from $1,000 to $2,000, sponsors can gain exposure through speaking engagements, branding at summits, and hosting social events.
Chattanooga Open Source Camp, featuring DrupalCamp Chattanooga 2024, seeks sponsors for its November 2nd event at Chattanooga State Community College. Sponsorships range from $20 to $2,000, offering opportunities for businesses to gain visibility within the tech community. In-kind sponsorships are also welcomed, with a total event budget of $6,500.
The Drop Times has been named the official Media Partner for DrupalCamp Pune 2024, set for October 19-20 at Yashada, Pune. This partnership will ensure comprehensive coverage of the event, featuring sessions, workshops, and keynotes from industry leaders. Organized by the Drupalers Association Pune, the camp aims to foster innovation, learning, and networking within the Drupal community.
The Splash Awards will debut in Asia during DrupalCon Singapore 2024, with submissions open until September 27. The prestigious event, recognizing excellence in Drupal web development, will culminate in a ceremony on December 9 at the Garden Ballroom, PARKROYAL Collection Marina Bay.
The Drupal CEO Network and the Drupal Association have extended the deadline for the 2024 Drupal Business Survey to September 4th. This annual survey gathers crucial insights from Drupal business leaders, shaping an anonymized industry report to guide strategic decisions. The results will be unveiled at DrupalCon Barcelona 2024, with discussions set for September 25 and 26.
The Aten Design Group will host an online session on August 28, 2024, at 2:00 PM EDT to discuss the recent release of Drupal 11. Seth Hill, Senior Developer at Aten, will lead the session designed for Drupal site owners, content administrators, and developers who want to learn more about the new version and its potential benefits.
We acknowledge that there are more stories to share. However, due to selection constraints, we must pause further exploration for now.
To get timely updates, follow us on LinkedIn, Twitter and Facebook. You can also join us on Drupal Slack at #thedroptimes.
Thank you,
Sincerely,
Kazima Abbas
Sub-editor, The DropTimes.
FrOScon 2024
This year, I attended FrOScon for the first time. FrOScon is the biggest conference about free and open-source software in Germany. It takes place every year in Bonn/Siegburg (Germany) over a weekend and is free to attend.
For the first time, I was not at a conference to staff a KDE stand. My employer had a stand there, and it was a great occasion for me to meet some colleagues and fellow KDE and Matrix contributors.
So I spent the majority of my time at the GnuPG stand, discussing many things with Volker, including KDE PIM and the future of KWallet.
I also met many Matrix community members and am excited to attend the Matrix Conference next month.
All in all, it was a great conference and I hope to see more KDE people there next year, and maybe we'll even have our own KDE stand.
Python Bytes: #398 Open source makes you rich? (and other myths)
Zato Blog: Integrating with Jira APIs
Continuing the series of articles about the newest cloud connections in Zato 3.2, this episode covers Atlassian Jira from the perspective of invoking its APIs to build integrations between Jira and other systems.
There are essentially two modes of integrating with Jira:
- Jira reacts to events taking place in your projects and invokes your endpoints accordingly via WebHooks. In this case, it is Jira that explicitly establishes connections with and sends requests to your APIs.
- Jira projects are queried periodically or as a consequence of events triggered by Jira using means other than WebHooks.
The first case is usually more straightforward to conceptualize - you create a WebHook in Jira, point it to your endpoint and Jira invokes it when a situation of interest arises, e.g. a new ticket is opened or updated. I will talk about this variant of integrations with Jira in a future instalment as the current one is about the other situation, when it is your systems that establish connections with Jira.
The reason why it is more practical to first speak about the second form is that, even if WebHooks are somewhat easier to reason about, they do come with their own ramifications.
To start off, assuming that you use the cloud-based version of Jira (e.g. https://example.atlassian.net), you need to have a publicly available endpoint for Jira to invoke through WebHooks. Very often, this is undesirable because the systems that you need to integrate with may be internal ones, never meant to be exposed to public networks.
Secondly, your endpoints need to have a TLS certificate signed by a public Certificate Authority and they need to be accessible on port 443. Again, both of these are things that most enterprise systems will not allow at all, or that may take months or years to change internally across the various corporate departments involved.
Lastly, even if a WebHook can be used, it is not always a given that the initial information that you receive in the request from a WebHook will already contain everything that you need in your particular integration service. Thus, you will still need a way to issue requests to Jira to look up details of a particular object, such as tickets, in this way reducing WebHooks to the role of initial triggers of an interaction with Jira. For example, a WebHook invokes your endpoint, you have a ticket ID on input, and then you invoke Jira back anyway to obtain all the details that you actually need in your business integration.
The end situation is that, although WebHooks are a useful concept that I will write about in a future article, they may very well not be sufficient for many integration use cases. That is why I start with integration methods that are alternatives to WebHooks.
Alternatives to WebHooks

If, in our case, we cannot use WebHooks, then what next? Two good approaches are:
- Scheduled jobs
- Reacting to emails (via IMAP)
Scheduled jobs will let you periodically inquire with Jira about the changes that you have not processed yet. For instance, with a job definition as below:
Now, the service configured for this job will be invoked once per minute to carry out any integration work required. For instance, it can get a list of tickets changed since the last time it ran, process each of them as required in your business context and update a database with information about what has just been done - the database can be based on Redis, MongoDB, SQL or anything else.
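As a minimal sketch (the job definition itself lives in the Dashboard screenshot from the original post, e.g. one that runs every minute), the service invoked by such a job could look like the code below. The service name and the get_unprocessed_keys helper are hypothetical stand-ins for your own query or database logic:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

def get_unprocessed_keys():
    # A hypothetical helper - in practice, this would ask Jira for tickets
    # changed since the last run, or consult your own database of
    # already-processed tickets (Redis, MongoDB, SQL or anything else).
    return ['ABC-123']

class ProcessJiraChanges(Service):
    def handle(self):

        # Invoked by the scheduler, e.g. once per minute ..
        for key in get_unprocessed_keys():

            # .. process each ticket as required in your business context ..
            self.logger.info('Processing ticket %s', key)

            # .. and record the ticket as done in your database of choice.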
Integrations built around scheduled jobs make the most sense when you need to make periodic sweeps across large swaths of business data - these are the "Give me everything that changed in the last period" kind of interactions, where you do not know precisely how much data you are going to receive.
In the specific case of Jira tickets, though, an interesting alternative may be to combine scheduled jobs with IMAP connections:
The idea here is that when new tickets are opened, or when updates are made to existing ones, Jira will send out notifications to specific email addresses and we can take advantage of it.
For instance, you can tell Jira to CC or BCC an address such as zato@example.com. Now, Zato will still run a scheduled job but, instead of connecting with Jira directly, that job will look up unread emails in its inbox ("UNSEEN" per the relevant RFC).
Anything that is unread must be new since the last iteration, which means that we can process each such email from the inbox, in this way guaranteeing that we process only the latest updates, dispensing with the need for our own database of tickets already processed. We can extract the ticket ID or other details from the email, look up its details in Jira and then continue as needed.
All the details of how to work with IMAP emails are provided in the documentation but it would boil down to this:
# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class MyService(Service):
    def handle(self):

        conn = self.email.imap.get('My Jira Inbox').conn

        for msg_id, msg in conn.get():

            # Process the message here ..
            process_message(msg.data)

            # .. and mark it as seen in IMAP.
            msg.mark_seen()

The natural question is - how would the "process_message" function extract details of a ticket from an email?
There are several ways:
- Each email has a subject of a fixed form - "[JIRA] (ABC-123) Here goes description". In this case, ABC-123 is the ticket ID.
- Each email will contain a summary, such as the one below, which can also be parsed:
- Finally, each email will have an "X-Atl-Mail-Meta" header with interesting metadata that can also be parsed and extracted:
The first option is the most straightforward and likely the most convenient one - simply parse out the ticket ID and call Jira with that ID on input for all the other information about the ticket. How to do it exactly is presented in the next chapter.
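For illustration, here is a minimal sketch of such parsing, assuming the fixed subject format quoted above; the pattern and function names are ours, not part of any library:

# -*- coding: utf-8 -*-

# stdlib
import re

# Matches the "(ABC-123)" part of subjects such as
# "[JIRA] (ABC-123) Here goes description".
ticket_id_pattern = re.compile(r'\(([A-Z][A-Z0-9]*-\d+)\)')

def extract_ticket_id(subject):
    match = ticket_id_pattern.search(subject)
    return match.group(1) if match else None

# Prints "ABC-123"
print(extract_ticket_id('[JIRA] (ABC-123) Here goes description'))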
Regardless of how we parse the emails, the important part is that we know that we invoke Jira only when there are new or updated tickets - otherwise there would not have been any new emails to process. Moreover, because it is our side that invokes Jira, we do not expose our internal system to the public network directly.
However, from the perspective of the overall security architecture, email is still part of the attack surface, so we need to make sure that we read and parse emails with that in view. In other words, regardless of whether it is Jira invoking us or us reading emails from Jira, all the usual security precautions regarding API integrations and accepting input from external resources still hold and need to be part of the design of the integration workflow.
Creating Jira connections

The above presented the ways in which we can arrive at the point of invoking Jira, and now we are ready to actually do it.
As with other types of connections, Jira connections are created in Zato Dashboard, as below. Note that you use the email address of a user on whose behalf you connect to Jira but the only other credential is that user's API token previously generated in Jira, not the user's password.
Invoking Jira

With a Jira connection in place, we can now create a Python API service. In this case, we accept a ticket ID on input (called "a key" in Jira) and we return a few details about the ticket to our caller.
This is the kind of service that could be invoked by one triggered from a scheduled job. That is, we would separate the tasks: one service would be responsible for opening IMAP inboxes and parsing emails, and the one below would be responsible for communication with Jira.
Thanks to this loose coupling, we make everything much more reusable - not only can the services be changed independently, but, more importantly, with such separation both of them can be reused by future services as well, without tying them rigidly to this one integration alone.
# -*- coding: utf-8 -*-

# stdlib
from dataclasses import dataclass

# Zato
from zato.common.typing_ import cast_, dictnone
from zato.server.service import Model, Service

# ###########################################################################

if 0:
    from zato.server.connection.jira_ import JiraClient

# ###########################################################################

@dataclass(init=False)
class GetTicketDetailsRequest(Model):
    key: str

@dataclass(init=False)
class GetTicketDetailsResponse(Model):
    assigned_to: str = ''
    progress_info: dictnone = None

# ###########################################################################

class GetTicketDetails(Service):

    class SimpleIO:
        input  = GetTicketDetailsRequest
        output = GetTicketDetailsResponse

    def handle(self):

        # This is our input data
        input = self.request.input # type: GetTicketDetailsRequest

        # .. create a reference to our connection definition ..
        jira = self.cloud.jira['My Jira Connection']

        # .. obtain a client to Jira ..
        with jira.conn.client() as client:

            # Cast to enable code completion
            client = cast_('JiraClient', client)

            # Get details of a ticket (issue) from Jira
            ticket = client.get_issue(input.key)

            # Observe that ticket may be None (e.g. invalid key), hence this 'if' guard ..
            if ticket:

                # .. build a shortcut reference to all the fields in the ticket ..
                fields = ticket['fields']

                # .. build our response object ..
                response = GetTicketDetailsResponse()
                response.assigned_to = fields['assignee']['emailAddress']
                response.progress_info = fields['progress']

                # .. and return the response to our caller.
                self.response.payload = response

# ###########################################################################

Creating a REST channel and testing it

The last remaining part is a REST channel to invoke our service through. We will provide the ticket ID (key) on input and the service will reply with what was found in Jira for that ticket.
We are now ready for the final step - we invoke the channel, which invokes the service which communicates with Jira, transforming the response from Jira to the output that we need:
$ curl localhost:17010/jira1 -d '{"key":"ABC-123"}'
{
    "assigned_to": "zato@example.com",
    "progress_info": {
        "progress": 10,
        "total": 30
    }
}
$

And this is everything for today - just remember that this is just one way of integrating with Jira. The other one, using WebHooks, is something that I will go into in one of the future articles.
More resources

➤ Python API integration tutorial
➤ What is an integration platform?
➤ Python Integration platform as a Service (iPaaS)
➤ What is an Enterprise Service Bus (ESB)? What is SOA?
Haruna 1.2.0
Haruna version 1.2.0 is out with a new footer style.
Availability of other package formats depends on your distro and the people who package Haruna.
Windows version:
- haruna-1.2.0-windows-gcc-x86_64.exe
- haruna-1.2.0-windows-gcc-x86_64.7z
- haruna-1.2.0-windows-gcc-x86_64-dbg.7z
If you like Haruna then support its development: GitHub Sponsors | Liberapay | PayPal
Feature requests and bugs should be posted on bugs.kde.org, but for bugs make sure to fill in the template and provide as much information as possible.
Changelog: 1.2.0

- Added floating footer/bottom toolbar style with two ways to trigger it:
- on every mouse movement of the video area
- only when the mouse is in the lower part of the video area
- Removed the docbook and moved its content to tooltips
- Middle clicking the playlist scrolls to the playing item
KDE Goals - Our Cumulative Culture
Every two years, the KDE community selects three goals that serve as focal points for the entire community's efforts in the coming years. This cyclical process of goal-setting and community-wide focus is a great example of KDE's Cumulative Culture in action.
This concept, typically observed in human societies, refers to the ability to build upon previous knowledge and innovations to create increasingly complex and effective solutions. In KDE's case, each cycle of goals represents a new layer of accumulated wisdom, i.e. new features and more stability.
The First Cycle (2018-2020)

The first cycle of goals laid the groundwork with its focus on community growth, privacy, and usability.
- Streamlined Onboarding: Focused on attracting and retaining new contributors by making the onboarding process smoother and more engaging.
- Privacy Software: Prioritized user privacy and security, ensuring KDE software respects user data and complies with security standards.
- Usability & Productivity: Aimed to enhance the usability and productivity of KDE software, making it powerful yet easy to use.
The Second Cycle (2020-2022)

The second cycle tackled more complex challenges, with goals like Wayland implementation improvements (which laid the foundation for the Plasma 6 release), improving the app ecosystem, and ensuring consistency in design and functionality.
- Wayland: This task aimed at stabilizing Wayland support across KDE apps.
- All About the Apps: Improved KDE's app infrastructure, enabling more efficient app delivery and better support services.
- Improve Consistency across the Board: Ensured uniformity in design and functionality across KDE software, improving usability and reducing redundancy.
The Third Cycle (2022-2024)

The third cycle, which is currently coming to an end, was about progress and adaptation, with a focus that expanded to include environmental responsibility, operational efficiency, and inclusive design.
- Sustainable Software: Focused on making KDE software more energy-efficient and environmentally friendly by implementing practices that reduce resource consumption and ensure long-term sustainability.
- Automate and Systematize Internal Processes: Aimed to streamline KDE’s internal workflows by automating repetitive tasks, adding code tests across projects and creating a Quality Assurance team to name a few.
- KDE For All: Sought to make KDE software accessible and inclusive for all users.
Now, as we enter the fourth cycle of the KDE Goals, we see the full power of this cumulative process. Each goal, whether fully achieved or not, contributes to the collective knowledge and capability of the KDE community. Ideas and partial solutions from past cycles become a solid foundation of knowledge and experience that support future efforts.
The community is currently voting on the following proposals for the next KDE Goals cycle that will guide our efforts and shape our focus for the coming years:
- Enhancing control and automation: integrate KDE Plasma (and apps) with Smart Home Ecosystems
- Freedom through Better Data and Workflow Organization and Management
- KDE Needs You! 🫵 - Formalise and boost KDE's processes for recruiting active contributors
- KDE-based Text Snippet Expansion
- Sandbox all the things!
- Plasma - A Beacon for Open Design
- Refining and Enriching KDE: Empowering Users with Convenient and Intuitive Features
- Streamlined Application Development Experience
- Unify the Plasma experience
- We care about your Input
The three most voted goals will be announced at Akademy, where there will also be a wrap-up talk about the achievements of the current goals. Also, there will be Birds-of-a-feather (BoF) sessions with the new goal champions.
Join the Matrix room and keep an eye on the website for the latest KDE Goals updates.
Matt Layman: Layman's Guide to Python Built-in Functions
What's New In The Revised Blue Angel Criteria
KDE's Okular is the first software product to be awarded the Blue Angel label for resource- and energy-efficient software products. The certification was based on the first version of the criteria for this product group, which were introduced in 2020. Now the criteria have been updated. What has changed, and what does that mean for KDE?
The revised criteria are available as version 4 on the Blue Angel web site. Only the German version is currently available; the English version will follow shortly.
New software categories

The biggest change is the scope of the label. In the past it was limited to desktop software. With the updated version, the criteria also include software on mobile devices and server software, or a combination of these categories, such as a web service with mobile and desktop clients.
The biggest challenge is the measurement of the energy and resource efficiency for these new categories, which requires a more flexible approach and must accommodate scenarios where the measurement cannot be done by inserting a meter in front of the power supply of a single device. The new criteria address this by defining applicable methods for the measurement of mobile and server applications.
The extended scope covers a much broader range of software. For KDE the desktop category is most relevant, but of course a lot of software also interacts with a server component, for example an email client like KMail, which could now be treated and assessed as a combined client-server system to give more realistic and relevant results.
More flexible measurement procedure

The expansion in scope requires an expanded view on the measurement of energy and resource efficiency as well. The first version of the criteria was quite strict and prescribed a very specific measurement procedure on specified reference systems. It was based on a comparison of measurements in a representative usage scenario and in idle mode. This gave a realistic impression of what the usage of a computer program meant in terms of energy consumption.
The new criteria allow for more variation in how the measurements are carried out. The original method is still there, but variations which lead to comparable results are possible as well. This change means that a new criterion was introduced to document the way measurements are done.
In addition to the measurement of the usage scenario, a new type of measurement was introduced. This measures total energy consumption of a production system over a longer period of time. This is particularly useful for server applications, where this method can lead to more realistic numbers by averaging resource consumption over real-world usage by multiple users.
For mobile applications, the measurement also has to include the data volume transmitted during a standard usage scenario and the list of URLs it has accessed. This is based on the assumption that large volumes of data transfer imply a higher energy usage. It can also be used to assess if the application is using advertisements or is collecting tracking information. Both are forbidden under the revised Blue Angel criteria.
Ongoing assessment of energy and resource efficiency

The original criteria demanded that updates of the software still run on old reference systems and that the energy consumption not increase by more than 10%. They were not very clear about how exactly this should be proven and documented. Especially for software which is released very often, testing every individual update is impractical. For mobile, and even more so for server software, update cycles can be very short, up to multiple updates a day.
In the updated criteria there is a more precise way of handling updates. The general idea is still that updated software must run on old hardware and that energy consumption must not increase too much. But it's not tied to individual updates anymore. The required procedure is to do a measurement at least once a year and publish the results as part of the documentation of the software product. This includes documentation of the measurement setups and any changes to them, as well as preserving the history of measurements, so that users can judge for themselves how much energy and resource usage is increasing over time.
This procedure clarifies the requirements and opens a pragmatic way of measuring updates. It implies a certain burden of keeping the documentation up to date.
Consequences for KDE and Okular

KDE holds the Blue Angel label for its PDF viewer Okular. This is desktop software and the standard usage scenario doesn't include any network access. That means that the expanded scope does not change anything for the existing certification. The revised criteria open up the opportunity to apply for the Blue Angel label for mobile software, such as KDE Connect, and mixed scenarios which also include server components, but the eco-certification for Okular is covered as it was before.
The more flexible measurement criteria give us more leeway in how we are doing the measurements. We have set up KEcoLab for being able to regularly do measurements. This setup follows the procedure prescribed in the original criteria. As this is still valid, it also means no change for us, and our measurements still fulfill the criteria. However, it gives us more opportunities to improve the lab and doesn't strictly tie us to the original list of reference systems anymore. We might want to take advantage of that.
The documentation of the measurement system is something we have always done in a transparent way, so this also doesn't require any big changes on our side. We have to consider how to best convey this in the documentation of Okular, but this is mostly a question on how we communicate the existing content.
The ongoing assessment of energy and resource efficiency ties very well into how we handle software updates. We have a continuous release stream with frequent updates and incremental changes. This fits the model of the new criteria. We have to review how we include regular updates of the documentation and measurement data in releases, but this again is mostly a question of how we communicate the existing content.
Conclusion

The revised criteria provide a welcome expansion of the Blue Angel to more categories of software and a more flexible way to do energy and resource efficiency measurements. They continue to align well with how KDE develops software in general and Okular in particular, so we do not see any issues with continuing the Blue Angel certification for Okular.
We would be happy if the new version of the criteria increased adoption of the Blue Angel ecolabel for resource- and energy-efficient software. Sustainable software is an important topic, and the Blue Angel can be one way of making progress in this area more visible to a broad audience.
NextCloudPi on Raspberry Pi 5
I finally took an evening to get NextCloudPi installed on a Raspberry Pi 5 with a large-ish NVMe drive. This was not a smooth ride. For your pleasure, this is how I got it working.
First, use Jeff Geerling’s guide to get the Pi booting from the NVMe drive.
Second, use this guide to move from Debian networking to systemd-networkd, but do not hold the avahi-daemon package.
Third, run the NextCloudPi curl install script.
Next up – the migration from my old instance. I have 1.5TB of files on a spin disk connected via USB that I need to move to the new NVMe storage – but that is for another night.
For the record – I do love NextCloud and NextCloudPi, so no finger pointing here, just sharing some frustration and how I got around the issue.
Thomas Lange: Custom Live Media, also for Newer Hardware
At this year's Debian conference in South Korea I presented [1] the new feature of the FAIme web service. You can now build your own Debian live media/ISO.
The web interface provides various settings, e.g. adding a user name and password, selecting the Debian release (stable or testing), the desktop environment and the language. Additionally you can add your own list of packages that will be installed into the live environment. It's possible to define a custom script that gets executed during the boot process. For remote access to the live system, you can easily specify a GitHub, GitLab or Salsa account whose public ssh key will be used for passwordless root access. If your hardware needs special GRUB settings, you may also add those. I'm thinking about adding an autologin checkbox, so the live media could be used for a kiosk system.
And finally, newer hardware is supported with the help of the backports kernel for the Debian stable release (aka bookworm). This combination is not available from the official Debian live images or the netinst media because the latter has some complicated dependencies which are not that easy to resolve [2]. At DebConf24 I talked to Alper, who has some ideas [3] for how to improve the Debian installer environment, which then may support a backports kernel.
The FAI web service for live ISO is available at
Seth Michael Larson: 2024 Minnesota State Fair foods
Published 2024-08-25 by Seth Larson
If you didn't know, I'm from Minnesota. Minnesotans love their State Fair, and I'm not an exception! My wife and I were lucky enough to go to a State Fair preview for LuLu's Public House for fried ranch dressing among a handful of new drinks. I shared my thoughts on Mastodon and a few folks seemed interested in hearing more: so here's more!
Cajun fried pickles from The Perfect Pickle

These are hands-down the best food at the Minnesota State Fair. You eat an order, ponder getting more (some years we do!), and then wonder to yourself why they put the best of the best right next to the shuttle entrance. Don't go out looking for answers lest they move these further away; sometimes it's best to leave sleeping “pickle dogs” lie.
Seriously, if you like pickles even a little bit, get these pickles. You can get them quick if you're lucky and other folks don't realize there are supposed to be six lines of people taking orders.
They're ripping hot right when they hand them to you, so if you're like me and enjoy food “biting back” then don't delay! 🔥
This year included a noticeable increase in the amount of Cajun seasoning, or we got lucky and someone behind the scenes gave us an extra coating (either way we're not complaining!)
Peanut Butter Bacon Cakes and Blue Cheese & Corn Fritz from The Blue Barn

Celebrating their 10th consecutive year at the Minnesota State Fair, The Blue Barn is always a fan favorite. Seriously, run over there if you get to the fair early to beat the massive lines for food and drinks.
We grabbed the new Peanut Butter Bacon Cakes along with the returning classic Blue Cheese & Corn Fritz which I had never tried before.
The Peanut Butter Bacon Cakes were really great: thick-cut bacon griddled inside strips of pancake batter, served with jelly and a peanut butter whipped cream. Perfect combo of savory and sweet, and you're in complete control of the ratios. The bacon and pancake flavors reminded me of learning to make pancakes with my late grandfather. Although that bacon was microwave-ready Hormel bacon... I promise this one's delish!
The Blue Cheese & Corn Fritz was really great; I missed out on this one last year. Perfect amount of sweetness from the corn, and a really well-balanced, cheesy little bite! Wish I could have had more than one of these, but we were sharing amongst a big group!
Wrangler Waffle Burger, Bacon-Wrapped Pickle Dog, and “Kind of a Big Dill” Lemonade from Nordic Waffles

Another vendor that fills up immediately after opening, Nordic Waffles should be top of your list because of two returning items that debuted in 2023: the Bacon-Wrapped Pickle Dog and the Pickle Lemonade. Both of these are really great; the lemonade sounds strange but works really well (even if you don't love pickles). The subtle saltiness balances out the sweetness and tartness, which makes for a dangerously drinkable item.
The Wrangler was good, it's one of those winning combinations of flavors that is really hard to mess up: beef, cheese, caramelized onions, and a mayo-based sauce. The onions being grilled into the waffle was fun but didn't do much flavor-wise (they might as well have been a topping), honestly wish they went all-out on the onions to the point of being noticeable texture-wise in the waffle. The bacon-wrapped pickle dog is as awesome as it sounds, so much more interesting flavor-wise!
I'm also not a fan of their choice of sauce: they went with Whataburger, a famously mid-tier burger joint in Texas, of all places? This is a grave error by Nordic Waffles, because Minnesota and Texas have serious State Fair beef: Minnesota sees the highest single-day attendance over its 12 days, while the Texas State Fair sees the highest total attendance over its 24 days (it might be obvious which State Fair I think is the true champion).
Sweet Corn Cola Float from Blue Moon Dine-in Theater

This one was interesting! Sweet corn ice cream and house-made “corn Cola”, so I take that to mean corn syrup Cola? Not sure. The flavor definitely gave a “not-too-sweet” vibe, which was nice; there was a good amount of corny, almost “earthy” flavor in the float.
The texture of the corn ice cream was a little less smooth than normal ice cream, which landed somewhere between novel and “interesting”. I actually recommend giving this one a good mix before you drink it to blend the flavors together better, since you're only given a boba straw to drink it with.
Overall, would I get it again? Probably not, because Lift Bridge root beer floats exist and are much better. But worth a try!
Sweet Heat Bacon Crunch from RC's BBQ

Had this one side-by-side with my typical order from RC's, which is a bunch of ribs, and yeah, it was fine, but if I'm buying barbecue I want ribs or brisket. There was some chili crisp (but not much, maybe because it's Minnesota) and hot honey that got a bit lost in the dish. Can't recommend this one; RC's usual items are much better.
Spam breakfast sandwich from SPAM

Attention all SPAM-lovers at the fair! The SPAM booth has moved from under the Grandstand bridge to the southern edge of the DNR building. I nearly had a heart attack when I saw the SPAM booth wasn't in its usual spot, and I had to sneak away with a fellow SPAM-lover from our group to snag this item.
We got ours with pickles (surprise!) and jalapeños, a little bit of kick and acid to cut through the lovely fatty grilled SPAM. Pretty sure this little sandwich was gone in 4 bites, highly recommend finding this stand if you're a long-time-enjoyer or first-timer of SPAM!
That's all for this year. At this point we kept trying new items, but I suspect not being hungry started to impact my opinions of the foods, so you'll have to try them yourself! :)
This work is licensed under CC BY-SA 4.0
GNUnet News: GSoC Work Product: GNUnet over HTTP3
This project aimed to implement a new communicator for GNUnet's Transport Next Generation (TNG) using the HTTP/3 protocol.
What I did.

We chose ngtcp2 and nghttp3 for their stability and adherence to RFC standards. I began by studying communicator fundamentals and analyzing relevant code examples. I then created a QUIC communicator using libngtcp2, implementing essential communication features. Building on this, I integrated libnghttp3 to support HTTP/3 layer communication. After establishing basic uni-directional communication, I proceeded to implement bi-directional capabilities. With the help and guidance of my mentors, I completed the above work, including the selection and design of message transmission methods and the implementation of code.
The current state.

We have two branches, dev/shichao/http3 for basic communication and dev/shichao/http3bidirect for bi-directional communication. Both pass the basic tests. However, we found occasional failures during testing. We currently assume that this is caused by the test harness not being able to process the received data packets in time.
What's left to do.

There are still many areas that can be improved in the HTTP/3 communicator, such as using a CID map instead of an IP address map. In addition, in bi-directional communication the server's sending rate is slightly lower than the client's transmission rate, and this will be optimized in the future. Finally, integrating the Peer Identity into the TLS handshake in order to authenticate peers is a natural feature to implement.
What code got merged (or not) upstream.

All the code is available upstream in the master branch and will be available with the next release.
Challenges I Encountered.

Initially, I was unfamiliar with the ngtcp2 and nghttp3 libraries. While there were some examples available, I found limited guidance for more advanced usage. Through careful study and experimentation, I gradually gained a deeper understanding of these libraries. In the process, I also gained a deeper understanding of the QUIC and HTTP/3 protocols and improved my coding skills.
Freelock Blog: The rising costs of site ownership
How much do you spend on your website? I'm not asking how much it cost you to create/build -- I mean day to day, what does it cost to own and maintain your site?
And what happens if you stop paying that?
Brian Okken: Finding the top pytest plugins
Kalyani Kenekar: Join Us: Contribute to Open Source as Marathi speaking person!
GNOME is one of the most widely used free and open-source desktop environments!
Is your native language Marathi, and do you use GNOME as your desktop environment? Then, as the coordinator of the Marathi translation team in GNOME, I am excited to invite you to become part of the team working on translating the GNOME desktop into Marathi!
By contributing to the translation of GNOME into Marathi, you become a member of an important project, help make it more accessible to Marathi speakers worldwide, and help keep our language alive in the open-source world.
Why Should You Contribute?

- Promote Your Language: By translating GNOME into Marathi, you help to preserve and promote our beautiful language in the digital world.
- Learn and Grow: Contributing to open-source projects like GNOME is a great way to improve your language and technical skills, network with like-minded individuals, and gain recognition in the global open-source community.
- Give Back to the Community: This is an opportunity to contribute to a project that has a significant impact on users around the world. Your work will enable Marathi speakers to use technology in their native language.
You don’t need to be a professional translator to join us! If you are fluent in Marathi and have a basic understanding of English, your contributions will be invaluable. Whether you’re a student, a professional, or just someone passionate about your language, your help is needed and really appreciated!
How To Start Translating?

Once you’re familiar with the tools, you can easily begin translating. We have a list of untranslated strings waiting for your contribution!
How To Join The Team?

Follow these steps to join the Marathi translation team for GNOME and start contributing:
- Step 1: Visit our GNOME Translation Team Page.
- Step 2: If you’re a new user, click on the “Create Account” option to sign up.
- Step 3: Once you’ve created your account, log in with your credentials.
- Step 4: After logging in, click the “Join” button to become a translator for the Marathi team.
- Step 5: You’ll now see a list of different modules that need translation. Choose one of the files that interests you and download it to your computer.
- Step 6: Translate the content locally on your computer. Once you’re done, return to the website, click “Browse,” and submit your translated file.
If you’re not used to typing in Marathi, you can still contribute using the Varnam website, a free and open-source tool that converts English text into Marathi. Here’s how you can get started:
- Step 1: Visit the Varnam website.
- Step 2: Click on the “Try Now” button on the website.
- Step 3: In the language selection menu, choose “Marathi” as your desired language.
- Step 4: Now you can start typing in English, and Varnam will automatically convert your text into Marathi. If you need more guidance, there’s a help window available on the site that you can explore for additional support.
If you have any doubts or need further assistance getting started with translating GNOME into Marathi, don’t hesitate to reach out. I’m here to help you every step of the way!
You can connect with me directly at kalyaniknkr@gmail.com. Whether you need technical support, guidance on using the tools, or just want to discuss the project, feel free to get in touch.
Let’s work together to make GNOME accessible to Marathi speakers around the world. Your contributions are always invaluable, and I look forward to welcoming you to our team!
Thank you for your interest and support!
Dirk Eddelbuettel: RcppEigen 0.3.4.0.2 on CRAN: Micro Maintenance
A new maintenance release of RcppEigen is now on CRAN, and will go to Debian shortly as usual. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms. RcppEigen is used by 460 other CRAN packages, and has been downloaded 31.9 million times just off the mirrors of CRAN keeping logs for counting.
The recent change switching to Authors@R (now that CRAN mandates it) contained two typos in the ORCID tags; this release fixes them.
The complete NEWS file entry follows.
Changes in RcppEigen version 0.3.4.0.2 (2024-08-23)

- Correct two typos in the ORCID tag
Courtesy of CRANberries, there is also a diffstat report for the most recent release.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Talk Python to Me: #475: Python Language Summit 2024
Russell Coker: Wifi 6E Mesh
I am looking into getting a Wifi mesh network. The aim is to use it for providing access to devices throughout my home, especially for devices on the congested 2.4GHz frequency. Ideally I want 6GHz Wifi 6E for the communication between mesh nodes as well as for talking to the few devices that are new enough to support it (I like buying cheap second hand devices). 2.5Gbit ethernet connections on all mesh nodes would be good too.
Wifi 7 is semi-released; you can buy devices even though the specs aren’t entirely finalised. I expect that next year, when Wifi 7 devices are more common, the second hand prices of Wifi 6E will drop. Currently Wifi 6E devices are somewhat expensive.
One major problem at the moment is “cloud configuration”. Here is a 41 page forum thread of TP-Link customers asking in vain for non-cloud configuration [1]. The problems with cloud configuration are that it doesn’t allow configuration without Internet access (so no fixing things when the Internet breaks and no use on a private network without Internet), it relies on a proprietary phone app (so a problem with your phone breaks everything), and it adds a dependency on an unpaid service that TP-Link might decide to turn off at some future time. The TP-Link Deco X55 AX3000 looks like a good set of devices, and at $328 for a set of three Wifi 6 (not 6E) devices it is currently a good deal; it's a pity that the poor software options let it down.
TP-Link also seems to be scanning web traffic and sending the analysis to an external site [2], so the TP-Link software seems to be most accurately described as malware.
The OpenWrt project provides open firmware for Wifi APs, which is a great project [3], but it doesn’t seem to support any Wifi 6 mesh systems yet. If most Wifi hardware requires malware for operation, it seems that running a VPN over Wifi is the way to go. A hostile party being able to sniff your home network is much worse than a hostile party sniffing public Internet traffic.
The Google Nest mesh devices have good specs and price, $359 for a three node Wifi 6E mesh that has 2.5Gbit ethernet. But they can only be configured with a Google app for Android or iOS and require a Gmail account. Giving Google the ability to shut down all my stuff by deleting my gmail account is not acceptable. Also Google is well known for cancelling services [4]. A mitigating factor is that there should be enough of those devices sold to make them a good target for an OpenWRT port.
As an aside, it looks like the Tailscale mesh VPN system could be a solution to the security issues related to malware on Wifi APs [5]. There is also Headscale, which is the fully open source variant of that [6]. Even when the vendor isn’t overtly hostile they can make mistakes, so encryption is good.
Kogan is selling an own-brand Wifi 6 mesh network package that comes with 1/2/3 devices for $70/$120/$140. It doesn’t do Wifi 6E but supports the better encoding methods of Wifi 6 over Wifi 5 and will be good for bridging a LAN in one part of a house to a Wifi 2.4GHz or Ethernet connected device in another part. They also support up to 7 nodes so you could buy two of the 3 device packages and run one network with 2 and another with 4. The pricing is very competitive and they support web based administration!
I’ve just ordered the $140 pack from Kogan. If it doesn’t do what I want then I can find someone else who will be happy with whatever functionality it gives and $140 is an amount I can risk without concern. If it works well then I might upgrade to Wifi 6E or Wifi 7 next year and deploy the Wifi 6 one for a relative.
- [1] https://community.tp-link.com/en/home/forum/topic/531350
- [2] https://tinyurl.com/y84skkdp
- [3] https://openwrt.org/supported_devices
- [4] https://killedbygoogle.com/
- [5] https://en.wikipedia.org/wiki/Tailscale
- [6] https://headscale.net/
Russell Coker: Is Secure Boot Worth Using?
With news like this one cited by Bruce Schneier [1], people are asking whether it’s worth using Secure Boot.
Regarding the specific news article, this is always a risk with distributed public key encryption systems. Lose control of one private key and attackers can do bad things. That doesn’t make it bad, it just makes it less valuable. If you want to set up a system for a government agency, bank, or other high value target then it’s quite reasonable to expect an adversary to purchase systems of the same make and model to verify that their attacks will work. If you want to make your home PC a little harder to attack then you can expect that the likely adversaries won’t bother with such things. You don’t need security to be perfect; making a particular attack slightly more difficult than other potential attacks gives a large part of the benefit.
The purpose of Secure Boot is to verify the boot loader with a public key signature and then have the boot loader verify the kernel. Microsoft signs the “shim” that is used by each Linux distribution to load GRUB (or another boot loader). So when I configure a Debian system with Secure Boot enabled that doesn’t stop anyone from booting Ubuntu. As far as the signatures on the boot loader etc are concerned, there is no difference between my Debian installation and a rescue image from Debian, Ubuntu, or another distribution booted by a hostile party to do things against my interests. The difference between the legitimate OS image and malware is a matter of who boots it and the reason for booting it.
It is possible to deconfigure Microsoft keys from UEFI to only boot from your own key; this document describes what is necessary to do that [2]. Basically, if you boot without using any “option ROMs” (which among other things means the ROM from your video card) then you can disable the MS keys.
If it’s impossible to disable the MS keys that doesn’t make it impossible to gain a benefit from the Secure Boot process. You can use a block device decryption process that involves a signature of the kernel and the BIOS being used as part of the decryption for the device. So if a system is booted with the wrong kernel and the user doesn’t recognise it then they will find that they can’t unlock the device with the password. I think it’s possible on some systems to run the Secure Boot functionality in a non-enforcing mode such that it will use a bootloader without a valid signature but still use the hash for TPM calculations, that appears impossible on my Thinkpad Yoga Gen3 which only has enabled and disabled as options but should work on Dell laptops which have an option to run Secure Boot in permissive mode.
I believe that the way of the future is to use something like EFIStub [3] to create unified kernel images with a signed kernel, initrd, and command-line parameters in a single bundle which can be loaded directly by the UEFI BIOS. From the perspective of a distribution developer it’s good to have many people using the current standard functionality of shim and GRUB for EFI as a step towards that goal.
CloudFlare has a good blog post about Linux kernel hardening [4]. In that post they cover the benefits of a full secure boot setup (which is difficult at the current time) and the way that secure boot enables the lockdown module for kernel integrity. When Secure Boot is detected by the kernel it automatically enables lockdown=integrity functionality (see this blog post for an explanation of lockdown [5]). It is possible to enable this by putting “lockdown=integrity” on the kernel command line or “lockdown=confidentiality” if you want even more protection, but it happens by default with Secure Boot. Secure Boot is something you can set to get a selection of security features enabled and get a known minimum level of integrity even if the signatures aren’t used for anything useful, restricting a system to only boot kernels from MS, Debian, Ubuntu, Red Hat, etc is not useful.
For most users I think that Secure Boot is a small increase in security but testing it on a large number of systems allows increasing the overall security of operating systems which benefits the world. Also I think that having features like EFIStub usable for a large portion of the users (possibly the majority of users) is something that can be expected to happen in the lifetime of hardware being purchased now. So ensuring that Secure Boot works with GRUB now will facilitate using EFIStub etc in future years.
- [1] https://tinyurl.com/264thnky
- [2] https://github.com/Foxboron/sbctl/wiki/FAQ#option-rom
- [3] https://wiki.debian.org/EFIStub
- [4] https://blog.cloudflare.com/de-de/linux-kernel-hardening
- [5] https://mjg59.dreamwidth.org/55105.html
- [6] https://wiki.debian.org/SecureBoot
This week in KDE: per-monitor brightness control and “update then shut down”
This week was all about the quality of life features! As we close in on Plasma 6.2 (the soft feature freeze is in four days, eek!), some great work that’s been in progress for a long time got merged.
Notable New Features

Okular now has a “speak text from current page” feature (Athul Raj Kollareth, Okular 24.12.0. Link)
Plasma’s Brightness widget now shows individual brightness sliders for every connected monitor that supports this, so you can control them separately! If you want to adjust all of them together, you can still do that via global shortcut/keyboard key or by scrolling over the widget (Jakob Petsovits, Plasma 6.2.0. Link):
When there’s a pending offline system update, you’ve already got the option to update and then reboot, or just reboot and skip the update. Now, there’s also an option to complete the update and then shut down the computer! This option is exposed both on the logout screen, and also in Discover (Thomas Duckworth, Plasma 6.2.0. Link 1, link 2, and link 3):
Long-pressing an empty area of a Plasma panel using a touchscreen now enters edit mode for that panel (Niccolò Venerandi, Plasma 6.2.0. Link)
Notable UI Improvements

The “Add Widgets” sidebar has received a UX overhaul with numerous usability-focused changes, including:
- Appearing on the right side of the screen when opened from a right-screen-edge panel
- Using wider grid cells to permit longer text without elision or unnatural word-wrap behaviors
- Improved appearance of the filter button, so now it looks like it opens a drop-down menu — because it does
- Sorting is now locale-aware, taking into account, for example, accented characters
- You access it from buttons and menu items labeled “Add or Manage Widgets,” since it also acts as the place where you get new widgets or delete unwanted ones
- Spacer widgets can also be found there, no longer only from the panel settings dialog
- When installing manually-downloaded widgets, the open dialog now accepts all valid file types
And believe it or not, that’s not all that’s planned! But the rest will have to wait until next week… (Niccolò Venerandi, Plasma 6.2.0. Link 1, link 2, link 3, link 4, link 5, link 6, link 7)
When your system is using a non-default power profile, it’s now shown as a badge on the battery icon, so you can see both the power profile and also the battery status at the same time (Louis Moureaux and me: Nate Graham, Plasma 6.2.0. Link 1 and link 2):
At the moment this only works with the Breeze icon theme, and 3rd-party icon themes will have to add some more icons to opt into it. Until then, users of those icon themes will get the old appearance when using a non-default power profile.

A panel popup opened from a widget on the end of a limited-width panel now tries its best to align its edge with that of the panel (Niccolò Venerandi, Plasma 6.2.0. Link):
Maybe I just really like clocks, ok?

You can now give a custom display name to your custom command shortcuts (Yifan Zhu and Thenujan Sandramohan, Plasma 6.2.0. Link):
Discover is now more accurate about how it presents licenses, and communicates the subtle distinctions between “proprietary” and “non-free”, rather than branding everything that isn’t free software as proprietary (me: Nate Graham, Plasma 6.2.0. Link):
When you change keyboard layouts, the labels of the language codes that appear in the system tray no longer subtly change in size based on the shape of their letters (Sauf Lvc, Plasma 6.2.0. Link)
Added a Breeze icon for Apple Wallet bundle files (Kai Uwe Broulik, Frameworks 6.6. Link):
Notable Bug Fixes

When Spectacle is configured to save in a format other than PNG by default, pasting a just-copied screenshot now always works in every target app, with the caveat that some apps that don’t advertise support for non-PNG image pasting (like Firefox and Chromium, annoyingly) will get a PNG version anyway, rather than your preferred file format. This is better than it not working at all, at least! (Noah Davis, Spectacle 24.08.1. Link)
You can once again use the arrow keys to move focus out of Kickoff’s favorites grid view (Arjen Hiemstra, Plasma 6.1.5. Link)
Fixed a complex bug that could cause KWin to crash when X11 or XWayland-using apps monkeyed with the window stacking order in specific ways (Vlad Zahorodnii, Plasma 6.1.5. Link. And thanks to the reporter Peter Strick for being incredibly helpful in making the issue reproducible! All bug reports should be so good.)
Fixed an annoying bug that caused text copied from cells in LibreOffice Calc to never make it onto the clipboard unless you changed the clipboard’s settings to always store images (Fushan Wen, Plasma 6.1.5. Link)
Fixed a bug that caused tooltips to appear at the last location the mouse pointer was located at when interacting with the system using a stylus (David Redondo, Plasma 6.1.5. Link)
Fixed a funny bug that could make Plasma crash when you have a Media Player widget on your panel (not the System Tray, directly on a panel) and play certain specific songs whose titles are exactly the right length to trigger an obscure layout bug (Fushan Wen, Plasma 6.2.0. Link)
Fixed a weird issue that made modifier-only global shortcuts in the X11 session fail to switch keyboard layouts as expected while on the lock screen and other places (Yifan Zhu, Plasma 6.2.0. Link)
Exporting your shortcuts on System Settings’ Shortcuts page now includes any custom script shortcuts you’ve created, so that when you import them elsewhere, they work (Akseli Lahtinen and David Redondo, Plasma 6.2.0. Link)
Other bug information of note:
- 2 Very high priority Plasma bugs (down from 3 last week). Current list of bugs
- 36 15-minute Plasma bugs (up from 30 last week; bug triage activities discovered some more old issues that seemed important to fix soon, which were added to the list). Current list of bugs
- 156 KDE bugs of all kinds fixed over the last week. Full list of bugs
Improved KWin’s HDR tone mapping, allowing it to do a better job of displaying colors in cases where HDR content specifies a brightness level higher than what the screen is capable of outputting. There’s even more that can be done, but it’s already a big improvement. (Xaver Hugl, Plasma 6.2.0. Link)
Even further optimized the system performance impact in KWin of using an ICC profile to change your screen’s color calibration (Xaver Hugl, Plasma 6.2.0. Link)
Improved KWin’s performance for some multi-GPU systems (Xaver Hugl, Plasma 6.2.0. Link)
Added a bunch of autotests for X11-specific behavior in KWin, since fewer people are exercising that code now that 80+% of Plasma 6 users are using Wayland (Vlad Zahorodnii, Plasma 6.2.0. Link)
…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.
How You Can Help

Visit https://community.kde.org/Get_Involved to discover ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite! Or consider donating instead! That helps too.