Feeds
Golems GABB: Using React in Drupal Themes
React can rightfully be called a game-changing JavaScript library for Drupal developers, as it can completely change the way interfaces are built. By integrating it into Drupal themes, we enter a new world of creative possibilities and convenient functionality, improving user experiences significantly.
With the help of React, the interfaces of websites built on Drupal no longer need to be static and boring: they become responsive, interactive, and fast. Without further ado, let's look at how you can benefit from using React in Drupal themes.
Matt Layman: Golang Middleware and DBs - Building SaaS #199
GSoC 2024: Wrapping Up
Throughout this summer, I’ve developed a C++ library called MankalaEngine, implementing three opponents for the games of Bohnenspiel and Oware.
The current library is highly extensible. After implementing all the base classes and Bohnenspiel, adding Oware to the library was fairly fast and straightforward. This focus on extensibility has been a priority since the beginning of the project. Given that the Mancala family of games comprises numerous variants, designing the API with this in mind has proven valuable.
The three provided opponents use a random selection algorithm, Minimax, and MTD-f. The Minimax and MTD-f opponents were implemented with optimizations like alpha-beta pruning and transposition tables, making them both very capable, consistently outperforming the random opponent.
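For readers unfamiliar with these techniques, here is a minimal Python sketch of minimax with alpha-beta pruning and a transposition table; the game-state interface (key, legal_moves, apply, evaluate) is hypothetical and is not MankalaEngine's actual C++ API.

def minimax(state, depth, alpha, beta, maximizing, table):
    # Transposition table: reuse results for positions already searched.
    entry = (state.key(), depth, maximizing)
    if entry in table:
        return table[entry]
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, minimax(state.apply(move), depth - 1,
                                     alpha, beta, False, table))
            alpha = max(alpha, best)
            if beta <= alpha:  # alpha-beta cutoff: opponent avoids this line
                break
    else:
        best = float("inf")
        for move in state.legal_moves():
            best = min(best, minimax(state.apply(move), depth - 1,
                                     alpha, beta, True, table))
            beta = min(beta, best)
            if beta <= alpha:
                break
    table[entry] = best
    return best

MTD-f then drives repeated zero-window searches like this one to converge on the exact minimax value.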
For a more detailed overview of what was accomplished, I wrote a work report on KDE’s wiki.
What I’ve learned
The last few months have been a very enriching experience from a technical standpoint.
Contributing to a “real-world” project allowed me to learn about technologies I hadn’t used before. For example, I learned how to use CMake and how to set up a CI pipeline.
I also faced concerns that don’t typically arise when developing a school or personal project, such as adhering to an organization’s software standards. To this end, I learned about open-source licenses and new programming idioms.
Interacting With The Community
Since MankalaEngine is a completely new library, my interaction with the community was limited, as there isn’t an existing group of contributors for this particular project.
I mainly interacted with my mentors, who were very helpful. Although less frequently, I also had the opportunity to communicate with other KDE contributors through mailing lists, from whom I also learned a great deal.
Thomas Goirand: Packaging Home Assistant
During DebConf, Edward Betts and I started packaging Home Assistant for Debian. It consists of hundreds of Python packages; so far, we have counted at least 675. That’s a lot, though most of them are just libraries for talking to some IoT device or API. It’s fairly easy to create a new package: it takes me about 15 to 20 minutes, and probably half that for Edward. And it’s a lot of fun. In one month, we have managed to package about a third of the list (probably 200+ Python packages already). Once we’ve done all the dependencies, we may start to have fun with the core of the application! At the current speed, hopefully we’ll be done before the end of the year. Edward and I have sworn to make at least one package a day; I’ve kept that up so far, and Edward has done way more… We also received contributions from Silton0506, Tianyu, piotr, EiPi Fun, sourabhtk37, and Count-Dracula, as listed at the very bottom of the TODO list in the wiki (see link below).
If you have a bit of free time, we’d love to have more contributors. Here’s where to get the needed information:
We created a team in Salsa: https://salsa.debian.org/homeassistant-team/
Our TODO list: https://wiki.debian.org/Python/HomeAssistant
Our DDPO Q/A page: https://qa.debian.org/developer.php?login=team%2Bhomeassistant%40tracker.debian.org
Feel free to join us on IRC: #debian-homeassistant
Discussing it with a lot of people, I realized that A LOT of DDs are actually using Home Assistant. Wouldn’t you like it better if it were just an “apt install” away? Any DD can simply take a package from the wiki, open an ITP, upload its debianized source to Salsa, and upload it to the Debian archive. Most are very simple packages to make.
Twin Cities Drupal Camp: Interview With Keynote Speaker, Preston So
Talking to Preston So is easy.
I was nervous before our conversation simply because on paper there are things about the man that are frankly intimidating. Karen McGrane, author of Content Strategy for Mobile, named Preston “the smartest guy in the field” in 2024. Deane Barker, author of Web Content Management, called him “probably the smartest person working in this industry right now.”
But Preston So one-on-one is so personable, so engaging, that he instantly put me at ease. We talked about some aspects of his life and career, his experiences working in Drupal and other content management systems, what his keynote will be about, as well as his love of travel and learning languages.
Preston, you work at dotCMS. Can you talk a bit about what your job is and what it is you're doing?
Many of the folks who know me from the Drupal world are probably a little surprised to see that I've gone over to a Java-based CMS. But I used to work at Oracle too, on a Java-based CMS. I don't really have a lot of opinions about Java versus PHP, but I know there are some strong opinions on both sides.
But dotCMS is really interesting as a company. We're an open source CMS. You can see all of our code, all of what we do. You can contribute if you want to. So in that case, it's very similar to Drupal.
I joined dotCMS about five or six months ago as our new VP of product. And in that role, I basically oversee all of our sort of product or product-related functions. And that means our product team, our design team, our data function, our developer relations function. And also, I work on our analyst relations functions as well. So I wear a lot of hats at dotCMS.
And it's very similar to what I was doing before. I mean, my background has always been in software, in the actual engineering … in coding.
I read in your bio about your interest in voice interface and voice content. Can you talk a bit about your interest in non-traditional interfaces like voice?
This ties into the writing I've done in the past around what I call the “channel explosion”…. These days, content needs to go to a lot of different places. One of the things that we often forget, especially those of us who have primarily worked with web content, is that content isn't just read, right? It's also spoken. It's also aural. It's visual. It's spatial. There are so many things about content that aren't really … tied to that rectangular box that we call the website or the screen or the web browser.
And a really good example of that is voice interfaces and voice bots or voice assistants. About seven or eight years ago, I was part of a really amazing team at Acquia, [that] worked on the first ever Alexa skill for the state of Georgia, building an Amazon Alexa skill that would allow people to ask questions: like, how do I register to vote or how do I enroll my child in pre-K?
Content needs to come from a single source of truth. You're seeing a lot of these new use cases emerge where people want to serve content to a mobile app, people want to serve content to a Roku device, people want to serve content to an AR overlay, for example, in your Vision Pro.
One of the reasons why I've been so interested in voice is because it really throws out a lot of the prescriptions and a lot of the ideas that we have about content, a lot of those biases that we have towards written, visual online content…. Web content is actually more abstracted away from natural human language and natural human biology than speech-based interfaces or how we actually converse are.
So I wrote a book about five years ago called Voice Content and Usability. In that book, I talk about voice content strategy, voice content design, how do you actually get content ready for a voice interface? And how do you actually implement an end-to-end voice interface that needs to consume content from a CMS?
When that book came out, there weren't a whole lot of Alexa content-driven implementations. It was basically just Capital One balance checking and Domino's Pizza ordering. And that was about it. No one had ever done a content-driven voice interface that was more informational rather than transactional.
Unfortunately, a lot of the things that have happened over the last few years with generative AI have really thrown those approaches out the window, because oftentimes with AI, you don't really feed it content. You're looking at content that is being reconstituted … by the AI as opposed to something that you're actually serving. But for governments, it's a much, much bigger concern for that content to stay up to date.
[You want] to help somebody learn how to get health insurance, or how to file a death certificate, [and that] cannot be mucked up by AI hallucinations or incorrectness. This is one of the reasons why voice content strategy and voice content still remains so relevant.
Your bio says that you're interested in “endangered and underserved languages”. Where else does your interest in learning languages come from?
My biggest passion outside of work, outside of professional pursuits, is travel and languages. A lot of it comes from my background. I spent a good amount of time in Brazil when I was younger, so I'm fluent in Portuguese because I did an exchange program there. I taught English there in college as well. I also spent time in Wales.
Some of the richest interactions and some of the richest experiences I have when I travel are when I'm able to converse in a language that is a very seldom learned language, a very atypical language. It's a language that people don't really often take the time to learn or have much of an interest in learning. But [these languages are the way] in which you can get to know the culture, get to know the food, get to know just the way that people interact in these other environments and in these other languages.
Languages are entire universes unto themselves. Especially those languages that have a rich, rich oral tradition or a literature that stretches back for centuries. I love to focus on languages that I can speak right now with people today.
Right now, I'm focusing on three languages – two of them are incredibly difficult. The third is a little bit easier, and it's all towards a vacation I've got planned with a friend coming up in November. We're headed to South Africa, and so I'm learning Afrikaans, which is obviously descended from Dutch, the sort of colonial language in South Africa, but I'm also learning Xhosa and Zulu, which are two of the Nguni languages spoken in South Africa.
Can you say a little more about the keynote presentation that you're going to be giving at Twin Cities Drupal Camp?
Over the past four to five years, I've been tracking a sort of dissatisfaction on both sides of the CMS.
I think one of the things that's really unique about the content management system is that it occupies a very unique ecological niche in the software world. A lot of other software products focus on individual personas, like Salesforce for salespeople; CRM tools tend to be for those kinds of folks.
The CMS has always been very unique in software because it brings together people with very different skills and very different priorities. Two of those personas that are probably the chief personas that the CMS deals with are, number one, the CMS developer. And then number two, the sort of content practitioner or content team or content architect or compliance reviewer or accessibility reviewer, everyone who has a stake in making sure that content is successful.
But we know based on just hearing from folks around the CMS industry that we're starting to see a bit of a schism right now, which is that there is, number one, a trend for developers to go towards headless CMSs, like Contentful, Sanity, some of those, and really go in that direction. But the problem with that is that it kind of leaves content teams with their hands tied behind their back. They can't really do drag and drop layout management anymore. They can't do preview of all of their different sites anymore. There's a lot of issues that come up with headless CMS.
But by the same token, developers today really don't want to work with the sort of monolithic or traditional CMS anymore. I love Twig. I love PHPTemplate. There's a lot of folks who don't. There's a lot of folks, especially those who are coming into front-end development nowadays, who really, really don't like to work with those paradigms.
One of the things that I think is really important is that as we contend with this huge influx of new JavaScript frameworks like Astro, Svelte, so on and so forth, and also new delivery channels like we were talking about earlier, Blaine, around AR, VR, voice, AI, so on and so forth, it becomes a really big concern.
How do we actually collaborate effectively in a CMS that works for everybody and not just one half of the back office? One of the struggles that we see very often is that oftentimes headless CMSs will say, well, hey, content is just the data. Let us handle the presentation. Let us handle the front-end. Let us handle how things look.
But what that does is it severs all those linkages with how content authors want to preview, with how content editors want to be able to look at and review or schedule content or review things for compliance or review things for accessibility, so on and so forth. But developers also don't want to be held back.
The topic of my talk is really what I call the universal CMS, which is a new paradigm that's really quickly getting a lot of traction. It really is about restoring the balance that characterized the early web CMS era. Basically saying, hey, we could do all these really cool things with the website, but we had a handshake where we agreed that, hey, developers, if you hand over control over layout and control over all of these visual components, I will obviously give you control over how to code the whole thing.
But this unique grand compromise that we forged is something that is starting to come back. We are starting to see headless CMSs build in visual editing features, which violate the pure headless architectural prescription. We are also seeing a lot of the old traditional CMSs or monolithic CMSs begin to build a lot more APIs and SDKs for JavaScript developers or mobile app developers to build on top of. And so I think what we are going to start to see here is a convergence between both the headless CMSs and the traditional CMSs towards a new equilibrium, which I call universal CMS.
And here in just a few years, I think we are going to get rid of this whole distinction between headless and monolithic and all of those tired terms that have a lot of baggage with them.
[Short bio]
Preston So (he/they) is a product executive with over 25 years in software, 17 years in content technologies, and 9 years leading product, design, engineering, and developer relations functions at organizations such as Oracle, Acquia, dotCMS, Time Inc., and Gatsby. He is Vice President, Product at dotCMS and the author of Immersive Content and Usability (A Book Apart, 2023), Gatsby: The Definitive Guide (O'Reilly, 2021), Voice Content and Usability (A Book Apart, 2021), and Decoupled Drupal in Practice (Apress, 2018).
Named “the smartest guy in the field” by Content Strategy for Mobile author Karen McGrane in 2024 and “probably the smartest person working in this industry right now” by Web Content Management author Deane Barker in 2020, Preston is a globally recognized authority on the intersections of content, design, and code. He is an editor at A List Apart and former top-read columnist at CMSWire. Preston is a frequent presenter with 17 years of speaking engagements spanning over 50 conferences, including SXSW Interactive (2017, 2017 encore, 2018) and An Event Apart (2020–22) and keynotes in three languages. He is based in New York City, where he can often be found immersing himself in languages that are endangered or underserved.
Jonathan McDowell: Thoughts on Advent of Code + Rust
Diego wrote about his dislike for Advent of Code, and that reminded me I hadn’t written up my experience from 2023. Mostly because, spoiler, I never actually completed it and always intended to do so and then write it up. I think it’s time to accept I’m not going to do that, and write down some thoughts before I forget all of them. These are somewhat vague, given the time that’s elapsed, but I think still relevant. You might also find Roger’s problem write-up interesting.
I’ve tried AoC a couple of times before; I think I had a very brief attempt back in 2021, and I got 4 days in for 2022. For Advent of Code 2023 I tried much harder to actually complete the challenges, and got most of the way there. I didn’t allow myself to move on to the next day until fully completing the previous day, and didn’t end up doing the second half of December 24th, or any of December 25th.
Rust
First I want to talk about Rust, which is the language I chose to use for the problems. I’ve dabbled a little in it, but I’d like more familiarity with the basic language, and some programming problems seemed like a good way to get that. It’s a language I want to like; I’ve spent a lot of my career writing C, do more in Go these days, and generally think Rust promises a low level, run-time light environment like C but with the rough edges taken off.
I set myself the challenge of using just bare Rust; no external crates, no use of cargo. I was accused of playing on hard mode by doing this, but it really wasn’t the intention - I figured that I should be able to do what I needed without recourse to anything outside the core language, and didn’t want what seemed like the extra complexity of dealing with cargo.
That caused problems, however. I’m used to by-default generic error handling in Go through the error type, but Rust seems to have much more tightly typed errors. I was pointed at anyhow as the right way to do this in Rust. I still find this surprising; I ended up using unwrap() a lot when I think with more generic error handling I could have used ?.
The other thing I discovered is that by default rustc is heavy on the debug output. I got significantly better results on some of the solutions with rustc -O -C target-cpu=native source.rs. I probably shouldn’t be surprised by this, but worth noting.
Rust, to me, has a syntax only a C++ programmer could love. I am not a C++ programmer. Coming from C I found Go to be a nice, simple syntax to learn. Rust has not been the same. There’s a lot more punctuation, and it’s not always clear to me what it’s doing. This applies more when reading other people’s code than when writing it myself, obviously, but I see a lot of Rust code that could give Perl a run for its money in terms of looking like line noise.
The borrow checker didn’t bug me too much, but did add overhead to my thinking. The Rust compiler is generally very good at outputting helpful error messages when the programmer is an idiot. I ended up having to use a RefCell for one solution, and using .iter() for loops rather than explicit iterators (why, why is this different?). I also kept forgetting to explicitly mark variables as mutable when declaring them.
Things I liked? There’s a rich set of first-class data types. Look, I’m a C programmer, I’m easily pleased. You give me some sort of hash array and I’ll be happy. Rust manages that, tuples, strings, all the standard bits any modern language can provide. The whole impl thing for adding methods to structures I like as a way of providing some abstraction, though I think Go has a nicer syntax for it. The compiler, as mentioned, is great at spitting out useful errors for the most part. Also, although I wasn’t using external crates for AoC, I do appreciate there’s a decent ecosystem there now (though that brings up another gripe: Rust seems to still be a fairly fast-moving target, to the extent I can no longer rely on the compiler in Debian stable to be able to compile random projects I find).
Advent of Code
Let’s talk about the advent of code bit now. Hopefully it’s long enough since it came out that this won’t be spoilers for anyone, but if you haven’t attempted the 2023 AoC and might, you might want to stop reading here.
First, a refresher on the format for those who might not be aware of it. Problems are posted daily from December 1st until the 25th. Each is in 2 parts; the second part is not viewable until you have provided the correct answer for the first part. There’s a whole leaderboard thing going on, but the puzzle opens at midnight UTC-5 so generally by the time I wake up and have time to look the problem has been solved many times over; no chance of getting listed.
Credit to AoC creator, Eric Wastl, for writing up the set of problems in an entertaining fashion. I quite enjoyed seeing how the puzzle would be phrased each day, and the whole thing obviously brings a lot of joy to folk I know.
I always start AoC thinking it’ll be a fun set of puzzles to solve. Then something happens and I miss a day or two, and all of a sudden I’ve a bunch of catching up to do and it’s all a bit more of a chore. I hit that at some points this time, but made a concerted effort to try and power through it.
That perseverance was required up front, because I found the second part of Day 1 to be ill specified, and had to iterate a few times to actually calculate the desired solution (IIRC, issues about whether sevenone at the end of a line ended up as 7 or 1 really tripped me up). I don’t recall any other problems that bit me as hard on the specification as this one, but it happening up front was unfortunate.
The short example input doesn’t always help with this either; either it’s not enough to be able to extrapolate patterns, or it doesn’t show all the variations you need to account for (that aren’t fully specified in the text), or in a few cases it turned out I needed to understand the shape of the actual data to produce a solution that could actually complete in a reasonable time.
Which brings me to another matter: sometimes brute force doesn’t actually work. This is fine, but the second part of the day’s problem can change the approach you’d take. So sometimes I got lucky in the way I handled the first half, and doing the second half was a simple 5 minute tweak, and sometimes I had to entirely change the way I was storing data.
You might claim that if I was a better programmer I’d have always produced a first half solution that was amenable to extension for the second half. First, I dispute that; I think there are always situations where the problem domain can change in enough directions that you can’t handle all of them without a lot of effort. Secondly, I didn’t find AoC an environment that encouraged me to optimise for generic solutions. Maybe some of the puzzles in isolation would allow for that, but a month of daily problems to solve while still engaging in regular life meant I hacked things up, took short cuts based on the knowledge I had of the input data, etc, etc.
Overall I can see the appeal, but the sheer quantity and the fact I write code as part of my day job just made it feel too much like a chore, rather than a fun mental exercise. I did wonder how they’d look as a set of interview puzzles (obviously a subset, rather than all of them), but I’m not sure how you’d actually use them for that - I wouldn’t want anyone to have to solve them in a live interview.
So, in case it’s not obvious, I’m not planning to engage in AoC again this year. But I’m continuing to persevere with Rust (though most of my work stuff is thankfully still Go).
Python Engineering at Microsoft: Announcing the General Availability of the VS Code extension for Azure Machine Learning
Machine learning and artificial intelligence are transforming the world as we know it. With the power of data, you will have countless opportunities to create something new, unique, and exciting. Whether you are a seasoned data scientist or a curious beginner, you need a platform that can help you build, train, deploy, and manage your machine learning models with ease and efficiency. Azure Machine Learning has always been the backbone for machine learning tasks, and we want to further help you in your machine learning journey by improving the way you write code.
The VS Code extension for Azure Machine Learning has been in preview for a while, and we are excited to announce its general availability. You can use your favorite VS Code setup, either desktop or web, to build, train, deploy, debug, and manage machine learning models with Azure Machine Learning from within VS Code. This means that the extension is stable, reliable, ready for production use, and comes with additional features, such as VNET support.
“We have been using the VS Code extension for Azure Machine Learning since its preview release, and it has significantly streamlined our workflow. The ability to manage everything from building to deploying models directly within our preferred VS Code environment has been a game-changer. The seamless integration and robust features like interactive debugging and VNET support have enhanced our productivity and collaboration. We are thrilled about its general availability and look forward to leveraging its full potential in our AI projects.” – Ornaldo Ribas Fernandes: Co-founder and CEO, Fashable
Azure Machine Learning
Azure Machine Learning (Azure ML) is a cloud-based service that enables you to build, train, deploy, and manage machine learning models.
With Azure Machine Learning service, you can:
- Build and train machine learning models faster, and easily deploy to the cloud or the edge.
- Use the latest open-source technologies such as TensorFlow, PyTorch, or Jupyter.
- Experiment locally and then quickly scale up or out with large GPU-enabled clusters in the cloud.
- Interactively debug experiments, pipelines, and deployments using the built-in VS Code debugger.
- Speed up data science with automated machine learning and hyper-parameter tuning.
- Track your experiments, manage models, and easily deploy with integrated CI/CD tooling.
With this extension installed, you can accomplish much of this workflow directly from Visual Studio Code. The VS Code extension provides a user interface to create and manage Azure ML resources, such as experiments, compute targets, environments, and deployments. It also supports the Azure ML 2.0 CLI, which is the new command-line tool that simplifies the specification and execution of machine learning tasks.
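For a flavor of the 2.0 CLI, here is a hypothetical invocation that submits a training job defined in a YAML file; the file and resource names below are placeholders, and the job YAML schema itself is covered in the Azure ML documentation:

$ az ml job create --file job.yml --resource-group my-resource-group --workspace-name my-workspace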
Get Started with Azure Machine Learning Extension
One-click Connect to VS Code from Azure ML Studio
To get started with VS Code, navigate to the compute section of your Azure Machine Learning Studio. Find the desired compute instance and click on the VS Code (Web) or VS Code (Desktop) links under the “Applications” section.
Don’t have an Azure ML workspace or compute instance? Check out the guide here: Tutorial: Create workspace resources – Azure Machine Learning | Microsoft Learn
VS Code Desktop
After clicking on the link for VS Code desktop, the browser will ask you for your permission to launch the VS Code Desktop application. VS Code desktop will ask you to sign in using your Microsoft/Azure account.
Follow the sign-in prompts, and then you should be all set up to develop your own machine learning models using your favorite VS Code setup!
VS Code Web
After clicking on the link, VS Code (Web) will open in a new tab in your browser. It may ask you to sign in using your Microsoft/Azure account, so VS Code will have permission to access your Azure subscription and workspace. Note that the connection process may take a few minutes.
After signing in, you should now be connected to your Azure Machine Learning workspace inside of VS Code. Time to build your own machine learning model using the full power of VS Code!
Feedback
Give the Azure Machine Learning extension a try and let us know what you think. If you have any questions or feedback, please let us know your thoughts in this survey! You can also file an issue on our public GitHub repo with any questions or concerns you may have.
Need a guide to help you get started or documentation? Check out the tutorials here: Azure Machine Learning documentation | Microsoft Learn
mark.ie: My LocalGov Drupal contributions for week-ending August 23rd, 2024
This week I built a LocalGov Drupal dashboard, so we can better keep track of all our projects.
GSoC '24 Progress: Week 9 - 12
Hello everyone! Time flies and now we’re already in the final week of GSoC. In this blog post I’ll be sharing the progress I’ve made since my last update, focusing primarily on subtitle styling.
Subtitle Editor
The first thing I did was to enhance the existing subtitle editor. The updated editor now serves as an interface for editing ASS events, which include various components. With the new subtitle editor, we can easily modify elements such as the event’s layer, style, margins, and more. I’ve also simplified the effects section, allowing us to control subtitle scrolling by simply adjusting checkboxes and combo boxes for speed, direction, and range.
However, the most significant change is the text field and the buttons above it. To better understand these changes, it’s important to first introduce the relationship between ASS styles and events. In ASS files, each event must be assigned a valid style that applies to the entire event text. Additionally, ASS override tags are special text blocks within events that allow precise control over the styles of different parts of the text, rather than the entire text. (There are some exceptions, like “Set Position.”)
The text field has been enhanced to assist users in inputting ASS override tags using the provided buttons. For instance, when a user clicks the “Toggle Bold” button, tags are automatically inserted or adjusted to toggle the bold style for either the selected text or the text following the cursor if nothing is selected. Additionally, the text field features a highlighter that renders different parts of the tags in distinct styles, making them more distinguishable, and an auto-completer that lists all valid presets as we start typing a tag name.
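As an illustration (a hand-written example, not output copied from Kdenlive), clicking “Toggle Bold” with the word “world” selected would produce an event line like the following, where the {\b1} and {\b0} override tags switch bold on and off for just that span while the event’s base style stays untouched:

Dialogue: 0,0:00:01.00,0:00:04.00,Default,,0,0,0,,Hello {\b1}world{\b0}!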
For those who prefer the previous subtitle editor, which only displays the rendered text, a “Simple Editor” is also available. This editor syncs with the normal editor but displays only the text without tags while rendering some basic tag effects. However, due to the complexities of ASS tag rules, style editing in the Simple Editor can sometimes behave unpredictably. So it’s best suited for simpler use cases before or after editing styles.
Subtitle Manager
Continuing from the previous improvements, the Subtitle Manager is now integrated with style management and has been divided into four sections: File, Event, Style, and Info, which correspond to the four main components of ASS subtitles. Each section, except for the File section, includes a sidebar for switching between different subtitle files. Additionally, when in the Style section, we can drag and drop a style item onto a subtitle file name in the sidebar to efficiently move or copy styles between files. The same functionality is available in the Event section, where we can move or copy an entire layer to another file.
Misc
Style Editor
A new widget, the Style Editor, was created to edit styles and provide a preview.
Convert Old Global Style
Old styles will now be automatically converted to the “Default” style in the new project. Font size, outline, and shadow will be scaled to maintain the original effects.
Different Default Styles for Layers
Now, we can assign different default styles to each layer, which will automatically be applied to a subtitle event when it’s created on the corresponding layer. This feature is especially useful for quickly building a subtitle file with multiple speakers, allowing each speaker to have a distinct style.
Summary
It has been a wonderful summer getting involved in the KDE community and contributing to Kdenlive! I may not be the best at coding, but I’ve learned a lot throughout this journey. Thanks to everyone who has given me guidance — Eugen Mohr, Farid Abdelnour, and especially my mentor, Jean-Baptiste Mardelle. While GSoC is coming to an end, my journey with KDE is just beginning. After these updates, I plan to continue improving subtitle functions, including making it easier for users to input more ASS override tags and refining the UI and user experience. See you in my next blog :)
The Drop Times: Connecting Drupal with the Next Generation of Makers: Albert Hughes
Drupal Association blog: Drupal Association Announces Tag1 Consulting as Partner for Drupal 7 Extended Security Support Provider Program
PORTLAND, Ore., 22 August 2024—The Drupal Association is pleased to announce Tag1 Consulting as a partner for the Drupal 7 Extended Security Support Provider Program. This initiative aims to support Drupal 7 users by carefully selecting providers to deliver extended security support services beyond the 5 January 2025 end-of-life (EOL) date.
The Drupal 7 Extended Security Support Provider Program allows organizations that cannot migrate from Drupal 7 to newer versions by the EOL date to continue using a version of Drupal 7 that is secure and compliant. This program complements the Association’s Drupal 7 Certified Migration Providers Program, in which Tag1 also participates, which helps organizations find the right partner to transition their sites from Drupal 7 to Drupal 11.
Tag1’s Drupal 7 extended support offers proactive security and compatibility updates for D7, backed by their team of top Drupal contributors and security experts who led its creation and evolution. With their support, users can continue running D7 as long as they need.
“We’re very pleased to add Tag1 to our Drupal 7 Extended Security Support Program,” commented Tim Doyle, CEO of the Drupal Association. “Tag1 brings a wealth of experience with Drupal and the Drupal Community, and we’re happy they’re applying their expertise to Drupal 7 support.”
As organizations prepare for the transition from Drupal 7, Tag1 Consulting will provide the necessary support to keep their sites secure and operational.
“As one of the oldest and most well-known consulting companies in the Drupal ecosystem, we're proud to offer trusted support for Drupal 7 after its end of life,” said Jeremy Andrews, Tag1’s CEO. “Our team is dedicated to helping organizations keep their sites secure and running smoothly, with the same expertise and care that we've brought to the community for over 20 years.”
More information on Drupal 7 Extended Support from Tag1.
About the Drupal Association
The Drupal Association is a nonprofit organization that fosters and supports the Drupal software project, the community, and its growth. Our mission is to drive innovation and adoption of Drupal as a high-impact digital public good, hand-in-hand with our open source community. Through various initiatives, events, and programs, the Drupal Association helps ensure the ongoing development and success of the Drupal project.
About Tag1 Consulting, Inc.
Tag1 is a global technology consulting firm and recognized leader in the Drupal community. Known for our innovative work with top-tier organizations and our pivotal contributions to the Drupal platform itself, we provide unmatched expertise in key areas such as Drupal architecture, performance, scalability, and security. With over 100 team members across 20+ countries, we are the only organization with experience providing Extended Support for Drupal after end-of-life, proudly having provided commercial support for Drupal 6 for over six years beyond its EOL. The largest and most well-known users of Drupal, with the most demanding security needs, have relied on Tag1’s Extended Support, including Acquia, Pantheon, Fortive, Symantec, Capgemini, the Drupal Association, and Drupal.org.
Tag1 Consulting: Tag1 D7ES - Extended Support for Drupal 7 after EOL in January 2025
Worried about the future of your Drupal 7 website? With Drupal 7 reaching end-of-life in January 2025, many site owners and developers are facing a tough decision: migrate to a new version of Drupal or to a new platform altogether, or risk running an unsupported site.
Debian Brasil: Debian Day 2024 in Natal/RN - Brazil
by Allythy
Debian Day is an annual event that celebrates the anniversary of Debian, one of the most important GNU/Linux distributions in Free Software, created on August 16, 1993, by Ian Murdock.
Last Saturday (17/08/2024), at Sebrae-RN, we celebrated Debian's 31st anniversary in Natal, Rio Grande do Norte. The celebration, organized by PotiLivre (the Potiguar Free Software Community), highlighted Debian's 31 years of history. The event featured several talks and many discussions about Free Software. We had 70 registrations, and 40 people attended.
Debian Day in Natal was an occasion to celebrate Debian's trajectory and reinforce the importance of Free Software.
Speakers
We are immensely grateful to Isaque Barbosa Martins, Eduardo de Souza Paixão, and Fernando Guisso, who spoke at this edition! Thank you for sharing so much knowledge with the community. We hope to see you again at future meetups!
09:00 - 09:40 - Getting to know the Debian project - Allythy and Clara Nobre
09:40 - 10:20 - Proxmox and Homelab: How I Turned a Mini PC into a Respectable Server - Fernando Guisso
10:20 - 10:40 - Break
10:40 - 11:20 - Analyzing the use of cryptographic algorithms in network packets - Isaque Barbosa Martins
11:20 - 12:00 - Introduction to privilege escalation on GNU/Linux systems - Eduardo de Souza Paixão
Participants
A big thank you also to all the participants; we do this for you! We hope you learned something, had fun, and made new connections within the community.
This edition of Debian Day Natal was organized by: Allythy, Clara Nobre, Gabriel Damazio, and Marcel Ribeiro.
Community input drives the new draft of the Open Source AI Definition
A new version of the Open Source AI Definition has been released with one new feature and a cleaner text, based on comments received from public discussions and recommendations. We’re continuing our march towards having a stable release by the end of October 2024, at All Things Open. Get involved by joining the discussion on the forum, finding OSI staff around the world and online at the weekly town halls.
New feature: clarified Open Source model and Open Source weights
- Under “What is Open Source AI,” there is a new paragraph that (1) identifies both models and weights/parameters as encompassed by the word “system” and (2) makes it clear that all components of a larger system have to meet the standard. There is a new sentence in the paragraph after the “share” bullet making this point.
- Under the heading “Open Source models and Open Source weights,” there is a description of the components for both of those for machine learning systems. We also edited the paragraph below those additions to eliminate some redundancy.
The role of training data is one of the most hotly debated parts of the definition. After long deliberation and co-design sessions we have concluded that defining training data as a benefit, not a requirement, is the best way to go.
Training data is valuable to study AI systems: to understand the biases that have been learned, which can impact system behavior. But training data is not part of the preferred form for making modifications to an existing AI system. The insights and correlations in that data have already been learned.
Data can be hard to share. Laws that permit training on data often limit the resharing of that same data to protect copyright or other interests. Privacy rules also give a person the rightful ability to control their most sensitive information, such as decisions about their health. Similarly, much of the world’s Indigenous knowledge is protected through mechanisms that are not compatible with later-developed frameworks for rights exclusivity and sharing.
- Open training data (data that can be reshared) provides the best way to enable users to study the system, along with the preferred form of making modifications.
- Public training data (data that others can inspect as long as it remains available) also enables users to study the work, along with the preferred form.
- Unshareable non-public training data (data that cannot be shared for explainable reasons) gives the ability to study some of the system’s biases and demands a detailed description of the data – what it is, how it was collected, its characteristics, and so on – so that users can understand the biases and categorization underlying the system.
OSI believes these extra requirements for data beyond the preferred form of making modifications to the AI system both advance openness in all the components of the preferred form of modifying the AI system and drive more Open Source AI in private-first areas such as healthcare.
Other changes
- The Checklist is separated into its own document. This is to separate the discussion about how to identify Open Source AI from the establishment of general principles in the Definition. The content of the Checklist has also been fully aligned with the Model Openness Framework (MOF), allowing for an easy overlay.
- Under “Preferred form to make modifications,” the word “Model” changed to “Weights.” The word “Model” was referring only to parameters, and was inconsistent with how the word “model” is used in the rest of the document.
- There is an explicit reference to the intended recipients of the four freedoms: developers, deployers and end users of AI systems.
- Incorporated credit to the Free Software Definition.
- Added references to conditions of availability of components, referencing the Open Source Definition.
Next steps
- Continue iterating through drafts after meeting diverse stakeholders at the worldwide roadshow, collect feedback and carefully look for new arguments in dissenting opinions.
- Decide how to best address the reviews of new licenses for datasets, documentation and the agreements governing model parameters.
- Keep improving the FAQ.
- Prepare for post-stable-release: Establish a process to review future versions of the Open Source AI Definition.
We will be taking draft v.0.0.9 on the road collecting input and endorsements, thanks to a grant by the Sloan Foundation. The lively conversation about the role of data in building and modifying AI systems will continue at multiple conferences from around the world, the weekly town halls and online throughout the Open Source community.
The first two stops are in Asia: Hong Kong for AI_dev August 21-23, then Beijing for Open Source Congress August 25-27. Other events are planned to take place in Africa, South America, Europe and North America. These are all steps toward the conclusion of the co-design process that will result in the release of the stable version of the Definition in October at All Things Open.
Creating an Open Source AI Definition has been an arduous task over the past two years, but we know the importance of creating this standard so the freedoms to use, study, share and modify AI systems can be guaranteed. Those are the core tenets of Open Source, and it warrants the dedicated work it has required. You can read about the people who have played key roles in bringing the Definition to life in our Voices of Open Source AI Definition on the blog.
How to get involved
The OSAID co-design process is open to everyone interested in collaborating. There are many ways to get involved:
- Join the forum: share your comment on the drafts.
- Leave a comment on the latest draft: provide precise feedback on the text of the latest draft.
- Follow the weekly recaps: subscribe to our monthly newsletter and blog to be kept up-to-date.
- Join the town hall meetings: we’re increasing the frequency to weekly meetings where you can learn more, ask questions and share your thoughts.
- Join the workshops and scheduled conferences: meet the OSI and other participants at in-person events around the world.
EuroPython: EuroPython August 2024 Newsletter
Hello and welcome to the post-conference newsletter! We really hope you enjoyed EuroPython 2024, because we sure did, and we are still recovering from all the fun and excitement :)
We have some updates to share with you, and also wanted to use this newsletter to nostalgically look back at all the good times we had just last month, surrounded by old friends and new in the beautiful city of Prague ❤️.
🏛️ EuroPython Society (EPS)
This year we had a booth for the EuroPython Society at the conference. What is the EPS? The EPS is the running engine behind the EuroPython Conference. The EPS board is made up of up to 9 directors (including 1 chair and 1 vice chair). It runs the day-to-day business of the EuroPython Society, including running the EuroPython conference series, and supports the community through various initiatives such as our grants programme. The board collectively takes up the fiscal and legal responsibility of the Society.
For the next few weeks, the board is working with our accountant and auditor to get our financial reports in order. As soon as that is finalised, we will be excited to call for the next Annual General Assembly (GA); the actual GA will be held at least 14 days after our formal notice.
The General Assembly is a great opportunity to hear about the EuroPython Society's developments and updates from the last year, and a new board will also be elected at the end of the GA.
All EPS members are invited to attend the GA and have voting rights. Find out how to sign up to become an EPS member for free here: https://www.europython-society.org/application
At the moment, running the annual EuroPython conference is a major task for the EPS. As such, the board members are expected to invest significant time and effort towards overseeing the smooth execution of the conference, ranging from venue selection, contract negotiations, and budgeting, to volunteer management. Every board member has the duty to support one or more EuroPython teams to facilitate decision-making and knowledge transfer.
In addition, the Society prioritises building a close relationship with local communities. Board members should not only be passionate about the Python community but have a high-level vision and plan for how the EPS could best serve the community.
How can you become an EPS 2024 board member?
Any EPS member can nominate themselves for the EPS 2024 board. Nominations will be published prior to the GA.
Though the formal deadline for self-nomination is at the GA, it is recommended that you send in yours as early as possible (yes, now is a good time!) to board@europython.eu. We look forward to your email :)
📝 Feedback & Numbers
Thanks to everyone who filled in the feedback form! In total, 157 attendees gave their feedback, which represents around 13% of the onsite attendees and around 11% of total attendees. One caveat when reading the results below: it’s difficult to say whether this sample was representative of all attendees as we didn’t collect demographic data.
Satisfaction with the conference
On average, attendees let us know that they were very satisfied with the conference, with a mean overall satisfaction rating of 4.3. Moreover, attendees were satisfied with most specific aspects of the conference, including the venue (mean = 4.6), food (mean = 4.0), and the social event (mean = 4.0). Prague was a particularly popular choice of location, getting a mean rating of 4.7.
We also had a look to see which of these aspects were most strongly related to overall satisfaction with the conference. Using a Spearman correlation, we found that satisfaction with the food (rs = 0.20) and the social event (rs = 0.17) had the highest relationship with overall satisfaction with the conference. However, any fellow stats nerds reading this might have noticed that these are not particularly strong relationships, likely meaning that other factors we didn’t explicitly measure are driving how much people liked the conference.
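For fellow stats nerds who want to run the same kind of analysis, the calculation is a couple of lines with SciPy; the ratings below are invented for illustration, not the actual survey responses:

# Illustrative only: made-up ratings, not the real survey data.
from scipy.stats import spearmanr

overall = [5, 4, 4, 3, 5, 4, 2, 5]  # overall satisfaction per attendee
food = [4, 4, 3, 3, 5, 4, 2, 4]     # satisfaction with the food

rho, p = spearmanr(overall, food)   # Spearman rank correlation
print(f"rs = {rho:.2f}, p = {p:.3f}")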
If you’re interested in seeing more of the results we got from the feedback form, we published a blog post where we deep dive into everything we found in much more detail. And we promise there will be lots of pretty graphs!
https://blog.europython.eu/europython-2024-post-conference-feedback/
🦒 Speaker’s Mentorship Programme
It was another successful year for our Speaker Mentorship Programme! Here are some key highlights from this year:
- Each mentee had the opportunity to receive personalized feedback, support, and guidance on their talk or proposal from an experienced mentor. We successfully supported 29 mentees, most from underrepresented communities, by pairing them with 29 seasoned mentors!
- Six mentees were given the opportunity to attend a public speaking workshop to further enhance their skills.
- On June 3rd, we held a fantastic first-time speakers’ workshop where attendees engaged with experienced speakers, receiving valuable advice and feedback for their presentations.
Last but not least, a huge THANK YOU to all our mentors who volunteered their time to guide mentees in submitting their proposals and delivering their talks!
🐍 PyLadies day
EuroPython this year had an entire day dedicated to PyLadies events. We started with Moderni Soberana giving a workshop on how to establish boundaries and stop abusive behaviour in society. This was followed by the PyLadies lunch, sponsored by Kraken Technologies, which had 120 allies joining us for a truly empowering session.
The afternoon had an #IAmRemarkable workshop hosted by Lola Onipko! We also had a Meet & Greet session where beginners and experienced PyLadies shared knowledge and insights about the tech industry.
Picture by Deborah Foroni (PyLadies SP)
💬 Python Organisers Discussion
We had 35+ community members joining us to discuss how the EuroPython Society can better support Python communities.
✍️ Community write-ups
It warms our hearts to see posts from the community about their experiences and stories this year! Here are some of them; please feel free to share yours by tagging us on socials @europython or mailing us at news@europython.eu
Anwesha Das about EuroPython 2024:
A conference that believes community matters, human values and feelings matter, and not afraid to walk the talk. And how the conference stood up to my expectations in every bit.
Keep reading here: https://anweshadas.in/looking-back-to-euro-python-2024/
Grete Tungla, PyCon Estonia’s Head Organiser, shares her insights from EuroPython 2024: https://www.linkedin.com/pulse/europython-2024-insights-from-pycon-estonias-head-organiser-
Jakub Cervinka shares how it was to participate in the Operations team organising EuroPython 2024: https://www.linkedin.com/pulse/thank-you-europython-2024-jakub-červinka-eusme
❤️ Thank you Volunteers & Sponsors
Year after year EuroPython shines because of the hard work of our amazing team of volunteers!
But beyond the logistics and the schedules, it’s your smiles, your enthusiasm, and your genuine willingness to go the extra mile that truly made EuroPython 2024 special. Your efforts have not only fostered a sense of belonging among first-time attendees but also exemplified the power of community and collaboration that lies at the heart of this conference. (And if you check out our blog post about the post-conference feedback, you’ll see that community was the thing people reported liking most about EuroPython this year!)
Once again, thank you for being the backbone of EuroPython, for your dedication, and for showing the world yet again why people who come for the Python language end up staying for the amazing community :)
We built a page on our website to thank everyone for their effort on making EuroPython 2024 what it was! Check it out: https://ep2024.europython.eu/thank-you
And a special thank you to all of the Sponsors for all of their support!
Yay sponsors!
Special thanks to StickerApp for the awesome stickers, Evolabel for shipping, Pretalx for the partnership and Kraken Technologies for the PyLadies lunch!
🎥 Conference Photos & Videos
The official conference photos are up on Flickr! Do not forget to tag us when you share your favourite clicks on your socials 😉.
https://www.flickr.com/photos/europython/albums/
While our team edits the conference videos, we’ve put together a EuroPython 2024 livestream playlist with all the daily links. We hope this helps you easily find and enjoy the talks you want to catch up on YouTube.
We also have a sweet video featuring the amazing humans of EuroPython sharing why they volunteer!
🤝 Code of Conduct
The Code of Conduct Transparency Report is now published on our website: https://www.europython-society.org/europython-2024-code-of-conduct-transparency-report/
🐍 Note from The PSF
The Python Software Foundation is proud to support EuroPython Prague 2024 with a grant in support of our mission to promote, protect, and advance the Python programming language and to support and facilitate the growth of a diverse and international community of Python programmers. We send congratulations and thanks to the organizers for their work to create a wonderful experience for the Python community!
The PSF is the non-profit charitable organization behind the Python language. We empower the Python community in a variety of ways, including paying developers to work directly on CPython, PyPI, and security, hosting projects like PyLadies and Pallets, organizing PyCon US, and awarding community grants like this one. We welcome you to be a part of the PSF by signing up for PSF membership or supporting our mission and initiatives with a one-time, monthly, or annual donation. If your company uses Python and wants to support our community, you can find more information and submit a sponsor application on our website. We’re happy to answer any questions at sponsors@python.org.
🗓️ Upcoming Events in the Python Community
EuroPython might be over, but fret not, there are a bunch more PyCons happening:
- PyCon Estonia https://pycon.ee/ 🇪🇪
- PyCon PT https://2024.pycon.pt/ 🇵🇹
- PyCon ES https://2024.es.pycon.org/ 🇪🇸
- PyCon SE https://www.pycon.se/ 🇸🇪
- PyCon IE https://python.ie/pycon-2024 🇮🇪
- PyLadiesCon https://conference.pyladies.com/ 🌎
- Swiss Python Summit https://www.python-summit.ch/ 🇨🇭
- EuroScipy: https://euroscipy.org/2024/ 🇪🇺
- PyCon FR: https://www.pycon.fr/2024/ 🇫🇷
- PyCon PL: https://pl.pycon.org/2024/ 🇵🇱
- PyCon NL: https://nl.pycon.org/ 🇳🇱
Enjoy a 30% discount for PyCon Estonia 2024 on Late Snake tickets with the code "EPSXPYCONEST24" over here: https://gateme.com/event/98762/
PyJok.es
$ pip install pyjokes
$ pyjoke
Hardware: The part of a computer that you can kick.
Matthew Garrett: What the fuck is an SBAT and why does everyone suddenly care
Long version: When UEFI Secure Boot was specified, everyone involved was, well, a touch naive. The basic security model of Secure Boot is that all the code that ends up running in a kernel-level privileged environment should be validated before execution - the firmware verifies the bootloader, the bootloader verifies the kernel, the kernel verifies any additional runtime loaded kernel code, and now we have a trusted environment to impose any other security policy we want. Obviously people might screw up, but the spec included a way to revoke any signed components that turned out not to be trustworthy: simply add the hash of the untrustworthy code to a variable, and then refuse to load anything with that hash even if it's signed with a trusted key.
Unfortunately, as it turns out, scale. Every Linux distribution that works in the Secure Boot ecosystem generates their own bootloader binaries, and each of them has a different hash. If there's a vulnerability identified in the source code for said bootloader, there's a large number of different binaries that need to be revoked. And, well, the storage available to store the variable containing all these hashes is limited. There's simply not enough space to add a new set of hashes every time it turns out that grub (a bootloader initially written for a simpler time when there was no boot security and which has several separate image parsers and also a font parser and look you know where this is going) has another mechanism for a hostile actor to cause it to execute arbitrary code, so another solution was needed.
And that solution is SBAT (Secure Boot Advanced Targeting). The general concept behind SBAT is pretty straightforward. Every important component in the boot chain declares a security generation that's incorporated into the signed binary. When a vulnerability is identified and fixed, that generation is incremented. An update can then be pushed that defines a minimum generation - boot components will look at the next item in the chain, compare its name and generation number to the ones stored in a firmware variable, and decide whether or not to execute it based on that. Instead of having to revoke a large number of individual hashes, it becomes possible to push one update that simply says "Any version of grub with a security generation below this number is considered untrustworthy".
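To make that concrete: the SBAT metadata is a small CSV embedded in a .sbat section of the signed binary, with fields for component name, generation, vendor, package, version, and a URL. The entries below are illustrative examples following the documented format, not taken from any real binary:

sbat,1,SBAT Version,sbat,1,https://github.com/rhboot/shim/blob/main/SBAT.md
grub,3,Free Software Foundation,grub,2.12,https://www.gnu.org/software/grub/

A revocation update then only needs to raise the minimum required generation for a component (say, "grub must be at generation 3 or higher"), and every binary still declaring a lower generation is refused, regardless of which distribution signed it.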
So why is this suddenly relevant? SBAT was developed collaboratively between the Linux community and Microsoft, and Microsoft chose to push a Windows update that told systems not to trust versions of grub with a security generation below a certain level. This was because those versions of grub had genuine security vulnerabilities that would allow an attacker to compromise the Windows secure boot chain, and we've seen real world examples of malware wanting to do that (Black Lotus did so using a vulnerability in the Windows bootloader, but a vulnerability in grub would be just as viable for this). Viewed purely from a security perspective, this was a legitimate thing to want to do.
(An aside: the "Something has gone seriously wrong" message that's associated with people having a bad time as a result of this update? That's a message from shim, not any Microsoft code. Shim pays attention to SBAT updates in order to avoid violating the security assumptions made by other bootloaders on the system, so even though it was Microsoft that pushed the SBAT update, it's the Linux bootloader that refuses to run old versions of grub as a result. This is absolutely working as intended)
The problem we've ended up in is that several Linux distributions had not shipped versions of grub with a newer security generation, and so those versions of grub are assumed to be insecure (it's worth noting that grub is signed by individual distributions, not Microsoft, so there's no externally introduced lag here). Microsoft's stated intention was that Windows Update would only apply the SBAT update to systems that were Windows-only, and any dual-boot setups would instead be left vulnerable to attack until the installed distro updated its grub and shipped an SBAT update itself. Unfortunately, as is now obvious, that didn't work as intended: at least some dual-boot setups applied the update, and the installed distribution's shim then refused to boot that distribution's grub.
What's the summary? Microsoft (understandably) didn't want it to be possible to attack Windows by using a vulnerable version of grub that could be tricked into executing arbitrary code and then introduce a bootkit into the Windows kernel during boot. Microsoft did this by pushing a Windows Update that updated the SBAT variable to indicate that known-vulnerable versions of grub shouldn't be allowed to boot on those systems. The distribution-provided Shim first-stage bootloader read this variable, read the SBAT section from the installed copy of grub, realised these conflicted, and refused to boot grub with the "Something has gone seriously wrong" message. This update was not supposed to apply to dual-boot systems, but did anyway. Basically:
1) Microsoft applied an update to systems where that update shouldn't have been applied
2) Some Linux distros failed to update their grub code and SBAT security generation when exploitable security vulnerabilities were identified in grub
The outcome is that some people can't boot their systems. I think there's plenty of blame here. Microsoft should have done more testing to ensure that dual-boot setups could be identified accurately. But also distributions shipping signed bootloaders should make sure that they're updating those and updating the security generation to match, because otherwise they're shipping a vector that can be used to attack other operating systems and that's kind of a violation of the social contract around all of this.
It's unfortunate that the victims here are largely end users faced with a system that suddenly refuses to boot the OS they want to boot. That should never happen. I don't think asking arbitrary end users whether they want secure boot updates is likely to result in good outcomes, and while I vaguely tend towards UEFI Secure Boot not being something that benefits most end users, it's also a thing you really don't want to discover you want after the fact, so I have sympathy for it being on by default. In other words, I do sympathise with Microsoft's choices here, other than the failed attempt to avoid the update on dual-boot systems.
Anyway. I was extremely involved in the implementation of this for Linux back in 2012 and wrote the first prototype of Shim (which is now a massively better bootloader maintained by a wider set of people and that I haven't touched in years), so if you want to blame an individual please do feel free to blame me. This is something that shouldn't have happened, and unless you're either Microsoft or a Linux distribution it's not your fault. I'm sorry.
qtatech.com blog: Automating Drupal Site Deployments with CI/CD
With the constant evolution of Drupal, particularly with recent versions like Drupal 10 and Drupal 11, automating deployments has become essential to leverage new features and maintain an agile and reliable development cycle. This article will guide you through the coding approaches and techniques to automate the deployment of Drupal sites using CI/CD.
Implementing an Audio Mixer, Part 1
When using Qt Multimedia to play audio files, it’s common to use QMediaPlayer, as it supports a larger variety of formats than QSound and QSoundEffect. Consider a Qt application with several audio sources; for example, different notification sounds that may play simultaneously. We want to avoid cutting notification sounds off when a new one is triggered, and we don’t want to queue notification sounds either, as queued sounds would play at the wrong time. We instead want these sounds to overlap and play simultaneously.
Ideally, an application with audio has one output stream to the system mixer. This way, in the mixer control, different applications can be set to different volume levels. However, a QMediaPlayer instance can only play one audio source at a time, so each notification would have to construct a new QMediaPlayer. Each player in turn opens its own stream to the system.
The result is a huge number of streams to the system mixer being opened and closed all the time, as well as QMediaPlayers constantly being constructed and destructed.
To resolve this, the application needs a mixer of its own. It will open a single stream to the system and combine all the audio into the one stream.
Before we can implement this, we first need to understand how PCM audio works.
PCM

As defined by Wikipedia:
Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.
Here you can see how points are sampled in uniform intervals and quantized to the closest number that can be represented.
[Image from Wikipedia: sampling and quantization of a signal (red) for 4-bit LPCM over a time domain at a specific frequency]
Think of a PCM stream as a humongous array of bytes. More specifically, it’s an array of samples, which are either integer or float values and a certain number of bytes in size. The samples are these discrete amplitude values from a waveform, organized contiguously. Think of each element as being a y-value of a point along the wave, with the index representing an offset from x=0 at a uniform time interval.
Here is a graph of discretely sampled points along a sinusoidal waveform similar to the one above:
[Image from Wikimedia Commons: a discrete-time sinusoid]
Let’s say we have an audio waveform that is a simple sine wave, like the above examples. Each point taken at discrete intervals along the curve here is a sample, and together they approximate a continuous waveform. The distance between the samples along the x-axis is a time delta; this is the sample period. The sample rate is the inverse of this: the number of samples that are played in one second. The standard sample rate for audio on CDs is 44100 Hz; at that rate we can’t really hear that this data is discrete (plus, the resultant sound wave from air movement is in fact a continuous waveform).
We also have to consider the y-axis here, which represents the amplitude of the waveform at each sampled point. In the image above, amplitude A is normalized such that A ∈ [−1, 1]. In digital audio, there are a few different ways to represent amplitude. We can’t represent all real numbers on a computer, so the representation of the range of values varies in precision.
For example, let’s say we have two different representations of the wave above: 8-bit signed integer and 16-bit signed integer. The normalized value 1 from the image above maps to 2^8/2 − 1 = 127 with 8-bit representation and 2^16/2 − 1 = 32767 with 16-bit. Therefore, with 16-bit representation, we have 256 times as many possible values (2^16 versus 2^8) to represent the same range; it is more precise, but the required size to store each 16-bit sample is double that of 8-bit samples.
We call the chosen representation, and thus the size of each sample, the bitdepth. Some common bitdepths are 16-bit int, 24-bit int, and 32-bit float, but there are many others in existence.
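To tie the sample rate and bitdepth together, here is a minimal sketch (our own example, not from the article) that generates one second of a 440 Hz sine wave as 16-bit integer samples at 44100 Hz:

#include <cmath>
#include <cstdint>
#include <vector>

// One second of a 440 Hz sine wave, sampled at 44100 Hz and
// quantized to 16-bit signed integers (the same as qint16 on
// most systems, as the article notes later).
std::vector<std::int16_t> makeSineWave()
{
    constexpr int sampleRate = 44100;        // samples per second
    constexpr double frequency = 440.0;      // pitch of the tone, in Hz
    constexpr double amplitude = 0.8;        // normalized, within [-1, 1]
    constexpr double pi = 3.14159265358979323846;
    constexpr std::int16_t maxValue = 32767; // 2^16 / 2 - 1

    std::vector<std::int16_t> samples(sampleRate); // one second of audio
    for (int i = 0; i < sampleRate; ++i) {
        // i / sampleRate is this sample's position in time, in seconds
        const double t = static_cast<double>(i) / sampleRate;
        const double y = amplitude * std::sin(2.0 * pi * frequency * t);
        // quantize: map the normalized amplitude onto the 16-bit range
        samples[i] = static_cast<std::int16_t>(y * maxValue);
    }
    return samples;
}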
Let’s consider a huge stream of 16-bit samples and a sample rate of 44100 Hz. We write samples to the audio device periodically with a fixed-size buffer; let’s say it is 4096 bytes. The device will play each sample in the buffer at the aforementioned rate. Since each sample is a contiguous 2-byte short, we can fit 2048 samples into the buffer at once. We need to write 44100 samples in one second, so the whole buffer will be written around 21.5 times per second.
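The same buffer arithmetic, spelled out in code (the 4096-byte buffer size is the assumption from the paragraph above):

#include <cstdint>

constexpr int sampleRate = 44100;                    // samples per second
constexpr int bufferSize = 4096;                     // bytes per write
constexpr int bytesPerSample = sizeof(std::int16_t); // 2 bytes
constexpr int samplesPerBuffer = bufferSize / bytesPerSample; // 2048
// 44100 samples per second / 2048 samples per buffer ≈ 21.5 writes per second
constexpr double buffersPerSecond =
    static_cast<double>(sampleRate) / samplesPerBuffer;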
What if we have two different waveforms though, and what if one starts halfway through the other one? How do we mix them so that this buffer contains the data from both sources?
Waveform Superimposition

In the study of waves, you can superimpose two waves by adding them together. Let’s say we have two different discrete wave approximations, each represented by 20 signed 8-bit integer values. To superimpose them, for each index, add the values at that index. Some of these sums will exceed the limits of 8-bit representation, so we clamp them at the end to avoid signed integer overflow. This is known as hard clipping and is the phenomenon responsible for digital overdrive distortion.
x | Wave 1 (y1) | Wave 2 (y2) | Sum (y1+y2) | Clamped Sum
0 | +60 | −100 | −40 | −40
1 | −120 | +80 | −40 | −40
2 | +40 | +70 | +110 | +110
3 | −110 | −100 | −210 | −128
4 | +50 | −110 | −60 | −60
5 | −100 | +60 | −40 | −40
6 | +70 | +50 | +120 | +120
7 | −120 | −120 | −240 | −128
8 | +80 | −100 | −20 | −20
9 | −80 | +40 | −40 | −40
10 | +90 | +80 | +170 | +127
11 | −100 | −90 | −190 | −128
12 | +60 | −120 | −60 | −60
13 | −120 | +70 | −50 | −50
14 | +80 | −120 | −40 | −40
15 | −110 | +80 | −30 | −30
16 | +90 | −100 | −10 | −10
17 | −110 | +90 | −20 | −20
18 | +100 | −110 | −10 | −10
19 | −120 | −120 | −240 | −128

Now let's implement this in C++. We'll start small, and just combine two samples.
Note: we will use qint types here, but qint16 will be the same as int16_t and short on most systems, and similarly qint32 will correspond to int32_t and int.
#include <limits>

qint16 combineSamples(qint32 samp1, qint32 samp2)
{
    const auto sum = samp1 + samp2;

    // clamp to the representable range of a 16-bit signed integer
    if (std::numeric_limits<qint16>::max() < sum)
        return std::numeric_limits<qint16>::max();
    if (std::numeric_limits<qint16>::min() > sum)
        return std::numeric_limits<qint16>::min();

    return sum;
}

This is quite a simple implementation. We use a function combineSamples and pass in two 16-bit values, which are converted to 32-bit as arguments and summed. This sum is clamped to the limits of 16-bit integer representation using std::numeric_limits in the <limits> header of the standard library. We then return the sum, at which point it is converted back to a 16-bit value.
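A quick sanity check of the clamping behavior, with our own example values:

// within range: just a normal sum
combineSamples(1000, 2000);     // returns 3000
// overflow: 30000 + 30000 = 60000 exceeds 32767, so it clamps
combineSamples(30000, 30000);   // returns 32767
// underflow clamps at the lower bound
combineSamples(-30000, -30000); // returns -32768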
Combining Samples for an Arbitrary Number of Audio Streams

Now consider an arbitrary number of audio streams n. For each sample position, we must sum the samples of all n streams.
Let’s assume we have some sort of audio stream type (we’ll implement it later), and a list called mStreams containing pointers to instances of this stream type. We need to implement a function that loops through mStreams and makes calls to our combineSamples function, accumulating a sum into a new buffer.
Assume each stream in mStreams has a member function read(char *, qint64). We can copy one sample into a char * by passing it to read, along with a qint64 representing the size of a sample (bitdepth). Remember that our bitdepth is 16-bit integer, so this size is just sizeof(qint16).
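For illustration only, a minimal stand-in for such a stream type could wrap QBuffer, which inherits the read(char *, qint64) interface from QIODevice. This is our own sketch, not the class the article builds in Part 2:

#include <QBuffer>
#include <QByteArray>
#include <QIODevice>

// Hypothetical minimal audio stream: a QBuffer over raw PCM bytes.
// QBuffer inherits QIODevice, so it already provides
// qint64 read(char *data, qint64 maxSize).
class AudioStream : public QBuffer
{
public:
    explicit AudioStream(const QByteArray &pcmData)
    {
        setData(pcmData);           // the raw 16-bit samples to play
        open(QIODevice::ReadOnly);  // reading starts at the first sample
    }
};

// mStreams could then be declared as, for example:
// QList<AudioStream *> mStreams;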
Using read on all the streams in mStreams and calling combineSamples to accumulate a sum might look something like this:
qint16 accumulatedSum = 0;

for (auto *stream : mStreams) {
    // read one sample from the stream into streamSample
    qint16 streamSample;
    stream->read(reinterpret_cast<char *>(&streamSample), sizeof(qint16));

    // accumulate
    accumulatedSum = combineSamples(streamSample, accumulatedSum);
}

The first pass adds samples from the first stream to zero, effectively copying them to accumulatedSum. When we move to another stream, the samples from the second stream are added to those copied values from the first stream. This continues, so the call to combineSamples for a third stream would combine the third stream’s sample with the sum of the first two. We keep accumulating this way until we have combined all the streams.
Combining All Samples for a Buffer

Now let’s use this concept to add all the samples for a buffer. We’ll make a function that takes a buffer char *data and its size qint64 maxSize. We’ll write our accumulated samples into this buffer, reading all samples from the streams and adding them using the method above.
The function signature looks like this:
void readData(char *data, qint64 maxSize);

Let's achieve more efficiency by using a constexpr variable for the bitdepth:
constexpr qint16 bitDepth = sizeof(qint16);There’s no reason to call sizeof multiple times, especially considering sizeof(qint16) can be evaluated as a literal at compile-time.
With the size of each sample and the size of the buffer, we can get the total number of samples to write:
const qint16 numSamples = maxSize / bitDepth;

For each stream in mStreams, we need to read each sample up to numSamples. As the sample index increments, a pointer into the buffer data needs to be incremented too, so we write our results at the correct location in the buffer.
That looks like this:
void readData(char *data, qint64 maxSize)
{
    // start with 0 in the buffer (memset comes from <cstring>)
    memset(data, 0, maxSize);

    constexpr qint16 bitDepth = sizeof(qint16);
    const qint16 numSamples = maxSize / bitDepth;

    for (auto *stream : mStreams) {
        // this pointer will be incremented across the buffer
        auto *cursor = reinterpret_cast<qint16 *>(data);
        qint16 sample;

        for (int i = 0; i < numSamples; ++i, ++cursor)
            if (stream->read(reinterpret_cast<char *>(&sample), bitDepth))
                *cursor = combineSamples(sample, *cursor);
    }
}

The idea here is that we can start playing new audio sources by adding new streams to mStreams. If we add a second stream halfway through the first stream playing, the next buffer for the first stream will be combined with the first buffer of the new stream. When we’re done playing a stream, we just drop it from the list.
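Adding and dropping streams might then look like the following sketch. This is our own guess at the bookkeeping; a real mixer would also need to synchronize access to mStreams with the audio thread, which we gloss over here:

#include <QList>

QList<AudioStream *> mStreams; // the active sources being mixed

void addStream(AudioStream *stream)
{
    // the stream's samples are picked up starting with the next buffer
    mStreams.append(stream);
}

void pruneFinishedStreams()
{
    // drop any stream that has no samples left to contribute
    for (auto it = mStreams.begin(); it != mStreams.end();) {
        if ((*it)->atEnd()) {
            delete *it;
            it = mStreams.erase(it);
        } else {
            ++it;
        }
    }
}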
Next Steps
In Part 2, we’ll use Qt Multimedia to fully implement our mixer, connect to our audio device, and test it on some audio files.
About KDAB
If you like this article and want to read similar material, consider subscribing via our RSS feed.
Subscribe to KDAB TV for similar informative short video content.
KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.
The post Implementing an Audio Mixer, Part 1 appeared first on KDAB.
Promet Source: Drupal vs SharePoint for State and Local Government
KDE ⚙️ Gear 24.08
Many of the new features in Dolphin are designed to make it easier to access and manage files and folders that require administrative privileges. Visual cues, wizards to help install needed software, and menu options to elevate your privileges make it easier than ever to use Dolphin as a superuser.
New usability features include:
- A new "Move to New Folder…" option that pops up when right-clicking a file, allowing you to create a folder and copy the file into it all in one go.
- Double-clicking the view background now triggers the "Select All" action by default.
Filelight is a complementary application to Dolphin, and can be installed directly from Dolphin by clicking the down arrow in the lower right corner of the main window.
Filelight helps visualize how much space your files and folders are taking up. Version 24.08 comes with a friendlier home page, and the Windows version (available from the Microsoft Store) has been redesigned to improve its overall appearance.
Konsole 24.08 also comes with a brand new usability enhancement: if you need to bookmark something important in a long output, double-click the scroll bar to set a position marker. You can then quickly scroll back and locate it later.
Create

Kdenlive is KDE's professional video editor, and this new release is all about the curves.
You can now use the brand new keyframe curve editor to customize effects, and easing methods (Cubic in/out and Exponential in/out) for fades.
To make things easier, we've redesigned the effects stack widget and improved the Transform effect, which now lets you select clips directly from the monitor. It also comes with a new grid, and improved design and behavior for the handle.
Travel

Coming to Akademy 2024? Don't forget to install the updates for Itinerary and Kongress and make your journey easy.
Itinerary is KDE's travel assistant. It lets you plan and manage your trips, providing an overview of where you need to be and when. It keeps boarding passes, tickets, and health certificates all in one place, and the latest version adds more details, including seat information displayed directly in the timeline.
Once you arrive at your destination, it's time to fire up Kongress so you don't miss any of the sessions or activities. Kongress now makes things easier by providing indoor maps of the venue, so you not only know when and what's going on, but also where.
Both Itinerary and Kongress work on desktop and laptop computers and most mobile devices.
Communicate

NeoChat is KDE's client for the Matrix chat system — and KDE's official way of chatting. Version 24.08 increases your safety by allowing you to preemptively block invites from unknown users not in any rooms with you.
Tokodon not only helps you read and post on Mastodon, but also manage your own server. Speaking of which, the version being released today can notify you of sign-ups on your server for better user management.
For posting, you can easily attach images from the internet, quote other posts and pop out the text editor to comfortably compose your toot.
When reading, Tokodon 24.08 supports scrolling up whole screenfuls of posts using the PageUp and PageDown keys.
Develop

Whether you want to help KDE implement features and fix bugs, or develop the next killer app, KDE's advanced text editor Kate has you covered.
Kate 24.08 improves its document formatting plugin with better support for bash, d, fish, Nix config, opsi-script, QML and YAML files. In related news, the Language Server Protocol (LSP) feature adds support for the Gleam, PureScript, and Typst languages.
If you're working on a CMake-based project, the Project and Build plugin now allows you to open the build directory and get both files and targets.
Surf

Falkon is KDE's full-featured web browser. The new release implements many bug fixes and optimizations that make surfing the web with Falkon smoother, easier and safer.
A new feature in 24.08 allows you to customize things that affect privacy and functionality on a site-by-site basis. Say you don't mind JavaScript on one site because the authors are trustworthy and it actually provides useful functionality, but you want to block it elsewhere for security reasons. You can now configure this in Falkon's Settings.
And all this too…

- Okular — KDE's eco-certified document reader — improves compatibility for fillable forms in PDF documents, gets a makeover for Windows, and adds a more usable zoom feature.
- PlasmaTube — a player for watching online videos from popular sites on your desktop — adds an option to block sponsored sections in videos.
- Elisa — our elegant music player — gets a "Play this next" feature, and allows resizing of the sidebar and playlist panes.
Although we fully support distributions that ship our software, KDE Gear 24.08 apps will also be available on these Linux app stores shortly:
- Flathub
- Snapcraft

If you'd like to help us get more KDE applications into the app stores, support more app stores and get the apps better integrated into our development process, come say hi in our All About the Apps chat room.