FLOSS Project Planets

Petter Reinholdtsen: The 2024 LinuxCNC Norwegian developer gathering

Planet Debian - Fri, 2024-05-31 01:45

The LinuxCNC project is still going strong. I believe this great software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots, and hexapods would do even better with more in-person developer gatherings, so we plan to organise such a gathering this summer too.

The Norwegian LinuxCNC developer gathering takes place the weekend of Friday July 5th to 7th this year, and is open to everyone interested in contributing to LinuxCNC and free software manufacturing. Up-to-date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, as well as leftover money from last year's gathering from Redpill-Linpro and the NUUG Foundation, we have enough sponsor funds to pay for food, and probably also shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch and add your details on the pad.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Categories: FLOSS Project Planets

Spinning Code: Writing for Developers and Consultants: Know your Documents Types

Planet Drupal - Thu, 2024-05-30 20:50

When I started this series on writing for developers and consultants, I thought of this piece first, but I couldn’t get the ideas to come together. Sometimes it takes a few tries.

Anytime you write something, you should be aware of what kind of document you’re writing and who it is for. The definition of "document" in this context is very broad; it could be an email, letter, solution design, chat message, blog post, test script, work of fiction, book, poem, presentation, marketing slide deck, scope of work, and so on: anything that involves writing words.

For example, this is a blog post about writing, meant for developers and technology consultants who may have been told good writing isn’t important.

Common Work Documents

I’m going to put aside poems, novels, and other personal writing forms to focus here on work related writing. Developers need to be able to describe their work to other people. We also need to communicate about what is happening on a project, support one another, and ask for help. We do all these things with words, and often in writing.

In my career I’ve written a wide variety of documents as part of my work. A partial list (because I doubt I can think of everything to create a complete list) includes:

  • Emails
    • to my boss
    • to colleagues or friends
    • to direct reports
    • to clients
    • to large mailing lists
  • Solution Design Documents
  • Scopes of Work
  • Contracts
  • Invoices
  • Test Scripts
  • Conference Talks
  • Research Reports
  • Chat Messages

Some require more detail and longer prose than others. Some are expected to be polished, while others can tolerate typos and mistakes. But each has its own style, structure, audience, and expectations that the writer must meet.

A Document’s Purpose

When you start to write something, know your purpose in writing.

Not all documents are created equal, so understanding your purpose is critical. Are you writing a Solution Design that needs to outline how you plan to solve all the hard problems in a project? Or are you writing an email to your boss asking for a few days off? Is this a research report meant to support an advocacy project, or a cover letter for a resume? All of those are important things, but none should be written in the same tone or with the same style.

A Scope of Work (SOW) is a lasting artifact during a project that sets the bounds of the work you’re going to complete. A sloppy SOW can cost you, or your employer, vast sums of money. A SOW written purely to defend against those concerns may not express the client’s needs and interests, and may result in them refusing to sign.

An email to a client might be a friendly reminder about pending deadlines, or carefully crafted notes from a contentious meeting. Written well, both could leave you in a better place with your client. Written poorly, both may cause your client to become frustrated with your sloppiness.

If you don’t know why you’re writing something, you are likely to write the wrong thing. At work, if you aren’t sure, ask for guidance.

A Document’s Audience

There is no such thing as a "general audience": you should always have a mental image of who you are writing to, and why.

We all know that it’s important to think about your audience, but we don’t always do this well, in part because determining the audience is sometimes a little complicated.

When your audience is the person or people you are writing to, you need to leverage your understanding of their knowledge, skill set, and project engagement. You want your text to meet them where they are.

Sometimes the audience you care about most isn’t the direct recipient of the message, but a third party you know, or suspect, will read the document later. I find this is true particularly in contentious situations.

FOIA Warning

If you work in, for, or with government agencies in the US (and for similar reasons elsewhere as well) – including as a subcontractor – you should understand whether your content is subject to Freedom of Information Act requests. Sometimes your audience isn’t the person you are writing to at all, but the reporter who could read the message two years from now after they get copies of everything related to your project. In those settings, don’t put anything in writing you don’t want on the front page of a major newspaper.

But FOIA can also be a blessing for a developer who knows a bad decision is being made. Carefully worded expressions of concern, and requests for written confirmation of next steps, can trigger FOIA-cautious readers to recognize they need to follow your advice.

Finding the Right Level of Technical Detail

One of the hardest things for developers, and other people with lots of technical knowledge, to do well is communicate clearly about technical minutiae. There is a balance to be struck between providing transparency and overwhelming readers with details. Developers have to think about details in our work. We also use field-specific jargon that can be confusing to people whose work is in other areas.

Too often we confuse that specialized knowledge of our field with intelligence. I have watched developers lose their audience in the nuances of small details, and come away announcing their audience was a bunch of idiots. Early in my career I was guilty of this as well. Assume your audience is as smart as you; they just know different stuff.

When you make that assumption you can avoid talking down to people, and start to work on finding their level.

The right level of technical detail will also vary by document type. When I’m exchanging emails with a client’s in-house developer, we often go deep into the weeds. When I’m writing a SOW, the technical detail is nearly absent, as we are defining functionality and purpose, not the exacting detail of how that functionality will be delivered.

The more you can be in conversation with the people you’re working with about their background, the easier it will be to find the right level of detail to explain yourself clearly.

Summation

Hopefully by now it’s clear that this is an overview of approach, not detailed guidance. In a future post I plan to write about some of these specific document types, and how to write them. Hopefully this overview gives you ideas and things to think about as you work on your next document.

As I said in my first post on this topic, communication skills for developers and consultants are an enormous topic. The plan for this series is evolving as I go. If you have suggestions or requests, feel free to leave me a message.

The post Writing for Developers and Consultants: Know your Documents Types appeared first on Spinning Code.

Categories: FLOSS Project Planets

25 Years of Krita!

Planet KDE - Thu, 2024-05-30 20:00
  • Halla Rempt

Twenty-five years. A quarter century. That's how long we've been working on Krita. Well, what would become Krita. It started out as KImageShop, but that name was nuked by a now long-dead German lawyer. Then it was renamed to Krayon, and that name was also nuked. Then it was renamed to Krita, and that name stuck.

I only became part of Krita in 2003, when Krita was still part of KDE's suite of productivity applications, KOffice, later renamed to Calligra... And I became maintainer of Krita in 2004, when Patrick Julien handed over the baton. That means that I've been around Krita for about twenty of those twenty-five years, so I hope you, dear reader, will forgive me for making this a really personal post; a very large part of my life has been tied up with Krita, and it's going to show.

But let’s first go back to before I needed a digital painting application; the first seeds for Krita were laid in 1998, even earlier than the first bits of code. There was this excitement around Linux back then, and there were lots of projects that attempted to create great applications for Linux. One of those projects was GIMP, and another was Qt. The first was a digital image manipulation application, the other a toolkit for creating user-friendly applications in C++. But GIMP didn't use Qt, it used its home-grown user interface toolkit (although it originally used Motif, which wasn't open source). A Qt fan, Matthias Ettrich, did an experimental port of GIMP to Qt, and gave a presentation about it at the 1998 Linux Kongress. That wasn't received well, and resulted in the kind of spat that is typical of the open source community. People were young and tempers were hot.

Well, in cases like this, the only solution is to go at it yourself, and that's what happened. It took several false starts, but on the last day of May 1999, Matthias Elter and Michael Koch started KImageShop: read the mail, because it's quite funny how we did and didn't follow the original vision (KOM was a Corba-like thing, and if you have never heard of Corba, that's probably because Corba was a terrible idea.).

Development started, and believe it or not, there's still some actual code dating back to then in Krita's codebase, though most of the remaining code is opening and closing brackets.

And then development stopped, because, well, doing a proper image manipulation application isn't easy or quick work. And then it started again, and stopped again, and started again. There had been several maintainers by the time I came looking for a nice, performant codebase for a painting application in 2003. I didn't know C++, but I had written the first book on using Python and Qt together.

Krita had been rewritten to the point where it didn't even have a paint tool, so that was the first thing I wanted to have. That was not easy!

But... Being open about it not being easy meant people got interested, and we started gaining contributors. And so, in 2004, we had a small team of enthusiastic people. A lot happened in that year; Camilla Boemann rewrote the core of Krita so we had autosizing layers, Adrian Page wrote an OpenGL based backend, Cyrille Berger added the first inklings of plugins and scripting. Our approach was still pretty technical, though, and we didn't manage to make a release.

It was only in 2005 that we released Krita as part of KOffice 1.4. Still very immature, but everyone agreed that it was promising, and we got nice reviews in some Linux magazines -- that was still a thing in 2005.

Then came 2006. And Krita 1.5 was released with support for color managed CMYK. Krita 1.5 also had the short-lived real color mixing watercolor layer feature, but that was too complex to maintain. And in the same year, we released Krita 1.6: Linux Journal called it State of the Art. We thought it was a pretty mature release, but artists who gave us feedback still found it lacking a lot.

And then disaster struck. Qt3 reached end-of-life, and Qt4 was released. The porting effort was huge and took ages, also because we, foolishly, decided to rewrite a lot of the 1.x code to make it possible to share components between KOffice applications. The rewrite took all of 2007, 2008 and half of 2009.

In the meantime, when we were desperately trying to fix all the bugs the porting and rewrite were introducing, we held our first fundraiser: that was to get Wacom tablets for testing Krita with, complete with art pens. I am still using the Wacom Intuos 3 we got way back then!

In 2009, we then released Krita 2.0. It was not really usable, but it was important for us to have something out that we could get people to test. Krita 2.1 was also released in 2009. We also got our first sponsored developer, Lukáš Tvrdý, whose task specifically was to fix all bugs. Later on, he also improved the performance of Krita's brushes.

As Krita gained recognition, we got more and more feedback, and in 2010, we decided to have a big sprint in Deventer where we were going to determine what we wanted Krita to be for our users. A Photoshop clone? A GIMP clone? A Corel Painter clone? Or something that was entirely itself? Who were we making Krita for?

That answer still holds today: we are making Krita for digital artists who are making art, mostly from scratch. Painting with Krita should be fun for artists of all kinds, all over the world.

But it would be some time before we'd reach that goal. 2010 saw Krita 2.2 and Krita 2.3: we thought that Krita 2.3 was ready for artists, but it was only with Krita 2.4 and 2.5 in 2012 that Krita really became pretty good! In fact, we had a laser-precise focus: for some years our rallying call was “Make Krita usable for David Revoy!” – partly silly, but also partly serious. We spent time during dev sprints observing artists and allowing them to comment live on what they liked and didn’t like, without the observing developers being allowed to open their mouths, whether in rebuttal or to help the artist out.

In the meantime, I had created the Krita Foundation so we could do fund-raisers to sponsor full-time developers. The first developer we sponsored was Dmitry Kazakov, who is still the lead developer for Krita.

Back then, Krita was still part of KDE's office suite, but it was called Calligra now, because of an interminable conflict with just one KOffice developer, the KWord maintainer. All the energy spent on that conflict could have gone into development; it was a huge waste. From the Calligra days onwards, development went much smoother. Nokia was now involved with Calligra's development, and the resulting improvement in the central libraries all applications used also helped improve Krita, though, conversely, the complexity needed to support a very diverse set of applications is still burdening us today.

Years went by. 2013 was completely uneventful. We made our releases (2.6, 2.7), did our fund-raisers, added features (like animation support), created a version of Krita with a special user interface for touch/tablet users (sponsored by Intel: we still have a great relationship with Intel, our main development fund sponsor). It was great to see the art people were creating, great to get feedback from users and just plain fun to tackle development.

In 2014 we ported Krita to Windows, also because of the touch/tablet version of Krita. And we released eleven versions of Krita 2.9, which was really a very fine release.

Also in 2014, we had our first Kickstarter campaign. Kickstarter was new and fresh back then, and it was really exciting. We got nearly 700 people to sponsor Krita! And we ported Krita to MacOS. For some time we would do a Kickstarter campaign every year, and they were fun both for us and for our developers, we'd set stretch goals and let people vote on what they wanted us to work on.

I still had a day job back then, so it was all work done in the evenings and weekends, and on the train during my commute.

We also started porting Krita, again, this time to Qt5. That wasn't as hard as the port from Qt3 to Qt4, but we lost support for the tablet version of Krita because Qt5 made it impossible to properly integrate our OpenGL-based canvas in the touch version of Qt5's libraries. We spent months and quite a bit of money on that, but it was a no-go.

Then I broke my shoulder and lost my day job with Blue Systems, and suddenly the Krita Foundation needed to pay me, too. Fortunately, we found a sponsor for the port to Qt5, and that was my first sponsored project.

In 2016, we released Krita 3.0 -- it wasn't as good as Krita 2.9, but thankfully we still remembered the pain of doing a rewrite combined with a port, so we simply did the port first and didn't combine it with a huge rewrite. And this release brought animation!

We also released our first and last paper artbook. It was a huge amount of work for me, which had already started in 2015, and in the end, a huge money sink, too.

We worked on improved versions of Krita 3.0 all through 2016 and 2017. 2018 rolled by, and we released Krita 4.0, with the results of Kickstarter-sponsored work. Though not all of it, because in 2017, I was preoccupied with the Great Tax Disaster. The Dutch tax office wanted us to pay tens of thousands of euros in VAT for the work Dmitry had done; that's when we hired a proper accountant instead of a small business administration office in a local town.

When we went public with the problems, donations streamed in and PIA made a huge donation: they basically covered the bill.

To avoid having this happen again, I brought all commercial activities into a separate one-person company. That became even more important, because in 2017, we put Krita in the Windows Store. That was the second store, after we put Krita on the Steam Store in 2014. Since then, we have released Krita on Epic Store, the Google Play Store and now even on the Apple MacOS Store.

Time went on, and in 2018, we released Krita 4.1, in 2019 4.2, in 2020 Krita 4.3 and 4.4. Reasonably quiet years of active development, growing user base and popularity. More and more sponsored developers joined in, and Krita made a lot of progress.

Although the Krita YouTube channel already existed, in 2019 we asked Ramon Miranda to work on regular videos for our channel.

By now we've built up quite a list of impressive tutorials of all kinds, teaching everything from digital painting itself to creating brush presets!

And then development slowed down. In 2020, the effects of Covid19 became more and more clear. We couldn't have sprints anymore, so no hyper-productive in-person development sessions anymore. Team members got sick; some got really sick. Long Covid has crashed my own productivity: there are many days when I can do nothing but lie down in a darkened room.

By 2021, even though we didn't have to port Krita to a new version of Qt, we still decided to change vector layers from ODG to SVG, which made Krita files incompatible between versions 4 and 5. A major change in file format, in other words. We're still working on new versions of Krita 5: 5.1 in 2022, 5.2 in 2023.

The future promises a very nice Krita 5.3!

And also, groan, a Krita 6.0 because we have started porting Krita to Qt6. And that's no fun, because Qt6 is again a huge change in what Qt offers and allows.

And that was 25 years of working on something I started dabbling in because I wanted to draw a map for a fantasy novel on my laptop!

Join the Development Fund with a monthly donation. Or make a one-time donation here.

Categories: FLOSS Project Planets

Reproducible Builds (diffoscope): diffoscope 269 released

Planet Debian - Thu, 2024-05-30 20:00

The diffoscope maintainers are pleased to announce the release of diffoscope version 269. This version includes the following changes:

[ Chris Lamb ]
* Allow Debian testing continuous integration builds to fail right now.

[ Sergei Trofimovich ]
* Amend 7zip version test for older versions that include the "[64]" string. (Closes: reproducible-builds/diffoscope#376)

You can find out more by visiting the project homepage.

Categories: FLOSS Project Planets

Four Kitchens: Managing configuration in Drupal upstreams

Planet Drupal - Thu, 2024-05-30 16:18

Mike Goulding

Senior Drupal Engineer

Mike has been part of the Four Kitchens crew since 2018, where he works as a senior engineer and tech lead for a variety of Drupal and WordPress projects.

A common time-saver for organizations managing multiple Drupal sites is relying on shared code, a concept called an upstream. This can be done through a hosting provider, continuous integration tooling, or old-fashioned copy-and-paste. (But don’t do that!)

This saves time by having most of the code shared by these sites stored in a single repository while still allowing the sites to differ somewhat from each other. It also avoids some of the overhead and pitfalls of a traditional Drupal multisite installation — something that several hosting providers haven’t been keen on recently.

A struggle that occurs when using the upstream technique is deploying a global configuration that should apply to all or some of the sites without removing what is uniquely theirs. In previous versions of Drupal, this was done using the Features module, and that is something that can still be done with some success. However, Features does come with some downsides regarding managing different configurations across sites.

In the setup described above, this becomes especially troublesome when configurations between sites need to be overridden. If one site has a different field added to the blog content type, the Features import process can become painfully complex to deal with on its own.

So, what option do I suggest for upstreams and organizations managing multiple sites? Configuration Split. In particular, the latest iteration of this module, 2.x, which allows for easier conditional splitting and importing of configuration.

What is Configuration Split?

Configuration Split, or Config Split, as it is better known, has been around for a while, enabling many excellent uses. A quick example of how this module has been used would be enabling various modules and configuration between different environments for a site. It allows for importing different configuration files into a Drupal site based on certain set conditions.

Specifically, it allows for splitting up configuration into different groups that can be set to inactive or active programmatically. This is an improvement over both using the Features module and using other custom code to enable and disable things manually on different sites or environments.

Using Config Split simplifies configuration management by keeping development-specific modules and settings separate from the main site configuration. This helps maintain a cleaner environment on production and allows for automatic installation and removal of development-only features.

Additionally, Config Split can also be used for version control purposes. By splitting configuration into different groups, it becomes easier to track changes and roll back to previous configurations, if needed. This is especially useful for larger sites with multiple developers working on different features or environments.

Config Split 1.x

The initial release of Config Split opened up opportunities to do something different with Drupal configuration that wasn’t possible before with the default configuration management system. By allowing different groups of configuration that can be activated and deactivated, it empowers teams to easily bring different configurations to different situations.

Some common uses that we saw early on were enabling specific development modules and settings for use only in local development environments. This was previously something that would have to be built into tooling locally to script out. Now it just happens because of a condition placed into settings. It took some guesswork out of what would be enabled on which environment, and that was a big help.
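As a minimal sketch of what such a condition can look like in settings.php (the "local_dev" split machine name and the ENVIRONMENT variable below are hypothetical, not taken from this article):

// settings.php — a minimal sketch, assuming a Config Split named
// "local_dev" that holds development-only modules and settings.
// The ENVIRONMENT variable is whatever convention your hosting uses.
$config['config_split.config_split.local_dev']['status'] =
  (getenv('ENVIRONMENT') === 'local');

With a split like this in place, a normal configuration import enables the development-only pieces locally and leaves them disabled everywhere else, with no manual toggling.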

The ecosystem for Config Split includes more than just enabling and disabling certain configuration. Another big part is Config Ignore, which makes it possible to have configuration that isn’t changed on import. The earliest version of that module was more conceptual than it is now, but it still provided a way to avoid exporting and importing changing configuration (like blocks) that was meant to exist only on live environments. When paired with Config Split, this offered great control over which configuration would be active and in use in specific environments.

Stacking, patching, and more

Though it may seem in conflict with the glowing review above, Config Split didn’t always mean that managing configuration was easier or simpler. In fact, it often meant that there were even more configuration files to manage, and parsing changes to a file in a code review would make even the most senior engineer groan. It solved bigger problems than it caused, but there were still downsides that gave teams pause and sparked discussion of whether there was a better way.

Thankfully, there is a better way! The newest version of Config Split, 2.x, brought many big changes that make this easier to manage, along with a few useful bonuses. One improvement was the concept of patches for partial configuration splits. Partial changes from the default configuration can now be represented with much smaller YAML files that only show what was added and/or removed instead of repeating the entire file. This makes code reviews of the changes much, much easier to deal with!

Along with additional improvements to how configuration imports and exports when splits are involved, another addition in the newer version of the module is stackable configuration. This means that splits that would previously have been in conflict, like adding a field and changing a label to a content type from different split groups, can now work together. This also means that test environments can better represent live environments while still enabling development modules and logging output without the configuration import complaining during the build.

This isn’t always the silver bullet for all of your problems. As we’ve discussed recently, there can be some steps that need documentation, and considerations for maintenance, to fully make use of Config Split on a build. This is especially something to consider when other modules in use have updates that affect multiple configuration files.

Using Config Split to manage multiple sites

Recently, we have taken these improvements from Config Split and applied them to managing multiple sites and upstreams. Using set conditions like environment variables or site names, we can use splits for features or configurations that are unique to individual sites in the project. In some ways this can feel like a stretch of the intended use case for the module, but it solves the problem well in the cases where we have used it. I’ll illustrate more in a specific example where this has been used.

Let’s take a project where an organization is going to have multiple similar but unique websites. They have a small team, and want to manage these sites from a single repository to make deployments quick, but don’t want to prevent the sites from being somewhat different. In some ways, a Drupal multisite can resolve part of this issue, but there are hosting limitations for that. Additionally, the multisite doesn’t completely avoid the issue of configuration differences, as those sites would still be using different databases.

In this situation, we use a single repository and use a continuous integration tool like CircleCI to push the code from that repository to the different sites where they are hosted. In this repository, we set up Config Split for each site in the project and the global configuration based on an example or default site. This way, configuration and features that should be available for the whole project can be developed in one place and deployed to each site. We can also make small changes to sites that make them different without incurring a lot of extra weight.

In a single-repository configuration, we have different settings.php files that are loaded based on environment variables so that each site always imports the correct information. This allows for differences in settings, content types, fields, and other aspects of the site. All of this with just one repository to manage, deployed to different instances without duplication of effort and review between them. Sites can share similar architecture and have some differences without requiring a lot of overhead. The changed files are easy to review at a glance, knowing that only what is or should be different will be present.
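For illustration, the wiring could look roughly like this (the SITE_NAME variable, the file naming scheme, and the split names are all hypothetical assumptions, not the project's actual setup):

// settings.php — a rough sketch of per-site loading in a shared upstream.
// Each hosting environment is assumed to define SITE_NAME, and each site
// is assumed to have a Config Split with a matching machine name.
$site_name = getenv('SITE_NAME') ?: 'default';

// Load per-site overrides such as database credentials or API keys.
$site_settings = __DIR__ . '/settings.' . $site_name . '.php';
if (file_exists($site_settings)) {
  include $site_settings;
}

// Activate only this site's split; other sites' splits stay inactive.
$config['config_split.config_split.' . $site_name]['status'] = TRUE;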

We’ll talk about this more as time goes on and the module continues to grow. Adding this module to the toolbox for a Drupal site really can make managing one or more sites easier and more consistent. Should existing sites using Features or something similar move to Config Split? I strongly feel that it is simpler to manage and that the workflow is more enjoyable.

The post Managing configuration in Drupal upstreams appeared first on Four Kitchens.

Categories: FLOSS Project Planets

Kdenlive 24.05.0 released

Planet KDE - Thu, 2024-05-30 14:31

The team is happy to announce Kdenlive 24.05. This update reimplements the Audio Capture feature and focuses on enhancing stability while introducing a few exciting new features like Group Effects and Automatic Subtitle Translations. This version comes with a huge performance boost and the usual batch of quality-of-life, user interface, and usability improvements.

This release comes with several performance enhancements, significantly boosting efficiency and responsiveness. Highlights include a massive speed improvement when moving clips with the spacer tool, faster sequence switching, improved AV1 NVENC support, and quicker timeline operations. These optimizations are part of the ongoing performance improvement efforts funded by our recent fundraiser.

Group Effects

In the last release, we introduced the ability to add an effect to a group of clips. This release now lets you control the parameters affecting all effects within the group.

Multi Format Rendering

Video editors for social media can now rejoice: Kdenlive offers the ability to render videos in multiple aspect ratios, including horizontal, vertical, and square, all from a single project.

Simply set the desired format in the render widget. This feature was developed by Ajay Chauhan as part of the Season of KDE (SoK) and was mentored by the Kdenlive team. The mentoring process was funded by our recent fundraiser.

Automatic Subtitle Translations

Continuing the subtitle improvements, we have added the ability to automatically translate subtitles using SeamlessM4T. This process happens locally without requiring an internet connection.

Please note that you need to download the models from the settings first.

Proxy

In this release, we’ve introduced a user-friendly interface for creating and editing external camera proxy profiles. Additionally, we’ve added a new proxy profile for the Insta 360 AcePro.

Improvements

This release brings several improvements to Kdenlive. Track selection is now more intuitive, with double-clicking allowing you to select a track in the timeline. FFmpeg TIMEBASE chapter export has been fixed (thanks to Jonathan Grotelüschen). Nested sequences are now more stable than ever. We’ve implemented a more robust copy-and-paste and sequence clip duplication system, fixed numerous crashes, and improved sequence compositing. Project archiving has been improved. More filtering options have been added to the file picker when importing clips, including categories like Video files, Audio files, Image files, Other files and User files rather than the current All supported files and All files (thanks to Pedro Rodrigues). A new search field has been added to the Settings window. Additionally, integration with OpenTimelineIO has been enhanced.

Other highlights include:

Multiple Bins
Implemented several fixes for handling multiple bins, ensuring stability and usability.

Audio Capture
The audio capture feature has been reimplemented in Qt6 (thanks to Lev Maslov). There is now also the ability to set the default capture folder in the project bin, as well as a setting to allow captures to be stored in a subdirectory of the project folder on disk, rather than only in the root (thanks to Christopher Vollick).

Monitors
You may now configure play/pause on monitor click, use the new Play Zone From Cursor option, and benefit from improved panning and zooming with the middle mouse button.

Subtitles
We’ve enhanced subtitle font styles by adding bold and italic attributes. Whisper now offers an option to set a maximum character count per subtitle and provides better user feedback by showing the output in the speech recognition dialog. In the Speech-to-Text settings, we’ve included links to the model folders and display their sizes.

Full Changelog
  • Double click to select a track in timeline. Commit. See bug #486208.
  • Fix sequence clip inserted in another one is not updated if track is locked. Commit. Fixes bug #487065.
  • Fix duplicating sequence clips. Commit. Fixes bug #486855.
  • Fix autosave on Windows (and maybe other platforms). Commit.
  • Fix crash on undo sequence close. Commit.
  • Fix wrong FFmpeg chapter export TIMEBASE. Commit. Fixes bug #487019.
  • Don’t invalidate sequence clip thumbnail on save, fix manually setting thumb on sequence clip. Commit.
  • Fixes for OpenTimelineIO integration. Commit.
  • Don’t add normalizers to timeline sequence thumb producer. Commit.
  • Fix crash undoing an effect change in another timeline sequence. Commit.
  • When dragging a new clip in timeline, don’t move existing selection. Commit.
  • Faster sequence switching. Commit.
  • Create sequence thumbs directly from bin clip producer. Commit.
  • Better icon for proxy settings page. Commit.
  • Fix mouse wheel does not scroll effect stack. Commit.
  • Open new bin: only allow opening a folder. Commit.
  • Fix monitor play/pause on click. Commit.
  • Ensure Qtblend is the preferred track compositing option. Commit.
  • Fix thumbnails and task manager crashes. Commit.
  • Various fixes for multiple bin projects. Commit.
  • Fix monitor pan with middle mouse button, allow zoomin until we have 60 pixels in the monitor view. Commit. See bug #486211.
  • Fix monitor middle mouse pan. Commit.
  • Track compositing is a per sequence setting, correctly handle it. Commit.
  • Fix archive widget showing incorrect required size for project archival. Commit.
  • Fix crash dragging from effect stack to another sequence. Commit. See bug #467219.
  • Fix typo. Commit.
  • Fix consumer crash on project opening. Commit.
  • Fix copying effect by dragging in project monitor. Commit.
  • Fix crash dropping effect on a track. Commit.
  • Fix duplicating Bin clip does not duplicate effects. Commit. Fixes bug #463399.
  • Workaround KIO Flatpak crash. Commit. See bug #486494.
  • Fix effect index broken in effectstack. Commit.
  • Fix double click in timeline clip to add a rotoscoping keyframe breaks effect. Commit.
  • Fix copy/paste rotoscoping effect. Commit.
  • Allow enforcing the Breeze icon theme (disabled by default on all platforms). Commit.
  • Fix effect param flicker on drag. Commit.
  • Fix tests warnings. Commit.
  • Test if we can remove our dark breeze icon theme hack on all platforms with the latest KF changes. Commit.
  • Don’t lose image duration when changing project’s framerate. Commit. See bug #486394.
  • Fix composition move broken in overwrite mode. Commit.
  • Fix opening Windows project files on Linux creates unwanted folders. Commit. See bug #486270.
  • Audio record: allow playing timeline when monitoring, clicking track rec… Commit. See bug #486198. See bug #485660.
  • Fix compile warnings. Commit.
  • Fix Ctrl+Wheel not working on some effect parameters. Commit. Fixes bug #486233.
  • On sequence change: correctly stop audio monitoring, fix crash when recording. Commit.
  • Fix Esc key not correctly stopping audio record. Commit.
  • Fix audio rec device selection on Qt5. Commit.
  • Fix Qt5 compilation. Commit.
  • Fix audio capture source not correctly saved / used when changed. Commit.
  • Fix audio mixer initialization. Commit.
  • Fix crash disabling sequence clip in timeline. Commit. Fixes bug #486117.
  • Minor fixes and rephrasing for render widget duration info. Commit.
  • Adjust timeline clip offset label position and tooltip. Commit.
  • Feat: Implement effect groups. Commit.
  • Windows: disable force breeze icon and enforce breeze theme by default. Commit.
  • Edit clip duration: process in ripple mode if ripple tool is active. Commit.
  • Delay document notes widget initialisation. Commit.
  • Limit the threads to a maximum of 16 for libx265 encoding. Commit.
  • Another round of warning fixes. Commit.
  • Fix Qt6 deprecation warning. Commit.
  • Restore audio monitor state when connecting a timeline. Commit.
  • Work/audio rec fixes. Commit.
  • Cleanup and fix crash dragging a bin clip effect to a timeline clip. Commit.
  • Add close bin icon in toolbar, reword open new bin. Commit.
  • Correctly ensure all Bin Docks have a unique name, add menu entry in Bin to create new bin. Commit.
  • Fix a few Project Bin regressions. Commit.
  • Remove unused parameter. Commit.
  • Add multi-format rendering. Commit.
  • Fix crash opening a file on startup. Commit.
  • New camera proxy profile for Insta 360 AcePro. Commit.
  • Fix slip tool. Commit.
  • Qt6 Audio recording fixes. Commit.
  • MLT XML concurrency issue: use ReadWriteLock instead of Mutex for smoother operation. Commit.
  • Rename View menu “Bins” to “Project Bins” to avoid confusion, don’t set same name for multiple bins. Commit.
  • Add tooltip to channelcopy effect. Commit.
  • Fix crash after save in sequence thumbnails. Commit. See bug #485452.
  • Remove last use of dropped icon. Commit.
  • Use default breeze icon for audio (fixes mixer widget using all space). Commit.
  • Additional filters for file pickers / better way of handling file filters. Commit.
  • [nightly flatpak] Fix build. Commit.
  • Use default breeze icon for audio. Commit.
  • Fix possible crash on closing app just after opening. Commit.
  • Fix startup crash when pressing Esc. Commit.
  • Fix effects cannot be enabled after saving with disable bin/timeline effects. Commit. Fixes bug #438970.
  • Audio recording implementation for Qt6. Commit.
  • Fix tests. Commit.
  • Fix guides list widget not properly initialized on startup. Commit.
  • Fix Bin initialized twice on project opening causing various crashes. Commit. See bug #485452.
  • Fix crashes on insert/overwrite clips move. Commit.
  • Fix clips and compositions not aligned to track after spacer operation. Commit.
  • Fix spacer crash with compositions. Commit.
  • Fix spacer crash with guides, small optimization for group move under timeline cursor. Commit.
  • Correctly delete pluggable actions. Commit.
  • Fix dock action duplication and small mem leak. Commit.
  • View menu: move bins and scopes in submenus. Commit.
  • Ensure autosave is not triggered while saving. Commit.
  • Store multiple bins in Kdenlive Settings, remember each bin type (tree or icon view). Commit.
  • Code cleanup: move subtitle related members from timelinemodel to subtitlemodel. Commit.
  • Faster spacer tool. Commit.
  • Fix tab order of edit profile dialog. Commit.
  • Fix blurry folder icon with some project profiles. Commit.
  • Fix spacer tool with compositions and subtitles (broken by last commit). Commit.
  • Make spacer tool faster. Commit.
  • Monitor: add play zone from cursor. Commit. Fixes bug #484103.
  • Improve AV1 NVENC export profile. Commit.
  • Translate shortcut too. Commit.
  • Require at least MLT 7.22.0. Commit.
  • Use proper method to remove ampersand accel. Commit.
  • Drop code duplicating what KAboutData::setApplicationData() & KAboutData::setupCommandLine() do. Commit.
  • Fix possible crash when quit just after starting. Commit.
  • Fix crash in sequence clip thumbnails. Commit. See bug #483836.
  • Fix recent commit not allowing to open project file. Commit.
  • Go back to previous hack around ECM issue. Commit.
  • Restore monitor in full screen if they were when closing Kdenlive. Commit. See bug #484081.
  • When opening an unrecoverable file, don’t crash but propose to open a backup. Commit.
  • Ensure we never reset the locale while an MLT XML Consumer is running (it caused data corruption). Commit. See bug #483777.
  • Fix: favorite effects menu not refreshed when a new effect is set as favorite. Commit.
  • Rotoscoping: add info about return key. Commit.
  • Fix: Rotoscoping not allowing to add points close to bottom of the screen. Commit.
  • Fix: Rotoscoping – allow closing shape with Return key, don’t discard initial shape when drawing it and seeking in timeline. Commit. See bug #484009.
  • Srt_equalizer: drop method that is only available in most recent version. Commit.
  • Fix: Speech to text, allow optional dependencies (srt_equalizer), fix venv not correctly enabled on first install and some packages not installing if optional dep is unavailable. Commit.
  • Update and improve build documentation for Qt6. Commit.
  • Add test for latest cut crash. Commit.
  • Update Readme to GitLab CD destination. Commit.
  • Check if KDE_INSTALL_DIRS_NO_CMAKE_VARIABLES can be disabled (we still have wrong paths in Windows install). Commit.
  • Fix: cannot revert letter spacing to 0 in title clips. Commit. Fixes bug #483710.
  • Audio Capture Subdir. Commit.
  • Feat: filter avfilter.fillborders add new methods for filling border. Commit.
  • [nightly flatpak] Use the official Qt6 runtime. Commit.
  • Update file org.kde.kdenlive.appdata.xml. Commit.
  • Update file org.kde.kdenlive.appdata.xml. Commit.
  • Add .desktop file. Commit.
  • Updated icons and appdata info for Flathub. Commit.
  • Fix whisper model size unit. Commit.
  • Don’t seek timeline when hover timeline ruler and doing a spacer operation. Commit.
  • Improve install steps for SeamlessM4t, warn user of huge downloads. Commit.
  • Initial implementation of subtitles translation using SeamlessM4T engine. Commit.
  • Make whisper to srt script more robust, use kwargs. Commit.
  • Block Qt5 MLT plugins in thumbnailer when building with Qt6. Commit. Fixes bug #482335.
  • [CD] Restore use of normal Appimage template after testing. Commit.
  • Fix CI/CD. Commit.
  • [CD] Disable Qt5 jobs. Commit.
  • Speech to text: add a link to models folder and display their size in settings. Commit.
  • Whisper: allow setting a maximum character count per subtitle (enabled by default). Commit.
  • Enforce proper styling for Qml dialogs. Commit.
  • Add missing license info. Commit.
  • Allow customizing camcorder proxy profiles. Commit. Fixes bug #481836.
  • Don’t move dropped files in the audio capture folder. Commit.
  • Don’t Highlight Newly Recorded Audio in the Bin. Commit.
  • Show whisper output in speech recognition dialog. Commit.
  • Ensure translated keyframe names are initialized after qApp. Commit.
  • Don’t call MinGW ExcHndlInit twice. Commit.
  • Fix extern variable triggering translation before the QApplication was created, breaking translations. Commit.
  • Fix bin thumbnails for missing clips have an incorrect aspect ratio. Commit.
  • Add Bold and Italic attributes to subtitle fonts style. Commit.
  • Warn on opening a project with a non standard fps. Commit. See bug #476754.
  • Refactor keyframe type related code. Commit.
  • Set Default Audio Capture Bin. Commit.
  • Fix python package detection, install in venv. Commit.
  • Try to fix Mac app not finding its resources. Commit.
  • Another attempt to fix appimage venv. Commit.
  • Add test for nested sequences corruption. Commit. See bug #480776.
  • Show blue audio/video usage icons in project Bin for all clip types. Commit.
  • Org.kde.kdenlive.appdata: Add developer_name. Commit.
  • Fix compilation warnings. Commit.
  • Better feedback message on failed cut. Commit.
  • Set default empty seek duration to 5 minutes instead of 16 minutes on startup to have a more usable scroll bar. Commit.
  • [Craft macOS] Try to fix signing. Commit.
  • [Craft macOS] Remove config for signing test. Commit.
  • Add some debug output for Mac effect drag crash. Commit.
  • Effect stack: don’t show drop marker if drop doesn’t change effect order. Commit.
  • Try to fix crash dragging effect on Mac. Commit.
  • Another try to fix monitor offset on Mac. Commit.
  • Don’t display useless link when effect category is selected. Commit.
  • Add comment on MLT’s manual build. Commit.
  • Add basic steps to compile MLT. Commit.
  • Blacklist MLT Qt5 module when building against Qt6. Commit.
  • Org.kde.kdenlive.appdata.xml use https://bugs.kde.org/enter_bug.cgi?product=kdenlive. Commit.
  • Fix Qt5 startup crash. Commit.
  • Refactor project loading message. Commit.
  • More robust fix for copy&paste between sequences. Commit.

The post Kdenlive 24.05.0 released appeared first on Kdenlive.

Categories: FLOSS Project Planets

The Drop Times: Brian Perry Discusses Latest Updates and Future Vision for the API Client Initiative

Planet Drupal - Thu, 2024-05-30 11:33
Dive into an in-depth conversation with Brian Perry as he unveils the latest updates and future trajectory of the API Client initiative. Discover how the Drupal API Client is revolutionizing the interaction with Drupal APIs, and explore the exciting opportunities it presents for web development enthusiasts. Join us as we unravel the journey of innovation and collaboration within the Drupal ecosystem.
Categories: FLOSS Project Planets

Ian Ozsvald: What I’ve been up to since 2022

Planet Python - Thu, 2024-05-30 08:25

This blog has been terribly quiet since July 2022, oops. It turns out that having an infant totally sucks up your time! In the meantime I’ve continued to build up:

  • Training courses – I’ve just listed my new Fast Pandas course plus the existing Successful Data Science Projects and Software Engineering for Data Scientists with runs of all three for July and September
  • My NotANumber newsletter – it goes out every month or so, carries Python data science jobs, and talks about my strategic work, my RebelAI leadership community, and High Performance Python book updates
  • RebelAI – my private data science leadership community (there’s no web presence, just get in touch) for “excellent data scientists turned leaders” – this is having a set of very nice impacts for members
  • High Performance Python O’Reilly book – we’re working on the 3rd edition
  • PyDataLondon 2024 has a great schedule and if you’re coming – do find me and say hi!
Ian is a Chief Interim Data Scientist via his Mor Consulting. Sign up for Data Science tutorials in London and to hear about his data science thoughts and jobs. He lives in London, is walked by his high-energy Springer Spaniel and is a consumer of fine coffees.
Categories: FLOSS Project Planets

Salsa Digital: A guide to design systems and their benefits

Planet Drupal - Thu, 2024-05-30 08:00
Why build a design system? Designing and building a new website is often costly and time-consuming. Designers come up with a visual masterpiece, and then frontend developers have to build it — which can be easier said than done, especially if the designer is working in isolation and not taking into consideration how difficult it might be to actually build their designs.
Categories: FLOSS Project Planets

EuroPython: How EuroPython Proposals Are Selected: An Inside Look

Planet Python - Thu, 2024-05-30 06:24

With the number of Python-related conferences around the world, many people might wonder how the selection process is configured and performed. For the largest and oldest European Python conference, EuroPython, we wanted to share how this process works.

The Programme team for each EuroPython conference changes every year. There are mechanisms in place to carry on some of the processes and traditions when dealing with proposals for the next event, although you can still find some differences each year. These differences are not large enough to make a significant impact.

The 2024 Process

In this post, we highlight how the 2024 process was conducted and your role in it, whether as a submitter or potential contributor in future versions.

Opening the Call for Proposals

This year, the Call for Proposals (CfP) configuration was based on the 2023 version, with minor modifications. For example, two new tracks were added to enable more people to categorise their proposals:

  • PyData: Research & Applications
  • PyData: LLMs

This change was motivated by the popularity of these topics at other global conferences. In addition, other tracks were merged or removed to keep the number manageable.

For many people, having a configuration with both an Abstract and a Description is confusing. Not everyone knows what to write in each field. To address this, we decided to be clearer: we dropped the Description field and asked explicitly for an Outline section. The intention was that, to submit a proposal, one would need an Abstract and an Outline for their session.

Reviewers from the Community

We opened a form to get help from the community for reviewing the talks and decided to accept most (if not all) of them to nurture our review results as much as possible. We had more than 20 people helping with reviewing proposals on Pretalx.

We created a few “track groups” containing related categories that the reviewers could choose from. This way, there was no pressure to have an opinion on a topic one might not be familiar with.

We had an average of six reviews per proposal, which greatly helped us make a final decision.

Community Voting

Another way to receive input is through Community Voting, which allows participants who have attended any of the EuroPythons since 2012 to vote on the upcoming programme.

Using a separate simple web application, people participated by voting for the proposals they wanted to see at EuroPython 2024. We were fortunate to have enough people voting to get a good estimate of preferences.

Fun fact: Around 11 people were able to review all of the nearly 640 proposals we received this year.

We are very grateful to everyone who participated!

My reaction when I saw a good proposal to be voted on at EP 2024.

Programme Committee

This year the programme committee was mostly formed by a group of new people to the conference, helped by a few people familiar with the process from last year. In total, around 11 people were actively participating.

Like most Programme teams, we did our best to get people from different areas to have a more diverse general mindset, including skills from Data, Core, DevOps, and Web technologies.

It was important for us to have local people on the team, and we are very happy to have had two members from the local Czech community helping, while the rest were spread across Europe.

Selection Process

Based on the reviewers’ results from Pretalx and Community Voting, we generated a master sheet that was used to perform the selection process.

Track by track, the Programme team went through each proposal that had good Pretalx and Community Voting results and voted (again) for the talks they believed were good material for the conference.

During the selection process, we felt that we did not have enough expertise in a specific area. Therefore, we are very thankful that we could add four more members to the selection team to remedy that.

After three calls, each lasting around 2 hours, the Programme team had the first batch of accepted proposals. The speakers for these proposals were notified as soon as the decision was made. Following a similar process, we did the same for the second (and final) batch of accepted and rejected proposals.

To ensure the acceptance of proposals from most tracks and topics, many plots and statistical analyses were created to visualise the ratio of accepted proposals to submitted ones, the variety of topics, and the diversity of speakers.

Plots from Pretalx visualising Proposals by Submission date, Session type, Track & State.

Even though it sounds cliché, there were many good proposals we couldn’t accept immediately, since the high volume and quality of proposals made it challenging to make instant decisions. We kept debating whether to place them on the waiting list.

Ultimately, we created another category for proposals that "could be accepted", allowing us to manage and organise high-quality proposals that required further deliberation.

Programme team trying to figure out which talk to choose from the waiting list.

What about sponsored talks?

Each year, the conference offers sponsors with certain packages the perk of hosting a sponsored talk, meaning that some of the talk slots had to be saved for that purpose. Slots not taken were filled by proposals on the waiting list.

Is selecting the talks the end of the story?

No. After proposals are accepted and confirmed, special requests emerge, mainly along the lines of “I’m sorry, I cannot be at the conference, can I do it online?” This, in our opinion, is unfortunate news, not because we don’t like remote talks, but because we have learned that they are not as popular with attendees.

Even though there are some special cases that we fully understand, we noticed a few cases where the reasons were not convincing enough. In those cases, we had to encourage people to give up their slot for other in-person proposals. This is a tricky process, as we have to weigh the limited number of remote talk slots, the specific reasons for the change, and the overall situation of the conference.

What is needed to get accepted?

Most rejected proposals are rejected because they have a weak abstract.

We have tried many means to encourage people to ask questions and seek feedback about their proposals, and we have hosted calls providing the details of good proposals. Still, every year we get proposals that have a poorly structured, incomplete abstract, etc.

For us, a good abstract contains the following:

  • Context of your talk or problem
  • Definition of the problem
  • Why is it important to find a solution to that problem?
  • What will be discussed and what will attendees learn?
  • Previous requirements or additional comments on your talk

You can also imagine a proposal like an elevator pitch. You need to describe it in a way that’s striking and motivates people to attend.

Don’t Forget About the Outline!

This year, we introduced an “outline” field for you to paste the outline of your talk, including the time you will spend on each item. This is essential to get an idea of how much you will be talking about each topic. (Hint: add up the expected times.)

The outline might sound like an obvious topic to you, but many people failed to provide a detailed one. Some even copied the abstract here, so you might understand the importance of this field as well.
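To make this concrete, here is a hypothetical outline for a 30-minute talk (the topic and timings are invented for illustration):

  • Introduction and speaker background (3 min)
  • The problem: why our data pipeline was slow (5 min)
  • First approach: caching (7 min)
  • Second approach: rewriting the hot path (7 min)
  • Benchmarks and lessons learned (5 min)
  • Questions (3 min)

Note how the items add up to the full session length, which shows reviewers exactly how the talk is paced.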

Why Does It Feel Like the Same People Are Speakers Every Year?

The main reason for this is that those people followed the proper abstract structure and provided a descriptive outline. Having experienced rejection before certainly helps, too. So we hope that, now that we have shared the selection process standards in detail, you know how to crack the selection process.

What about AI?

We discussed a few proposals that “felt AI written” and even used external tools to assess them. In the end, we didn’t have a strict ruling against people using Artificial Intelligence tools to improve their proposals.

When a proposal felt like it was AI-generated, we went deeper into the proposal and the speaker’s background. For example, by analysing the bio from the speaker and checking if the person was giving talks somewhere else. Most importantly, if the “speaker” was a real person.

Independently of how the Programme team feels about AI tools, we cannot completely ignore how these tools are helping some people with structure and grammar, as well as assisting them overall in their writing process. This might change in the future, but currently, we have no written regulations against the usage of AI tools.

The 2025 Process and Final Words

As described before, the team and process can change a bit next year, but we expect the same critical aspects of a good abstract and outline to be essential to the process.

We encourage you to ask for feedback, participate in sessions teaching how to write good proposals, and take part in our Speaker’s Mentorship programme. These can truly help you to get accepted into the conference.

Having said all this, each conference has a different selection process. Maybe your proposal was not selected because there was a better proposal on the same topic, or too many similar proposals in the same track, or it just did not fit this year's Zeitgeist (i.e. Community Voting).

Please don’t be discouraged! We highly recommend you keep working on your proposal, tweak it, write a new one, and most importantly, try again.

Submitting Several Proposals Doesn’t Help!

We value quality over quantity and will compare your proposals against each other. This is extra work for us and might even reduce your chances because of a split vote between your proposals. Submitting more than 10 proposals in the hope that one gets accepted is the wrong approach.

The Call for Proposals will likely open earlier next year. We hope you can follow the recommendations in this post and get your proposal accepted for EuroPython 2025.

And remember: Don’t be afraid to ask for feedback!

Thanks for reading! This community post was written by Cristián on behalf of the EuroPython 2024 Programme team.
Categories: FLOSS Project Planets

KDGpu 0.5.0 is here!

Planet KDE - Thu, 2024-05-30 03:30

Since we first announced it last year, our Vulkan wrapper KDGpu has been busy evolving to meet customer needs and our own. Our last post announced the public release of v0.1.0, and version 0.5.0 is available today. It’s never been easier to interact with modern graphics technologies, enabling you to focus on the big picture instead of hassling with the intricacies and nuances of Vulkan.

The PBR example in the new KDGpu Examples repository.

Wider device support

KDGpu now supports a wider array of devices, such as older versions of Android. For some context, additional features in Vulkan are supported by extensions. If said features become part of the “core” specification, they are automatically included in Vulkan 1.2, 1.3 and so on. In the past, KDGpu required the device to fully support Vulkan 1.2, which limited what devices you could target. In newer KDGpu versions (>0.4.6) it will now run on certain 1.1 devices (like the Meta Quest) as long as the required extensions are supported.

A KDGpu example running natively on an Android device.

We also added native examples for Android, which can be run straight from Android Studio! There's also better iOS support alongside a native Apple example.

the KDGpu Hello Triangle example running in the iOS simulator

External memory and images support

When writing applications using KDGpu, you will inevitably have to interface with other APIs or libraries that don't support it, or perhaps don't even support Vulkan, for example when you generate an image using Vulkan and then need to pass it to CUDA for further processing. With KDGpu it's now possible to grab texture and buffer objects and get their external memory handles:

const TextureOptions textureOptions = {
    .type = TextureType::TextureType2D,
    .format = Format::R8G8B8A8_SNORM,
    .extent = { 512, 512, 1 },
    .mipLevels = 1,
    .usage = TextureUsageFlagBits::SampledBit,
    .memoryUsage = MemoryUsage::GpuOnly,
    .externalMemoryHandleType = ExternalMemoryHandleTypeFlagBits::OpaqueFD,
};
Texture t = device.createTexture(textureOptions);
const MemoryHandle externalHandleOrFD = t.externalMemoryHandle();

Additionally, we have added methods to adopt existing VkImages as native KDGpu objects to better support libraries like OpenXR.

Easy & fast XR

OpenXR is the leading API for writing cross-platform VR/AR experiences. Like Vulkan, code directly using OpenXR tends to be verbose and requires a lot of setup. To alleviate this, KDGpu now includes an optional library called KDXr. It wraps OpenXR and integrates smoothly with KDGpu: it takes care of initialization, provides the C++ classes you expect, and makes it painless to add XR functionality to your application, including XR compositor layers, head tracking, input handling and haptic feedback.

For example, to set up a projection view you subclass the ProjectionLayer type:

class ProjectionLayer : public XrProjectionLayer
{
public:

And implement the required methods like renderView() to start rendering into each eye:

void ProjectionLayer::renderView()
{
    m_fence.wait();
    m_fence.reset();

    // Update the scene data once per frame
    if (m_currentViewIndex == 0) {
        updateTransformUbo();
    }

    // Update the per-view camera matrices
    updateViewUbo();

    auto commandRecorder = m_device->createCommandRecorder();

    // Set up the render pass using the current color and depth texture views
    m_opaquePassOptions.colorAttachments[0].view =
        m_colorSwapchains[m_currentViewIndex].textureViews[m_currentColorImageIndex];
    m_opaquePassOptions.depthStencilAttachment.view =
        m_depthSwapchains[m_currentViewIndex].textureViews[m_currentDepthImageIndex];

    auto opaquePass = commandRecorder.beginRenderPass(m_opaquePassOptions);

    // Do the rest of your rendering commands to this pass...

And add this layer to the compositor; in our examples this is abstracted away for you:

// Create a projection layer to render the 3D scene
const XrProjectionLayerOptions projectionLayerOptions = {
    .device = &m_device,
    .queue = &m_queue,
    .session = &m_session,
    .colorSwapchainFormat = m_colorSwapchainFormat,
    .depthSwapchainFormat = m_depthSwapchainFormat,
    .samples = m_samples.get()
};
m_projectionLayer = createCompositorLayer<ProjectionLayer>(projectionLayerOptions);
m_projectionLayer->setReferenceSpace(m_referenceSpace);

You can view the complete example here. In this new release, we’re continuing to work on multiview support! KDXr supports multiview out of the box (see the example layer code) and you can check out the multiview example.

More in-depth examples are now available

The examples sitting in our main repository are no more than small tests, which don’t show the true benefits of using KDGpu in large graphical applications. So, in addition to our previous examples, we now have a dedicated KDGpu Examples repository!

Screenshot from our N-Body Compute example.

And more!

There are also small improvements such as being able to request custom extensions and ignore specific validation layer warnings. Check out the changelog on GitHub for a full list of what’s been changed.

Let us know what you think about the improvements we’ve made, and what could be useful for you in the future!

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post KDGpu 0.5.0 is here! appeared first on KDAB.

Categories: FLOSS Project Planets

April/May in KDE Itinerary

Planet KDE - Thu, 2024-05-30 02:15

Since the last summary of what happened around KDE Itinerary two months ago, we have shipped Transitous support, integrated a new import staging area, enabled creating entries from OSM elements, and much more.

New Features

Transitous support

The 24.05 releases shipped with Transitous support enabled by default for the first time. Transitous is a community-run free and open public transport routing service.

Since its start at FOSDEM 2024 just a few months ago, Transitous has grown to consume almost 800 GTFS feeds with base schedule information and 185 GTFS-RT feeds with realtime updates, covering 37 countries on 5 continents.

To support this, a lot of work is happening both on the operational side and in the team developing the MOTIS routing engine, to improve performance and scalability.

Unlike vendor-operated or otherwise proprietary services, Transitous allows us to expand public transport routing coverage ourselves, at least where publicly available GTFS feeds exist. See the Transitous contributor documentation on how to help with that; many major systems in Asia are still missing, for example.

New import staging area

When importing from the system calendar, Itinerary showed a list of detected elements and let you select which ones to actually import. This "staging area" for imported data has been generalized and is now available for all import scenarios.

Import staging area showing an entire trip.

This lets you review what the travel document extractor has found and gives you greater control over what to import. It is also an important step towards the longer-term plan of associating every element with a trip and providing more manual control over trip grouping.

Import from OSM URLs

Hotels and restaurants can be imported from OSM data by pasting or dropping a link to the corresponding OSM element into Itinerary. The corresponding edit page is then shown with all data present in OSM pre-filled, so you typically only have to enter dates and times.

With the next version the same will also work for a number of event venue types.

Manual ticket barcode entry

It’s now possible to manually add barcodes to reservations or tickets/passes that don’t have one yet, from within the corresponding edit page. For reservations it’s also possible to associate them with an existing pass or flat-rate ticket.

This can be useful when manually entering data that uses document or barcode formats that Itinerary doesn’t recognize automatically.

Barcode editing control in KDE Itinerary.

Infrastructure Work

CI/CD updates

We updated the build infrastructure for all of KDE's Android apps to Qt 6.7 and NDK r26. Due to an API and ABI break in Qt's JNI API (i.e. something very central to Android integration), this unfortunately needed a lot more effort and changes than usual.

NDK r26 brings a much newer STL, which unblocked some KDE Frameworks changes and allowed us to update to newer versions of the Quotient Matrix library.

Accessibility and UI testing

As mentioned in my report from KDE's accessibility, automation and sustainability sprint, the date and time input controls used by Itinerary can now be interacted with using assistive tools such as screen readers.

And that's not only helpful for users relying on such tools: our UI testing tools use the same interface to control the application under test. Thanks to this, Itinerary now has a first set of automated UI tests, which run as part of the automated builds.

Indoor routing

Indoor routing for the train station maps has moved closer to being ready for integration, by removing one major obstacle: its memory consumption.

The previous implementation, which processed an entire train station at once, could temporarily need 500MB of memory to create the navigation mesh. That's not a big deal on a laptop or desktop, but on a phone it is a bit much.

We now split larger areas into tiles and compute the navigation mesh for each of those separately. That has only minimal impact on computation time, but decreases peak memory consumption by a factor of 10 to 20.
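The general pattern looks something like the following, sketched in Python with a made-up workload (Itinerary's actual implementation is C++ and builds navigation meshes rather than sums; this only illustrates why tiling bounds peak memory):

import numpy as np

def process_whole(area: np.ndarray) -> float:
    # Stand-in for "build the mesh in one go": the temporary here is as
    # large as the whole input, so peak memory scales with the area size.
    temp = area.astype(np.float64) * 2.0
    return float(temp.sum())

def process_tiled(area: np.ndarray, tile_rows: int = 256) -> float:
    # Same computation, but only one tile's temporary is alive at a time,
    # so peak memory is bounded by the tile size instead of the area size.
    total = 0.0
    for start in range(0, area.shape[0], tile_rows):
        chunk = area[start:start + tile_rows].astype(np.float64) * 2.0
        total += float(chunk.sum())
    return total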

MapCSS eval expressions

Another seemingly small but very powerful change in the indoor map renderer is support for MapCSS eval() expressions. This allows styling properties to be not only fixed values or values of OSM tags, but also complex expressions depending on other style properties or OSM tag values.

Complex road labels styled with MapCSS eval() expressions (right) compared to the previous result.

Fixes & Improvements

Travel document extractor
  • New or improved travel document extractors for CFR, Eurostar, Indico, IRTC, Lufthansa, Motel One, SNCB, SNCF, Ticketportal, Trenitalia, VDV eTicket and VR.
  • Initial generic support for railway tickets in IATA BCBP and PkPass formats.
  • Fixed invalid departure times for flights from airports with unknown timezones.
  • Added support for Base64 encoded ERA SSB ticket barcodes.
  • Improved handling of binary barcode content in Apple Wallet passes.
  • Fixed start/end time checks for restaurant reservations.

All of this has been made possible thanks to your travel document donations!

Public transport data
  • New occupancy indicator that no longer solely relies on color.
  • Support per-coach occupancy information on trains (only available in some ÖBB trains so far).
Train coach occupancy information.
  • Support for out-of-service train coaches in the coach layout view.
  • Updated public transport coverage metadata from the Transport API repository, which should result in more appropriate results for some regions.
  • Fixed filtering of pointlessly short foot paths from journey query results, which previously sometimes had no effect.
  • Discard non-WGS84 coordinates in EFA responses. This fixes some bizarre and physically impossible routing instructions in Baden-Württemberg.
Indoor map
  • Render node-based indoor columns.
  • Unified styling for all corridor types.
  • Handle one more OSM tagging variant for toilets.
  • Improved detection of the current holiday region for interpreting opening hours.
  • Handle more OSM tagging variants when doing floor level expansion.
Itinerary app
  • Show train coach layout actions even without a seat reservation or ticket.
  • Fixed overly long headers of ferry reservations.
  • Fixed some mistranslations due to missing translation contexts (in some languages the translation of “departure” depends on the mode of transportation).
  • Fixed layout issues for waiting sections on a public transport journey.
  • Allow editing the ticket owner's name as well.
  • Support for program membership passes and tickets in the Apple Wallet pass format.
How you can help

Feedback and travel document samples are very much welcome, as are all other forms of contributions. Feel free to join us in the KDE Itinerary Matrix channel.

Categories: FLOSS Project Planets

Matt Layman: About, FAQ, and Home Page - Building SaaS with Python and Django #192

Planet Python - Wed, 2024-05-29 20:00
In this episode, we worked on some core pages to round out the JourneyInbox user interface. This led us to work on updating the UI layout, writing copy, and other fundamentals of making templated pages.
Categories: FLOSS Project Planets

Python⇒Speed: Let’s optimize! Running 15× faster with a situation-specific algorithm

Planet Python - Wed, 2024-05-29 20:00

Let’s speed up some software! Our motivation: we have an image, a photo of some text from a book. We want to turn it into a 1-bit image, with just black and white, extracting the text so we can easily read it.

We’ll use an example image from scikit-image, an excellent image processing library:

from skimage.data import page
import numpy as np

IMAGE = page()
assert IMAGE.dtype == np.uint8

Here’s what it looks like (it’s licensed under this license):

Median-based local thresholding

The task we’re trying to do—turning darker areas into black, and lighter areas into white—is called thresholding. Since the image is different in different regions, with some darker and some lighter, we’ll get the best results if we use local thresholding, where the threshold is calculated from the pixel’s neighborhood.

Simplifying somewhat, for each pixel in the image we will:

  1. Calculate the median of the surrounding neighborhood.
  2. Subtract a magic constant from the calculated median to calculate our local threshold.
  3. If the pixel’s value is bigger than the threshold, the result is white, otherwise it’s black.
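Those three steps can be sketched directly in a few lines of NumPy/SciPy. This is an illustration only, not the implementation we benchmark below; SciPy's median_filter handles image edges differently than the code that follows:

from scipy.ndimage import median_filter
import numpy as np

def local_threshold_sketch(img, neighborhood_size=11, offset=10):
    # Step 1: median of the surrounding neighborhood, for every pixel at once.
    # int16 avoids uint8 wrap-around when subtracting the offset below.
    median = median_filter(img.astype(np.int16), size=neighborhood_size)
    # Step 2: subtract a magic constant to get the local threshold.
    threshold = median - offset
    # Step 3: white if above the threshold, black otherwise.
    return np.where(img > threshold, 255, 0).astype(np.uint8)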

scikit-image includes an implementation of this algorithm. Here’s how we use it:

from skimage.filters import threshold_local

def skimage_median_local_threshold(img, neighborhood_size, offset):
    threshold = threshold_local(
        img, block_size=neighborhood_size, method="median", offset=offset
    )
    result = (img > threshold).astype(np.uint8)
    result *= 255
    return result

# The neighborhood size and offset value were determined "empirically", i.e.
# they're manually tuning the algorithm to work well with our specific
# example image.
SKIMAGE_RESULT = skimage_median_local_threshold(IMAGE, 11, 10)

And here’s what the results look like:

Let’s see if we can make this faster!

Step 1. Reimplement our own version

We’re going to be using the Numba compiler, which lets us compile Python code to machine code at runtime. Here’s an initial implementation of the algorithm; it’s not quite identical to the original, for example the way edge pixels are handled, but it’s close enough for our purposes:

from numba import jit

@jit
def median_local_threshold1(img, neighborhood_size, offset):
    # Neighborhood size must be an odd number:
    assert neighborhood_size % 2 == 1
    radius = (neighborhood_size - 1) // 2
    result = np.empty(img.shape, dtype=np.uint8)
    # For every pixel:
    for i in range(img.shape[0]):
        # Calculate the Y borders of the neighborhood:
        min_y = max(i - radius, 0)
        max_y = min(i + radius + 1, img.shape[0])
        for j in range(img.shape[1]):
            # Calculate the X borders of the neighborhood:
            min_x = max(j - radius, 0)
            max_x = min(j + radius + 1, img.shape[1])
            # Calculate the median:
            median = np.median(img[min_y:max_y, min_x:max_x])
            # Set the image to black or white, depending how it relates to
            # the threshold:
            if img[i, j] > median - offset:
                # White:
                result[i, j] = 255
            else:
                # Black:
                result[i, j] = 0
    return result

NUMBA_RESULT1 = median_local_threshold1(IMAGE, 11, 10)

Here's the resulting image; it looks similar enough for our purposes:

Now we can compare the performance of the two implementations:

Code                                           Elapsed milliseconds
skimage_median_local_threshold(IMAGE, 11, 10)  76
median_local_threshold1(IMAGE, 11, 10)         87

It’s slower. But that’s OK, we’re just getting started.

Step 2: A faster implementation of the median algorithm

Calculating a median is pretty expensive, and we’re doing it for every single pixel, so let’s see if we can speed it up.

The median implementation Numba provides is likely to be fairly generic, since it needs to work in a wide variety of circumstances. We can hypothesize that it's not optimized for our particular case. And even if it is, having our own implementation will allow for a second round of optimization, as we'll see in the next step.

We’re going to implement a histogram-based median, based on the fact we’re using 8-bit images that only have a limited range of potential values. The median is the value where 50% of the pixels’ values are smaller, and 50% are bigger.

Here’s the basic algorithm for a histogram-based median:

  • Each pixel’s value will go into a different bucket in the histogram; since we know our image is 8-bit, we only need 256 buckets.
  • Then, we add up the size of each bucket in the histogram, from smallest to largest, until we hit 50% of the pixels we inspected.
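Here is that idea in isolation, before we fold it into the full filter below (a small standalone sketch; the names are chosen for clarity and don't appear in the final code):

import numpy as np

def histogram_median(values):
    # Bucket counts for each possible 8-bit value:
    histogram = np.zeros(256, dtype=np.uint32)
    for v in values:
        histogram[v] += 1
    # Add up buckets from smallest to largest until we pass 50% of the values:
    remaining = len(values) // 2
    for bucket in range(256):
        remaining -= histogram[bucket]
        if remaining < 0:
            return bucket
    return 255

The full implementation applies the same loop to every pixel's neighborhood: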
@jit
def median_local_threshold2(img, neighborhood_size, offset):
    assert neighborhood_size % 2 == 1
    radius = (neighborhood_size - 1) // 2
    result = np.empty(img.shape, dtype=np.uint8)
    # 😎 A histogram with a bucket for each of the 8-bit values possible in
    # the image. We allocate this once and reuse it.
    histogram = np.empty((256,), dtype=np.uint32)
    for i in range(img.shape[0]):
        min_y = max(i - radius, 0)
        max_y = min(i + radius + 1, img.shape[0])
        for j in range(img.shape[1]):
            min_x = max(j - radius, 0)
            max_x = min(j + radius + 1, img.shape[1])
            # Reset the histogram to zero:
            histogram[:] = 0
            # Populate the histogram, counting how many of each value are in
            # the neighborhood we're inspecting:
            neighborhood = img[min_y:max_y, min_x:max_x].ravel()
            for k in range(len(neighborhood)):
                histogram[neighborhood[k]] += 1
            # Use the histogram to find the median; keep adding buckets until
            # we've hit 50% of the pixels. The corresponding bucket is the
            # median.
            half_neighborhood_size = len(neighborhood) // 2
            for l in range(256):
                half_neighborhood_size -= histogram[l]
                if half_neighborhood_size < 0:
                    break
            median = l
            if img[i, j] > median - offset:
                result[i, j] = 255
            else:
                result[i, j] = 0
    return result

NUMBA_RESULT2 = median_local_threshold2(IMAGE, 11, 10)

Here’s the resulting image:

And here’s the performance of our new implementation:

Code                                    Elapsed milliseconds
median_local_threshold1(IMAGE, 11, 10)  86
median_local_threshold2(IMAGE, 11, 10)  18

That’s better!

Step 3: Stop recalculating the histogram from scratch

Our algorithm uses a rolling neighborhood or window over the image, calculating the median for a window around each pixel. And the neighborhood for one pixel has a significant overlap with the neighborhood of the next pixel. For example, let's say we're looking at a neighborhood size of 3. We might calculate the median of this area:

......
.\\\..
.\\\..
.\\\..
......
......

And then when we process the next pixel we'll calculate the median of this area:

......
..///.
..///.
..///.
......
......

If we superimpose them, we can see there’s an overlap, the X:

......
.\XX/.
.\XX/.
.\XX/.
......
......

Given the histogram for the first pixel, if we remove the values marked with \ and add the ones marked with /, we've calculated the exact histogram for the second pixel. So for a 3×3 neighborhood, instead of processing 3 columns we process 2, a minor improvement. For an 11×11 neighborhood, we will go from processing 11 columns to 2 columns, a much more significant improvement.

Here’s what the code looks like:

@jit
def median_local_threshold3(img, neighborhood_size, offset):
    assert neighborhood_size % 2 == 1
    radius = (neighborhood_size - 1) // 2
    result = np.empty(img.shape, dtype=np.uint8)
    histogram = np.empty((256,), dtype=np.uint32)
    for i in range(img.shape[0]):
        min_y = max(i - radius, 0)
        max_y = min(i + radius + 1, img.shape[0])
        # Populate histogram as if we started one pixel to the left:
        histogram[:] = 0
        initial_neighborhood = img[min_y:max_y, 0:radius].ravel()
        for k in range(len(initial_neighborhood)):
            histogram[initial_neighborhood[k]] += 1
        for j in range(img.shape[1]):
            min_x = max(j - radius, 0)
            max_x = min(j + radius + 1, img.shape[1])
            # 😎 Instead of recalculating histogram from scratch, re-use the
            # previous pixel's histogram.
            # Subtract left-most column we don't want anymore:
            if min_x > 0:
                for y in range(min_y, max_y):
                    histogram[img[y, min_x - 1]] -= 1
            # Add new right-most column:
            if max_x < img.shape[1]:
                for y in range(min_y, max_y):
                    histogram[img[y, max_x - 1]] += 1
            # Find the median from the updated histogram:
            half_neighborhood_size = ((max_y - min_y) * (max_x - min_x)) // 2
            for l in range(256):
                half_neighborhood_size -= histogram[l]
                if half_neighborhood_size < 0:
                    break
            median = l
            if img[i, j] > median - offset:
                result[i, j] = 255
            else:
                result[i, j] = 0
    return result

NUMBA_RESULT3 = median_local_threshold3(IMAGE, 11, 10)

Here’s the resulting image:

And here’s the performance of our latest code:

Code                                    Elapsed microseconds
median_local_threshold2(IMAGE, 11, 10)  17,066
median_local_threshold3(IMAGE, 11, 10)  6,386

Step #4: Adaptive heuristics

Notice that a median’s definition is symmetrical:

  1. The first value that is smaller than the highest 50% of values.
  2. Or, the first value that is larger than the lowest 50% of values; we used this definition in our code above, adding up buckets from the smallest to the largest.

Depending on the distribution of values, one approach to adding up buckets to find the median may be faster than the other. For example, given a 0-255 range, if the median is going to be 10 we want to start from the smallest bucket to minimize additions. But if the median is going to be 200, we want to start from the largest bucket.

So which side should we start from? One reasonable heuristic is to look at the previous median we calculated, which will usually be quite similar to the new median. If the previous median was small, start from the smallest buckets; if it was large, start from the largest buckets.

@jit
def median_local_threshold4(img, neighborhood_size, offset):
    assert neighborhood_size % 2 == 1
    radius = (neighborhood_size - 1) // 2
    result = np.empty(img.shape, dtype=np.uint8)
    histogram = np.empty((256,), dtype=np.uint32)
    median = 0
    for i in range(img.shape[0]):
        min_y = max(i - radius, 0)
        max_y = min(i + radius + 1, img.shape[0])
        histogram[:] = 0
        initial_neighborhood = img[min_y:max_y, 0:radius].ravel()
        for k in range(len(initial_neighborhood)):
            histogram[initial_neighborhood[k]] += 1
        for j in range(img.shape[1]):
            min_x = max(j - radius, 0)
            max_x = min(j + radius + 1, img.shape[1])
            if min_x > 0:
                for y in range(min_y, max_y):
                    histogram[img[y, min_x - 1]] -= 1
            if max_x < img.shape[1]:
                for y in range(min_y, max_y):
                    histogram[img[y, max_x - 1]] += 1
            half_neighborhood_size = ((max_y - min_y) * (max_x - min_x)) // 2
            # 😎 Find the median from the updated histogram, choosing
            # the starting side based on the previous median; we can go from
            # the leftmost bucket to the rightmost bucket, or in reverse:
            the_range = range(256) if median < 127 else range(255, -1, -1)
            for l in the_range:
                half_neighborhood_size -= histogram[l]
                if half_neighborhood_size < 0:
                    median = l
                    break
            if img[i, j] > median - offset:
                result[i, j] = 255
            else:
                result[i, j] = 0
    return result

NUMBA_RESULT4 = median_local_threshold4(IMAGE, 11, 10)

The end result is 25% faster. Since the heuristic is tied to the image contents, the performance impact will depend on the image.

Code                                    Elapsed microseconds
median_local_threshold3(IMAGE, 11, 10)  6,381
median_local_threshold4(IMAGE, 11, 10)  4,920

The big picture

Here’s a performance comparison of all the versions of the code:

Code                                           Elapsed microseconds
skimage_median_local_threshold(IMAGE, 11, 10)  76,213
median_local_threshold1(IMAGE, 11, 10)         86,494
median_local_threshold2(IMAGE, 11, 10)         17,145
median_local_threshold3(IMAGE, 11, 10)         6,398
median_local_threshold4(IMAGE, 11, 10)         4,925

Let’s go over the steps we went through:

  1. Switch to a compiled language: this gives us more control.
  2. Reimplement the algorithm taking advantage of constrained requirements: our median only needed to handle uint8, so a histogram was a reasonable solution.
  3. Reuse previous calculations to prevent repetition: our histogram for the neighborhood of a pixel is quite similar to that of the previous pixel. This means we can reuse some of the calculations.
  4. Adaptively tweak the algorithm at runtime: as we run on an actual image, we use what we've learned up to that point to hopefully run faster later on. The decision which side of the histogram to start from is arbitrary in general, but in this specific algorithm the overlapping pixel neighborhoods mean we can make a reasonable guess.

This process demonstrates part of why generic libraries may be slower than custom code you write for your particular use case and your particular data.

Next steps

What else can you do to speed up this algorithm? Here are some ideas:

  • There may be a faster alternative to histogram-based medians.
  • We’re not fully taking advantage of histogram overlap; there’s also overlap between rows.
  • The cumulative sum in the histogram doesn't benefit from instruction-level parallelism or SIMD. It's possible that using one of those would be faster even if it uses more instructions.
  • So far the code has only used a single CPU. Given each row is calculated independently, parallelism would probably work well if done in horizontal stripes, probably taller than one pixel so as to maximize utilization of memory caches.
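To make the second idea in the list above concrete, here is a hedged sketch of exploiting row overlap: stepping the window one row down at a fixed column range. The names are illustrative only, and this ignores the edge clamping the real code would need:

def shift_histogram_down(histogram, img, new_center_row, min_x, max_x, radius):
    # Moving the window from row (new_center_row - 1) to new_center_row:
    leaving = new_center_row - radius - 1  # row that just left the window
    entering = new_center_row + radius     # row that just entered the window
    if leaving >= 0:
        for x in range(min_x, max_x):
            histogram[img[leaving, x]] -= 1
    if entering < img.shape[0]:
        for x in range(min_x, max_x):
            histogram[img[entering, x]] += 1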

Want to learn more about optimizing compiled code for Python data processing? This article is an extract from a book I’m working on; test readers are currently going through initial drafts. Aimed at Python developers, data scientists, and scientists, the book covers topics like instruction-level parallelism, memory caches, and other performance optimization techniques. Learn more and sign up to get updates here.

Read more...
Categories: FLOSS Project Planets

Matthew Palmer: GitHub's Missing Tab

Planet Debian - Wed, 2024-05-29 20:00

Visit any GitHub project page, and the first thing you see is something that looks like this:

“Code”, that’s fairly innocuous, and it’s what we came here for. The “Issues” and “Pull Requests” tabs, with their count of open issues, might give us some sense of “how active” the project is, or perhaps “how maintained”. Useful information for the casual visitor, undoubtedly.

However, there’s another user community that visits this page on the regular, and these same tabs mean something very different to them.

I’m talking about the maintainers (or, more commonly, maintainer, singular). When they see those tabs, all they see is work. The “Code” tab is irrelevant to them – they already have the code, and know it possibly better than they know their significant other(s) (if any). “Issues” and “Pull Requests” are just things that have to be done.

I know for myself, at least, that it is demoralising to look at a repository page and see nothing but work. I’d be surprised if it didn’t contribute in some small way to maintainers just noping the fudge out.

A Modest Proposal

So, here’s my thought. What if instead of the repo tabs looking like the above, they instead looked like this:

My conception of this is that it would, essentially, be a kind of "yearbook" that people who used and liked the software could scribble their thoughts on. With some fairly straightforward affordances elsewhere to encourage its use, it could be a powerful way to show maintainers that they are, in fact, valued and appreciated.

There are a number of software packages I've used recently that I'd really like to say a general "thanks, this is awesome!" to. However, I'm not about to make the Issues tab look even scarier by creating an "issue" just to say thanks. Digging up an email address is often surprisingly difficult, and it wouldn't be a public show of my gratitude, which I believe is a valuable part of the interaction.

You Can’t Pay Your Rent With Kudos

Absolutely you cannot. A means of expressing appreciation in no way replaces the pressing need to figure out a way to allow open source developers to pay their rent. Conversely, however, the need to pay open source developers doesn’t remove the need to also show those people that their work is appreciated and valued by many people around the world.

Anyway, who knows a senior exec at GitHub? I’ve got an idea I’d like to run past them…

Categories: FLOSS Project Planets

Anarcat: Playing with fonts again

Planet Python - Wed, 2024-05-29 17:38

I am getting increasingly frustrated by Fira Mono's lack of italic support so I am looking at alternative fonts again.

Commit Mono

This time I seem to be settling on either Commit Mono or Space Mono. For now I'm using Commit Mono because it's a little more compressed than Fira and does have an italic version. I don't like how Space Mono's parentheses (()) are "squarish": they feel visually ambiguous with the square brackets ([]), a big no-no for my primary use case (code).

So here I am using a new font, again. It required changing a bunch of configuration files in my home directory (which is in a private repository, sorry) and Emacs configuration (thankfully that's public!).

One gotcha is I realized I didn't actually have a global font configuration in Emacs, as some Faces define their own font family, which overrides the frame defaults.

This is what it looks like, before:

Fira Mono

After:

Commit Mono

(Notice how those screenshots are not sharp? I'm surprised too. The originals look sharp on my display, so I suspect this is something to do with the Wayland transition. I've tried with both grim and flameshot, for what it's worth.)

They are pretty similar! Commit Mono feels a bit more vertically compressed, maybe too much so, actually -- the line height feels too low. But it's heavily customizable, so that's relatively easy to fix if it's really a problem. Its weight is also a little heavier and wider than Fira, which I find a little distracting right now, but maybe I'll get used to it.

All characters seem properly distinguishable, although if I really wanted to nitpick I'd say the © and ® are too different, with the latter (REGISTERED SIGN) being way too small, basically unreadable here. Since I see this sign approximately never, it probably doesn't matter at all.

I like how the ampersand (&) is more traditional, although I'll miss the exotic one Fira produced... I like how the back quotes (`, GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As I mentioned before, I like how the bar on the "f" aligns with the tops of the other letters, something that really annoys me in Fira Mono now that I've noticed it (it's not aligned!).

A UTF-8 test file

Here's the test sheet I've made up to test various characters. I could have sworn I had a good one like this lying around somewhere but couldn't find it so here it is, I guess.

US keyboard coverage:
abcdefghijklmnopqrstuvwxyz`1234567890-=[]\;',./
ABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+{}|:"<>?

latin1 coverage:
¡¢£¤¥¦§¨©ª«¬­®¯°±²³´µ¶·¸¹º»¼½¾¿

EURO SIGN, TRADE MARK SIGN:
€™

ambiguity test:
e¢coC0ODQ iI71lL!|¦ b6G&0B83 [](){}/\.…·• zs$S52Z% ´`'"‘’“”«»

all characters in a sentence, uppercase:
the quick fox jumps over the lazy dog
THE QUICK FOX JUMPS OVER THE LAZY DOG

same, in french:
voix ambiguë d'un cœur qui, au zéphyr, préfère les jattes de kiwis.
VOIX AMBIGUË D'UN CŒUR QUI, AU ZÉPHYR, PRÉFÈRE LES JATTES DE KIWIS.

Ligatures test:
-<< -< -<- <-- <--- <<- <- -> ->> --> ---> ->- >- >>-
=<< =< =<= <== <=== <<= <= => =>> ==> ===> =>= >= >>=
<-> <--> <---> <----> <=> <==> <===> <====> :: :::
__ <~~ </ </> /> ~~> == != /= ~= <> === !== !=== =/= =!=
<: := *= *+ <* <*> *> <| <|> |> <. <.> .> +* =* =: :>
(* *) /* */ [| |] {| |} ++ +++ \/ /\ |- -| <!-- <!---

Box drawing alignment tests:                                          █
╔══╦══╗  ┌──┬──┐  ╭──┬──╮  ╭──┬──╮  ┏━━┳━━┓  ┎┒┏┑   ╷  ╻ ┏┯┓ ┌┰┐      ▉ ╱╲╱╲╳╳╳
║┌─╨─┐║  │╔═╧═╗│  │╒═╪═╕│  │╓─╁─╖│  ┃┌─╂─┐┃  ┗╃╄┙  ╶┼╴╺╋╸┠┼┨ ┝╋┥      ▊ ╲╱╲╱╳╳╳
║│╲ ╱│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╿ │┃  ┍╅╆┓   ╵  ╹ ┗┷┛ └┸┘      ▋ ╱╲╱╲╳╳╳
╠╡ ╳ ╞╣  ├╢   ╟┤  ├┼─┼─┼┤  ├╫─╂─╫┤  ┣┿╾┼╼┿┫  ┕┛┖┚     ┌┄┄┐ ╎ ┏┅┅┓ ┋   ▌ ╲╱╲╱╳╳╳
║│╱ ╲│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╽ │┃  ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋   ▍
║└─╥─┘║  │╚═╤═╝│  │╘═╪═╛│  │╙─╀─╜│  ┃└─╂─┘┃  ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋   ▎
╚══╩══╝  └──┴──┘  ╰──┴──╯  ╰──┴──╯  ┗━━┻━━┛  └╌╌┘ ╎    ┗╍╍┛ ┋         ▏▁▂▃▄▅▆▇█

Dashes alignment test:
HYPHEN-MINUS, MINUS SIGN, EN, EM DASH, HORIZONTAL BAR, LOW LINE
--------------------------------------------------
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
––––––––––––––––––––––––––––––––––––––––––––––––––
——————————————————————————————————————————————————
――――――――――――――――――――――――――――――――――――――――――――――――――
__________________________________________________

So there you have it, got completely nerd swiped by typography again. Now I can go back to writing a too-long proposal again.

Sources and inspiration for the above:

  • the unicode(1) command, to lookup individual characters to disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a math symbol)

  • searchable list of characters and their names - roughly equivalent to the unicode(1) command, but in one page; amazingly, the /usr/share/unicode database doesn't have any one file like this

  • bits/UTF-8-Unicode-Test-Documents - full list of UTF-8 characters

  • UTF-8 encoded plain text file - nice examples of edge cases, curly quotes example and box drawing alignment test which, incidentally, showed me I needed specific faces customisation in Emacs to get the Markdown code areas to display properly, also the idea of comparing various dashes

  • sample sentences in many languages - unused, "Sentences that contain all letters commonly used in a language"

  • UTF-8 sampler - unused, similar

Other fonts

In my previous blog post about fonts, I had a list of alternative fonts, but it seems people are not digging through this, so I figured I would redo the list here to preempt "but have you tried Jetbrains mono" kind of comments.

My requirements are:

  • no ligatures: yes, in the previous post, I wanted ligatures but I have changed my mind. after testing this, I find them distracting, confusing, and they often break the monospace nature of the display
  • monospace: this is to display code
  • italics: often used when writing Markdown, where I do make use of italics... Emacs falls back to underlining text when lacking italics which is hard to read
  • free-ish, ultimately should be packaged in Debian

Here is the list of alternatives I have considered in the past and why I'm not using them:

  • agave: recommended by tarzeau, not sure I like the lowercase a, a bit too exotic, packaged as fonts-agave

  • Cascadia code: optional ligatures, multilingual, not liking the alignment, ambiguous parenthesis (look too much like square brackets), new default for Windows Terminal and Visual Studio, packaged as fonts-cascadia-code

  • Fira Code: ligatures, was using Fira Mono from which it is derived, lacking italics except for forks; interestingly, Fira Code passes the alignment test but Fira Mono fails to show the X signs properly! packaged as fonts-firacode

  • Hack: no ligatures, very similar to Fira, italics, good alternative, fails the X test in box alignment, packaged as fonts-hack

  • Hermit: no ligatures, smaller, alignment issues in box drawing and dashes, packaged as fonts-hermit somehow part of cool-retro-term

  • IBM Plex: irritating website, replaces Helvetica as the IBM corporate font, no ligatures by default, italics, proportional alternatives, serifs and sans, multiple languages, partial failure in box alignment test (X signs), fancy curly braces contrast perhaps too much with the rest of the font, packaged in Debian as fonts-ibm-plex

  • Intel One Mono: nice legibility, no ligatures, alignment issues in box drawing, not packaged in Debian

  • Iosevka: optional ligatures, italics, multilingual, good legibility, has a proportional option, serifs and sans, line height issue in box drawing, fails dash test, not in Debian

  • Jetbrains Mono: (mandatory?) ligatures, good coverage, originally rumored to be not DFSG-free (Debian Free Software Guidelines) but ultimately packaged in Debian as fonts-jetbrains-mono

  • Monoid: optional ligatures, feels much "thinner" than Jetbrains, not liking alignment or spacing on that one, ambiguous 2Z, problems rendering box drawing, packaged as fonts-monoid

  • Mononoki: no ligatures, looks good, good alternative, suggested by the Debian fonts team as part of fonts-recommended, problems rendering box drawing, em dash bigger than en dash, packaged as fonts-mononoki

  • Source Code Pro: italics, looks good, but dash metrics look whacky, not in Debian

  • spleen: bitmap font, old school, spacing issue in box drawing test, packaged as fonts-spleen

  • sudo: personal project, no ligatures, zero originally not dotted, relied on metrics for legibility, spacing issue in box drawing, not in Debian

So, if I get tired of Commit Mono, I might probably try, in order:

  1. Hack
  2. Jetbrains Mono
  3. IBM Plex Mono

Iosevka, Mononoki and Intel One Mono are also good options, but have alignment problems. Iosevka is particularly disappointing as the EM DASH metrics are just completely wrong (much too wide).

This was tested using the Programming fonts site, which has all the above fonts, something that cannot be said of Font Squirrel or Google Fonts, amazingly. Other such tools:

Categories: FLOSS Project Planets


Mike Driscoll: Episode 42 – Harlequin – The SQL IDE for Your Terminal

Planet Python - Wed, 2024-05-29 17:06

This episode focuses on the Harlequin application, a Python SQL IDE for your terminal written using the amazing Textual package.

I was honored to have Ted Conbeer, the creator of Harlequin, on the show to discuss his creation and the other things he does with Python.

Specifically, we focused on the following topics:

  • Favorite Python packages
  • Origins of Harlequin
  • Why program for the terminal versus a GUI
  • Lessons learned in creating the tool
  • Asyncio
  • and more!
Links

The post Episode 42 – Harlequin – The SQL IDE for Your Terminal appeared first on Mouse Vs Python.

Categories: FLOSS Project Planets

Django Weblog: Django Enhancement Proposal 14: Background Workers

Planet Python - Wed, 2024-05-29 15:04

As of today, DEP-14 has been approved 🛫

The DEP was written and stewarded by Jake Howard. A very enthusiastic community has been active with feedback and encouragement, while the Django Steering Council gave the final inputs before its formal acceptance. The implementation of DEP-14 is expected to be a major leap forward for the “batteries included” philosophy of Django.

Whilst Django is a web framework, there's more to web applications than just the request-response lifecycle. Sending emails, communicating with external services or running complex actions should all be done outside the request-response cycle.

Django doesn't have a first-party solution for long-running tasks; however, the ecosystem is filled with incredibly popular frameworks, all of which interact with Django in slightly different ways. Other frameworks such as Laravel have background workers built in, allowing them to push tasks into the background to be processed at a later date, without requiring the end user to wait for them to occur.

Library maintainers must implement support for any possible task backend separately, should they wish to offload functionality to the background. This includes smaller libraries, but also larger meta-frameworks with their own package ecosystem such as Wagtail.

This proposal sets out to provide an interface and base implementation for long-running background tasks in Django.

Future work

The DEP will now move on to the Implementation phase before being merged into Django itself.

If you would like to help or try it out, go have a look at django-tasks, a separate reference implementation by Jake Howard, the author of the DEP.
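For a taste of the interface, here is a rough sketch of the DEP-14 style task API as implemented in django-tasks (written from the DEP, not tested against a release; check the django-tasks README for the exact, current API):

from django_tasks import task

@task()
def send_welcome_email(user_id: int) -> None:
    # Runs in a background worker, outside the request-response cycle.
    ...

# From a view or anywhere else; returns a result handle to inspect later:
result = send_welcome_email.enqueue(user_id=42)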

Jake will also be speaking about the DEP in his talk at DjangoCon Europe 2024 in Vigo next week.

Categories: FLOSS Project Planets
