Planet KDE


Prog(ressive) C++ at Meeting C++

Sat, 2023-12-23 07:05
Photos by Victor Ciura

The recording of my keynote at this year’s Meeting C++ in Berlin has been published on the Meeting C++ channel on YouTube:

Prog C++ - Ivan Čukić - Closing Keynote Meeting C++ 2023

The talk is about prog (or progressive) C++ and idioms that make C++ code provably safe.

The slides are available in the Training and talks section.

Huge thanks to Jens for tirelessly organizing Meeting C++ for this many years (this was my tenth year presenting at Meeting C++, and I skipped a few).

You can support my work on Patreon, or you can get my book Functional Programming in C++ at Manning if you're into that sort of thing.
Categories: FLOSS Project Planets

KDE @ 37C3

Sat, 2023-12-23 05:00

Next week KDE will be present at the 37th Chaos Communication Congress (37C3) in Hamburg, Germany.

36C3

It’s been 4 years since 36C3, back when COVID still seemed like an isolated problem on the other side of the world. A lot has come out of 36C3 over the following years.

The Umweltbundesamt presented their work on eco-certifying software there, which ended up kick-starting KDE Eco and resulted in Okular being the first ever software application awarded the “Blauer Engel” eco label for sustainable software.

I met people there who got me involved with the German Open Transport Meetup, which resulted in collaborations like the Transport API Repository and got us access to real-time elevator data for Itinerary.

36C3 was also the first time you could see the integration of our travel document extractor in Nextcloud Mail publicly in action.

37C3

Following 36C3 I had suggested a bigger presence of KDE next time. And that we’ll have. There are at least 6 KDE people attending, and there will be a KDE assembly for the first time as well. Looking forward to meeting you there!

There will also be a talk by Joseph on Software Licensing For A Circular Economy covering the KDE Eco work.

A special thanks goes to the nice people at CCC-P and WMDE who helped us get tickets!

KDE Itinerary

Just in time we also managed to merge the Qt 6 port of Itinerary, after dealing with a few last-minute surprises that only occurred in specific configurations and on phones with limited debugging options:

  • Looking up a QTimeZone by IANA id on Android is significantly slower than on all other platforms, presumably because of the lack of an internal cache. Since Itinerary does that a lot when loading your data, we suddenly faced startup times of 20-30 seconds. This has been worked around with our own caches during loading for now, but it still has to be investigated and fixed in Qt properly.
  • The Breeze Qt Quick Controls style triggered a memory alignment violation on 32bit ARM, due to a copied internal Qt class which meanwhile changed its memory layout.
  • The Kirigami Addons date picker popup caused an application freeze on 32bit ARM. This has been addressed by using the native Android date picker instead, making this also consistent with the time picker.
37C3 Apple Wallet pass entry ticket in KDE Itinerary.

And with all that done we could also fix the rendering of the Apple Wallet pass for 37C3. That contains a dark background image but does not specify text colors, which we so far rendered practically unreadable by using a dark text color. In such a case we now consider the dominant color of the background to pick a better text color.

Categories: FLOSS Project Planets

This week in KDE: Holiday bug fixes

Sat, 2023-12-23 00:55

Like last week, the focus remained on getting the mega-release ready for, well, a mega release! Along the way folks have been starting their well-earned vacations, so the pace of work understandably decreased a bit. Accordingly, this will be the last regular weekly post of the year, with at least next week’s skipped, and possibly the next two. Happy holidays, everyone! Rest and recharge so we can hit the ground running in 2024.

KDE 6 Mega-Release

(Includes all software to be released on the February 28th mega-release: Plasma 6, Frameworks 6, and apps from Gear 24.02)

General info

Open issues: 216

UI improvements

The Breeze icon theme’s smartphone icons have been overhauled and modernized to reflect what phones actually look like today (Áron Kovács, link):

System Settings’ Font Management page has gotten a visual modernization to be more in line with the new frameless style in Plasma 6 (Carl Schwan, link):

Windows that don’t show up in the Task Manager or the Alt+Tab Task Switcher no longer appear semi-invisibly in the Overview effect (Akseli Lahtinen, link)

Breeze-themed non-editable frameless tabs (e.g. the tabs of a tabbed tool view or settings page) now expand to fill the available space by default, as there’s really no reason not to (Carl Schwan, link)

Improved the text contrast for certain accent colors (Akseli Lahtinen, link)

After pasting a file into a Dolphin window, if the file would end up at a location that’s currently out of view, the view scrolls to it so you can see it (Méven Car, link)

Okular’s “Show Signatures Panel” button now also opens the sidebar containing the signatures panel, if it happened to be closed at the time (Albert Astals Cid, link)

Elisa now supports cover images in the Webp format (Jack Hill, link)

Bug fixes

Important note: I don’t mention fixes for bugs that were never released to users; it’s just too much for me (it would probably be too much for you to read as well), and most people never encountered them in the first place. Because we’re in the middle of a big Plasma dev cycle, there are a lot of these bugs! So big thanks to everyone who’s made it a priority to fix them!

The screen locker has a fallback theme that appears when your active lock screen theme is broken. However, when the fallback theme itself is broken for some reason, the screen locker process now fails with the dreaded “your lock screen is broken” message rather than failing to lock the screen at all, which would be worse (Joshua Goins, link)

File dialogs from a variety of Qt-yet-non-KDE apps will now have their name filters set correctly (Nicolas Fella, link)

When using a fractional scale factor, the Breeze window decoration theme’s window outlines no longer exhibit minor visual glitches (Vlad Zahorodnii, link)

Fixed an issue that could result in cursors leaving trails behind them when using a fractional scale factor and certain graphics cards that don’t support hardware cursors (Vlad Zahorodnii, link)

Widgets that have been assigned keyboard shortcuts should now be more reliable about remembering them. This probably alleviates or fixes a lot of the “Can’t activate Kickoff with the meta key” bugs! (Akseli Lahtinen, link)

Memory usage for NVIDIA GPUs is now represented with the correct unit in various System Monitor widgets and the app of the same name (Arjen Hiemstra, link)

When using a Bluetooth headset with integrated volume buttons, pushing them now always shows the volume change OSD (Bharadwaj Raju, link)

System Settings’ Task Switcher page no longer confusingly uses the word “backtab”, and the backwards-looking task switching invoked using Alt+Shift+Tab now works continuously if you hold it down (Yifan Zhu, link 1 and link 2)

The scrollbars of scrollable menus in QtQuick-based apps no longer inappropriately overlap the menu items (Tomislav Pap, link)

Other bug information of note:

Performance & Technical

Okular has now been ported to Qt 6 (Nicolas Fella, Sune Vuorela, and Carl Schwan. link)

The Wacom Tablet applet has now been ported to Qt 6 (Nicolas Fella, link)

Fixed one source of hangs in Dolphin when browsing a slow Samba share; this time having to do with bottlenecks generating thumbnails (Harald Sitter, link)

Reduced the memory usage of screen recording using KPipeWire. This won’t entirely fix the issue of screen recording taking up too much memory, but it makes a big difference already and should prevent outright resource exhaustion (Arjen Hiemstra, link 1 and link 2)

The DrKonqi crash reporter is now capable of recording and reporting crashes of the Powerdevil power management subsystem (Harald Sitter, link)

Automation & Systematization

Added an autotest to make sure the Emoji Selector window works (Fushan Wen, link)

…And Everything Else

This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.

How You Can Help

We’re hosting our Plasma 6 fundraiser right now and need your help! Thanks to you we’re now at 98% of our goal of 500 new KDE e.V. members! That’s right, 98%!!! I bet we can get over the 500 mark before Christmas, and a little birdie might have told me that if we do, there could be stretch goals. So if you like the work we’re doing, spreading the wealth via this fundraiser is a great way to share the love.

If you’re a developer, work on Qt6/KF6/Plasma 6 issues! Which issues? These issues. Plasma 6 is very usable for daily driving now, but still in need of bug-fixing and polishing to get it into a releasable state by February.

Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!

Categories: FLOSS Project Planets

Introduction to Delivery Performance Analytics

Fri, 2023-12-22 11:43
Delivery Performance Analytics is a consulting service dedicated to continuously enhancing how organizations sustainably deliver better software-defined products faster, while increasing the workforce’s well-being, by combining a data analytics approach with a proficient multidisciplinary consulting team.
Categories: FLOSS Project Planets

Web Review, Week 2023-51

Fri, 2023-12-22 09:05

Let’s go for my web review for the week 2023-51.

Do we need to rethink what free software is?

Tags: tech, foss, licensing, ethics

A bit of an older article I’m bumping into again. It lays out fairly well the current limits and issues with Free Software as it is defined today. I’m unconvinced it can be solved via licenses, but the debate needs to happen… I feel it is too often ignored.

https://mjg59.dreamwidth.org/52907.html


Adam Mosseri spells out Threads’ plans for the fediverse - The Verge

Tags: tech, facebook, fediverse, social-media

Looks like Meta is moving forward with more ActivityPub compatibility for Threads. This raises real questions about what they genuinely want to implement and what they’ll abandon along the way.

https://www.theverge.com/2023/12/15/24003435/adam-mosseri-threads-fediverse-plans


The Fediverse, Meta and the Tolerance Paradox

Tags: tech, facebook, fediverse, social-media

As Threads connecting to the Fediverse might become a reality, this article becomes all the more important. Whether this connection is even desirable is an important question.

https://www.viennawriter.net/blog/the-fediverse-meta-and-the-tolerance-paradox-en/


How bad are the thousands of new stochastically-generated websites?

Tags: tech, web, search, ai, gpt, criticism, knowledge

When SEO and generated content meet… this isn’t pretty. The amount of good content on the web has already shrunk over the past decade; it looks like we’re happily crossing another threshold in mediocrity.

https://infosec.exchange/@bhawthorne/111601578642616056


Heather Ford: Is the Web Eating Itself? LLMs versus verifiability - Ethan Zuckerman

Tags: tech, ai, gpt, knowledge, wikipedia

The actual dangers of generative AI. Once the web is flooded with generated content, what will happen to knowledge representation and verifiability?

https://ethanzuckerman.com/2023/10/10/heather-ford-is-the-web-eating-itself-llms-versus-verifiability/


Facebook Is Being Overrun With Stolen, AI-Generated Images That People Think Are Real

Tags: tech, ai, machine-learning, gpt, social-media, criticism

Here we are… We’re really close to crossing into this territory where any fiction can disguise itself as reality. The problem is that we’ll literally be drowning in such content. The social impacts shouldn’t be underestimated.

https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/


Cory Doctorow: What Kind of Bubble is AI?

Tags: tech, economics, business, ai, gpt

That’s a very good question. What will be left once all the hype is gone? Not all bubbles leave something behind… we can hope this one will.

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/


Google OAuth is broken (sort of) - Truffle Security

Tags: tech, google, oauth, security

Interesting finding. This shows a potential issue in how identities are verified by providers.

https://trufflesecurity.com/blog/google-oauth-is-broken-sort-of/


SMTP Smuggling - Spoofing E-Mails Worldwide - SEC Consult

Tags: tech, security, email

New technique for SMTP smuggling: vulnerable servers allow spoofed e-mails that still pass DMARC checks. Check your providers and server configuration.

https://sec-consult.com/blog/detail/smtp-smuggling-spoofing-e-mails-worldwide/


Terrapin Attack

Tags: tech, ssh, security

Interesting new attack on the SSH protocol. This is hard to achieve outside of the LAN though.

https://terrapin-attack.com/


The World Before Git - by Sarup Banskota

Tags: tech, version-control, git, history

A look back at the history of version control systems. Does anyone still remember and use SCCS? Well, I did use it…

https://osshistory.org/p/the-world-before-git


PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU

Tags: tech, ai, machine-learning, gpt

Interesting inference engine. The design is clever, with a hybrid CPU-GPU approach to limit the memory demand on the GPU and the amount of data transfers. The results are very interesting, and the apparently very limited impact on accuracy is especially surprising.

https://ipads.se.sjtu.edu.cn/_media/publications/powerinfer-20231219.pdf


Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads

Tags: tech, ai, machine-learning, gpt, optimization

Interesting technique to speed up the generation of large language models.

https://sites.google.com/view/medusa-llm


Interface Dispatch | Lukas Atkinson

Tags: tech, programming, object-oriented, compiler

Nice state-of-the-art overview of how dynamic dispatch is implemented in several languages. It does a good job of showing the trade-offs involved.

https://lukasatkinson.de/2018/interface-dispatch/


Database Fundamentals

Tags: tech, databases, distributed

An exploration of how databases work from first principles, going all the way to distributed nodes etc. Good list of topics to explore further.

https://tontinton.com/posts/database-fundementals/


Maybe We Don’t Need UUIDv7 After All

Tags: tech, databases, uuid

It might not be as clear cut as sometimes assumed. With the right index, UUIDv4 can still serve as a key in databases.

https://lu.sagebl.eu/notes/maybe-we-dont-need-uuidv7/


How many CPU cores can you actually use in parallel?

Tags: tech, multithreading, performance, python

This unsurprisingly depends heavily on the actual code, not only on the hardware.

https://pythonspeed.com/articles/cpu-thread-pool-size/


Performance engineering, profilers, and seeing the invisible - Made of Bugs

Tags: tech, profiling, optimization

Or why using a profiler is not as easy as it sounds. This requires quite some experience and the ability to tap into other information not present in the profile.

https://blog.nelhage.com/post/profilers-seeing-the-invisible/


ELF binaries and everything before main() starts

Tags: tech, elf, unix, system

Ever wondered how ELF and ld.so work? This is a good primer on the topic with a few OpenBSD specifics.

https://2023.eurobsdcon.org/slides/eurobsdcon2023-janne_johansson-ELF-binaries.pdf


A curiously recurring lifetime issue

Tags: tech, api, safety, c++

This is an easy mistake to make. I’d say the API isn’t helping there either; there’s room for improvement in Cap’n Proto to make it safer.

https://blog.dureuill.net/articles/recurring-lifetime/


Memory Safety is a Red Herring

Tags: tech, memory, safety, rust, c++, java, python

Very interesting musing about undefined behaviors and language constraints. This is a bit Rust focused for obvious reasons but is also looking at what other languages have been doing.

https://steveklabnik.com/writing/memory-safety-is-a-red-herring


Never trust a programmer who says they know C++ by Louis Brandy

Tags: tech, c++, learning, interviews

An old post, but very much true… People who really know C++ have stared the abyss in the eye, and you can tell.

http://lbrandy.com/blog/2010/03/never-trust-a-programmer-who-says-he-knows-c/


Simulating Fluids, Fire, and Smoke in Real-Time

Tags: tech, shader, 3d, simulation, physics, mathematics

Wonder how to implement such real-time simulations? This is a good summary of all the math involved. Also comes with code snippets and demos.

https://andrewkchan.dev/posts/fire.html


The day I started believing in Unit Tests

Tags: tech, tests, embedded

Interesting story about using unit tests by someone who thought they were a waste of time… until they helped uncover a widespread bug. It was also in an embedded context, which comes with its own challenges.

https://mental-reverb.com/blog.php?id=42


Advice for new software devs who’ve read all those other advice essays • Buttondown

Tags: tech, programming, craftsmanship

This is a good set of advice for beginners. I especially like the parts about best practices, trying different things, and why it makes sense to be conservative tech-wise.

https://buttondown.email/hillelwayne/archive/advice-for-new-software-devs-whove-read-all-those/


Technical Debt is not real

Tags: tech, technical-debt

This is indeed a more complex topic than it sounds. When someone complains about “technical debt”, always inquire what it really means to them, what it is about, and what the symptoms are.

https://www.foxhound.systems/blog/technical-debt-is-not-real/


Managing Technical Debt - Jacob Kaplan-Moss

Tags: tech, technical-debt, product-management, project-management, metrics

Good approach for tackling it indeed. The crux of the issue is really measuring the tech debt since it’s still a fuzzy concept and we have no good metrics for it.

https://jacobian.org/2023/dec/20/tech-debt/


Ask Questions, Repeat The Hard Parts, and Listen – Rands in Repose

Tags: management, leadership, decision-making

This is an impressive piece about decision making and leadership. I love the approach: seeking to get the decision out of the person instead of deciding for them.

https://randsinrepose.com/archives/ask-questions-repeat-the-hard-parts-and-listen/


How Lego builds a new Lego set

Tags: lego, design

Fascinating article explaining how some Lego sets are designed.

https://www.theverge.com/c/23991049/lego-ideas-polaroid-onestep-behind-the-scenes-price


Bye for now!

Categories: FLOSS Project Planets

Don’t change your login shell, use a modern terminal emulator

Thu, 2023-12-21 18:00

chsh is a small tool that lets you change the default shell for your current user. In order to let any user change their own shell, which is set in /etc/passwd, it needs privileges and is generally setuid root.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

In this “UNIX legacy” series of posts, I am looking at classic setuid binaries and trying to find better, safer alternatives for common use cases. In this post, we will look at alternatives to changing your login shell.

Should you change the default shell?

People usually change their default shell because they want to use a modern alternative to Bash (Zsh, fish, Oils, nushell, etc.).

Changing the default shell (especially to one that is not POSIX- or Bash-compatible) might have unintended consequences, as some scripts relying on Bash compatibility might not work anymore. There are lots of warnings about this, for example for the fish shell:

On Fedora Atomic Desktops (Silverblue, Kinoite, etc.), your preferred shell may not always be available, notably if you have to reset your overlays for an upgrade, and could lead to an unusable system:

So overall, it is a bad idea to change the default login shell for interactive users.

For non-interactive users or system users, the shell is usually set by the system administrator only, and such users never need to change it themselves.

If you are using systemd-homed, then you can change your own shell via the homectl command without needing setuid binaries but for the same reasons as above, it is still not a good idea.

Graphical interface: Use a modern terminal emulator

If you want to use another shell than the default one, you can use the functionality from your graphical terminal emulator to start it by default instead of Bash.

I recommend using the freshly released Prompt (sources) terminal if you are running on Fedora Silverblue or other GNOME related desktops. You can set your preferred shell in the Profiles section of the preferences. It also has great integration for toolbox/distrobox containers. We’re investigating making this the default in a future version of Fedora Silverblue (issue#520).

If you are running on Fedora Kinoite or other KDE related desktops, you should look at Konsole’s profile features. You can create your own profiles and set the Command to /bin/zsh to use another shell. You can also assign shortcuts to profiles to open them directly in a new tab, or use /bin/toolbox enter fedora-toolbox-39 as Command to directly enter a toolbox container, for example.

This is obviously not an exhaustive list and other modern terminal emulators also let you specify which command to start.

If your terminal emulator does not allow you to do that, then you can use the alternative from the next section.

Or use a small snippet

If you want to change the default shell for a user on a server, then you can add the following code snippet at the beginning of the user’s ~/.bashrc (example for fish):

# Only trigger if:
# - 'fish' is not the parent process of this shell
# - We did not call: bash -c '...'
# - The fish binary exists and is executable
if [[ $(ps --no-header --pid=$PPID --format=comm) != "fish" && -z ${BASH_EXECUTION_STRING} && -x "/bin/fish" ]]; then
    shopt -q login_shell && LOGIN_OPTION='--login' || LOGIN_OPTION=''
    exec fish $LOGIN_OPTION
fi

References

Other posts from this series
Categories: FLOSS Project Planets

Cutelyst v4 – 10 years 🎉

Thu, 2023-12-21 16:47

Cutelyst, the Qt web framework, is now at v4.0.0, just a bit late for its 10th anniversary.

With 2.5k commits it has been steadily improving, and it is in production for many high-traffic applications. With this release we say goodbye to our old Qt5 friend; uWSGI support was dropped, clearsilver and Grantlee were also removed, many methods now take a QStringView, and the Cutelyst::Header class was heavily refactored to allow usage of QByteArrayView. It also stopped storing QStrings internally in a QHash; they are now QByteArrays inside a vector.

Before, all header names were uppercased with dashes replaced by underscores. This was quite some work: when searching, the string had to be converted to the same format to be searchable. It had the advantage of allowing the use of QHash, and in templates you could write c.request.header.CONTENT_TYPE. It turns out neither matters much; speed is more important for the wider use cases.

With these changes Cutelyst managed to get 10 – 15% faster on TechEmpower benchmarks, which is great as we are still well positioned as a full stack framework there.

https://github.com/cutelyst/cutelyst/releases/tag/v4.0.0

Have fun, Merry Christmas and Happy New Year!

Categories: FLOSS Project Planets

Akademy 2023 Keynote: Kdenlive - what can we learn after 20 years of development?

Thu, 2023-12-21 12:20

Slides of the presentation: https://conf.kde.org/event/5/contributions/155/attachments/85/101/Kdenlive-Akademy-23.pdf

Last year marked the 20th anniversary of the Kdenlive video editor, and the start of a shift in our development. Discover the team behind this very popular project, and what we learned during these years - what are our strengths, how we are organizing our roadmap and what we are planning to avoid past mistakes and keep growing.

Categories: FLOSS Project Planets

Akademy 2023 - Keynote: Libre Space Foundation - Empowering Open-Source Space Technologies

Thu, 2023-12-21 12:08

Find the slides of the presentation here:
Join us for a talk that highlights the potential of open-source collaboration in the space industry. Together, we can unlock new possibilities in space exploration through the power of free software, open-source hardware and open-data.

The space industry is evolving rapidly, with open-source solutions playing an increasingly vital role. The Libre Space Foundation (LSF) champions this movement by developing open-source space technologies that make space exploration more accessible for everyone. In this talk, we'll introduce the Libre Space Foundation and discuss the relevance of free software in the space sector.

Key points to be covered:

Introduction to Libre Space Foundation: A brief overview of LSF's history, goals, vision, and its commitment to open-source space technologies.
The value of free software in space: The role of free software in promoting innovation, collaboration, and accessibility in the space industry.
Challenges and opportunities: A look at some of the unique challenges LSF encounters and the ways the free software community can help address them.
LSF and other free software projects: Commonalities and differences between LSF and other free software initiatives.
Insights for KDE from LSF: What the KDE community can learn from LSF's experiences and how collaboration can be fostered between communities.
Stories and lessons learned: A few anecdotes and takeaways from LSF's journey, highlighting the importance of community and shared vision.
Future prospects: A glance at the future of Libre Space Foundation, its projects, and opportunities for the free software community to contribute.

Categories: FLOSS Project Planets

Projection Matrices with Vulkan – Part 2

Thu, 2023-12-21 04:00
Recap

Recall that in Part 1 we discussed the differences between OpenGL and Vulkan when it comes to the fixed function parts of the graphics pipeline. We looked at how OpenGL’s use of a left-handed set of coordinate axes for clip-space meant that projection matrices for OpenGL also incorporate a z-axis flip to switch from a right-handed eye space to a left-handed clip space.

We then went on to explain how we can apply a post-view correction matrix that performs a rotation of 180 degrees about the eye-space x-axis which will reorient the eye space axes such that they are aligned with the Vulkan clip space axes.

In this article we shall derive a perspective projection matrix that transforms a vertex from the rotated eye space into the Vulkan clip space. Thanks to the fact that we have already taken care of aligning the source and destination space axes, all we have to care about is the projection itself. There is no need to introduce any axis inversions or other sleights of hand. We hope that this article, when coupled with Part 1, will give you a full understanding of your transformations and allow you to make modifications without adding special cases. Let’s get cracking!
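For reference, the post-view correction from Part 1 is a rotation of 180 degrees about the x-axis, which maps y \to -y and z \to -z while leaving x untouched. In homogeneous coordinates it can be written as (the symbol X is just our label here):

```latex
X = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
```

This is the matrix that is multiplied onto the view matrix so that the eye space axes line up with Vulkan's clip space axes before the projection derived below is applied.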

Defining the Problem

We will look at deriving the perspective projection matrix for a view volume, defined by 6 planes forming a frustum (rectangular truncated pyramid). Let’s assume that the camera is located at the origin O in our “rotated eye space” and looking along the positive z-axis. From here on in we will just refer to this “rotated eye space” as “eye space” for brevity and we will use the subscript “eye” for quantities in this space.

The following two diagrams show the view volume from top-down and side-elevation views. You may want to middle-click on them to get the full SVGs in separate browser tabs so that you can refer back to them.

 

Figure 1: Top down view of the view volume. Notice the x-axis increases down the page.

 

 

Figure 2: Side elevation view of the view volume. Notice the y-axis increases down the page.

 

The planes forming the frustum are defined by:

  • Near plane is defined by z_{eye} = n. This is the plane that we will project the vertices on to. Think of it as the window on to the virtual world through which we will look.
  • Far plane is defined by z_{eye} = f. This defines the maximum distance to which we can see. Anything beyond this will be clipped to the far plane.
  • Left and right planes are defined by specifying the x-coordinate of the near plane x_{eye} = l and x_{eye} = r, then projecting those back to the origin O. Note that r > l.
  • Top and bottom planes are defined by specifying the y-coordinate of the near plane y_{eye} = t and y_{eye} = b, then projecting those back to the origin O. Note that b > t, which is the opposite sense to what you may be used to. This is because we rotated our eye space coordinate system so that y increases downwards.

Within the view volume, we define a point \bm{p}_{eye} = (x_e, y_e, z_e)^T representing a vertex that we wish to transform into clip space. If we trace a ray back from \bm{p}_{eye} to the origin, then we label the point where the ray crosses the near plane as \bm{p}_{proj} = (x_p, y_p, z_p)^T. Note that \bm{p}_{proj} is still in eye space coordinates.

We know that clip space uses 4 dimensional homogeneous coordinates. We shall call the resulting point in clip-space \bm{p}_{clip} = (x_c, y_c, z_c, w_c)^T. Our job then is to find a 4×4 projection matrix, P such that:

\bm{p}_{clip} = P \bm{p}_{eye} \qquad \rm{or} \qquad \begin{pmatrix} x_c \\ y_c \\ z_c \\ w_c \end{pmatrix} = P \begin{pmatrix} x_e \\ y_e \\ z_e \\ 1 \end{pmatrix} \qquad (\dagger)

Deriving the Perspective Projection Matrix

Clip space is an intermediate coordinate system used by Vulkan and the other graphics APIs to perform clipping of geometry. Once that is complete, the clip space homogeneous coordinates are projected back to Cartesian space by dividing all components by the 4th component, w_c. In order to allow perspective-correct interpolation of per-vertex attributes, the 4th component must be equal to the eye space depth, i.e. w_c = z_e. This normalisation process then yields the vertex position in normalised device coordinates (NDC) as:

\bm{p}_{ndc} = \begin{pmatrix} x_n \\ y_n \\ z_n \end{pmatrix} = \begin{pmatrix} x_c / z_e \\ y_c / z_e \\ z_c / z_e \end{pmatrix} \qquad (\ast)

Since we always want w_c = z_e, this means that the final row of P will be (0, 0, 1, 0). Notice that because our z-axis is aligned with the clip-space z-axis there is no negation required here.

So, at this stage we know that the projection matrix looks like this:

P = \begin{pmatrix} \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & 1 & 0 \\ \end{pmatrix}

Let’s carry on and fill in the blanks.

Projection of the x-coordinate

Looking back at Figure 1, we can see by the properties of similar triangles that:

\frac{x_p}{z_p} = \frac{x_e}{z_e} \implies \frac{x_p}{n} = \frac{x_e}{z_e}

since on the near plane z_p = n. Rearranging this very slightly we get:

x_p = \frac{n x_e}{z_e} \qquad (i)

Let us now consider how the projected vertex positions map through to normalised device coordinates. In Vulkan’s NDC, the view volume becomes a cuboid where -1 \leq x_n \leq 1, -1 \leq y_n \leq 1, and 0 \leq z_n \leq 1. We want the x component of \bm{p}_{ndc} to vary linearly with the x component of the projected point, x_p. If it were not a linear relationship, objects would appear distorted across the screen or would seem to move with varying velocities.

We know that the extremities of the view volume in the x direction are defined by x_p = l and x_p = r. These map to -1 and +1 in normalised device coordinates respectively. We can therefore say that at x_p = l, x_n = -1 and x_p = r, x_n = 1. Using this information we can plot the following graph for x_n = m x_p + c.

 

Graph showing linear relationship between x component of normalised device coordinates and projected point in eye space.

 

That’s right, more of your high school maths is going to be used to find the gradient and intercept of this equation!

The gradient, m is given by:

m = \frac{\Delta y}{\Delta x} = \frac{1 - (-1)}{r - l} = \frac{2}{r - l}

Substituting the gradient back in we get a simple equation to solve to find the intercept, c:

x_n = \frac{2 x_p}{r - l} + c

substituting in x_n = 1 and x_p = r:

1 = \frac{2 r}{r - l} + c \implies c = 1 - \frac{2 r}{r - l} \implies c = \frac{r - l - 2r}{r - l} \implies c = - \frac{r + l}{r - l}

We then get the following expression for x_n as a function of x_p:

x_n = \frac{2 x_p}{r - l} - \frac{r + l}{r - l} \qquad (ii)

Substituting in for x_p from equation (i) into equation (ii) and factorising gives:

\begin{align*} x_n &= \frac{2 n x_e}{(r - l) z_e} - \frac{r + l}{r - l} \\ \implies x_n &= \frac{2 n x_e}{(r - l) z_e} - \frac{r + l}{r - l} \frac{z_e}{z_e} \\ \implies x_n &= \frac{1}{z_e} \left( \left( \frac{2n}{r - l} \right) x_e - \frac{r + l}{r - l} z_e \right) \\ \implies x_n z_e &= \left( \frac{2n}{r - l} \right) x_e - \frac{r + l}{r - l} z_e \\ \end{align*}

Recall from the first component of (\ast) that x_n z_e = x_c. Substituting this in for the left-hand side of the previous equation gives:

x_c = \left( \frac{2n}{r - l} \right) x_e - \frac{r + l}{r - l} z_e

which is now directly comparable to the equation for the 1st component of (\dagger) and comparing coefficients allows us to immediately read off the first row of the projection matrix as (\frac{2n}{r - l}, 0, -\frac{r + l}{r - l}, 0). This also makes intuitive sense looking back at Figure 1 as the x component of the clip space point should only depend upon the x and z components of the eye space position (the eye space y component does not affect it).

As it stands here, the projection matrix looks like this:

P = \begin{pmatrix} \frac{2n}{r - l} & 0 & -\frac{r + l}{r - l} & 0 \\ \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & 1 & 0 \\ \end{pmatrix}
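We can sanity-check this first row numerically. In the sketch below the frustum parameters are arbitrary; the check confirms that points projecting onto the left and right edges of the near plane end up at x_n = -1 and x_n = +1:

```python
# Arbitrary frustum parameters (assumptions for illustration only).
n, l, r = 1.0, -2.0, 3.0

def x_clip(x_e, z_e):
    # First row of P applied to (x_e, y_e, z_e, 1):
    # x_c = (2n/(r-l)) x_e - ((r+l)/(r-l)) z_e
    return (2.0 * n / (r - l)) * x_e - ((r + l) / (r - l)) * z_e

# Points projecting onto the left/right edges of the near plane satisfy
# x_e = l z_e / n and x_e = r z_e / n, and should yield x_n = -1 and +1.
z_e = 4.0
assert abs(x_clip(l * z_e / n, z_e) / z_e + 1.0) < 1e-12   # x_n = -1
assert abs(x_clip(r * z_e / n, z_e) / z_e - 1.0) < 1e-12   # x_n = +1
```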

Projection of the y-coordinate

The good news is that the analysis in the y direction is exactly analogous to what we just did for the x direction. Without further ado, from Figure 2, by the properties of similar triangles and since on the near plane z_p = n:

\frac{y_p}{z_p} = \frac{y_e}{z_e} \implies \frac{y_p}{n} = \frac{y_e}{z_e}.

Which then gives:

y_p = \frac{n y_e}{z_e} \qquad (iii)

We know that the extremities of the view volume in the y direction are defined by y_p = t and y_p = b. These map to -1 and +1 in normalised device coordinates respectively. We can therefore say that at y_p = t, y_n = -1 and y_p = b, y_n = 1. Using this information we can plot the following graph for y_n = m y_p + c.

 

Graph showing linear relationship between y component of normalised device coordinates and projected point in eye space.

 

As before, we have a linear equation to find the gradient and intercept of. The gradient, m is given by:

m = \frac{\Delta y}{\Delta x} = \frac{1 - (-1)}{b - t} = \frac{2}{b - t}

Substituting the gradient back in we get a simple equation to solve to find the intercept, c:

y_n = \frac{2 y_p}{b - t} + c

substituting in y_n = 1 and y_p = b:

1 = \frac{2 b}{b - t} + c \implies c = 1 - \frac{2 b}{b - t} \implies c = \frac{b - t - 2b}{b - t} \implies c = - \frac{b + t}{b - t}

We then get the following expression for y_n as a function of y_p:

y_n = \frac{2 y_p}{b - t} - \frac{b + t}{b - t} \qquad (iv)

Substituting in for y_p from equation (iii) into equation (iv) and factorising gives:

\begin{align*} y_n &= \frac{2 n y_e}{(b - t) z_e} - \frac{b + t}{b - t} \\ \implies y_n &= \frac{2 n y_e}{(b - t) z_e} - \frac{b + t}{b - t} \frac{z_e}{z_e} \\ \implies y_n &= \frac{1}{z_e} \left( \left( \frac{2n}{b - t} \right) y_e - \frac{b + t}{b - t} z_e \right) \\ \implies y_n z_e &= \left( \frac{2n}{b - t} \right) y_e - \frac{b + t}{b - t} z_e \\ \end{align*}

Recall from the second component of (\ast) that y_n z_e = y_c. Substituting this in for the left-hand side of the previous equation gives:

y_c = \left( \frac{2n}{b - t} \right) y_e - \frac{b + t}{b - t} z_e

This time, comparing to the second component of (\dagger) we can read off the coefficients for the second row of the projection matrix as (0, \frac{2n}{b - t}, -\frac{b + t}{b - t}, 0). Once again a quick intuitive check against Figure 2 matches what we have found. The projected and clip space y coordinates do not depend upon the x component of the eye space position.

At the three quarters stage, the projection matrix is now:

P = \begin{pmatrix} \frac{2n}{r - l} & 0 & -\frac{r + l}{r - l} & 0 \\ 0 & \frac{2n}{b-t} & -\frac{b + t}{b - t} & 0 \\ \cdot & \cdot & \cdot & \cdot \\ 0 & 0 & 1 & 0 \\ \end{pmatrix}

We are almost there now. We have just the z-axis mapping left to deal with.

Mapping the z-coordinate

The analysis of the z-axis is a little different to that of the x and y dimensions. For Vulkan, we wish to map eye space depths such that:

  • the near plane, z_e = n, maps to z_n = 0 and
  • the far plane, z_e = f, maps to z_n = 1

The z components of the projected point and the normalised device coordinates point should not depend upon the x and y components. This means that for the 3rd row of the projection matrix the first two elements will be 0. The remaining two elements we will denote by A and B respectively:

P = \begin{pmatrix} \frac{2n}{r - l} & 0 & -\frac{r + l}{r - l} & 0 \\ 0 & \frac{2n}{b-t} & -\frac{b + t}{b - t} & 0 \\ 0 & 0 & A & B \\ 0 & 0 & 1 & 0 \\ \end{pmatrix}

Combining this with the 3rd row of (\dagger) we see that:

z_c = A z_e + B

Now if we divide both sides by z_e and recall from (\ast) that z_n = z_c / z_e, we can write:

z_n = A + \frac{B}{z_e}. \qquad (v)

Substituting in our boundary conditions (shown in the bullet points above) into equation (v) we get a pair of simultaneous equations for A and B:

\begin{align*} A + \frac{B}{n} &= 0 \qquad (vi) \\ A + \frac{B}{f} &= 1 \qquad (vii) \\ \end{align*}

We can subtract equation (vi) from equation (vii) to eliminate A:

\begin{gather*} \frac{B}{f} - \frac{B}{n} = 1 \implies \frac{Bn - Bf}{nf} = 1 \implies \frac{B(n - f)}{nf} = 1 \implies B = \frac{nf}{n - f} \\ \implies B = - \frac{nf}{f - n} \qquad (viii) \\ \end{gather*}

Now to find A we can substitute (viii) back into (vii):

\begin{align*} A &- \frac{n f}{f(f - n)} = 1 \\ \implies A &- \frac{n}{f - n} = 1 \\ \implies A &= 1 + \frac{n}{f - n} \\ \implies A &= \frac{f - n + n}{f - n} \\ \implies A &= \frac{f}{f - n} \qquad (ix) \end{align*}
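A quick numeric check (with arbitrarily chosen n and f) confirms that these values of A and B from (ix) and (viii) satisfy the boundary conditions:

```python
# n and f chosen arbitrarily for illustration.
n, f = 0.1, 100.0

A = f / (f - n)
B = -n * f / (f - n)

# z_n = A + B / z_e, equation (v): near plane maps to 0, far plane to 1.
assert abs(A + B / n - 0.0) < 1e-12
assert abs(A + B / f - 1.0) < 1e-12
```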

Substituting equations (viii) and (ix) back into the projection matrix, we finally arrive at the result for a perspective projection matrix useable with Vulkan in conjunction with the post-view rotation matrix from Part 1:

P = \begin{pmatrix} \frac{2n}{r - l} & 0 & -\frac{r + l}{r - l} & 0 \\ 0 & \frac{2n}{b-t} & -\frac{b + t}{b - t} & 0 \\ 0 & 0 & \frac{f}{f - n} & - \frac{n f}{f - n} \\ 0 & 0 & 1 & 0 \\ \end{pmatrix} \qquad (x)
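An end-to-end sanity check of equation (x) can be sketched numerically. The frustum parameters below are arbitrary (and deliberately asymmetric); the check confirms that frustum corners land on Vulkan's NDC cube:

```python
# Arbitrary asymmetric frustum parameters, rotated eye space convention.
n, f = 1.0, 10.0
l, r, b, t = -2.0, 3.0, -1.0, 2.0

# Equation (x), row-major.
P = [
    [2*n/(r - l), 0.0, -(r + l)/(r - l), 0.0],
    [0.0, 2*n/(b - t), -(b + t)/(b - t), 0.0],
    [0.0, 0.0, f/(f - n), -n*f/(f - n)],
    [0.0, 0.0, 1.0, 0.0],
]

def project(p):
    x, y, z = p
    clip = [sum(row[j] * v for j, v in enumerate((x, y, z, 1.0))) for row in P]
    return [c / clip[3] for c in clip[:3]]  # perspective divide by w_c = z_e

# Near-plane corner (l, t, n) -> NDC (-1, -1, 0); the far point on the
# frustum edge through (r, b, n) -> NDC (1, 1, 1).
assert all(abs(a - e) < 1e-12
           for a, e in zip(project((l, t, n)), (-1.0, -1.0, 0.0)))
assert all(abs(a - e) < 1e-12
           for a, e in zip(project((r*f/n, b*f/n, f)), (1.0, 1.0, 1.0)))
```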

Using the Projection Matrix in Practice

Recall that equation (x) is the matrix to perform the projection operation from the rotated eye space coordinates to the right-handed clip space coordinates used by Vulkan. What does this mean? Well, it means that we should include the post-view correction matrix into our calculations when transforming vertices. Given a vertex position in model space, \bm{p}_{model}, we can transform it into clip space by the following:

\bm{p}_{clip} = P X V M \bm{p}_{model}

As we saw in Part 1, the post-view correction matrix is just a constant matrix that performs the 180 degree rotation about the x-axis, so we can fold it into our calculation of the projection matrix, P. This is analogous to how the OpenGL projection matrix typically includes the z-axis flip to change from a right-handed to a left-handed coordinate system. Combining the post-view rotation and Vulkan projection matrix gives:

\begin{align*} Q &= P X \\ \implies Q &= \begin{pmatrix} \frac{2n}{r - l} & 0 & -\frac{r + l}{r - l} & 0 \\ 0 & \frac{2n}{b-t} & -\frac{b + t}{b - t} & 0 \\ 0 & 0 & \frac{f}{f - n} & - \frac{n f}{f - n} \\ 0 & 0 & 1 & 0 \\ \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{pmatrix} \\ \implies Q &= \begin{pmatrix} \frac{2n}{r - l} & 0 & \frac{r + l}{r - l} & 0 \\ 0 & -\frac{2n}{b-t} & \frac{b + t}{b - t} & 0 \\ 0 & 0 & -\frac{f}{f - n} & - \frac{n f}{f - n} \\ 0 & 0 & -1 & 0 \\ \end{pmatrix} \qquad (xi) \\ \end{align*}
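Since X = diag(1, -1, -1, 1), post-multiplying P by X simply negates the 2nd and 3rd columns of P, flipping the signs of those entries. This is easy to verify numerically (parameters below are arbitrary):

```python
# Arbitrary frustum parameters for illustration.
n, f, l, r, b, t = 1.0, 10.0, -2.0, 3.0, -1.0, 2.0

# Equation (x), row-major.
P = [
    [2*n/(r - l), 0.0, -(r + l)/(r - l), 0.0],
    [0.0, 2*n/(b - t), -(b + t)/(b - t), 0.0],
    [0.0, 0.0, f/(f - n), -n*f/(f - n)],
    [0.0, 0.0, 1.0, 0.0],
]
# Post-view rotation: 180 degrees about the x axis.
X = [[1.0, 0, 0, 0], [0, -1.0, 0, 0], [0, 0, -1.0, 0], [0, 0, 0, 1.0]]

def matmul(a, b_):
    return [[sum(a[i][k] * b_[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

Q = matmul(P, X)
for i in range(4):
    assert Q[i][0] == P[i][0] and Q[i][3] == P[i][3]    # columns 1, 4 unchanged
    assert Q[i][1] == -P[i][1] and Q[i][2] == -P[i][2]  # columns 2, 3 negated
```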

Before you rush off and implement equation (xi) in your favourite editor and language, there is one final piece of subtlety to consider! Recall that when we began deriving the perspective projection matrix, we set things up so that our source coordinate system was the rotated eye space so that its axes were already aligned with the clip space destination coordinate system. Refer back to Figures 1 and 2 and note the orientation of the axes. In particular that the y axis increases in a downward direction.

The thing to keep in mind is that the parameters used in (xi) are actually specified in the rotated eye space coordinate system. This has implications:

  • x axis: Nothing to change here. Since we rotate about the x-axis to get from eye space to rotated eye space, the x component of any position does not change.
  • y axis: The 180 degree rotation about the x axis will affect the y components of any positions. The following diagram shows a blue view volume in the non-rotated eye space – the z-axis increases to the left and the near plane is positioned on the negative z side. The view volume is in the upper right quadrant and in this case both the top and bottom values for the near plane are positive. In the lower left quadrant, in green, we also show the rotated view volume. Notice that the 180 degree rotation causes the signs of the t and b parameters to be negated.
  • z axis: Technically, the 180 degree rotation would also negate the z components of any positions. However, developers are already used to specifying the near and far plane parameters, n and f, as distances from the z_e = 0 plane. This is exactly what happens when creating an OpenGL projection matrix for example. Since we already specified n and f as positive values in the rotated eye space, we can just treat the inputs to any function that we write as positive distances for the near and far plane and stay in keeping with what developers are used to.

 

A view volume with top and bottom specified in eye space. When the post-view rotation is applied, the top and bottom parameters are negated.
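The sign flip of the top and bottom parameters can be sketched with a one-line rotation. The point below is hypothetical (y_e = t = 1.5 on the near plane z_e = -n = -1.0 in non-rotated eye space):

```python
# 180 degree rotation about the x axis: (x, y, z) -> (x, -y, -z).
def rotate_x_180(p):
    x, y, z = p
    return (x, -y, -z)

# A point on the top edge of the near plane in non-rotated eye space.
rotated = rotate_x_180((0.0, 1.5, -1.0))

# In rotated eye space it ends up at y = -1.5 and z = +1.0: the signs of the
# top (and likewise bottom) parameters flip, while the near plane remains a
# positive distance along the rotated z axis.
assert rotated == (0.0, -1.5, 1.0)
```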

 

Putting this together, we can create a function to produce a Vulkan projection matrix and optionally have it incorporate the post-view correction rotation matrix. All we have to remember is that if we are opting in to include the post-view correction, then the top and bottom parameters are treated as being specified in the non-rotated eye space. If we do not opt in, then they are specified in rotated eye space.

In practice, this works well: you often want to minimise per-frame floating point arithmetic, and opting in allows the developer to specify top and bottom in the usual eye space coordinates, which is closer to the chosen world space system (often y-up too) than the rotated eye space.
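As a numerical sanity check of equation (xi) and the sign handling just described, here is a hedged Python sketch (row-major matrices, arbitrary frustum values) verifying that negating top and bottom and then applying the combined matrix is the same as building P from the negated values and post-multiplying by X:

```python
# Post-view rotation: 180 degrees about the x axis.
X = [[1.0, 0, 0, 0], [0, -1.0, 0, 0], [0, 0, -1.0, 0], [0, 0, 0, 1.0]]

def matmul(a, b_):
    return [[sum(a[i][k] * b_[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def perspective(l, r, b, t, n, f, post_view_correction):
    if not post_view_correction:
        # Equation (x): inputs are in rotated eye space.
        return [
            [2*n/(r - l), 0.0, -(r + l)/(r - l), 0.0],
            [0.0, 2*n/(b - t), -(b + t)/(b - t), 0.0],
            [0.0, 0.0, f/(f - n), -n*f/(f - n)],
            [0.0, 0.0, 1.0, 0.0],
        ]
    # Equation (xi): inputs are in non-rotated eye space, so negate b and t.
    b, t = -b, -t
    return [
        [2*n/(r - l), 0.0, (r + l)/(r - l), 0.0],
        [0.0, -2*n/(b - t), (b + t)/(b - t), 0.0],
        [0.0, 0.0, -f/(f - n), -n*f/(f - n)],
        [0.0, 0.0, -1.0, 0.0],
    ]

# Arbitrary parameters; check the two formulations agree.
l, r, b, t, n, f = -2.0, 3.0, -1.0, 2.0, 1.0, 10.0
with_corr = perspective(l, r, b, t, n, f, True)
expected = matmul(perspective(l, r, -b, -t, n, f, False), X)
assert all(abs(with_corr[i][j] - expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```

Note that glm stores matrices column-major, so the C++ initialiser lists in this article are transposed relative to these row-major Python lists.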

Using the popular glm library, we can declare a function as:

enum class ApplyPostViewCorrection : uint8_t { No, Yes };

struct AsymmetricPerspectiveOptions {
    float left{ -1.0f };
    float right{ 1.0f };
    float bottom{ -1.0f };
    float top{ 1.0f };
    float nearPlane{ 0.1f };
    float farPlane{ 100.0f };
    ApplyPostViewCorrection applyPostViewCorrection{ ApplyPostViewCorrection::Yes };
};

glm::mat4 perspective(const AsymmetricPerspectiveOptions &options);

The implementation turns out to be very easy once we know equations (x) and (xi):

glm::mat4 perspective(const AsymmetricPerspectiveOptions &options)
{
    const auto twoNear = 2.0f * options.nearPlane;
    const auto rightMinusLeft = options.right - options.left;
    const auto farMinusNear = options.farPlane - options.nearPlane;

    if (options.applyPostViewCorrection == ApplyPostViewCorrection::No) {
        const auto bottomMinusTop = options.bottom - options.top;
        const glm::mat4 m = {
            twoNear / rightMinusLeft, 0.0f, 0.0f, 0.0f,
            0.0f, twoNear / bottomMinusTop, 0.0f, 0.0f,
            -(options.right + options.left) / rightMinusLeft,
            -(options.bottom + options.top) / bottomMinusTop,
            options.farPlane / farMinusNear, 1.0f,
            0.0f, 0.0f, -options.nearPlane * options.farPlane / farMinusNear, 0.0f
        };
        return m;
    } else {
        // If we are applying the post view correction, we need to negate the signs of the
        // top and bottom planes to take into account the fact that the post view correction
        // rotates them 180 degrees around the x axis.
        //
        // This has the effect of treating the top and bottom planes as if they were specified
        // in the non-rotated eye space coordinate system.
        //
        // We do not need to flip the signs of the near and far planes as these are always
        // treated as positive distances from the camera.
        const auto bottom = -options.bottom;
        const auto top = -options.top;
        const auto bottomMinusTop = bottom - top;

        // In addition to negating the top and bottom planes, we also need to post-multiply
        // the projection matrix by the post view correction matrix. This amounts to negating
        // the y and z axes of the projection matrix.
        const glm::mat4 m = {
            twoNear / rightMinusLeft, 0.0f, 0.0f, 0.0f,
            0.0f, -twoNear / bottomMinusTop, 0.0f, 0.0f,
            (options.right + options.left) / rightMinusLeft,
            (bottom + top) / bottomMinusTop,
            -options.farPlane / farMinusNear, -1.0f,
            0.0f, 0.0f, -options.nearPlane * options.farPlane / farMinusNear, 0.0f
        };
        return m;
    }
}

Summary

In this article we have shown how to build a perspective projection matrix, from first principles, to transform vertices from rotated eye space to clip space. The requirement for perspective-correct interpolation and the perspective divide yielded the 4th row of the projection matrix. We then showed how to construct a linear relationship between the x or y component of the eye space point projected onto the near plane and the corresponding normalised device coordinate, and from there back to clip space. We then showed how to map the eye space depth component onto the normalised device coordinate depth. Finally we gave some practical tips about combining the projection matrix with the post-view rotation matrix.

We hope that this has removed some of the mystery surrounding the perspective projection matrix and how using an OpenGL projection matrix can cause your rendered results to be upside down. Armed with this knowledge you will have no need for the various hacks mentioned earlier.

In the next article, we will take a look at some more variations on the projection matrix and some more tips for using it in applications. Thank you for reading!

About KDAB

If you like this article and want to read similar material, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Projection Matrices with Vulkan – Part 2 appeared first on KDAB.

Categories: FLOSS Project Planets

From GitLab to Microsoft Store

Wed, 2023-12-20 03:26
KDE Project:

This is an update on the ongoing migration of jobs from Binary Factory to KDE's GitLab. Since the last blog a lot has happened.

A first update of Itinerary was submitted to Google Play directly from our GitLab.

Ben Cooksley has added a service for publishing our websites. Most websites are now built and published on our GitLab with only 5 websites remaining on Binary Factory.

Julius Künzel has added a service for signing macOS apps and DMGs. This allows us to build signed installers for macOS on GitLab.

The service for signing and publishing Flatpaks has gone live. Nightly Flatpaks built on our GitLab are now available at https://cdn.kde.org/flatpak/. For easy installation builds created since yesterday include .flatpakref files and .flatpakrepo files.

Last, but not least, similar to the full CI/CD pipeline for Android we now also have a full CI/CD pipeline for Windows. For Qt 5 builds this pipeline consists of the following GitLab jobs:

  • windows_qt515 - Builds the project with MSVC and runs the automatic tests.
  • craft_windows_qt515_x86_64 - Builds the project with MSVC and creates various installation packages including (if enabled for the project) a *-sideload.appx file and a *.appxupload file.
  • sign_appx_qt515 - Signs the *-sideload.appx file with KDE's signing certificate. The signed app package can be downloaded and installed without using the Microsoft store.
  • microsoftstore_qt515 - Submits the *.appxupload package to the Microsoft store for subsequent publication. This job doesn't run automatically.

Notes:

  • The craft_windows_qt515_x86_64 job also creates .exe installers. Those installers are not yet signed on GitLab, i.e. Windows should warn you when you try to install them. For the time being, you can download signed .exe installers from Binary Factory.
  • There are also jobs for building with MinGW, but MinGW builds cannot be used for creating app packages for the Microsoft Store. (It's still possible to publish apps with MinGW installers in the Microsoft Store, but that's a different story.)

The workflow for publishing an update of an app in the Microsoft Store as I envision it is as follows:

  1. You download the signed sideload app package, install it on a Windows (virtual) machine (after uninstalling a previously installed version) and perform a quick test to ensure that the app isn't completely broken.
  2. Then you trigger the microsoftstore_qt515 job to submit the app to the Microsoft Store. This creates a new draft submission in the Microsoft Partner Center. The app is not published automatically. To actually publish the submission you have to log into the Microsoft Partner Center and commit the submission.

Enabling the Windows CD Pipeline for Your Project
If you want to start building Windows app packages (APPX) for your project then add the craft-windows-x86-64.yml template for Qt 5 or the craft-windows-x86-64-qt6.yml template for Qt 6 to the .gitlab-ci.yml of your project. Additionally, you have to add a .craft.ini file with the following content to the root of your project to enable the creation of the Windows app packages.

[BlueprintSettings]
kde/applications/myapp.packageAppx = True

kde/applications/myapp must match the path of your project's Craft blueprint.

When you have successfully built the first Windows app packages then add the craft-windows-appx-qt5.yml or the craft-windows-appx-qt6.yml template to your .gitlab-ci.yml to get the sign_appx_qt* job and the microsoftstore_qt* job.

To enable signing, your project (more precisely, a branch of your project) needs to be cleared for using the signing service. This is done by adding your project to the project settings of the appxsigner. Similarly, to enable submission to the Microsoft Store your project needs to be cleared by adding it to the project settings of the microsoftstorepublisher. If you have carefully curated metadata set in the store entry of your app that shouldn't be overwritten by data from your app's AppStream data, then have a look at the keep setting for your project. I recommend using keep sparingly, if at all, because, at least for text content, it deprives people using the store of all the translations our great translation teams have added to your app's AppStream data.

Note that the first submission to the Microsoft Store has to be done manually.

Categories: FLOSS Project Planets

KDE's 6th Megarelease - Beta 2

Tue, 2023-12-19 19:00
Plasma 6, Frameworks and Gear draw closer

Every few years we port the key components of our software to a new version of Qt, taking the opportunity to remove cruft and leverage the updated features the most recent version of Qt has to offer us.

We are now just over two months away from KDE's megarelease. At the end of February 2024 we will publish Plasma 6, Frameworks 6 and a whole new set of applications in a special edition of KDE Gear all in one go.

If you have been following the updates here and here, you will know we are deep in the testing phase; and KDE is making available today the second Beta version of all the software we will include in the megarelease.

As with the Alpha and the first Beta, this is a preview intended for developers and testers. The software in this second Beta release is fast approaching stability, but it is still not 100% safe to use in a production environment. We still recommend you continue using stable versions of Plasma, Frameworks and apps for your everyday work. But if you do use it, watch out for bugs and report them promptly, so we can solve them.

Read on to find out more about KDE's 6th Megarelease, what it covers, and how you can help make the new versions of Plasma, KDE's apps and Frameworks a success now.

Plasma

Plasma is KDE's flagship desktop environment. Plasma is like Windows or macOS, but is renowned for being flexible, powerful, lightweight and configurable. It can be used at home, at work, for schools and research.

Plasma 6 is the upcoming version of Plasma that integrates the latest version of Qt, Qt 6, the framework upon which Plasma is built.

Plasma 6 incorporates new technologies from Qt and other constantly evolving tools, providing new features, better support for the latest hardware, and support for the hardware and software technologies to come.

You can be part of the new Plasma. Download and install a Plasma 6-powered distribution (like Neon Unstable) to a test machine and start trying all its features. Check the Contributing section below to find out how you can deliver reports of what you find to the developers.

KDE Gear

KDE Gear is a collection of applications produced by the KDE community. Gear includes file explorers, music and video players, text and video-editors, apps to manage social media and chats, email and calendaring applications, travel assistants, and much more.

Developers of these apps also rely on the Qt toolbox, so most of the software will also be adapted to use the new Qt6 toolset and we need you to help us test them too.

Frameworks

KDE's Frameworks add tools created by the KDE community on top of those provided by the Qt toolbox. These tools give developers more and easier ways of developing interfaces and functionality that work on more platforms.

Among many other things, KDE Frameworks provide

  • widgets (buttons, text boxes, etc.) that make building your apps easier and their looks more consistent across platforms, including Windows, Linux, Android and macOS
  • libraries that facilitate storing and retrieving configuration settings
  • icon sets, or technologies that make the integration of the translation workflow of applications easier

KDE's Frameworks also rely heavily on Qt and will also be upgraded to adapt them to the new version 6. This change will add more features and tools, enable your applications to work on more devices, and give them a longer shelf life.

Contributing

KDE relies on volunteers to create, test and maintain its software. You can help too by...

  • Reporting bugs -- When you come across a bug when testing the software included in this Beta Megarelease, you can report it so developers can work on it and remove it. When reporting a bug
    • make sure you understand when the bug is triggered so you can give developers a guide on how to check it for themselves
    • check you are using the latest version of the software you are testing, just in case the bug has been solved in the meantime
    • go to KDE's bug-tracker and search for your bug to make sure it does not get reported twice
    • if no-one has reported the bug yet, fill in the bug report, giving all the details you think are significant.
    • keep tabs on the report, just in case developers need more details.
  • Solving bugs -- Many bugs are easy to solve. Some just require changing a version number or tweaking the name of a library to its new name. If you have some basic programming knowledge of C++ and Qt, you too can help carry the weight of debugging KDE's software for the grand release in February.
  • Joining the development effort -- You may have deeper development knowledge and would like to contribute to KDE with your own solutions. This is the perfect moment to get involved in KDE and contribute your own code.
  • Donating to KDE -- Creating, debugging and maintaining the large catalogue of software KDE distributes to users requires a lot of resources, many of which cost money. Donating to KDE helps keep the day-to-day operation of KDE running smoothly and allows developers to concentrate on creating great software. KDE is currently running a drive to encourage more people to become contributing supporters, but you can also give one-time donations if you want.
A note on pre-release software

Pre-release software is only suited for developers and testers. Alpha/Beta/RC software is unfinished, will be unstable and will contain bugs. It is published so volunteers can trial-run it, identify its problems, and report them so they can be solved before the publication of the final product.

The risks of running pre-release software are many. Apart from the hit to productivity produced by instability and the lack of features, using pre-release software can lead to data loss, and, in extreme cases, damage to hardware. That said, the latter is highly unlikely in the case of KDE software.

The version of the software included in KDE's 6th Megarelease is beta software. We strongly recommend you do not use it as your daily driver.

If, despite the above, you want to try the software distributed in KDE's 6th Megarelease, you do so under your sole responsibility, and in the understanding that your main aim, as a tester, is to help us by providing feedback and your know-how regarding the software. Please see the Contributing section above.

Categories: FLOSS Project Planets

Signed container images with buildah, podman and cosign via GitHub Actions

Tue, 2023-12-19 18:00

All the Toolbx and Distrobox container images and the ones in my personal namespace on Quay.io are now signed using cosign.

How to set this up was not really well documented so this post is an attempt at that.

First we will look at how to set up a GitHub workflow using GitHub Actions to build multi-architecture container images with buildah and push them to a registry with podman. Then we will sign those images with cosign (sigstore) and detail what is needed to configure signature validation on the host. Finally we will detail the remaining work needed to be able to do the entire process with podman alone.

Full example ready to go

If you just want to get going, you can copy the content of my github.com/travier/cosign-test repo and start building and pushing your containers. I recommend keeping only the cosign.yaml workflow for now (see below for the details).

“Minimal” GitHub workflow to build containers with buildah / podman

You can find those actions at github.com/redhat-actions.

Here is an example workflow with the Containerfile in the example subfolder:

name: "Build container using buildah/podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"

on:
  # Trigger for pull requests to the main branch, only for relevant files
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger for push/merges to main branch, only for relevant files
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/cosign.yml'
  # Trigger every Monday morning
  schedule:
    - cron: '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          sudo apt install qemu-user-static

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          # Only select the architectures that matter to you here
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Push to Container Registry
        uses: redhat-actions/push-to-registry@v2
        # The id is unused right now, will be used in the next steps
        id: push
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}
          registry: ${{ env.REGISTRY }}
          tags: latest

This should let you test changes to the image via builds in pull requests, publishing the changes only once they are merged.

You will have to setup the BOT_USERNAME and BOT_SECRET secrets in the repository configuration to push to the registry of your choice.

If you prefer to use the GitHub internal registry then you can use:

env:
  REGISTRY: ghcr.io/${{ github.repository_owner }}
...
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}

You will also need to set the job permissions to be able to write GitHub Packages (container registry):

permissions:
  contents: read
  packages: write

See the Publishing Docker images GitHub Docs.

You should also configure the GitHub Actions settings as follows:

  • In the “Actions permissions” section, you can restrict allowed actions to: “Allow <username>, and select non-<username>, actions and reusable workflows”, with “Allow actions created by GitHub” selected and the following additional actions: redhat-actions/*,
  • In the “Workflow permissions” section, you can select the “Read repository contents and packages permissions” and select the “Allow GitHub Actions to create and approve pull requests”.

  • Make sure to add all the required secrets in the “Secrets and variables”, “Actions”, “Repository secrets” section.
Signing container images

We will use cosign to sign container images. With cosign, you get two main options to sign your containers:

  • Keyless signing: Sign containers with ephemeral keys by authenticating with an OIDC (OpenID Connect) protocol supported by Sigstore.
  • Self managed keys: Generate a “classic” long-lived key pair.

We will choose the “self managed keys” option here as it is easier to set up for verification on the host in podman. I will likely write another post once I figure out how to set up keyless signature verification in podman.

Generate a key pair with:

$ cosign generate-key-pair

Enter an empty password as we will store this key in plain text as a repository secret (COSIGN_PRIVATE_KEY).

Then you can add the steps for signing with cosign at the end of your workflow:

# Include at the end of the workflow previously defined
- name: Login to Container Registry
  uses: redhat-actions/podman-login@v1
  if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
  with:
    registry: ${{ env.REGISTRY }}
    username: ${{ secrets.BOT_USERNAME }}
    password: ${{ secrets.BOT_SECRET }}

- uses: sigstore/cosign-installer@v3.3.0
  if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'

- name: Sign container image
  if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
  run: |
    cosign sign -y --key env://COSIGN_PRIVATE_KEY ${{ env.REGISTRY }}/${{ env.NAME }}@${{ steps.push.outputs.digest }}
  env:
    COSIGN_EXPERIMENTAL: false
    COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}

We need to explicitly log in to the container registry to get an auth token that will be used by cosign to push the signature to the registry.

This step sometimes fails, likely due to a race condition that I have not been able to pin down yet. Retrying failed jobs usually works.
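Once a signed image is on the registry, anyone with the public half of the key pair can check the signature. A sketch (the image reference quay.io/example/example is an assumption derived from the env values used in this post):

```
$ cosign verify --key cosign.pub quay.io/example/example:latest
```

On success cosign prints the verified signature payload; on failure it exits with a non-zero status.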

You should then update the GitHub Actions settings to allow the new actions as follows:

redhat-actions/*,
sigstore/cosign-installer@*,

Configuring podman on the host to verify image signatures

First, we copy the public key to a designated place in /etc:

$ sudo mkdir /etc/pki/containers
$ curl -O "https://.../cosign.pub"
$ sudo cp cosign.pub /etc/pki/containers/
$ sudo restorecon -RFv /etc/pki/containers

Then we set up the registry config to tell podman to use sigstore signatures:

$ cat /etc/containers/registries.d/quay.io-example.yaml
docker:
  quay.io/example:
    use-sigstore-attachments: true

$ sudo restorecon -RFv /etc/containers/registries.d/quay.io-example.yaml

And then we update the container signature verification policy to:

  • Default to reject everything
  • Then for the docker transport:
    • Verify signatures for containers coming from our repository
    • Accept all other containers from other registries

If you do not plan on using containers from other registries, you can be even stricter here and only allow your own containers to be used.

/etc/containers/policy.json:

{
  "default": [
    { "type": "reject" }
  ],
  "transports": {
    "docker": {
      ...
      "quay.io/example": [
        {
          "type": "sigstoreSigned",
          "keyPath": "/etc/pki/containers/quay.io-example.pub",
          "signedIdentity": { "type": "matchRepository" }
        }
      ],
      ...
      "": [
        { "type": "insecureAcceptAnything" }
      ]
    },
    ...
  }
}

See the full man page for containers-policy.json(5).
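A syntax error in /etc/containers/policy.json will break podman pulls, so it can be worth sanity-checking the JSON before installing it. A minimal sketch, assuming python3 is available for validation; the key path matches where we copied cosign.pub earlier, and the scratch path is arbitrary:

```shell
# Write a candidate policy to a scratch location and validate it before
# copying it to /etc/containers/policy.json
cat > /tmp/policy-test.json <<'EOF'
{
  "default": [ { "type": "reject" } ],
  "transports": {
    "docker": {
      "quay.io/example": [
        {
          "type": "sigstoreSigned",
          "keyPath": "/etc/pki/containers/cosign.pub",
          "signedIdentity": { "type": "matchRepository" }
        }
      ],
      "": [ { "type": "insecureAcceptAnything" } ]
    }
  }
}
EOF
# json.tool exits non-zero on invalid JSON
python3 -m json.tool /tmp/policy-test.json > /dev/null && echo "policy JSON is valid"
```

Note that the keyPath must point at wherever you actually copied the public key.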

You should now be good to go!

What about doing everything with podman?

Using this workflow, there is a (small) time window where the container images are pushed to the registry but not signed.

One option to avoid this problem would be to first push the container to a “temporary” tag, sign it, and then copy the signed container to the latest tag.

Another option is to use podman to push and sign the container image “at the same time”. However podman still needs to push the image first and then sign it, so there is still a possibility that signing fails and you’re left with an unsigned image (this happened to me during testing).

Unfortunately for us, the version of podman available in the version of Ubuntu used for the GitHub runners (22.04) is too old to support signing containers. We thus need to use a newer podman from a container image to work around this.

Here is the same workflow, adapted to push and sign using podman only:

name: "Build container using buildah, push and sign it using podman"

env:
  NAME: "example"
  REGISTRY: "quay.io/example"
  REGISTRY_DOMAIN: "quay.io"

on:
  pull_request:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  push:
    branches:
      - main
    paths:
      - 'example/**'
      - '.github/workflows/podman.yml'
  schedule:
    - cron: '0 0 * * MON'

permissions: read-all

# Prevent multiple workflow runs from racing to ensure that pushes are made
# sequentially for the main branch. Also cancel in progress workflow runs for
# pull requests only.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

jobs:
  build-push-image:
    runs-on: ubuntu-latest
    container:
      image: quay.io/travier/podman-action
      options: --privileged -v /proc/:/host/proc/:ro
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup QEMU for multi-arch builds
        shell: bash
        run: |
          for f in /usr/lib/binfmt.d/*; do cat $f | sudo tee /host/proc/sys/fs/binfmt_misc/register; done
          ls /host/proc/sys/fs/binfmt_misc

      - name: Build container image
        uses: redhat-actions/buildah-build@v2
        with:
          archs: amd64, arm64, ppc64le, s390x
          context: ${{ env.NAME }}
          image: ${{ env.NAME }}
          tags: latest
          containerfiles: ${{ env.NAME }}/Containerfile
          layers: false
          oci: true

      - name: Setup config to enable pushing Sigstore signatures
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        shell: bash
        run: |
          echo -e "docker:\n  ${{ env.REGISTRY_DOMAIN }}:\n    use-sigstore-attachments: true" \
            | sudo tee -a /etc/containers/registries.d/${{ env.REGISTRY_DOMAIN }}.yaml

      - name: Push to Container Registry
        # uses: redhat-actions/push-to-registry@v2
        uses: travier/push-to-registry@sigstore-signing
        if: (github.event_name == 'push' || github.event_name == 'schedule') && github.ref == 'refs/heads/main'
        with:
          username: ${{ secrets.BOT_USERNAME }}
          password: ${{ secrets.BOT_SECRET }}
          image: ${{ env.NAME }}

This uses two additional workarounds for missing features:

  • There is no official container image that includes both podman and buildah right now, thus I made one: github.com/travier/podman-action
  • The redhat-actions/push-to-registry Action does not support signing yet (issue#89). I’ve implemented support for self-managed key signing in pull#90. I’ve not looked at keyless signing yet.

You will also have to allow running my actions in the repository settings. In the “Actions permissions” section, you should use the following actions:

redhat-actions/*,
travier/push-to-registry@*,

Conclusion

The next steps are to figure out all the missing bits for keyless signing and replicate this entire process in GitLab CI.

Categories: FLOSS Project Planets

Qt Contributor’s Summit 2023

Tue, 2023-12-19 06:58

Earlier this month I traveled to wintry Berlin for the Qt Contributor’s Summit. After having contributed many patches to Qt in the past months in order to make the upcoming Plasma 6 really shine, I decided to attend for the first time this year to meet some more of the faces behind our beloved UI toolkit.

Welcome to Qt Contributor’s Summit 2023

The event took place over the course of two days adjacent to Qt World Summit at Estrel Hotel in Neukölln – a massive hotel, congress, and entertainment complex, and actually the largest one in Europe. It literally took me longer to walk from its main entrance to the venue than to get from Sonnenallee S-Bahn station to the entrance.

Thursday morning at 9:30 after registering and picking up our badges, Volker Hilsheimer of The Qt Group opened the event and gave a recap on the state of the Qt Project and where it’s headed. Following that was a panel discussion on how to attract more external contributors to Qt. Being a library consumed by applications rather than an end-user product on its own certainly makes it hard to excite people to contribute or give them a reason to scratch their own itch.

After a copious lunch we started diving into discussions and workshops, typically three tracks in parallel. They were usually scheduled for 30 minutes, which I found way too short for any kind of meaningful outcome. The first meeting I attended revolved around “Qt – Connected First” and how Qt networking APIs could be made more capable and easier to use, particularly in the realm of OAuth2 and JWT. The need for supporting the fetch API in QML was also emphasized. Next I joined “moc in 202x and beyond” with Fabian Kosmale, where we discussed ways the Meta Object Compiler (which gives you signals and slots, and is basically just a glorified pre-processor) could understand actual C++ language constructs. After that I listened to a discussion on improving the API review process of Qt.

Finally, there was a one hour slot by Volker on evolving QIcon and theming that came out very promising. Linux desktops for the longest time have had a standardized way of loading and addressing themed icons and Qt’s own icon infrastructure is built around that. In recent years, however, most other major platforms, particularly macOS and iOS, Android, and Windows, gained the ability to provide standardized assets for many icons typically found in an application. They even took it a step further and support additional hints for an icon, for example whether it should be filled or just an outline, rounded or sharp, varying levels of “progress” (e.g. the WiFi icon might consume a hint on what signal strength it should represent), and of course dynamic colorization.

Most of those icons are actually implemented using font glyphs and so-called font parameter “tags”. Qt 6.7 laid the ground work for manipulating those through a new QFont::Tag API and will ship with a “platform icon engine” for the aforementioned operating systems. In KDE we’re quite excited about it since we also dynamically colorize our Breeze icons based on the current color scheme. This is currently done by our own KIconEngine and will not work when run in other environments like Gnome or Android where we instead have to ship a dedicated “Breeze Dark” icon-set using white rather than black icons. There’s now also a QIcon::ThemeIcon struct containing a list of well-known icon names (such as “undo” or “open file”) which will map to the respective native icon name depending on the current platform. And if this wasn’t thrilling enough, Qt SVG also received some love and, among other things, gained support for various patterns and filters, including Gaussian blur.

… Walking in a Winter Wonderland …

We then headed out to a pizza place we didn’t believe would actually fit the thirty or so of us that were looking for dinner there. The next morning began with a presentation on Cyber Security by Kai Köhne and how to better deal with CVEs in Qt, since Qt also ships a lot of 3rd party code. This was then followed by Marc Mutz and a session on the state of C++20 in Qt and of course co-routines. After lunch we continued discussing the Cyber Security topic. Thereafter Thiago Macieira explained how broken QSharedMemory and friends actually are and that there’s no real way to salvage them. The biggest user of it seems to be QtSingleApplication, which I believe should actually be a core feature provided by Qt. There are also a few questionable uses within KDE, with the most important one being in the KSycoca / KService database.

I then switched rooms to a joint session about cleaning up QMetaType where we scrolled through the code a bit and tried to figure out what problem some of it is actually trying to solve. Finally, Fabian presented his work on extending the QML type system, most notably for supporting std::variant and, by proxy, std::optional. Currently, if an API accepts multiple types of input, such as “model” on a Repeater or ListView taking a proper QAbstractItemModel* as well as a simple Array of values or even just a plain number, this has to be implemented using a property of QVariant. This doesn’t make for self-documenting code and poses problems at runtime where, at best, assigning an unsupported type will print a warning on the console. Using std::variant one could declare the expected input up front. Likewise, rather than using QVariant to return undefined if no value exists, std::optional would make it obvious what the main type is but that it can be “nulled”. Furthermore, we discussed type-safe ways to declare the expected signature for a JS callback function such as the “provider” functions in TableView.

We then wrapped up the conference, collected our T-Shirts and whatever leftover merchandise and headed back towards home. Many thanks to Tobias Hunger and his family for their hospitality as well as The Qt Group for sponsoring my travel.

Categories: FLOSS Project Planets

Announcing Brise theme

Tue, 2023-12-19 05:00

Brise theme is yet another fork of Breeze. The name comes from the French and German translations of Breeze, which are both “Brise”.

As some people know, I’m contributing quite a lot to the Breeze style for the Plasma 6 release and I don’t intend to stop doing that. Both git repositories share the same git history, and I didn’t massively rename all the C++ classes from BreezeStyle to BriseStyle, to make it as easy as possible to backport commits from one repository to the other. There are also no plans to make this the new default style for Plasma.

My goal with this Qt style is to have a style that is not a big departure from the Breeze you know, but does contain some small cosmetic changes. It will serve as a place where I can experiment with new ideas and, if they prove popular, move them to Breeze.

Here is a breakdown of all the changes I made so far.

  • I made Brise coinstallable with Breeze, so that users can have both installed simultaneously. I minimized the changes to avoid merge conflicts while doing so.

  • I increased the border radius of all the elements from 3 pixels to 5 pixels. This value is configurable between small (3 pixels), medium (5 pixels) and large (7 pixels). A merge request was opened in Breeze and might make it into Plasma 6.1. The only difference is that in Breeze the default will likely remain 3 pixels for the time being.

Cute buttons and frames with 5 pixels border radius

  • Add a separator between the search field and the title in the standard KDE config windows, which serves as an extension of the separator between the list of settings categories and the settings page. This is mostly to be consistent with System Settings and other Kirigami applications. There is a pending merge request for this in Breeze as well.
  • A new tab style that removes the blue lines from the active tabs and introduces other small changes. Non-editable tabs now also fill the entire horizontal space available. I’m not completely happy with the look yet, so no merge request has been submitted to Breeze.

Separator in the toolbar and the new tabs

  • Remove outlines from menu and combobox items. My goal is to go in the same direction as KirigamiAddons.RoundedItemDelegate.

Menu without outlines

  • Ensure that all the controls have the same height. Currently a small disparity in height is noticeable when they are in the same row. The patch is still a bit hacky and needs some wider testing on a large range of apps to ensure no regressions, but it is also an improvement I will definitely submit upstream once I feel it’s ready.

Here, in these two screenshots, every control has 35 pixels as height.

Categories: FLOSS Project Planets

sudo without a `setuid` binary or SSH over a UNIX socket

Mon, 2023-12-18 18:00

In this post, I will detail how to replace sudo (a setuid binary) by using SSH over a local UNIX socket.

I am of the opinion that setuid/setgid binaries are a UNIX legacy that should be deprecated. I will explain the security reasons behind that statement in a future post.

This is related to the work of the Confined Users SIG in Fedora.

Why bother?

The main benefit of this approach is that it enables root access to the host from any unprivileged toolbox / distrobox container. This is particularly useful on Fedora Atomic desktops (Silverblue, Kinoite, Sericea, Onyx) or Universal Blue (Bluefin, Bazzite) for example.

As a side effect of this setup, we also get the following security advantages:

  • No longer rely on sudo as a setuid binary for privileged operations.
  • Access control via a physical hardware token (here a Yubikey) for each privileged operation.
Setting up the server

Create the following systemd units:

/etc/systemd/system/sshd-unix.socket:

[Unit]
Description=OpenSSH Server Unix Socket
Documentation=man:sshd(8) man:sshd_config(5)

[Socket]
ListenStream=/run/sshd.sock
Accept=yes

[Install]
WantedBy=sockets.target

/etc/systemd/system/sshd-unix@.service:

[Unit]
Description=OpenSSH per-connection server daemon (Unix socket)
Documentation=man:sshd(8) man:sshd_config(5)
Wants=sshd-keygen.target
After=sshd-keygen.target

[Service]
ExecStart=-/usr/sbin/sshd -i -f /etc/ssh/sshd_config_unix
StandardInput=socket

Create a dedicated configuration file /etc/ssh/sshd_config_unix:

# Deny all non key based authentication methods
PermitRootLogin prohibit-password
PasswordAuthentication no
PermitEmptyPasswords no
GSSAPIAuthentication no

# Only allow access for specific users
AllowUsers root tim

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys

# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server

Enable and start the new socket unit:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now sshd-unix.socket

Add your SSH Key to /root/.ssh/authorized_keys.

Setting up the client

Install socat and use the following snippet in ~/.ssh/config:

Host host.local
    User root
    # We use `run/host/run` instead of `/run` to transparently work in and out of containers
    ProxyCommand socat - UNIX-CLIENT:/run/host/run/sshd.sock
    # Path to your SSH key. See: https://tim.siosm.fr/blog/2023/01/13/openssh-key-management/
    IdentityFile ~/.ssh/keys/localroot
    # Force TTY allocation to always get an interactive shell
    RequestTTY yes
    # Minimize log output
    LogLevel QUIET

Test your setup:

$ ssh host.local
[root@phoenix ~]#

Shell alias

Let’s create a sudohost shell “alias” (function) that you can add to your Bash or ZSH config to make using this command easier:

# Get an interactive root shell or run a command as root on the host
sudohost() {
    if [[ ${#} -eq 0 ]]; then
        ssh host.local "cd \"${PWD}\"; exec \"${SHELL}\" --login"
    else
        ssh host.local "cd \"${PWD}\"; exec \"${@}\""
    fi
}

Test the alias:

$ sudohost id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
$ sudohost pwd
/var/home/tim
$ sudohost ls
Desktop  Downloads  ...

We’ll keep a distinct alias for now as we’ll still have a need for the “real” sudo in our toolbox containers.
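The quoting in the function above is doing real work: the whole remote command is a single string evaluated by the login shell on the host, with the caller’s working directory baked in. A testable sketch of just the string construction (build_remote_cmd is a hypothetical helper, not part of the alias):

```shell
# Build the command string that a sudohost-style alias would hand to ssh.
# With no arguments we request a login shell; otherwise we run the given
# command from the caller's current directory.
build_remote_cmd() {
    if [ "$#" -eq 0 ]; then
        printf 'cd "%s"; exec "%s" --login\n' "$PWD" "$SHELL"
    else
        printf 'cd "%s"; exec "%s"\n' "$PWD" "$*"
    fi
}

(cd /tmp && build_remote_cmd ls -l)   # prints: cd "/tmp"; exec "ls -l"
```

Note that arguments are joined with spaces into one string, so arguments that themselves contain spaces would need additional escaping before being sent over ssh.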

Security?

As-is, this setup is basically a free local root for anything running under your current user that has access to your SSH private key. However, this is likely already the case on most developers’ workstations: if you are part of the wheel, sudo or docker groups, any code running under your user can edit your shell config and set a backdoored alias for sudo, or run arbitrary privileged containers via Docker. sudo itself is not a security boundary as commonly configured by default.

To truly increase our security posture, we would instead need to remove sudo (and all other setuid binaries) and run our session under a fully unprivileged, confined user, but that’s for a future post.

Setting up U2F authentication with an sk-based SSH key-pair

To make it more obvious when commands are run as root, we can set up SSH authentication using U2F, with a Yubikey as an example. While this by itself does not, strictly speaking, increase the security of this setup, it makes it harder to run commands without you being somewhat aware of it.

First, we need to figure out which algorithms are supported by our Yubikey:

$ lsusb -v 2>/dev/null | grep -A2 Yubico | grep "bcdDevice" | awk '{print $2}'

If the value is 5.2.3 or higher, then we can use ed25519-sk, otherwise we’ll have to use ecdsa-sk to generate the SSH key-pair:

$ ssh-keygen -t ed25519-sk
# or
$ ssh-keygen -t ecdsa-sk
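The firmware check above can also be scripted with a version-aware sort. A small sketch (pick_sk_type is a hypothetical helper, not from the original post):

```shell
# Pick the SSH key type based on the Yubikey firmware version (bcdDevice):
# ed25519-sk needs firmware 5.2.3 or newer, older keys fall back to ecdsa-sk.
pick_sk_type() {
    # sort -V orders version strings; if 5.2.3 sorts first (or ties),
    # the reported firmware is at least 5.2.3
    if [ "$(printf '%s\n' "5.2.3" "$1" | sort -V | head -n1)" = "5.2.3" ]; then
        echo "ed25519-sk"
    else
        echo "ecdsa-sk"
    fi
}

pick_sk_type "5.4.3"   # prints: ed25519-sk
pick_sk_type "5.1.2"   # prints: ecdsa-sk
```

The result could then be fed straight into key generation with ssh-keygen -t "$(pick_sk_type "<version>")".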

Add the new sk-based SSH public key to /root/.ssh/authorized_keys.

Update the server configuration to only accept sk-based SSH key-pairs:

/etc/ssh/sshd_config_unix:

# Only allow sk-based SSH key-pair authentication methods
PubkeyAcceptedKeyTypes sk-ecdsa-sha2-nistp256@openssh.com,sk-ssh-ed25519@openssh.com
...

Restricting access to a subset of users

You can also further restrict the access to the UNIX socket by configuring classic user/group UNIX permissions:

/etc/systemd/system/sshd-unix.socket:

...
[Socket]
...
SocketUser=tim
SocketGroup=tim
SocketMode=0660
...

Then reload systemd’s configuration and restart the socket unit.
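For reference, reloading and restarting boils down to the usual systemd commands:

```
$ sudo systemctl daemon-reload
$ sudo systemctl restart sshd-unix.socket
```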

Next steps: Disabling sudo

Now that we have a working alias to run privileged commands, we can disable sudo access for our user.

Important backup / pre-requisite step

Make sure that you have a backup and are able to boot from a LiveISO in case something goes wrong.

Set a strong password for the root account. Make sure that you can locally log into the system via a TTY console.

If you have the classic sshd server enabled and listening on the network, make sure to disable remote root logins and password logins.
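For the network-facing sshd configuration (usually /etc/ssh/sshd_config, not the UNIX-socket config created above), that means having at least the following fragment:

```
PermitRootLogin no
PasswordAuthentication no
PermitEmptyPasswords no
```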

Removing yourself from the wheel / sudo groups

Open a terminal running as root (i.e. don’t use sudo for those commands) and remove your user from the wheel or sudo groups using:

$ usermod -rG wheel tim

You can also update the sudo config to remove access for users that are part of the wheel group:

# Comment / delete this line
%wheel ALL=(ALL) ALL

Removing the setuid binaries

To fully benefit from the security advantage of this setup, we need to remove the setuid binaries (sudo and su).

If you can, uninstall sudo and su from your system. This is usually not possible due to package dependencies (su is part of util-linux on Fedora).

Another option is to remove the setuid bit from the sudo and su binaries:

$ chmod u-s $(which sudo)
$ chmod u-s $(which su)

You will have to re-run those commands after each update on classic systems.
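On classic systems this could be automated with a small oneshot unit; a sketch (the unit name and binary paths are assumptions for a typical Fedora layout, adjust for your distribution):

```
# /etc/systemd/system/remove-setuid.service (hypothetical)
[Unit]
Description=Remove the setuid bit from sudo and su

[Service]
Type=oneshot
ExecStart=/usr/bin/chmod u-s /usr/bin/sudo /usr/bin/su

[Install]
WantedBy=multi-user.target
```

Pairing it with a systemd timer or a package-manager hook would re-apply the change after each update.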

Setting this up for Fedora Atomic desktops is a little bit different as /usr is read only. This will be the subject of an upcoming blog post.

Conclusion

Like most of the time with security, this is not a silver-bullet solution that will make your system “more secure” (TM). I have been working on this setup as part of my investigation into reducing our reliance on setuid binaries and figuring out alternatives for common use cases.

Let me know if you found this interesting as that will likely motivate me to write the next part!

Categories: FLOSS Project Planets

XWayland Video Bridge 0.4

Mon, 2023-12-18 10:42

An updated stable release of XWayland Video Bridge is out now for packaging.

https://download.kde.org/stable/xwaylandvideobridge/

sha256 ea72ac7b2a67578e9994dcb0619602ead3097a46fb9336661da200e63927ebe6

Signed by E0A3EB202F8E57528E13E72FD7574483BB57B18D Jonathan Esk-Riddell <jr@jriddell.org>
https://jriddell.org/esk-riddell.gpg

Changes

  • Also skip the switcher
  • Do not start in an X11 session and opt out of session management
Categories: FLOSS Project Planets

conf.kde.in 2024 in Pune, Maharashtra

Sun, 2023-12-17 19:00

conf.kde.in 2024 will be hosted at COEP Technological University, Pune.

conf.kde.in aims to attract new KDE Community members as well as seasoned developers. The contents of the conference provide updates on what is going on in the KDE Community and teach newcomers how to start making meaningful contributions.

This event attracts speakers from all over India. It provides students with an excellent opportunity to interact with established open-source contributors, as well as developers from various industries working on open-source projects in the fields of automotive, embedded, mobile, and more.

conf.kde.in was started in 2011 at the R V College of Engineering, Bangalore by a group of Indian KDE contributors. Since then we have hosted six different editions, each in different universities and venues:

  • 2013, KDE Meetup, DA-IICT, Gandhinagar
  • conf.kde.in 2014, DA-IICT, Gandhinagar
  • conf.kde.in 2015, Amrita University, Kerala
  • conf.kde.in 2016, LNMIIT, Jaipur
  • conf.kde.in 2017, IIT Guwahati
  • conf.kde.in 2020, Maharaja Agrasen Institute of Technology, Delhi

Past events have been successful in attracting Indian students to mentoring programs such as Google Summer of Code (GSoC), Season of KDE, and Google Code-In. These programs have often successfully helped students become lifelong contributors to open-source communities, including KDE.

Call for papers

We are asking for talk proposals on topics relevant to the KDE community and technology, as well as workshops or presentations targeted towards new contributors.

Interesting topics include (but are not restricted to):

  • New people and organisations that are discovering KDE
  • Working towards giving people more digital freedom and autonomy with KDE
  • New technological developments in KDE
  • Guides on how to participate for new, intermediate and expert users
  • Showcases of projects by students participating in KDE mentorship programs
  • Anything else that might interest the audience

Have you got something interesting to present? Let us know about it! If you know of someone else who should present, encourage them to do so too.

Please submit your papers at the following link

COEP Technological University

COEP Technological University is a unitary public university of the Government of Maharashtra, situated on the banks of Mula river, Pune, Maharashtra, India. Established in 1854, it is the 3rd oldest engineering institute in India.

College of Engineering, Pune (COEP), chartered in 1854 is a nationally respected leader in technical education. The institute is distinguished by its commitment to finding solutions to the great predicaments of the day through advanced technology. The institute has a rich history and dedication to the pursuit of excellence. COEP offers a unique learning experience across a spectrum of academic and social experiences. With a firm footing in truth and humanity, the institute gives you an understanding of both technical developments and the ethics that go with it. The curriculum is designed to enhance your academic experience through opportunities like internships, study abroad programmes and research facilities. The hallmark of COEP education is its strong and widespread alumni network, support of the industry and the camaraderie that the institute shares with several foreign universities.

On 23 June 2022, the Government of Maharashtra issued a notification regarding the conversion of the college into an independent technological university.

CoFSUG

COEP's Free Software Users Group (CoFSUG) is a volunteer group at COEP Technological University that promotes the use and development of free and open source software. CoFSUG runs the FOSS Lab, FOSS Server, COEP Moodle, COEP LDAP Server, and COEP Wiki. The group carries out activities like installation festivals to teach GNU/Linux; workshops on Linux administration, Python, Drupal, and FOSS technologies; promoting software development under the GNU GPLv3 license; and Freedom Fridays to spread the FOSS philosophy. In the past it has hosted the Fedora Users and Developers Conference 2011, Android app development workshops, the spoken tutorials program from IIT Bombay, and summer coding projects for students on FOSS contributions. CoFSUG organizes FLOSSMEET, a flagship event to create awareness and encourage the use of free and open-source software. Previous editions hosted talks on technologies such as Docker, Kubernetes, open-source licenses, OSINT, and Rust, to name a few.

Overall, CoFSUG aims to advance the use of free and open source solutions in academia through hands-on training, student projects, and evangelism about the FOSS philosophy.

Categories: FLOSS Project Planets

An update on HDR and color management in KWin

Sun, 2023-12-17 19:00

In my last post about HDR and color management I explained roughly how color management works, what we’re doing to make it work on Wayland and how far along we were with that in Plasma. That’s more than half a year ago now, so let’s take a look at what changed since then!

Color management with ICC profiles

KWin now supports ICC profiles: In display settings you can set one for each screen, and KWin will use that to adjust the colors accordingly.

Applications are still limited to sRGB for now. For Wayland native applications, a color management protocol is required to change that, so that apps can know about the colorspaces they can use and KWin can know which colorspace their windows are actually using. The upstream color management protocol for that is still not done yet, but it’s getting close! For example, I have an implementation for it in a KWin branch, and Victoria Brekenfeld from System76 implemented a Vulkan layer using the protocol to allow applications to use the VK_EXT_swapchain_colorspace and VK_EXT_hdr_metadata Vulkan extensions, which can be used to run some applications and games with non-sRGB colorspaces.

Apps running through Xwayland are strictly limited to sRGB too, even if they have the ability to work with ICC profiles, as they have the same problem as Wayland native apps: Outside of manual overrides with application settings there’s no way to tell them to use a specific ICC profile or colorspace, and there’s also no way for KWin to know which profile or colorspace the application is using. Even if you set an ICC profile with an application setting, KWin still doesn’t know about that, so the colors will be wrong.

It would be possible to introduce an “API” using X11 atoms to make at least the basic arbitrary primaries + sRGB EOTF case work though, so if any developers of apps that are still stuck with X11 for the foreseeable future would be interested in that, please contact me about it!

HDR

In Plasma 6 you can enable HDR in the display settings, which enables static HDR metadata signalling for the display, with the PQ EOTF and the display’s preferred brightness values, and sets the colorspace to rec.2020.

With that enabled, you get two additional settings:

  • “SDR Brightness” is, as the name suggests, the brightness KWin renders non-HDR stuff at, and effectively replaces the brightness setting that most displays disable when they’re in HDR mode
  • “SDR Color Intensity” is inspired by the color slider on the Steam Deck. For sRGB applications it scales the color gamut up to (at 100%) rec.2020, or more simply put, it makes the colors of non-HDR apps more intense, to counteract the bad gamut mapping many HDR displays do and make colors of SDR apps look more like when HDR is disabled

There are some additional hidden settings to override bad brightness metadata from displays too. A GUI for that is still planned, but until it’s done you can use kscreen-doctor to override the brightness values your screen provides.

KWin now also uses gamma 2.2 instead of the sRGB piece-wise transfer function for sRGB applications, as that more closely matches what displays actually do in SDR mode. This means that in the dark regions of sRGB content things will now look like they do with HDR disabled, instead of being a little bit brighter and looking kind of washed out.


My last post ended at this point, with me saying that looking at boring sRGB apps in HDR mode would be all you could do for now… well, not anymore! While I already mentioned that Xwayland apps are restricted to sRGB, gamescope uses a Vulkan layer together with a custom Wayland protocol to bypass Xwayland almost entirely. This is how HDR is done on the Steam Deck OLED and it works well, so all that was still missing is a way for gamescope to pass the buffers and HDR metadata on to KWin.

To make that happen, Joshua Ashton from Valve and I put together a small Wayland protocol for doing HDR until the upstream protocol is done. I implemented it in KWin, forked Victoria’s Vulkan layer to make my own using that protocol and Joshua implemented HDR support for gamescope nested with the Vulkan extensions implemented by the layer.

The Plasma 6 beta is already shipping with that implementation in KWin, and with a few additional steps you can play most HDR capable games in the Wayland session:

  1. install the Vulkan layer
  2. install gamescope git master, or at least a version that’s new enough to have this commit
  3. run Steam with the following command: ENABLE_HDR_WSI=1 gamescope --hdr-enabled --hdr-debug-force-output --steam -- env ENABLE_GAMESCOPE_WSI=1 DXVK_HDR=1 DISABLE_HDR_WSI=1 steam

To explain a bit what that does, ENABLE_HDR_WSI=1 enables the Vulkan layer, which is off by default. gamescope --hdr-enabled --hdr-debug-force-output --steam runs gamescope with HDR force-enabled (automatically detecting and using HDR instead of that is planned) and with Steam integration, and env ENABLE_GAMESCOPE_WSI=1 DXVK_HDR=1 DISABLE_HDR_WSI=1 steam runs Steam with gamescope’s Vulkan layer enabled and mine disabled, so that they don’t conflict.

You can also adjust the brightness of SDR content in gamescope with the --hdr-sdr-content-nits flag; for a full list of options, check gamescope --help. The full command I’m using for my screen is

ENABLE_HDR_WSI=1 gamescope --fullscreen -w 5120 -h 1440 --hdr-enabled --hdr-debug-force-output --hdr-sdr-content-nits 600 --steam -- env ENABLE_GAMESCOPE_WSI=1 DXVK_HDR=1 DISABLE_HDR_WSI=1 steam -bigpicture

With that, Steam starts in a nested gamescope instance and games started in it have working HDR, as long as they use Proton 8 or newer. When this was initially implemented as a bunch of hacks at XDC this year, there were issues with a few games, but right now almost all the HDR-capable games in my own Steam library work fine and look good in HDR. That includes

  • Ori and the Will of the Wisps
  • Cyberpunk 2077
  • God of War
  • Doom Eternal
  • Jedi: Fallen Order
  • Quake II RTX

Quake II RTX doesn’t even need gamescope! With ENABLE_HDR_WSI=1 SDL_VIDEODRIVER=wayland in the launch options for the game in Steam you can get a Linux- and Wayland-native game working with HDR, which is pretty cool.
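For reference, Steam’s per-game launch options (right click the game → Properties → Launch Options) substitute %command% with the actual game command, so the corresponding entry would look something like this (assuming the Vulkan layer from step 1 above is installed):

```
ENABLE_HDR_WSI=1 SDL_VIDEODRIVER=wayland %command%
```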

I also have Spider-Man: Remastered and Spider-Man: Miles Morales, in both of which HDR also ‘works’, but looks quite washed out compared to SDR. I found complaints about the same problem from Windows users online, so I assume the HDR implementation in the games themselves is just bad and there’s nothing wrong with the graphics stack around them.

(Sorry for showing SDR screenshots of HDR games, but HDR screenshots aren’t implemented yet, and it looks worse when I take a picture of the screen with my phone)


With the same Vulkan layer, you can also run other HDR-capable applications, like for example mpv:

ENABLE_HDR_WSI=1 mpv --vo=gpu-next --target-colorspace-hint --gpu-api=vulkan --gpu-context=waylandvk "path/to/video"

This time the video being played is actually HDR, without any hacks! And while my phone camera isn’t great at capturing HDR content in general, this is one of the cases where you can really see how HDR is actually better than SDR, especially on an OLED display:


The future

Obviously there is still a lot to do. Color management is limited to either sRGB or full-blown rec.2020, you shouldn’t have to install things from GitHub yourself, and you certainly shouldn’t have to mess around with the command line[4] to play games and watch videos in HDR. HDR screenshots and HDR screen recording aren’t a thing yet, and many other small and big things need implementing or fixing. There’s a lot of work needed to make these things just work™ as they should outside of special cases like the gamescope embedded session.

Things are moving fast though, and I’m pretty happy with the progress we made so far.




  1. Note that if you do want or need to set an ICC profile in the application for some reason, setting an sRGB profile as the display profile is wrong. It must have rec.709 primaries, but with the gamma 2.2 transfer function instead of the piece-wise sRGB one so often used! 

  2. This is the reason for footnote 1 

  3. As far as I know, Windows 11 still does this wrong! 

  4. I recommend setting up .desktop files to automate that away if you want to use HDR more often. Right click on the application launcher in Plasma -> Edit Applications makes that pretty easy 
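     As a concrete sketch of that (file name and entry name are my own; this assumes the Vulkan layer and gamescope are installed as described above), a .desktop file wrapping the full command could be saved as e.g. ~/.local/share/applications/steam-hdr.desktop:

     ```ini
     [Desktop Entry]
     Type=Application
     Name=Steam (HDR)
     Comment=Steam in a nested gamescope session with HDR enabled
     Exec=env ENABLE_HDR_WSI=1 gamescope --fullscreen -w 5120 -h 1440 --hdr-enabled --hdr-debug-force-output --hdr-sdr-content-nits 600 --steam -- env ENABLE_GAMESCOPE_WSI=1 DXVK_HDR=1 DISABLE_HDR_WSI=1 steam -bigpicture
     Terminal=false
     Categories=Game;
     ```

     Adjust the -w/-h resolution and --hdr-sdr-content-nits values to your own screen.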

Categories: FLOSS Project Planets

KDE Android apps porting update

Sat, 2023-12-16 05:30

Following the recent posts on porting KDE Android applications to KF6, CI/CD changes for APKs and changes to packaging of KDE Android apps here’s an update on where we are meanwhile with switching KDE’s Android apps to KF6.

Infrastructure improvements
Style plugin packaging fixes

There have been two crucial fixes which finally give us generally starting and visually correct apps:

  • Fixed loading of the Kirigami QML plugin (MR 1394).
  • Fixed bundling of the Breeze Qt Quick Controls style with androiddeployqt (MR 1395 and MR 84).
KTrip with Breeze style and working translations.
APK optimizations

The Qt 6 based APKs tend to be noticeably larger than what we used to have with Qt 5. There’s multiple reasons for that:

  • Depending on the Qt5Compat graphical effects module costs about 4 MB of storage / 1.5 MB of download size due to its runtime dependency on the shader compilation pipeline. A handful of remaining uses have meanwhile been removed from Kirigami, with MR 1407 being the last one.
  • Remaining uses of the Qt.labs.platform module pull in QtWidgets as a dependency. For Qt 5 we used to have a patch to avoid that, for Qt 6 we need to find every single use and port away from that. That’s another 6.5 MB storage / 2.5 MB download size.
  • We used to have a much more minimal Craft default configuration for Qt 5. This has changed as more apps ended up needing a more diverse set of dependencies (such as the full ffmpeg stack for Tokodon, or support for signed PDFs in Okular), as well as Craft itself having become a more widely used part of our infrastructure on all platforms. Work on enabling application-specific minimized builds of particular heavy dependencies is ongoing (e.g. MR 740 or MR 743).

The above mentioned numbers might seem small on their own, but they have to be seen against typically 20-30 MB APKs and cover things that are largely unused, i.e. this is an easy 10-20% gain that doesn’t just benefit every user but also our build and distribution infrastructure.

Permission checks

On Android, applications need to explicitly request permissions for accessing certain components or information, such as the camera or the current location. That’s neither new nor related to the transition to Qt 6, but re-testing a fresh installation on the latest Android version has exposed a few places where we missed these runtime permission requests.

Qt 6 has new API for doing this, but in Qt 6.5 it is still somewhat cumbersome to use, as it’s only available on platforms with a permission system. While you can ifdef that in C++, it’s very inconvenient in QML, and either way not good for easy and maintainable code (see e.g. the use in KWeatherCore).

With Qt 6.6 this improved significantly: the permission API is now available unconditionally on all platforms and simply always reports “permission already granted” on platforms without a permission system, resulting in much simpler code (see e.g. the use in Qrca).
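As an illustrative sketch of that Qt 6.6 pattern (not Qrca’s actual code; this is the QML permission API from the QtCore module as I understand it), requesting camera access boils down to:

```qml
import QtQuick
import QtCore

Item {
    // Declarative permission object; on platforms without a permission
    // system its status is simply Granted from the start, so the same
    // code works unconditionally on desktop and Android.
    CameraPermission {
        id: cameraPermission
        onStatusChanged: if (status === Qt.PermissionStatus.Granted)
                             console.log("camera available, start scanning")
    }

    Component.onCompleted: {
        if (cameraPermission.status !== Qt.PermissionStatus.Granted)
            cameraPermission.request()
    }
}
```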

Qrca requesting camera permissions.
Video processing

An issue that turned out to be particularly challenging has been a crash in KF’s barcode scanner when used after the application had been suspended at any point previously. On Android, apps are suspended whenever they are no longer in the foreground, so this is a very common scenario, and we have at least 4 apps with integrated barcode scanning.

Meanwhile we have at least identified the underlying cause in Qt Multimedia code, where some RHI state invalidated by suspension isn’t properly restored/reset after resuming. A crude patch addressing that is now in review; whether it turns out to be the right way to fix this remains to be seen.

Dark mode support

Something we had been missing so far and that became easier to implement with Qt 6 is automatic support for the dark UI mode. While we already have the style for this, we so far lacked the detection and automatic switching logic on Android; that has changed now:

There’s still one missing piece before we can enable this unconditionally though: we lack support for icon re-coloring on Android, or alternatively for bundling the Breeze Dark icons.

Alligator with Breeze Dark color scheme but hardly visible icons.
Emoji glyph rendering

A rather severe issue for apps like NeoChat or Tokodon was the lack of emoji glyph rendering in Qt 6 apps. This was a result of bundling fewer dependencies of Qt and building those libraries ourselves instead, which left us with a FreeType build that lacked support for PNG glyphs.

Once identified, this was a matter of changing a few settings in Craft, plus a minor patch to libpng’s pkg-config file.

KTrip backend selection page with emoji flags.
Application status
Working

For some applications we meanwhile have continuous F-Droid publishing in the nightly repository enabled again, using “works at least as well as the previous Qt 5 nightly build” as the decision criterion:

In progress

There are a few apps that have generally working Qt 6 APKs but are blocked on specific issues:

  • Alligator fails to add and fetch feeds.
  • Qrca and Vakzination are both blocked on the above mentioned video processing issue.
  • NeoChat still shows visual glitches in avatar textures and is also affected by the barcode scanner crash.
No Qt 6 APK builds yet

The third group are apps in various stages of porting to Qt 6, with no port of their APK builds yet:

Itinerary

Personally, my goal here is of course to get Itinerary working with Qt 6 in time for the 24.02 release, without disrupting users of the nightly build F-Droid repository. That’s why my focus has been on apps containing or using building blocks relevant for Itinerary (such as all of the above mentioned infrastructure issues).

With all of that in place, the actual port of Itinerary itself will hopefully not be such a big deal anymore, in particular with the nasty barcode scanning crash addressed.

How you can help

You can help with porting and polishing the apps still needing work! Here’s the checklist I tend to follow for that:

  • The application is fully ported to Qt 6 and works on Linux with QT_QUICK_CONTROLS_MOBILE=1 and QT_QUICK_CONTROLS_STYLE set to org.kde.breeze or Material respectively.
  • There is a passing Android CI job.
  • The APK packaging has been ported as described here.
  • There is a CI job building the APK, as described here.
  • The APK actually starts on a phone or inside the emulator. With all the above mentioned work this is rarely a problem anymore, but it can still happen, for example due to a missing dependency.
  • There is an application-specific Craft exclusion list as described here.
  • Check that all icons show up as they do on the desktop. Bundling icons is a rather error-prone aspect of APK packaging.
  • If the app uses anything requiring special permissions (camera, location, notifications, etc.), check whether those are properly requested. This benefits from testing on a very recent Android version.
  • Check if continuous publishing to the nightly F-Droid repository is enabled here.

If you want to help and/or have questions, feel free to join us in the KDE Android Matrix channel.

Categories: FLOSS Project Planets
