Feeds
Vincent Bernat: Non-interactive SSH password authentication
SSH offers several forms of authentication, such as passwords and public keys. The latter are considered more secure. However, password authentication remains prevalent, particularly with network equipment.[1]
A classic solution to avoid typing a password for each connection is sshpass, or its more correct variant passh. Here is a wrapper for Zsh, getting the password from pass, a simple password manager[2]:
    pssh() {
      passh -p <(pass show network/ssh/password | head -1) ssh "$@"
    }
    compdef pssh=ssh

This approach is a bit brittle as it requires parsing the output of the ssh command to look for a password prompt. Moreover, if no password is required, the password manager is still invoked. Since OpenSSH 8.4, we can use SSH_ASKPASS and SSH_ASKPASS_REQUIRE instead:
    ssh() {
      set -o localoptions -o localtraps
      local passname=network/ssh/password
      local helper=$(mktemp)
      trap "command rm -f $helper" EXIT INT
      > $helper <<EOF
    #!$SHELL
    pass show $passname | head -1
    EOF
      chmod u+x $helper
      SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
    }

If the password is incorrect, we can display a prompt on the second attempt:
    ssh() {
      set -o localoptions -o localtraps
      local passname=network/ssh/password
      local helper=$(mktemp)
      trap "command rm -f $helper" EXIT INT
      > $helper <<EOF
    #!$SHELL
    if [ -k $helper ]; then
      {
        oldtty=\$(stty -g)
        trap 'stty \$oldtty < /dev/tty 2> /dev/null' EXIT INT TERM HUP
        stty -echo
        print "\rpassword: "
        read password
        printf "\n"
      } > /dev/tty < /dev/tty
      printf "%s" "\$password"
    else
      pass show $passname | head -1
      chmod +t $helper
    fi
    EOF
      chmod u+x $helper
      SSH_ASKPASS=$helper SSH_ASKPASS_REQUIRE=force command ssh "$@"
    }

A possible improvement is to use a different password entry depending on the remote host[3]:
    ssh() {
      # Grab login information
      local -A details
      details=(${=${(M)${:-"${(@f)$(command ssh -G "$@" 2>/dev/null)}"}:#(host|hostname|user) *}})
      local remote=${details[host]:-details[hostname]}
      local login=${details[user]}@${remote}
      # Get password name
      local passname
      case "$login" in
        admin@*.example.net)  passname=company1/ssh/admin ;;
        bernat@*.example.net) passname=company1/ssh/bernat ;;
        backup@*.example.net) passname=company1/ssh/backup ;;
      esac
      # No password name? Just use regular SSH
      [[ -z $passname ]] && {
        command ssh "$@"
        return $?
      }
      # Invoke SSH with the helper for SSH_ASKPASS
      # […]
    }

It is also possible to make scp invoke our custom ssh function:
    scp() {
      set -o localoptions -o localtraps
      local helper=$(mktemp)
      trap "command rm -f $helper" EXIT INT
      > $helper <<EOF
    #!$SHELL
    source ${(%):-%x}
    ssh "\$@"
    EOF
      command scp -S $helper "$@"
    }

For the complete code, have a look at my zshrc.
[1] First, some vendors make it difficult to associate an SSH key with a user. Then, many vendors do not support certificate-based authentication, making it difficult to scale. Finally, interactions between public-key authentication and finer-grained authorization methods like TACACS+ and RADIUS are still uncharted territory.

[2] The clear-text password never appears on the command line, in the environment, or on the disk, making it difficult for a third party without elevated privileges to capture it. On Linux, Zsh provides the password through a file descriptor.

[3] To decipher the fourth line, you may get help from print -l and the zshexpn(1) manual page. details is an associative array defined from an array alternating keys and values.
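For illustration (the values here are made up), command ssh -G somehost prints the resolved client configuration one keyword per line, something like:

    user bernat
    hostname somehost.example.net
    port 22

The expansion on that fourth line keeps only the host, hostname and user lines and splits them into the details associative array.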
Thorsten Alteholz: My Debian Activities in October 2023
This month I accepted 361 and rejected 34 packages. The overall number of packages that got accepted was 362.
Debian LTS

This was my hundred-twelfth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.
During my allocated time I uploaded:
- [DLA 3615-1] libcue security update for one CVE to fix an out-of-bounds array access.
- [DLA 3631-1] xorg-server security update for two CVEs. These were embargoed issues related to privilege escalation.
- [DLA 3633-1] gst-plugins-bad1.0 security update for three CVEs to fix possible DoS or arbitrary code execution when processing crafted media files.
- [1052361] bookworm-pu: the upload has been done and processed for the point release
- [1052363] bullseye-pu: the upload has been done and processed for the point release
Unfortunately upstream still could not resolve whether the patch for CVE-2023-42118 of libspf2 is valid, so no progress happened here.
I also continued to work on bind9 and tried to understand why some tests fail.
Last but not least I did some days of frontdesk duties and took part in the LTS meeting.
Debian ELTS

This month was the sixty-third ELTS month. During my allocated time I uploaded:
- [ELA-978-1] cups update in Jessie and Stretch for two CVEs. One issue is related to missing boundary checks which might lead to code execution when using crafted PostScript documents. The other issue is related to unauthorized access to recently printed documents.
- [ELA-990-1] xorg-server update in Jessie and Stretch for two CVEs. These were embargoed issues related to privilege escalation.
- [ELA-993-1] gst-plugins-bad1.0 update in Jessie and Stretch for three CVEs to fix possible DoS or arbitrary code execution when processing crafted media files.
I also continued to work on bind9 and, as with the version in LTS, tried to understand why some tests fail.
Last but not least I did some days of frontdesk duties.
Debian Printing

This month I uploaded a new upstream version of:
- … rlpr
Within the context of preserving old printing packages, I adopted:
- … lprng
If you know of any other package that is also needed and still maintained by the QA team, please tell me.
I also uploaded new upstream versions of packages or uploaded packages to fix one or the other issue:
- … cups
- … hannah-foo2zjs
This work is generously funded by Freexian!
Debian Mobcom

This month I uploaded a package to fix one or the other issue:
- … osmo-pcu: the bug was filed by Helmut and was related to /usr-merge.
This month I uploaded new upstream versions of packages, did a source upload for the transition, or uploaded packages to fix one or the other issue:
Meet the Kdenlive team in Zürich
The Kdenlive team will hold a Sprint in Zürich, Switzerland, between the 10th and the 12th of November 2023. During these days, we will discuss our roadmap, including the Qt6 strategy and the refactoring of the effects workflow (part of last year’s fundraising campaign), among other topics.
You will also have the opportunity to join us for a public event on Saturday the 11th of November between 4PM and 6PM (CET) at Bitwäscherei, Neue Hard 12, CH-8005 Zürich. If you are interested in learning more about Kdenlive, want to meet our team or help the project, save the date and join us!
We will also host an online Kdenlive Café around 6:30PM on the same day for those who cannot come to Zürich.
This event is made possible thanks to KDE e.V. We are looking forward to meeting you there!
Marcos Dione: automating-blender-based-hillshading-with-python
Remember my Blender-based hillshading? I promised to try to automate it, right? Well, it seems I have the interest and stamina now, so that's what I'm doing. But boys and girls and anything in between and beyond, the stamina is waning and the culprit is Blender's internals being exposed as a non-Pythonic API[3]. I swear, if I worked in anything remotely close to this, I would be writing a wrapper for all this. But in the meantime, it's all a discovery path to something that does not resemble a hack. Just read some of Blender's Python Quickstart:
When you are familiar with other Python APIs you may be surprised that new data-blocks in the bpy API cannot be created by calling the class:
    >>> bpy.types.Mesh()
    Traceback (most recent call last):
      File "<blender_console>", line 1, in <module>
    TypeError: bpy_struct.__new__(type): expected a single argument

This is an intentional part of the API design. The Blender Python API can’t create Blender data that exists outside the main Blender database (accessed through bpy.data), because this data is managed by Blender (save, load, undo, append, etc).
Data is added and removed via methods on the collections in bpy.data, e.g.:
    mesh = bpy.data.meshes.new(name="MyMesh")

That is, instead of making the constructor call this internal API, they make it fail miserably and force you to use the internal API! Today I was mentioning that Asterisk's programming language was definitely designed by a Telecommunications Engineer, so I guess this one was designed by a 3D artist? But I digress...
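To make the contrast concrete, here is a minimal sketch of the full dance for getting a new mesh object into a scene through bpy.data and explicit linking (the names "MyMesh" and "MyObject" are made up for the example):

    import bpy

    # Data-blocks are created through the collections in bpy.data...
    mesh = bpy.data.meshes.new(name="MyMesh")
    obj = bpy.data.objects.new("MyObject", mesh)

    # ...and stay invisible until explicitly linked into a scene collection.
    bpy.context.scene.collection.objects.link(obj)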
One of the first things about Blender's internals is that one way to work is based on Contexts. This makes sense when developing plugins, where you mostly need to apply things to the selected object[4], but for someone really building everything from scratch like I need to, it feels weird.
One of the advantages is that you can open a Python console and let Blender show you the calls it makes for every step you take in the UI, but it's so context based that the result is useless as a script. Or for instance, linking the output of one node into the input of another is registered as a drag-and-drop call that includes the distance the mouse moved during the drag, so it's relative to the output dot where you started, and what it links to also depends on the physical, not logical, position of the things you're linking:
    bpy.ops.node.link(detach=False, drag_start=(583.898, 257.74))

It takes quite a lot of digging around in a not very friendly REPL[1] with limited scrollback and not much documentation to find more reproducible, less context dependent alternatives. This is what's eating up my stamina; it's not so fun anymore. Paraphrasing someone on Mastodon: what use is a nice piece of Open Software if its documentation is not enough to be useful[2]?
Another very important thing is that all objects have two views: one that has generic properties like position and rotation, which can be reached via bpy.data.objects; and one that has specific properties like a light's power or a camera's lens angle, which can be reached via e.g. bpy.data.cameras. This was utterly confusing, especially since all of bpy.data's documentation is 4 lines long. Later I found out you can get the specific data from the generic one in the .data attribute, so the takeaway is: always get your objects from bpy.data.objects.
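A minimal sketch of the two views, assuming the object names of a default Blender scene ("Camera" and "Light"):

    import bpy

    # Generic view: position, rotation and friends live on the object.
    camera = bpy.data.objects["Camera"]
    camera.location = (0.0, 0.0, 10.0)

    # Specific view, reached through .data: this is the same data-block
    # you would find in bpy.data.cameras.
    camera.data.lens = 35.0

    light = bpy.data.objects["Light"]
    light.data.energy = 1000.0  # light power; the data-block lives in bpy.data.lights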
Once we get over that issue, things are quite straightforward, but not necessarily easy. The script as it is can already be used with blender --background --python <script_file>, but bear in mind that when you do that, you start with the default generic 3D setup, with a light, a camera and a cube. You have to delete the cube, but you can get a reference to the other two to reuse them.
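A sketch of that cleanup, again assuming the default startup scene's object names:

    import bpy

    # Drop the default cube; do_unlink also removes it from the scene.
    cube = bpy.data.objects.get("Cube")
    if cube is not None:
        bpy.data.objects.remove(cube, do_unlink=True)

    # Keep references to the default light and camera for reuse.
    light = bpy.data.objects["Light"]
    camera = bpy.data.objects["Camera"]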
Then comes the administrative stuff around just rendering the scene. To industrialize it and be able to quickly test stuff, you can add command line options. You can use Python's argparse module for this, but bear in mind that those --background --python blender.py options are going to be passed to the script too, so you either ignore unknown options or you declare those as well:
    mdione@ioniq:~/src/projects/elevation$ blender --background --python blender.py
    Blender 3.6.2
    Read prefs: "/home/mdione/.config/blender/3.6/config/userpref.blend"
    usage: blender [-h] [--render-samples RENDER_SAMPLES] [--render-scale RENDER_SCALE] [--height-scale HEIGHT_SCALE] FILE
    blender: error: unrecognized arguments: --background --python

Also, those options are going to be passed to Blender! So at the end of your run, Blender is going to complain that it doesn't understand your options:
    unknown argument, loading as file: --render-samples
    Error: Cannot read file "/home/mdione/src/projects/elevation/--render-samples": No such file or directory
    Blender quit

The other step you should do is to copy the Geo part of the GeoTIFF to the output file. I used rasterio, mostly because at first I tried gdal (I was already using gdal_edit.py to do this in my previous manual procedure), but its API was quite confusing and rasterio's is plainer. But rasterio can't actually open a file just to write the metadata like gdal does, so I had to open the output file, read all the data, open it again for writing (this truncates the file) and write metadata and data.
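A minimal sketch of that copy step (the file names here are hypothetical, not the script's actual paths):

    import rasterio

    # Grab the georeferencing metadata from the source tile...
    with rasterio.open("tile.tif") as src:
        profile = src.profile            # driver, CRS, transform, dtype, ...

    # ...and all the pixel data from Blender's render output.
    with rasterio.open("render.tif") as rendered:
        data = rendered.read()           # (bands, rows, cols) array

    # Re-open the output for writing, which truncates it, then store
    # the copied metadata along with the data we just read back.
    profile.update(count=data.shape[0], height=data.shape[1],
                   width=data.shape[2], dtype=data.dtype.name)
    with rasterio.open("render.tif", "w", **profile) as dst:
        dst.write(data)

As for the option clash above: Blender stops parsing its own arguments at a bare --, and everything after it is still visible in sys.argv, so one common pattern (a sketch, not the post's exact code) is to invoke blender --background --python blender.py -- --render-samples 120 and slice the argument list:

    import argparse
    import sys

    # Keep only the arguments after the "--" separator, if any.
    argv = sys.argv[sys.argv.index("--") + 1:] if "--" in sys.argv else []

    parser = argparse.ArgumentParser()
    parser.add_argument("--render-samples", type=int, default=100)
    args = parser.parse_args(argv)       # Blender's own options never reach us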
Now, some caveats. First, as I advanced in my last post, the method as it is right now has issues at the seams. Blender can't read GDAL VRT files, so either I build 9 planes instead of 1 (all the neighbors are needed to properly calculate the shadows, because Blender is also taking into account light reflected back from other features, meaning mountains) or for each 1x1 tile I generate another with some buffer. I will try the first one and see if it fixes this issue without much runtime impact.
Second, the script is not 100% parametrized. Sun size and power are fixed based on my tests. Maybe in the future. Third, I will try to add a scattering sky, so we get a bluish tint to the shadows, and set the Sun's color to something yellowish. These should probably be options too.
Fourth, and probably most important: I discovered that this hillshading method is really sensitive to missing or bad data, because they look like dark, deep holes. This is probably a deal breaker for many, so you either fix your data, or you search for better data, or you live with it. I'm not sure what I'm going to do.
So, what did I do with this? Well, first, find good parameters, one for render samples and another for height scale. Render time grows mostly linearly with render samples, so I just searched for the one before detail stopped appearing; the value I found was 120 samples. When we left off I was using 10 instead of 5 for height scale, but it looks too exaggerated on hills (although it looks AWESOME in mountains like Mont Blanc/Monte Bianco! See below), so I tried to pinpoint a good balance. For me it's 8, maybe 7.
Why get these values right? Because like I mentioned before, a single 1x1°, 3601x5137px tile takes some 40m on my laptop at 100 samples, so the more tuned the better. One nice way to quickly test is to lower the samples or use the --render-scale option of the script to reduce the size of the output. Note that because you reduce both dimensions at the same time, the final render (and the time it takes) actually shrinks by the square of this factor: 50% is actually 25% (because 0.50 * 0.50 = 0.25).
So, without further ado, here's my script. If you find it useful but want more power, open issues or PRs; everything is welcome.
https://github.com/StyXman/blender_hilllshading [5]
Try to use the main branch; develop is considered unstable and can be broken.
A couple of images of the shadows applied to my style as a teaser, both using only 20 samples and x10 height scale:
Dhaulagiri:
Mont Blanc/Monte Bianco:
Man, I love the fact that the tail of the Ghiacciaio del Miage is in shadows, but the rest is not; or how Monte Bianco/Mont Blanc's shadow reaches across the valley to the base of la Tête d'Arp. But also notice the bad data close to la Mer de Glace.
blender python openstreemap gdal elevation hillshading rasterio gis dem
[1] OK, TBH here, I'm very much used to ipython's console; this one is really closer to the plain python one. No tab completion, so lots of calls to dir() and a few help()s.
[2] I couldn't find it again. Mastodon posts are not searchable by default, which I understand is good for privacy, but on the other hand the current clients don't store anything locally, so you can't even search what you already saw. I have several semi-ranting posts about this and I would show them to you, but they got lost on Mastodon. See what I mean?
[3] So you have an idea: this took me a whole week of free time to finish, including (but not covered in the text) my old nemesis, the terracing effect. This thing is brittle.
[4] Yeah, maybe the API is mostly designed for this.
[5] My site generator keeps breaking. This is the second time I have to publicly admit this. Maybe next weekend I'll gather steam and replace it with nikola.
Iustin Pop: Corydalis: new release and switching to date-versioning
After 4 years, I finally managed to tag a new release of Corydalis. There’s nothing special about this specific point in time, but there was also none in the last four years, so I gave up on trying to do any kind of usual version release, and simply switched to CalVer.
So, Corydalis 2023.44.0 is up and running on https://demo.corydalis.io. I am 100% sure I’m the only one using it, but doing it open-source is nicer, and I still can’t imagine another way of managing/browsing my photo library (that keeps it under my own control), so I keep doing 10-20 commits per year to it.
There’s a lot of bugs to fix and functionality to improve (main thing - a real video player), but until I can find a chunk of free time, it is what it is 😣.
Petter Reinholdtsen: Test framework for DocBook processors / formatters
All the books I have published so far have used DocBook somewhere in the process. For the first book, the source format was DocBook, while for every later book it was an intermediate format used as the stepping stone to be able to present the same manuscript in several formats: on paper, as an ebook in ePub format, as an HTML page and as a PDF file either for paper production or for Internet consumption. This is made possible with a wide variety of free software tools with DocBook support in Debian. The source formats of later books have been docx via rst, Markdown, Filemaker and Asciidoc, and for all of these I was able to generate a suitable DocBook file for further processing using pandoc, a2x and asciidoctor, as well as rendering using xmlto, dbtoepub, dblatex, docbook-xsl and fop.
Most of the books I have published are translated books, with English as the source language. The use of po4a to handle translations using the gettext PO format has been a blessing, but publishing translated books has triggered the need to ensure the DocBook tools handle relevant languages correctly. For every new language I have published, I had to submit patches to dblatex, dbtoepub and docbook-xsl fixing incorrect language and country specific issues in the frameworks themselves. Typically this has been missing keywords like 'figure' or the sort ordering of index entries. After a while it became tiresome to only discover issues like this by accident, and I decided to write a DocBook "test framework" exercising various features of DocBook and allowing me to see all features exercised for a given language. It consists of a set of DocBook files: a version 4 book, a version 5 book, a v4 book set, a v4 selection of problematic tables, one v4 testing sidefloat and finally one v4 testing a book of articles. The DocBook files are accompanied by a set of build rules for building PDF using dblatex and docbook-xsl/fop, HTML using xmlto or docbook-xsl, and epub using dbtoepub. The result is a set of files visualizing footnotes, indexes, tables of contents, figures, formulas and other DocBook features, allowing for a quick review of the completeness of the given locale settings. To build with a different language setting, all one needs to do is edit the lang= value in the .xml file to pick a different ISO 639 code value and run 'make'.
The test framework source code is available from Codeberg, and a generated set of presentations of the various examples is available as Codeberg static web pages at https://pere.codeberg.page/docbook-example/. Using this test framework I have been able to discover and report several bugs and missing features in various tools, and got a lot of them fixed. For example I got Northern Sami keywords added to both docbook-xsl and dblatex, fixed several typos in Norwegian Bokmål and Norwegian Nynorsk, got support for non-ascii title IDs added to pandoc, got Norwegian index sorting support fixed in xindy, and got initial Norwegian Bokmål support added to dblatex. Some issues still remain, though. Default index sorting rules are still broken in several tools, so the Norwegian letters æ, ø and å are, more often than not, not sorted properly in the book index.
The test framework recently received some more polish, as part of publishing my latest book. This book contained a lot of fairly complex tables, which exposed bugs in some of the tools. This made me add a new test file with various tables, as well as spend some time to brush up the build rules. My goal is for the test framework to exercise all DocBook features to make it easier to see which features work with different processors, and hopefully get them all to support the full set of DocBook features. Feel free to send patches to extend the test set, and test it with your favorite DocBook processor. Please visit these two URLs to learn more:
If you want to learn more about DocBook and translations, I recommend having a look at the DocBook web site, the DoCookBook site, my earlier blog post on how the Skolelinux project processes and translates documentation, and a talk I gave earlier this year on how to translate and publish books using free software (Norwegian only).
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
3C Web Services: Essential Modules for Drupal 8 & 9
3C Web Services: How to convert an existing file/image field to a Media type field
3C Web Services: Introducing the Commerce Abandoned Carts module
3C Web Services: Super Login module for Drupal 8
3C Web Services: Tips to minimize Form Spam on your Drupal Website
3C Web Services: Displaying a Field before the Node's Title in Drupal 7
3C Web Services: How to accept payments of varying amounts with Drupal Commerce.
3C Web Services: Introduction to the Super Login Module for Drupal 7
3C Web Services: Creating a Drag & Drop Sorting Interface for a Drupal View
3C Web Services: How to remove the Fieldset from a Drupal Address Field
On the Road to Plasma 6, Vol. 4
Chill your Champagne bottles – it’s official: the KDE Plasma 6.0 + KDE Frameworks 6.0 + KDE Gear 24.02 Mega Release that will take KDE software to the next level is going to happen on 28th February 2024! Let’s have a look at what I’ve been up to in the past two months, again working mostly on either Qt itself or dealing with its behavior changes on the application side.
It feels like every time I take a desktop screenshot for this type of post, the caption has changed slightly. :-)

A major annoyance I have been chasing for a while has been the Kate/KWrite double-close bug: when you quit the editor with unsaved changes, it would naturally prompt whether you’d want to save the changes. If you said no, it asked you again‽ I am not very familiar with the code bases of Kate and KTextEditor (which by the way is actually a fantastic Framework you can use in your own apps if you need a text editor with syntax highlighting, code folding, multi-cursor, and much more!) so I just couldn’t figure out why the exit code ran twice.
One night I was casually browsing Qt documentation looking for “What’s New In Qt 6” and found that QCoreApplication::quit() behaves differently in Qt 6. While in Qt 5, calling quit() was the same as calling QCoreApplication::exit(0), i.e. just exiting the main event loop, in Qt 6 calling quit() closes all top-level windows first. Kate was calling quit(), which would now ask the editor window to close, causing a “Save?” prompt. Once the dialog was rejected, shutdown commenced but the changes were still “unsaved” and Kate would ask again. Since Kate has its own shutdown code anyway, we can just skip all of that and call QCoreApplication::exit(0), fixing the bug. That’s why it’s important to read Release Notes.
One API change in QFont even affected Chromium, which incidentally gained a Qt 6 back end: In Qt 5, QFont::Weight was a number from 0 to 100, with “Normal” being 50. Qt 6, on the other hand, together with CSS and many others, follows the OpenType specification where font-weight ranges from 0 to 1000, with “Normal” being 400. Chrome used the same conversion logic on both platforms and in a Plasma 6 session encountered the default font being set to 400 and thus assumed it was meant to be ultra-black, leading to bold text throughout its UI. Once I reported this, the issue was resolved by one of their Linux platform maintainers faster than I would have been able to build Chromium and submit a fix myself.
Another one that might bite you is an assert that was added in Qt 6.3 about calling slots on objects being torn down. While this has always been dangerous, Qt will now yell at you. If, for example, you have a child object and connect its destroyed signal to a slot in your object, when the child is destroyed on normal parent-child teardown (which happens in the QObject destructor, i.e. after your class destructor) it will call into your slot when the object is already half-dead, and Qt rightfully complains.
This uncovered an issue in Dolphin when closing it while its terminal panel was open, since it is an embedded Konsole KPart which is monitored for destruction (if the shell unexpectedly quit, for instance) to then close the panel. Of course, the KPart is also destroyed during normal application teardown and triggered this assertion. The solution was to disconnect the signal in the destructor. A similar thing occurred in KWin where a thumbnail item monitored its window changing, which can also happen during cleanup when the window goes away and is announced to now be null. This was resolved by instead calling the relevant handler from QQuickItem::itemChange, which is also ever so slightly more efficient than creating a signal-slot connection.
I also finally figured out those “Unregistered input type in parameter list” errors coming from Qt DBus, which, among other things, broke Bluetooth device discovery. It failed to connect to the relevant DBus ObjectManager signals for when a device appeared. Normally this is a sign that you forgot to call qDBusRegisterMetaType which makes a custom type, such as a struct, or even a simple typedef, known to the DBus type system. However, the code in question on the KDE side hasn’t changed in years and has been working just fine under Qt 5. Turns out QMetaType is a lot smarter in Qt 6: whereas in Qt 5 it saw a typedef only by its declared name, e.g. VariantMapMap, in Qt 6, it would know it as the compiler saw it and register it under e.g. QMap<QString, QMap<QString, QVariant>>. When it then during DBus type marshalling encountered the name “VariantMapMap” it didn’t know what to do and failed.
Apps using QtMultimedia identifying themselves correctly

Additionally, I did a few more Qt changes:
- QModelIndex::data() is invokable by QML starting in Qt 6.7, so when passed a QModelIndex, instead of calling index.model.data(index, role), you can directly call index.data(role) as expected.
- The text property on QQuickText is no longer FINAL in Qt 6.6.0, since during the effort to mark everything as final, it was missed that even Qt’s own QQuickMnemonicLabel, which handles the underlined letters in QtQuick Controls 2, overrode it.
- QtMultimedia’s PulseAudio backend sets a proper application ID, name, and icon, so apps don’t show up in the Volume applet as generic “QtPulseAudio:<pid>” anymore.
On the KDE side, I fixed rendering images in Gwenview when using fractional scaling on Wayland in a similar way as I did last time for Dolphin – both use QGraphicsView after all. A funky repaint issue in the System Monitor has also been fixed; it was caused by someone calling winId() on something that is not a window. When you do that on Wayland, Qt will split the widget into its own subsurface window, causing all sorts of hard-to-debug input and rendering issues. Furthermore, the new QML-based PolicyKit prompt can now be dismissed by pressing Escape, just like any other dialog.
Kirigami’s Mnemonic handler, which auto-generates shortcuts from the underlined letters for controls in QML applications, has been optimized by installing one global event filter instead of one per label (of which there are typically many). The event filter is used to show the underlined letter only while the Alt key is pressed. The new Kirigami InlineViewHeader control used at the top of many list views nowadays can be used in conjunction with a GridView. While ListView supports sticky header positioning, GridView’s headers always scroll with its contents. The layouting code in QtQuick’s item views is quite complex, so I just added a workaround in Kirigami translating the vertical header position in the opposite direction of the scroll position, effectively sticking it to the top of the view.
Finally, Font Management works in a Plasma Wayland session, but only as a stopgap, since it basically just opens its own X connection rather than relying on Qt, which obviously won’t do that when run under Wayland. The whole thing is twenty years old and completely written around Xft, the X font library, so I think the only way to make it run natively on Wayland is to rewrite it from scratch.
Katie and Konqi say thank you for your continued support! (CC-BY-SA raghukamath)

Plasma 6.0 Alpha is just around the corner and you can help make it a reality by donating to our Plasma 6 Fundraiser! I can only encourage you to give Plasma 6 a try and help iron out any bugs. If you want to learn more about what to expect in the future, there are two lovely two-hour interviews on Tech Over Tea with David Edmundson and Nate Graham about Plasma, Wayland, and KDE in general. And don’t forget about the Wallpaper Contest that closes on 14th November!
Discuss this post on KDE Discuss.
Mirek Długosz: 10 years in testing
Exactly 10 years ago I started my first job as a software tester.
That doesn’t mean I started testing 10 years ago. Back when I was in high school and at university, I did spend some time doing testing and quality-related stuff for various open source projects - Debian, KDE, Kadu, LibreOffice, Cantata, and some more. I don’t remember any longer which was the first and when exactly that happened. I imagine my first contribution was pretty uneventful - perhaps a message on users forum, a response confirming this is a bug, and an encouragement to report it on bug tracking system or devs mailing list.
Nonetheless, “first job as software tester” is a good place to start counting. First, it’s easy - I have papers to prove the exact date. Second, from that day I have spent about eight hours a day, five days a week, every week, on testing-related things. That adds up to a lot of time, but it’s the consistency that sets it apart from any open source work I have done. Last but not least, the decision to start this specific job set me on a path to treat testing much more seriously, and which eventually led me to where I am today.
I’m not much of a job hopper. In these 10 years, I have only had two employers. But I did change teams and projects quite a lot - I’ve been on 4 projects in first company, and now I’m on my 5th project in second company. The longest time I’ve ever been in a single project is 2 years and 7 months. Details are on LinkedIn.
I came into testing after getting a degree in sociology. In my time at university, I had an opportunity to get my feet wet in empirical social research. I approached testing the same way I approached empirical sociology, even if only because I didn’t really know anything else - I assumed there’s a number of things the team would like to know and my job is to learn about them and report my findings. The hard part is that we don’t have direct access to some of the things we would like to know more about, so we need to depend on a number of proxies of uncertain reliability. X can be caused by Y, and we observed X, but is this because of Y, or some other factor Z? How can we rule out Z? Today, I can confidently say this is not the worst way to approach testing.
When I started my first job, I had been using Linux as my main operating system for about 7 years. During that time I learned how to use the shell, I got familiar with the idea that things change and move around, and I faced various breakages after updates. Often trying to fix them was frustrating, but I did learn how to search for information, I picked up a few tricks and I learned how various components can interact in a complex system. That was another major source of experiences that influenced my approach to testing.
I guess I also have certain character traits that helped me to become a decent tester. I tend to be stubborn, I don’t give up easily, I self-identify as perfectionist and I strive to actually understand the thing I am dealing with.
After a year and a half, I decided that I wanted to know more about testing, especially established testing techniques and solutions. My work was praised, but it was all based on intuition and past experiences from other fields. I felt I was missing fundamentals and I feared I might be missing some obvious and elementary testing techniques or skills. I tried to fill these gaps by attending an ISTQB preparation course, but it did not deliver what I was looking for.
My manager knew about my disappointment and at one point presented me with the opportunity to attend a testing conference in another city. One of the talks given there was called “Context-Driven Testing: A New Hope”. This is a funny title, as Context-Driven Testing was already 15 years old at that time and the “schools of testing” debate had long left community consciousness. I don’t remember many details of the talk itself, but I did leave the conference with a feeling that I should learn more about CDT, as they might have at least some of the answers I was looking for.
I think I started by reading “Lessons Learned in Software Testing”, and what a book it was! It not only revolutionized the way I think about testing to this day, but also gave me much-needed confidence. I found I was already doing some of the things the book recommended, but now I knew why they were worth doing. This is the book that everyone who is serious about testing should read, and probably re-read throughout their career. I think I read it at a very good moment, too - I had about three years of experience at the time. I feel I wouldn’t have gotten that much from it if I had read it earlier.
Later I read “Perfect Software” by the late Jerry Weinberg. I think this is a great book for people who are just starting in testing. It surely helped to establish some of my knowledge, but I don’t think it was as influential for me as “Lessons Learned”. It would have been, had I read it earlier.
Finally, I read the complete archives of the James Bach and Michael Bolton blogs. This is not something I can recommend to everyone, as both are very prolific writers - each has authored a few hundred articles. I think it took me well over a year to get through them all. Nonetheless, this allowed me to fully immerse myself in their thinking, and I can confidently say I understand where they are coming from and where they are going. This also allowed me to stumble upon a few very valuable articles and resources that I still refer to.
There’s a lot that I learned from all these resources, but I would like to point out two overarching principles that I often come back to. One, my role as a tester is to show possibilities and broaden the view of the team. My job is to go beyond simple and quick answers. Two, every single day I need to ask myself: what is the most important, most impactful thing I can do right now? And then do this exact thing, even if it means putting aside earlier plans and ideas. Change is something to embrace, not to be afraid of.
About five years into my career, I began to slowly move into a more software development-heavy role. To some extent, that was out of necessity - I saw many tasks that could be rectified with a tiny bit of programming. At the same time, I was in an environment where development was considered higher on the organizational totem pole than “manual testing”, and showing programming skills was a clear path to more respectable assignments and a higher salary. Similar to my testing journey, that was not the moment I started to learn programming - I had written my first shell scripts and perl programs back in high school. While I did struggle, I felt confident enough in my programming prowess to do some simple things.
The event that really helped me take off to the next level happened about a year after I joined Red Hat. We had a UI test automation framework, which had recently been rewritten by a couple of contractors. They worked in a silo and as a result most of the team was not familiar with that code. My job was to learn it, contribute to it and become one of the maintainers.
I think the contractors felt threatened by my presence and thought their job security depended on them being the only people capable of working with the framework. As a result, they made code review a nightmare. They threw it all at me - passive-aggressive comments, unhelpful comments, misleading comments, requests to change code that was already approved in an earlier review cycle, demands to explain almost every single line of code, replying anytime between a day and a week later. That was all on top of working with unfamiliar, complex and barely documented libraries.
I don’t look back at that time with fondness, but I have to admit it was an effective learning exercise. I was forced to understand things above my capabilities, and eventually I did understand them. This was very much the moment programming finally clicked for me. Also, I learned precisely what to avoid during code reviews and when teaching others.
Since then, my interests started to move more in direction of software design and architecture. I know I can write good enough code that works. But I also want to write code that is maintainable in the long term and allows for adjustments in response to changing environment or requirements.
In these 10 years, I have primarily been an individual contributor. This is the role I feel comfortable in and which I think suits me well. However, I did act as a kind of team lead on two separate occasions. Both times I was not formally a manager for the other people and I didn’t feel I had all the tools necessary to make them do the required work. The first time, I was completely unprepared for the challenge in front of me. The second time went a little bit better, as I knew more about ways to informally influence people.
These would be the rough summary and most important highlights of my 10 years in testing. There’s no narrative closure, as I am still here and intend to stay for a while longer. I’m happy to talk about testing, open source, software engineering and related topics, so feel free to get in touch with me if this is something you find interesting, or if you would like to draw from my experience.
This week in KDE: Plasma 6 Alpha approaches
Time has a way of creeping up, and the Plasma 6 alpha release is in two days. People are scrambling to get their features in before either the soft feature freeze (on Monday) or the hard one (a few weeks later). So this has been a week of big changes! Starting on Monday, we’ll officially start the process of convergence and shift focus to bug fixing and UI polishing, with the currently in-flight new features trickling in too.
KDE 6 MegaRelease

(Includes all software to be released on the February 28th mega-release: Plasma 6, Frameworks 6, and apps from Gear 24.02)
General info – Open issues: 113
Discover now has a better way to present app ratings: now it shows a big overview of the ratings with quotations from the best ones, and you can still read all of them in a popup like before. When you do, they’re now sorted by “relevance”, which is determined by a combination of recency, helpfulness votes, and the version being reviewed matching the version available to you (Marco Martin, link 1 and link 2):
Discover’s search has been hugely improved, and now generally always returns the results you’re expecting when you search for something that exists and is available (Marco Martin, link):
Please excuse the lack of app icons; this is a local setup issue on my machine that I haven’t fixed yet, not a bug in Discover

System Settings’ Energy Saving page has been rewritten in QML, which fixed all of the open bug reports for the old one, and also has a nicer and easier-to-parse visual design (Jakob Petsovits, link):
Plasma styles without the grouped task indicator SVG (of which Breeze is now one) now use a fancy new style to show grouped tasks (me: Nate Graham. Link):
Still slightly work-in-progress and subject to change based on feedback

Vertically-space-limited line graphs in System Monitor and the Plasma widgets of the same name no longer let their legends get cut off (Arjen Hiemstra, link):
Pen input using a graphics tablet can now be manually re-mapped to the entire screen area (Aki Sakurai, link)
Transient dialog windows (i.e. windows that close with the Escape key that you typically open from another window, like settings dialogs) are now handled in the Plasma Wayland session like they are on X11: they no longer appear in the Task Manager as separate windows, propagate “needs attention” status to their parents, and so on (Kai Uwe Broulik, link)
Ark can now extract files from multi-volume ZIP files (Ilya Pominov, Ark 24.02. Link)
When using “Repeat this track” mode in Elisa, manually skipping forward or back to the next or previous tracks now works as expected (Quinten Kock, link)
KRunner’s web shortcuts runner now has two new entries: Codeberg (search for “cb [search term]”) and PyPi (search for “pypi [search term]”) (Salvo Tomaselli, link)
Other Significant Bugfixes

(This is a curated list of e.g. HI and VHI priority bugs, Wayland showstoppers, major regressions, etc.)
Fixed one of the most common random crashes in Plasma or when changing audio device settings in System Settings (David Redondo, Plasma 5.27.9. Link)
Fixed an extremely subtle threading bug that could cause Plasma or KWin to randomly crash when files being watched for changes got certain types of changes with certain timings, which in Qt 6 became easier to trigger by switching Kate or Konsole profiles while KRunner’s Kate Sessions or Konsole Profiles runners were active (Harald Sitter, Plasma 6.0 and Plasma 5.27.10 with Frameworks 5.112. Link)
It’s no longer possible for the screen locker to break when you either had an extremely large number of session-restored apps, or any of your session-restored apps did something naughty and silently exhausted the system’s session restoration resources. Instead, when either of these things happens, Plasma will warn you about it and prevent the resource exhaustion (Harald Sitter, Plasma 6.0. Link)
When using NetworkManager 1.44, restarting the NetworkManager system service (which sometimes happens automatically when the computer goes to sleep and then wakes up again) no longer causes the Networks widget in the System Tray to either disappear or stop displaying any networks (Ilya Katsnelson, Frameworks 5.112. Link)
Other bug-related information of interest:
- 3 Very high priority Plasma bugs (up from 2 last week). Current list of bugs
- 52 15-minute Plasma bugs (down from 55 last week). Current list of bugs
- 98 KDE bugs of all kinds fixed this week. Full list of bugs
This blog only covers the tip of the iceberg! If you’re hungry for more, check out https://planet.kde.org, where you can find more news from other KDE contributors.
How You Can Help

We’re hosting our Plasma 6 fundraiser right now and need your help! We’re almost to the 50% mark of our goal of 500 members, so if you like the work we’re doing, joining up and spreading the wealth is a great way to share the love.
If you’re a developer, work on Qt6/KF6/Plasma 6 issues! Plasma 6 is usable for daily driving now, but still in need of bug-fixing and polishing to get it into a releasable state by February.
Otherwise, visit https://community.kde.org/Get_Involved to discover other ways to be part of a project that really matters. Each contributor makes a huge difference in KDE; you are not a number or a cog in a machine! You don’t have to already be a programmer, either. I wasn’t when I got started. Try it, you’ll like it! We don’t bite!
Dirk Eddelbuettel: RcppEigen 0.3.3.9.4 on CRAN: Maintenance, Matrix Changes
A new release 0.3.3.9.4 of RcppEigen arrived on CRAN yesterday, and went to Debian today. Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
This update contains a small amount of the usual maintenance (see below), along with a very nice pull request by Mikael Jagan which simplifies the interface with the Matrix package, and in particular the CHOLMOD library that is part of SuiteSparse. This release is coordinated with lme4 and OpenMx, which are also being updated.
The complete NEWS file entry follows.
Changes in RcppEigen version 0.3.3.9.4 (2023-11-01)

- The CITATION file has been updated for the new bibentry style.
- The package skeleton generator has been updated and no longer sets an Imports:.
- Some README.md URLs and badges have been updated.
- The use of -fopenmp has been documented in Makevars, and a simple thread-count reporting function has been added.
- The old manual src/init.c has been replaced by an autogenerated version, and the RcppExports file has been regenerated.
- The interface to package Matrix has been updated and simplified thanks to an excellent patch by Mikael Jagan.
- The new upload is coordinated with packages lme4 and OpenMx.
Courtesy of CRANberries, there is also a diffstat report for the most recent release.
If you like this or other open-source work I do, you can sponsor me at GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.