Planet KDE


Clock-to-color wallpaper plugin for Plasma5

Wed, 2014-12-17 18:53

Today I came across this interesting idea: basically, it turns the current time into the background color. You really need to see it in action ;)

So I thought - hey, this could be a cool Plasma wallpaper. Yep. And so 10 minutes later, I made it so :)

You can download this zip directly and install like this:

$ plasmapkg2 -t wallpaperplugin -i

Looks pretty neat :)

Oh and...

Categories: FLOSS Project Planets

SuperX 3.0 Beta Released

Wed, 2014-12-17 06:20

SuperX, a relatively new distribution, just released a beta for its upcoming 3.0 release. SuperX is a KDE-centric distribution that focuses on giving a polished KDE experience (the marketing statement the SuperX guys use).

It is one of the earliest, and one of the few, Linux distributions from India, and arguably the most active in development. It has made a mark in the government domain and in universities in India. In 2013, the State Government of Assam (for those who don’t know, Assam is a state in the northeastern part of India) distributed 29,887 laptop computers to meritorious students who had passed high school, with SuperX as the default OS. This might be one of the largest OEM shipments of the KDE desktop ever.

SuperX is currently used at Gauhati University in Guwahati, Assam to train Bachelor of Technology students in Linux. The university moved from Windows to the locally developed SuperX. Under the guidance of the SuperX developers, it also offers training in application development with PyQt and PyKDE, and teaches Qt in general. It is a good initiative by Gauhati University to promote a locally developed distribution and Qt/KDE technologies. SuperX is also used in some commerce colleges in Assam: an example of how KDE and open source in general can serve use cases that are not necessarily technology-related.

Here are some pictures of SuperX being used at Gauhati University (taken from SuperX’s Facebook page):

The beta release of SuperX 3.0, codenamed ‘Grace’, looks promising. It features a highly customized KDE 4 desktop, a dark Plasma theme and some great aesthetic tweaks. Under the hood, it is the same Linux desktop, with all the goodness of KDE and Qt. The developers have said that they are ‘KDE and Qt-centric but not GTK free’, so you will find some GTK applications and tools here and there. They have adopted and integrated some of Linux Mint’s tools, like superx-sources (a fork of mintsources from Linux Mint) in the Muon Suite, which is not a bad thing IMO.

If you want to test the beta, you can download it from their website. At the moment I am also helping with development of this distribution in my free time. Thanks for reading. If you have any questions about SuperX, you can ask me on

Categories: FLOSS Project Planets

on why relighting the flame for Luminosity is not working

Tue, 2014-12-16 17:25

(People keep asking me about Luminosity and various other efforts I've laid to the side, and I owe them something better than silence. I've tried to write this blog entry at least four times in the last six weeks or so. I wasn't happy with any of the results so I just sat on it until I could phrase what I'm thinking and feeling in a more satisfactory manner. Hopefully this is it.)

So, as some of you have noticed I haven't done a Luminosity of Free software episode in a while. The reason for that relates to why I do Luminosity. Luminosity is a show for people in the free software community, an attempt to remind us all (including myself) of the positive, useful and interesting things going on out there in the community. The great software and the interesting social questions examined in depth ... that sort of thing. If the free software community was not there, however, I wouldn't have made even a single episode of Luminosity, even if free software itself was going strong (sans community).

Well, 2013 and 2014 have been tough years for my faith in the free software community. Free software itself is doing fantastic, becoming the status-quo for an ever increasing number of technology categories. More software products are based on free software than ever, more technology companies are actively creating, supporting and finding business success with free software and more people are also using free software. Free software has been thriving, but the free software community has become a bit of a shambles.

(I deleted 900+ words here that explained why I feel that way, including numerous real-world examples. On re-reading, I realized that the fallout from critique, accurate as it may be, is not worth it. I also don't like spreading negativity for what will almost certainly amount to no benefit. Maybe that is why I'm publishing this version and not the previous four. Anyways ... back to what I wrote after those 900+ words:)

Without evidence for a free software community that is healthy and vibrant, I have no motivation to continue putting my energy into things such as Luminosity: that community was the motivation.

It is difficult to remain interested in, let alone upbeat about, a community that does not walk the walk it claims to. I understand that the "free software community" as it currently is works for a number of people out there. I also know of some projects which do have fantastic and wonderful communities around them. This has unfortunately been overwhelmed by the larger patterns of activity in the "free software community".

That said, I'm just as committed as ever to free software itself. I'm still earning my living working with free software, even if it means making some sacrifices to do so. I have no doubts that free software is taking over the world, is a technological force for positive change, is endlessly full of fascinating projects and ideas, and worth putting one's life effort into. I just don't think the free software community has a relevant role in any of that these days.

Ergo why Luminosity is not lit up: the community was the motivation.

p.s. to those I owe Luminosity postcards to, I actually have a cute little pile of cards on my desk that I've enjoyed collecting for this purpose. They will go out in the post over the holidays ...

Categories: FLOSS Project Planets

Overview of Qt3D 2.0 – Part 1

Tue, 2014-12-16 08:46

Back in the days when Qt was owned by Nokia, a development team in Brisbane had the idea of making it easy to incorporate 3D content into Qt applications. This happened around the time of the introduction of the QML language and technology stack, and so it was only natural that Qt3D should also have a QML based API in addition to the more traditional C++ interface like other frameworks within Qt.

Qt3D was released alongside Qt 4 and saw only relatively little use before Nokia decided to divest Qt to Digia. During this transition, the Qt development office in Brisbane was closed and unfortunately Qt3D never saw a release alongside Qt 5. This chain of events left the Qt3D code base without a maintainer, and it was left to slowly bit rot.

With OpenGL taking a much more prominent position in Qt 5’s graphical stack — OpenGL is the underpinning of Qt Quick 2’s rendering power — and with OpenGL becoming a much more common part of customer projects, KDAB decided that it would be good for us and for the Qt community at large if we took over maintainership and development of the Qt3D module. To this end, several KDAB engineers have been working hard to bring Qt3D back to life and moreover to make it competitive to other modern 3D frameworks.

This article is the first in a series that will cover the capabilities, APIs, and implementation of Qt3D in detail. Future articles will cover how to use the API in various ways, from basic to advanced, with a series of worked examples. For now, we will begin with a high-level overview of the design goals of Qt3D; some of the challenges we faced; how we have solved them; what remains to be done before we can release Qt3D 2.0; and what the future may bring beyond Qt3D 2.0.

What Should Qt3D Do?

When asked what a 3D framework such as Qt3D should actually do, most people unfamiliar with 3D rendering simply say something along the lines of “I want to be able to draw 3D shapes and move them around and move the camera”. This is, of course, a sensible baseline, but when pressed further you get back wishes that typically include the following kinds of things:

  • 2D and 3D
  • Meshes
  • Materials
  • Shadows

Then, when you move on and ask the next target group, those who already know about the intricacies of 3D rendering, you get back some more technical terms such as:

That is already a fairly complex set of feature requests, but the real killer is that last entry, which translates into ‘I want to be able to configure the renderer in ways you haven’t thought of’. Given that Qt3D 1.0 offered both C++ and QML APIs, this is something that we wished to continue to support, but taken together with a fully configurable renderer this led to quite a challenge. In the end, this has resulted in something called the framegraph.

Framegraph vs Scenegraph

A scenegraph is a data-driven description of what to render.
The framegraph is a data-driven description of how to render.

Using a data-driven description in the framegraph allows us to choose between a simple forward renderer, a forward renderer with an additional z-fill pass, or a deferred renderer, and to control when any transparent objects are rendered, etc. Also, since this is all configured purely from data, it is very easy to modify, even dynamically at runtime, without touching any C++ code at all!

Once you move beyond the essentials of getting some 3D content on to the screen, it becomes apparent that people also want to do a lot of other things related to the 3D objects. The list is extensive and wide ranging but very often includes requests like:

This is obviously a tall order, and one that we couldn’t possibly hope to satisfy out of the box with the limited resources available. However, it is clear that, in order to support these features in the future, we needed to do some groundwork now to architect Qt3D 2.0 to be extensible and flexible enough to act as a host for such extensions. The work around this topic took a lot of effort and several aborted prototypes before we settled on the current design. We will introduce the resulting architecture later and then cover it in more detail in an upcoming article.

Beyond the above short and long term feature goals, we also wanted to make Qt3D perform well and scale up with the number of available CPU cores. This is important given how modern hardware improves performance — by increasing the number of cores rather than the base clock speed. Also, when analysing the above features, we can intuitively hope that utilising multiple cores will work quite naturally, since many tasks are independent of each other. For example, the operations performed by a path finding module will not overlap strongly with the tasks performed by a renderer (except maybe for rendering some debug info or statistics).

Overview of the Qt3D 2.0 Architecture

The above set of requirements turned out to be quite a thorny problem, or rather a whole set of them. Fortunately, we think we have found solutions to most of them and the remaining challenges look achievable.

For the purposes of discussion, let’s start at the high level and consider how to implement a framework that is extensible enough to deal with not just rendering but also all of the other features, plus more that we haven’t thought of.

At its heart, Qt3D is all about simulating objects in near-realtime, and then very likely rendering the state of those objects onto the screen somehow. Let’s break that down and start by asking the question: ‘What do we mean by an object?’

Of course in such a simulation system there are likely to be numerous types of object. If we consider a concrete example this will help to shed some light on the kinds of objects we may see. Let’s consider something simple, like a game of Space Invaders. Of course, real-world systems are likely to be much more complex but this will suffice to highlight some issues. Let’s begin by enumerating some typical object types that might be found in an implementation of Space Invaders:

  • The player’s ground cannon
  • The ground
  • The defensive blocks
  • The enemy space invader ships
  • The enemy boss flying saucer
  • Bullets shot from enemies and the player

In a traditional C++ design these types of object would very likely end up implemented as classes arranged in some kind of inheritance tree. Various branches in the inheritance tree may add additional functionality to the root class for features such as: “accepts user input”; “plays a sound”; “can be animated”; “collides with other objects”; “needs to be drawn on screen”.

I’m sure you can classify the types in our Space Invaders example against these pieces of functionality. However, designing an elegant inheritance tree for even such a simple example is not easy.

This approach, and other variations on inheritance, has a number of problems which we will discuss in a future article, but they include:

  • Deep and wide inheritance hierarchies are difficult to understand, maintain and extend.
  • The inheritance taxonomy is set in stone at compile time.
  • Each level in the class inheritance tree can only classify on a single criterion or axis.
  • Shared functionality tends to ‘bubble up’ the class hierarchy over time.
  • As library designers we can’t ever know all the things our users will want to do.

Anybody who has worked with deep and wide inheritance trees is likely to have found that, unless you understand and agree with the taxonomy used by the original author, it can be difficult to extend them without resorting to some ugly hacks to bend the classes to your will.

For Qt3D, we have decided to largely forego inheritance and instead focus on aggregation as the means of imparting functionality onto an instance of an object. Specifically, for Qt3D we are using an Entity Component System (ECS). There are several possible implementation approaches for ECSs and we will discuss Qt3D’s implementation in detail in a later article but here’s a very brief overview to give you a flavour.

An Entity represents a simulated object but by itself is devoid of any specific behaviour or characteristics. Additional behaviour can be grafted on to an entity by having the entity aggregate one or more Components. A component is a vertical slice of behaviour of an object type.

What does that mean? Well, it means that a component is some piece of behaviour or functionality in the vein of those we described for the objects in our Space Invaders example. The ground in that example would be an Entity with a Component attached that tells the system that it needs rendering and how to render it. An enemy space invader would be an Entity with Components attached that cause it to be rendered (like the ground), but also that let it emit sounds, be collided with, be animated and be controlled by a simple AI. The player object would have mostly the same components as the enemy space invader, except that in place of the AI component it would have an input component allowing the player to move the object around and to fire bullets.

On the back-end of Qt3D we implement the System part of the ECS paradigm in the form of so-called Aspects. An aspect implements the particular vertical slice of functionality imparted to entities by a combination of one or more of their aggregated components. As a concrete example, the renderer aspect looks for entities that have mesh, material and optionally transformation components. If it finds such an entity, the renderer knows how to take that data and draw something nice from it. If an entity doesn’t have those components then the renderer aspect ignores it.

Qt3D is an Entity-Component-System

Qt3D builds custom Entities by aggregating Components that impart additional capabilities. The Qt3D engine uses Aspects to process and update entities with specific components.

Similarly, a physics aspect would look for entities that have some kind of collision volume component and another component that specifies other properties needed by such simulations like mass, coefficient of friction etc. An entity that emits sound would have a component that says it is a sound emitter along with when and which sounds to play.

A very nice feature of an ECS is that, because it uses aggregation rather than inheritance, we can dynamically change how an object behaves at runtime simply by adding or removing components. Want your player to suddenly be able to run through walls after gobbling a power-up? No problem. Just temporarily remove that entity’s collision volume component. Then, when the power-up times out, add the collision volume back in again. There is no need to make a special one-off subclass for PlayerThatCanSometimesWalkThroughWalls.

Hopefully that gives enough of an indication of the flexibility of Entity Component Systems to let you see why we chose it as the basis of the architecture in Qt3D. Within Qt3D the ECS is implemented according to the following simple class hierarchy.

Qt3D’s ‘base class’ is QNode, which is a very simple subclass of QObject. QNode adds to QObject the ability to automatically communicate property changes through to the aspects, as well as an ID that is unique throughout the application. As we will see in a future article, the aspects live and work in additional threads, and QNode massively simplifies the task of getting data between the user-facing objects and the aspects. Typically, subclasses of QNode provide additional supporting data that is then referenced by components. For example, a QShaderProgram specifies the GLSL code to be used when rendering a set of entities.

Components in Qt3D are implemented by subclassing QComponent and adding in any data necessary for the corresponding aspect to do its work. For example, the Mesh component is used by the renderer aspect to retrieve the per-vertex data that should be sent down the OpenGL pipeline.

Finally, QEntity is simply an object that can aggregate zero or more QComponents as described above.

Adding a brand new piece of functionality to Qt3D, either as part of Qt or specific to your own applications, in a way that can take advantage of the multi-threaded back-end, consists of the following steps:

  • Identify and implement any needed components and supporting data
  • Register those components with the QML engine (only if you wish to use the QML API)
  • Subclass QAbstractAspect and implement your subsystem’s functionality.

Of course anything sounds easy when you say it fast enough, but after implementing the renderer aspect and also doing some investigations into additional aspects we’re pretty confident that this makes for a flexible and extensible API that, so far, satisfies the requirements of Qt3D.

Qt3D has a Task-Based Engine

Aspects in Qt3D get asked each frame for a set of tasks to execute along with dependencies between them. The tasks are distributed across all configured cores by a scheduler for improved performance.


We have seen that the needs of Qt3D extend far beyond implementing a simple forward-renderer exposed to QML. Rather, what is needed is a fully configurable renderer that allows you to quickly implement any rendering pipeline that you need. Furthermore, Qt3D also provides a generic framework for near-realtime simulations beyond rendering. Qt3D is cleanly separated into a core and any number of aspects that can implement any functionality they wish. The aspects interact with components and entities to provide some slice of functionality. Examples of possible future aspects include: physics, audio, collision, AI, path finding.

In the next part of this series, we shall demonstrate how to use Qt3D and the renderer aspect to produce a custom shaded object and how to make it animate all from within QML.

The post Overview of Qt3D 2.0 – Part 1 appeared first on KDAB.

Categories: FLOSS Project Planets

KDE at its very best!

Mon, 2014-12-15 16:32

Recently, there were some thoughts on where KDE is going and, related to that, what the driving force behind it is in terms of the pillars of KDE. While it is true that our development model has changed significantly, I’m not convinced that it’s all about git.

No, I rather believe that it is the excitement about KDE that makes it stand out – KDE as a community if you wish, but also KDE as a software project.

Going back to the late nineties, I was developing small games for DOS (Turbo Pascal, anyone? Snake and Gorillas in QBasic?) and also for Windows. Around that time, Linux was also getting a bit more popular, so I finally had a SuSE 6.0 in my hands. I installed it and was able to run KDE 2, iirc (?). It certainly was interesting, but back then I wasn’t involved in any free software projects, so it also wasn’t that big a deal.

Still, I started to look into how to develop GUI applications for Linux. Since under Windows I used MFC (oh well, now it’s out, but you know, I quickly got back on the right track) I found Qt quite nice (for CPoint you had QPoint, for CDialog a QDialog, and so on). As I used KDE in Linux, I started to change small things like an Emoticon preview in Kopete (one of my first contributions?), or some wizards for KDevelop in 2003. These were projects that were fairly easy to compile and run. Still, what might seem so little was a huge success for the following reason:

More or less still being a child, getting in touch with C++, KDE and all the tools around them was completely new to me. CVS? Never heard of it before, and anyway, what was a version control system? How did the mailing lists work? How did one enter a bug? Compiling kdelibs: it took me more than two weeks to succeed (which, by the way, proves to me that even at that time it was really hard for a newbie to compile KDE, just like today). All in all, these were times where I learned a lot. I started to read the cvs-commit mailing list (around 400 mails a day, and I read almost all of them, for more than 5 years).

But that was not yet it. It continued like that for years. For instance, understanding how KIO slaves worked was just amazing. How all KDE components integrate into and interact with each other. There were a lot of parts where KDE was simply the best in terms of the software technology it provided and created.

To me, this was KDE at its best.

In my opinion, KDE followed this route for a long time, also with KDE4. I even say KDE still follows this way today.

But it’s much harder to get excited about it. Why? Think of yourself seeing snow for the first time. It’s just awesome; you’re excited and can’t believe how awesome it is. Or maybe New Year’s Eve with nice-looking fireworks coming up. It’s something you simply can’t wait for. Kind of like a small child… This is the excitement KDE raised in lots of us. Getting a new KDE release was totally something I wanted. I saw the improvements everywhere. What also helped a lot was the detailed commit digest Derek Kite worked on so hard each week, showing what was going on, even with detailed discussions and screenshots (today the KDE commit digest is mostly an auto-generated list of commits, which I already have through the KDE commit filter).

Today, I know all the details. All the KDE technology. Of course, it got even better over time, and certainly still is an immensely powerful technology. But I’m not that much excited about it anymore.

I believe this in itself is not an issue. For exactly this reason, developers come and go, leaving room for other developers to implement their ideas. It helps the project to stay young and agile.

It is often said that KDE has grown up. This is certainly a good thing, for instance in terms of the KDE e.V. supporting the KDE project as much as possible, or the KDE Free Qt Foundation that helps make sure Qt will always be freely available to us, or a strong background in legal issues.

At the same time, it is a very bad thing in terms of getting people excited about KDE. We need developers with freaky ideas who just sit down and implement new features (btw., this is very much true for all free software projects). For instance, why has no one come up with a better KXmlGui concept? I’m sure it can be done better!

Where does that put us? Is there really no cool stuff in KDE?

Well, the reason for this post is to show that we did not lose what once was cool. In fact, we see it every day. For instance, yesterday I was using Dolphin and had to switch a lot between subfolders on the same level (e.g. from some_folder/foo to some_folder/bar and so on). I accidentally used the mouse wheel over “foo”, and whohooo! You can switch to the next folder just by scrolling with the mouse wheel over the navigation bar. This is immensely useful, and in fact, this is why KDE shines today as well; it’s just not so visible to users and maybe also to developers. You now may say that it’s just some little detail. But this is exactly it: yesterday I was totally amazed by how cool this is, just like 10 years ago… Therefore, I say, this still is

KDE at its very Best!

Getting people excited about KDE is what defines KDE’s future, not git.

Edit (imho): I would like to add something here. When reading these kinds of blogs, you may get the impression that KDE is becoming a less and less attractive platform, or that KDE is kind of dying. This is absolutely not the case. Quite the contrary: with KDE’s foundation libraries and applications about to be released on top of the KDE Frameworks 5 libraries, KDE can certainly make the statement that the project and its software will definitely be available, and just as strong, 10 years from now. I have absolutely no doubt that you can count on that. And that is a really cool thing only few free software projects can claim! Let’s talk about it again in 2024

PS: On an unrelated note, KDE is currently running the End of Year 2014 Fundraiser. Support is very much welcome!

Categories: FLOSS Project Planets

Photographing Bats

Sun, 2014-12-14 21:46
One evening last summer the dogs and I were on the beach near our home. Just as it got dark a group of bats came out of the brush and flew back and forth along the shore feeding on the mayfly hatch. Of course I wanted to get a photo. So began the saga. I learned an enormous amount, spent a substantial sum, and now want to describe it for posterity.

I chanced upon a location where a small number of bats are concentrated in a small area, making it possible to get a photo. The beach was enclosed on one side by bush, on the other by a dock. The access to the beach was a trail, and it seems that bats either head for open space or feel their way by following a boundary of some sort. So during the time of peak activity, about 15 minutes just as it got dark, there was a certain density in the air, making it possible to capture the odd one in a frame. Another advantage was that if I sat in a particular location, I could see the bats against the western sky and the reflection on the water.
The bats stayed close to shore for 10-15 minutes then spread out over the lake. It is very dark, the subject moves very quickly in unpredictable ways. And they are small. To capture them in a photo requires artificial light, a very short exposure to freeze movement, and some way to point and trigger the camera to get one in the frame.
Start with exposure. High speed photography is the art of capturing very fast events: a bullet, a balloon popping, drops of water. The extremely short exposures are created with a flash unit in a dark room. If you look at the specifications for an electronic flash unit you will see that the duration shortens as the power decreases. At full power the flash duration is close to the synchronization speed of your camera. My Pentax K3 syncs at 1/180 of a second; my Metz 50 AF1 at full power has the light on for 1/125 of a second. At 1/4 power it is 1/2000 of a second, much better. That is approaching the exposure speed where you can freeze movement, but still too slow for a bat. I found 1/8 power, or 1/4000 of a second, better. The photo above was at that speed.
Now for a lens. You can't focus in real time; they are too quick and you can't see them. So you want to set up a box in space that is in focus and illuminated. I tried different lenses. A 35mm had a wide field of view and captured lots of action, but unfortunately was soft, so the shots were unsatisfactory. A 100mm lens was sharp, but the area in focus was too far away to illuminate with the flashes that I had.
Intensity of light decreases with the square of the distance. The box was too far away. A 50mm lens seemed to give me the best results.

Aperture? Here a depth of field table is useful. Depth of field is the space in which the lens is in focus. A large aperture, or lower f-stop number, gives a narrow depth of field. The closer your subject, the narrower the depth of field. The longer the lens in mm, the narrower the depth of field. For our purposes: the further away the subject, the shorter the lens and the higher the f-stop, the larger the box where things are in focus will be. But the further away, the more light you will need, and the smaller the bat will be in your frame. Same with a short lens. At higher f-stops you may run into diffraction as well. I found f8, focusing at 10 feet with a 50mm lens, gave me a box about 4 x 4 3/4 x 3 feet.
One flash isn't enough; I had three. One Metz 50 AF1 on the camera, and two Yongnuo 560ii manual flash units assisting, all on 1/8 power. This was the challenging part. There are a few ways to trigger multiple flashes. 1/4000 of a second is .25 milliseconds. The Yongnuo units are a bit slower than that, but bear with me. Optical triggering, which I used, takes .06 ms, or roughly a quarter of the total exposure time. Wireless radio triggers take about .25 ms to trigger, meaning that the master flash would be almost finished before the slave units would begin. I noticed that the shots where the bat was flying across the frame were soft, possibly indicating too long an exposure. That extra .06 ms may have been the difference. In any case, speed is of the essence.
I initially had the flash units close together, but found that the images were unpleasant.

A 5 ft long piece of aluminum angle made a bracket with the camera in the middle and the flash units on either end. It made a very big difference in image quality.

How to capture the varmints? I used a remote shutter trigger so I could sit where they were visible, then held down the button. I would take hundreds of shots and get maybe two dozen with something in the frame, and maybe one or two that were interesting.
This spray and pray method has a drawback. The flash units need to regenerate between exposures, and will shut off from time to time to cool off, or to allow the battery discharge to catch up. I used Duracell precharged NiMH 1.2 V 2400 mAh AA batteries, freshly charged for each session. They worked very well. I have one set of Eneloops and found they didn't keep up with the Duracells. Even then I would get dark exposures as one or more of the flashes missed. The solution is to shoot less. I learned discipline, which helped. I also invested in a trigger device, but ran out of time. The weather changed, or the hatch changed, and the bats stopped hanging around the shore.
So next year. I am already planning and accumulating. Another two flashes. I will wire them together, getting rid of any latency. At the same distance I might be able to set them to 1/16 power. The wire harness is ready and tested. I need another support, which entails the purchase of a light stand. And I intend to test the trigger device to figure out how to set it up.
I got about 80% of the way there. The shots are decent but not excellent. My goal is to get enough excellent shots where they are also doing something interesting. Just getting one was a challenge. Now to get a good one.
Categories: FLOSS Project Planets

First Theme added

Sun, 2014-12-14 13:13

Today I added my first theme for Pairs. You can find it here. In my current project, “Theme Designing for Pairs” (a “Season of KDE” project mentored by Heena Mahour), the main task is adding themes.

When designing a theme for a game aimed at pre-school children, you can’t just draw some pictures and add them. The pictures must be educational and easily identifiable. So as my first theme I designed a theme of fruits. It contains 8 fruits: apple, banana, pineapple, mango, strawberry, pear, orange and grapes. It will support the “Pairs”, “Relations”, “Logic” and “Words” game modes. A screenshot from the “Pairs” game mode looks as follows:

To create the image files in the SVG format, I have used Inkscape. Even though it took some time to become familiar with its functionality at first, once you draw a thing or two you get addicted to its cool features.



Filed under: KDE, Open Source Tagged: KDE edu, Pairs, SoK
Categories: FLOSS Project Planets

Krita 2.9: First Beta Released!

Sun, 2014-12-14 07:00

Last week, the first preparations for the next Krita release started with the creation of the first Krita 2.9 beta release: Krita 2.9 Beta 1. This means that we’ve stopped adding new features to the codebase, and are now focusing on making Krita 2.9 as stable as possible.

We’ve come a long way since March, when we released Krita 2.8! Thanks to the enthusiastic support of many, many users, here and on kickstarter, Krita 2.9 has a huge set of cool new features, improvements and refinements.

Here’s a short list to whet your appetite — a full intro to all new features will come with the final 2.9 release. That’s expected to happen in January, by the way.

  • Interface:
    • Krita can now open more than one image in a single window, and the same image in more than one window. You can choose between sub windows and tabs in the preferences.
    • You can now organize the favorite presets in the pop-up palette using tags.
    • You can also increase the number of brushes available at a time in the pop-up palette in the preferences.
    • You can select more than one layer at a time and delete, move or drag and drop them in the layer docker.
    • New options for the cursor, including one to show a dot at the center of the brush outline.
    • The thumbnails for resources like brushes, gradients or patterns are resizable by using ctrl+scrollwheel over them.
    • Gradient editing has been improved.
    • You can now choose between giving the first layer a default color and giving the image a non-transparent background.
    • You can create palette files inside Krita.
    • The compositions docker stores the collapsed state, you can update compositions and control which compositions you export.
    • A new type of gradients: selection-shape based gradients was added.
  • Layers, Selections and Masks
    • A new mask type was added: non-destructive transformation masks.
    • Many new ways to convert between layers and masks.
    • The rendering of vector graphics at various image resolution settings was fixed.
    • It’s now possible to edit the alpha channel separately.
    • You can split a layer into several layers, one for each color on the layer, which is useful together with G’Mic’s recolorize[comics] feature for coloring artwork.
    • You can isolate a layer by using the shortcut alt-select.
    • You can edit a selection directly, as if it were a black and white image.
  • Brushes and painting:
    • The anti-aliasing quality of thin lines has been improved greatly.
    • The smudge brush was improved.
    • The Flow option has finally been separated from opacity.
    • Steps in the undo history can now be merged.
    • The brush preset system was extended to make it possible to keep changes to a certain preset during a session, instead of resetting to the original preset on every brush preset switch.
    • You can lock the size of the brush when switching between paint and erase mode, or have a separate size for each mode.
    • New painting assistants for working with vanishing points and rulers.
    • The line tool will use all sensors now! Rotation, speed, tilt, etc.
    • There’s a sticky key available for accessing the straight line tool from the freehand brush (V).
    • A delayed-stroke based stabilizer option was added for stroke smoothing.
    • The weighted smoothing and stabilizer have a ‘delay’ option now, which allows you to create a dead area around your cursor for extra sharp corners.
    • A scalable distance option was added to the weighted stroke smoothing, so now the weight is relative to the zoom.
  • The transform tool has been enormously extended:
    • Perspective transformation was added
    • Liquify transformation was added
    • Cage transformation was added (and is super-fast, too)
    • Selecting multiple nodes in the cage and warp tools is now possible. You can resize, rotate and move a whole section at once.
  • Color selectors:
    • A new color selector with accurate sliders, for every color model, was added
    • HSY’ and HSI color models are supported in both the new slider docker, the MyPaint shade selector, and the advanced color selector. You can even customise the weights for the HSY’ calculation.
    • All color selectors are now color managed, and also work when you paint in HDR mode.
    • You can paint in HDR mode now, with the LUT docker and OCIO.
    • There are sticky keys for gamma and exposure with HDR painting.
  • Filters:
    • The G’Mic plugin (controls filters) was updated to the latest version of G’Mic. On-canvas and miniature preview was added.
    • A posterize filter was added.
    • Index-colors filter was added, which is very useful in the HD-pixel art painting technique.
  • Files:
    • Some improvements to the PSD import/export filter: resource blocks are round-tripped, all but four PSD blending modes are now supported, Krita can now load some CS6 PSD files, PSD layer groups are loaded, support for 16 bit multi-layer files is improved, Krita vector layers are saved (as raster layers) to PSD.
    • The OpenEXR filter can now load and save single-channel grayscale EXR files, and was fixed for loading images with very small alpha values, and images with zero alpha but non-zero color values
    • Support for raw
    • Saving 16 bit grayscale images to tiff, jpeg and ppm now works
    • support for r16 and r8 heightmap files were added

Missing in 2.9 are Photoshop layer styles and PSD layer masks: we’re working hard on those, but they aren’t done yet. All the scaffolding is done, and most of the drop shadow effect, except the integration, is in the rendering process… We’re working to have them ready by the end of January. The animation tool has been disabled for refactoring. In Beta 1, Sketch and Gemini have been disabled.


There are still 234 open bugs at the moment, some of which might cause data loss, so users are recommended to use the beta builds for testing, not for critical work!

For Linux users, Krita Lime will be updated on Monday. Remember that launchpad is very strict about the versions of Ubuntu it supports. So the update is only available for 14.04 and up.

OpenSUSE users can use the new OBS repositories created by Leinir:

Windows users can choose between an installer and the zip file. You can unzip the zip file anywhere and start Krita by executing bin/krita.exe. The Surface Pro 3 tablet offset issue has been fixed! We only have 64-bit Windows builds at the moment; we’re working on fixing a problem with the 32-bit build.

OSX users can open the dmg and copy it where they want. Note that OSX is still not supported: there are OSX-specific bugs and some features are missing.

Categories: FLOSS Project Planets

Calligra 2.9 Beta Released

Sun, 2014-12-14 05:37

We’re pleased to present the first beta release in the 2.9 series of Calligra Suite for testing! We will focus on fixing issues, including those that you report. All this to make the final release of 2.9, expected in January 2015, as stable as possible!

Support Calligra!

When you update, many improvements and a few new features will be installed, mostly in Kexi and Krita, as well as general ones. Finally, in 2.9 a new app, Calligra Gemini, appears. Read below to see why it may be of interest to you.

New Features and Improvements in This Release

New Integration: Displaying office documents in Okular

Calligra document plugin for Okular showing a DOC file

A new plugin for Okular, KDE’s universal document viewer, enables Okular to use the Calligra office engine for displaying documents in the formats OpenDocument Text (ODT), MS Word (DOC, DOCX) and WordPerfect (WPD). It supplements the existing plugin from Calligra that gives Okular the ability to display the OpenDocument Presentation (ODP) and MS PowerPoint (PPT, PPTX) formats.

The Calligra office engine has been used for the default document viewers on the smartphones Nokia N9 and Jolla, the Android app COffice, and other mobile editions of Calligra. So it makes sense to also use the Calligra office engine for the document reader from KDE, which comes with a UI designed for document consumption, for people who want to read, but not edit, office documents.

New application: Calligra Gemini

The same text document edited on laptop computer and in tablet mode

Calligra Gemini debuts in 2.9: a novel application that combines the Calligra word processor and presentation components. It can function both as a traditional desktop application used with a mouse and keyboard, and transform into a touch-friendly application on the go. This changes the experience to one suitable for all-touch devices without the inconvenience of having to switch to a separate application.

Read more about the story behind the app.

Kexi – Visual Database Applications Builder

Many usability improvements and bug fixes. Forms have finally been ported from Qt 3 to Qt 4.

  • General:
    • New: Simplify and automatize bug reporting; OS and Platform information is auto-selected on
    • New: Make side panes lighter by removing frames in all styles except Oxygen
    • New: Added “Close all tabs” action to main window tabs.
    • Improve appearance of main tabbed toolbar for GTK+ and Oxygen styles. (bug 341150)
    • Improve handling of permission errors on database creation. Do not allow creating a new SQLite-based .kexi file if: a non-writable folder is selected, a relative path is selected (unsafe), or a non-file path is selected (perhaps a folder).
    • Do not crash when Kexi is unable to find plugins; display message and exit.
    • Fix right-to-left user interface support in side panes.
    • Simplify “Save password” checkbox text in database connection editor and add direct what’s this button.
    • Disable the ability to make the left/right sidebars floatable (like in Dolphin; improves stability)
    • Remove redundant ‘find’ action from the main toolbar. It’s already available in local context where it really works.
    • Move the ‘Export data table’ action from the main toolbar to the local menu of table and query objects.
    • Improve user-visible messages.
  • Forms:
    • New: Port Kexi Forms to Qt4’s scroll area, a milestone leading to Qt5-based Kexi.
    • Improve translation support in Forms’ action selection dialog
  • Reports:
    • New: Added inline editing for labels in Report Designer.
    • New: Added “Do you want to open exported document?” question when report is exported to a Text/Spreadsheet/as Web Page.
    • Print reports in High DPI (precision). (bug 340598)
Krita – Creative Sketching & Painting App
  • New: Krita can now open multiple images in one window
  • New: Perspective transform
  • New: Liquify transform
  • New: Cage transform
  • New: Selection-shaped gradients
  • New: Several new filters
  • New: A HSV color selector
  • New: It’s now possible to edit the alpha channel separately
  • New: A new feature to split a layer into several layers by color
  • Thin line quality has been improved
  • Anti-aliasing of the transform tool has been improved
  • It’s now much easier to create masks and convert between masks and layers
  • Vector object scaling and resolution has been fixed
  • The smudge brush has been made more correct
  • Steps on the Undo history can now be merged
  • The brush preset system has been improved to make it possible to temporarily lock changes to a preset during a session
  • The G’Mic filter has been updated and there are previews now
  • Missing: Photoshop layer styles and PSD layer masks: we’re working hard on those, but they aren’t done yet. We’re working to have them ready by the end of January. The animation tool has been disabled for refactoring. In Beta 1, Sketch and Gemini have been disabled.
Calligra Words – Word Processor

The layout engine has been reworked to fix many small rendering glitches. It is the first required step before more page layout features, as well as dynamic page layout changes, can be added.

Try It Out

What’s Next and How to Help?

We’re approaching the release of 2.9 in early 2015. It will be followed by Calligra 3.0, based on new technologies, later in 2015.

You can meet us to share your thoughts or offer your support on general Calligra forums or dedicated Kexi or Krita forums. Many improvements are only possible thanks to the fact that we’re working together within the awesome community.

(some Calligra apps need new maintainers, you can become one, it’s fun!)

How and Why to Support Calligra?

Calligra apps may be totally free, but their development is costly. Power, hardware, office space, internet access, travelling for meetings – everything costs. Direct donation is the easiest and fastest way to efficiently support your favourite applications. Everyone, regardless of degree of involvement, can do so.

About Calligra

Calligra Suite is a graphic art and office suite developed by the KDE community. It is available for desktop PCs, tablet computers, and smartphones. It contains applications for word processing, spreadsheets, presentations, databases, vector graphics, and digital painting. See more information at the website.


Categories: FLOSS Project Planets

One year away

Sat, 2014-12-13 11:06

It’s been a year since I had surgery to remove a tumour from my jaw bone. It was the second time I went into an operating room for the same cause, the first time being an aggressive, but not quite evil, tumour. This time the pathology result came back and it was malignant. Basically, it was an ameloblastoma; it would eat my jaw bone unless removed in time.
Good news is the surgery went well, it didn’t hurt as much as the first time, and I was back up and working only a week later. Just in time to handle the final delivery of a project, by the way.
Bad news: it may return, and next time probably most of my right jaw will have to be removed and replaced with either another bone or a prosthesis (maybe a nerdy 3D-printed one?). I have to get an X-ray every 6 months for the rest of my life to check that the tumour is not growing again.
One big piece of advice from the doctor was to calm down and live a more relaxed life. Any strain on my immune system can help trigger the tumour. Thus, I was forced to make some changes in my life, albeit slowly, because I have always managed to keep my hands full of work, so it’s hard to slow down and try to relax.
Not many people really knew about all this while I tried to wrap my head around it. What am I now? How do I handle a latent disease in my body, not knowing when it will show up again? I don’t have cancer and it’s not lethal, but having my jaw removed is not a light aesthetic surgery, and going into an operating room is not something I look forward to. I also don’t feel different in my daily life and I don’t want to be treated differently.
I had to cut back on my tasks, what would have to go? I’m a KDE contributor, a project manager at work, and a political activist, a husband and a father. Something had to go.
Along the year, my priorities shifted, but basically I decreased my involvement in KDE, and my political activities are almost nil at the moment. I’ll try to put more work into KDE as long as I see that I can cope with it in a stress-free way.
I’m still dealing with what happened and I don’t have all the answers, so I’ll have to play along and see how it turns out, living 6 months at a time, until the next check.

Yesterday I got my semi-annual X-ray. I still have to see the doctor, but there is no big black monster eating my jaw, or so it seems.

Just somewhere I expect to live in the future, for a quieter and healthier life.

Categories: FLOSS Project Planets

kdev-qmljs 1.7.0 released

Fri, 2014-12-12 15:06

I’m pleased to announce that the kdev-qmljs plugin that I’ve developed during the spring and the summer is finally released in version 1.7.0. This version is compatible with KDevPlatform 1.7.0 and KDevelop 4.7.0. You can download the source tarball on

  • SHA256Sum: 70927785de7791335eda43b55ef7742af7915425823d5f70b97edac1828681e1
  • SHA1Sum: 78ad0f72ffa091f6aff5ea633eb5f135e9d625f3
  • MD5Sum: e51b93519e4eb028a8863c5d7bc96848

This release contains all the features developed during the summer. It adds support for the QML and Javascript languages in KDevelop and is able to recognize Javascript variables, functions, “objects” (based on prototype oriented programming) and most of its standard library (along with the DOM standard library and core Node.js modules). The QML language supports Qt 4 and Qt 5 modules, custom binary modules (modules for which you have an .so file), modules in your application and all sorts of import statements.

More advanced support for Node.js, richer QML helper tools and some other features will be available in the KF5 version of the plugin. I hope to have time to work on it shortly.

The tarball I linked above is known to compile and work on Fedora, and a preliminary package has already been created (the final one will come shortly, I think). Thanks again to everyone who helped me develop this plugin and fix bugs (especially my mentor Sven Brauch, Milian Wolff who dived into the deepest parts of KDevelop in order to fix a memory corruption issue, and Kevin Funk and Aleix Pol who gave me advice on how to properly implement some parts of the plugin). Thanks again to the whole KDE community for the support and encouragement, and to the sysadmins who do a wonderful job keeping everything working.

Categories: FLOSS Project Planets

Calligra 2.8.7 is Out

Fri, 2014-12-12 09:02

Packages for the release of KDE's document suite Calligra 2.8.7 are available for Kubuntu 14.10. You can get it from the Kubuntu Updates PPA. They are also in our development version Vivid.

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

Categories: FLOSS Project Planets

Sharing with Qt on Android

Thu, 2014-12-11 23:40

We just released a new version of GiraffPanic – a logic mobile game written with Qt and QML. In the new version we give users the possibility to share unlock codes with each other to unlock new levels. So we wanted a nice way to share a code between devices without any need to copy-paste it into another application. After trying a lot of different approaches (that did not work), we found it is possible to invoke the native Android share menu from within our application. Using this method keeps our own code quite tidy and supports all the ways of sharing provided by the host device.

By using sharing this way, the application does not need any special permissions.

Here is a preview of what the end result might look like:

You can find sample source code for the test app here. The code below is shortened to make it easier to understand.

So what is needed for this to work?

1. Java class that calls the native Android Java API

    ...
    public class ShareIntent
    {
        static public void shareText(String title, String subject, String content, QtActivity activity)
        {
            Intent share = new Intent(Intent.ACTION_SEND);
            share.setType("text/plain");
            share.putExtra(Intent.EXTRA_SUBJECT, subject);
            share.putExtra(Intent.EXTRA_TEXT, Html.fromHtml(content).toString());
            share.putExtra(Intent.EXTRA_HTML_TEXT, content);
            activity.startActivity(Intent.createChooser(share, title));
        }
    }
    ...

2. The androidextras module in the .pro file

    ...
    QT += androidextras
    ...

3. Qt class to call the Java class via JNI (Java Native Interface)

    ...
    void QtAndroidShare::share(const QString &title, const QString &subject, const QString &content)
    {
        QAndroidJniObject jTitle = QAndroidJniObject::fromString(title);
        QAndroidJniObject jSubject = QAndroidJniObject::fromString(subject);
        QAndroidJniObject jContent = QAndroidJniObject::fromString(content);
        QAndroidJniObject activity = QtAndroid::androidActivity();
        QAndroidJniObject::callStaticMethod<void>(
            "net/exit0/androidshare/ShareIntent",
            "shareText",
            "(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;"
            "Lorg/qtproject/qt5/android/bindings/QtActivity;)V",
            jTitle.object<jstring>(),
            jSubject.object<jstring>(),
            jContent.object<jstring>(),
            activity.object<jobject>());
    }
    ...

The QAndroidJniObject class is part of the androidextras module and simplifies calling Java methods with JNI. First we convert the QString objects to Java String objects, which are used as parameters for the Java method. We also pass the activity object to start the share intent from. As the Java method shareText is static public void, we can use QAndroidJniObject::callStaticMethod<void>() to call it.
The parameters are:

  • className - the fully qualified path of the class
  • methodName - the method we want to call
  • signature - the signature of the method we want to call. In our case the function takes 3 String parameters and one QtActivity parameter and returns void.
  • parameters - all the parameters passed to the method.

In the article Qt on Android Episode 5, Bogdan gives a nice overview of Qt and JNI.

4. Make the QtAndroidShare class available from QML

To make the QtAndroidShare class available from QML, QtAndroidShare::share is declared as

  Q_INVOKABLE virtual void share(const QString &title, const QString &subject, const QString &content);

and an object of the class is added to the QQmlContext

    ...
    QQmlApplicationEngine engine;
    QQmlContext *context = engine.rootContext();
    qmlRegisterType<QtAndroidShare>("QtAndroidShare", 1, 0, "ShareIntent");
    context->setContextProperty("shareIntent", new QtAndroidShare());
    ...

5. Use from QML

    ...
    Button {
        text: "Press to share"
        onClicked: {
            shareIntent.share(title.text, subject.text, content.text);
        }
    }
    ...

As you can see, calling the code from QML is now quite easy. I hope this helps some people out there.

To see the code in action in our game, download it for free from Google Play. It is also available on BlackBerry World, and there is a download of the game for the N9 here (it is no longer possible to upload anything to its store), as that is the phone I still use on a daily basis.

Categories: FLOSS Project Planets

digiKam: Season of KDE Update

Thu, 2014-12-11 11:00

I’ve been slow on updating the progress of the project — I apologize for that. I’m working on implementing an improved noise estimation function for digiKam.

Currently, I’m working on a CLI version of the implementation to better understand the noise estimation process. The implementation consists of calculating the gradient covariance matrix of the image. For an image patch y, it’s defined as follows:

    C_y = G_y^T G_y,   where   G_y = [ D_h y   D_v y ]

D_h y and D_v y are the horizontal and vertical derivatives of the image patch, respectively. The resulting matrix is then decomposed to get the eigen values.

To put it briefly, the gradient of the image patch gives us the differences between pixels. We then decompose the covariance matrix to get eigenvalues whose magnitudes represent the strength of the image patch. Specifically, the larger the maximum eigenvalue, the richer the texture is in the dominant direction.

I came across an OpenCV function which handles all of this and returns the eigen values (and vectors).

cornerEigenValsAndVecs calculates the eigen values and vectors for a given blocksize and aperture size.

The function returns the result represented in a 6-channel image — the first two containing the eigen values, and the others — the eigen vectors corresponding to those eigen values.

For testing this, I’ve applied noise with a uniform distribution to an image, and calculated the eigenvalues for the resulting image.

Further tests have to be made to check whether the retrieved results match those of the algorithm, and then proceed to select the “weak textured patches”.

Categories: FLOSS Project Planets

Moving to Stockholm

Thu, 2014-12-11 07:41


[This post is off-topic for some Planet readers, sorry for it. I just expect to get some help with free software communities.]

Exciting times are coming for me before the end of the year. Next week (probably) I am going to Stockholm to live for 5 months. I am taking a visiting researcher position at KTH – Royal Institute of Technology as part of my PhD course. I will work with the PSMIX group – Power System Management with related Information Exchange – led by Prof. Lars Nordström.

The subject of my project is the modelling and simulation of smart grid features (a power system plus a layer of communication and decision-making automation) using multi-agent systems. I expect to work with the simulation platform developed by PSMIX, based on Raspberry Pi and the SPADE framework. The platform is described in this paper.

Well, I am very anxious about this trip because of two things: communicating in English and the Swedish winter. The second is my main concern. Gosh, going from the everyday Brazilian summer to the Nordic winter! :'( But ♫ I will survive ♪ (I expect).

If you know someone who can help me with tips about renting an apartment, I will be very glad. Renting accommodation in Stockholm is very hard and expensive.

Thanks and I will send news!

Categories: FLOSS Project Planets

Meeting C++ and fantastic people

Wed, 2014-12-10 06:45

I got back from Meeting C++ and I must say I loved every second of it. At first, it was a bit strange – I’m accustomed to KDE/Qt conferences where I know a lot of people. Here, it was not the case. It is a bit sad to see that barely anyone from the Qt community was there (apart from a few KDAB people), but that is a separate topic.

The conference started with the great Scott (pun intended) Meyers. The talk was less technical than most of us expected, but it was really awesome. It was filled with great advice for anyone wanting to write books or give talks. It even made me change a few parts of my presentation which was scheduled for the next day.

Scott Meyers started his Keynote #meetingcpp

— Meeting C++ (@meetingcpp) December 5, 2014

In Scott’s first slide, he had shown my favourite monument in Berlin – Soviet War Memorial at Treptower Park. I followed, and raised him the one at Tiergarten.

It was a truly awesome feeling to speak in front of people like Scott Meyers, Hartmut Kaiser and Detlef Wilkening. And it was fantastic to see that people are really interested (quite surprising for me) in monads and asynchronous programming. I got a few questions in the Q&A section, and many more afterwards.

Yeah… Sure… Of course… #meetingcpp

— Andrea (@ndb70) December 6, 2014

The next step for me is C++ Russia meeting in Moscow. It seems I’ll have the chance to meet Bartosz Milewski and Sean Parent there. Can’t wait!

Categories: FLOSS Project Planets

HAWD: how are we doing

Wed, 2014-12-10 05:22

People love it when the software they are writing is hitting all those performance targets, but if the tooling is anything to go by people don't like measuring it much. ;) Given that the work we're doing around Akonadi is performance-sensitive, as in "we want it to be as fast as we can safely make it", we need to be able to measure these things in what is, for me, a useful manner.

What I wanted was a way to define a data set, then create entries for that dataset and then manage them from the command line. I wanted git hashes and timestamps to be added automatically so I could sort and track progress. I wanted to be able to annotate entries with notes like "horrible performance here due to a mistake I made" or "need to investigate what changed to make performance improve here". That sort of thing.

So I stole some scaffolding code from another side-project of mine and quickly erected a little tool called hawd, which stands for "How Are We Doing?" It works like this ...

You put a hawd.conf in a source repository that looks something like this:

    {
        "results": "~/hawd",
        "project": "@CMAKE_SOURCE_DIR@/hawd_defs"
    }

Aaah, JSON. ;) You then install it to your build directory via cmake like this:

    configure_file(hawd.conf hawd.conf)

Then you start creating definition files in the hawd_defs directory, things that look like this:

    aseigo@serenity:~/src/pim/akonadinext/hawd_defs (hawd)> cat buffer_creation
    {
        "name": "Buffer Creation",
        "description": "Tests how fast buffer creation is",
        "columns": {
            "numBuffers": { "type": "int" },
            "time": { "type": "int", "unit": "ms", "min": 0, "max": 100 },
            "ops": { "type": "float", "unit": "ops/ms" }
        }
    }

You then add a little code to your test. Something like this:

    HAWD::State hawdState;
    HAWD::Dataset dataset("buffer_creation", hawdState);
    HAWD::Dataset::Row row = dataset.row();
    row.setValue("numBuffers", count);
    row.setValue("time", bufferDuration);
    row.setValue("ops", opsPerMs);
    dataset.insertRow(row);

Obviously you may wish to reuse the State and Dataset objects in your test, but you get the idea. It handles storage, checking values, timestamping, figuring out git hashes, etc. all for you. You can leave values out, add/remove columns to your definition later, update rows ...
Later on you can flip to the command line and do things like list all the datasets defined in this project:

    aseigo@serenity:~/src/pim/build/akonadinext> hawd list
    Data sets in this project:
            storage_readwrite

... or look into a given dataset a bit more:

    aseigo@serenity:~/src/pim/build/akonadinext> hawd list buffer_creation
            Dataset: Buffer Creation
                    int time
                    float ops
                    int numBuffers

... or check the syntax of one or all of the definition files:

    aseigo@serenity:~/src/pim/build/akonadinext> hawd checkall
    buffer_creation is OK
    storage_readwrite is OK

... or print some data out in tabular form:

    aseigo@serenity:~/src/pim/build/akonadinext> hawd print buffer_creation
    Timestamp       Commit  ops (ops/ms)    numBuffers      time (ms)
    1418202530986   064ac243f       714.285714285714        50000   70
    1418202678475   ca294cba2       694.444444444444        50000   72
    1418202686175   ca294cba2       657.894736842105        50000   76
    1418202691829   ca294cba2       684.931506849315        50000   73

... or only get specific columns:

    aseigo@serenity:~/src/pim/build/akonadinext> hawd print buffer_creation ops numBuffers
    Timestamp       Commit  numBuffers      ops (ops/ms)
    1418202530986   064ac243f       50000   714.285714285714
    1418202678475   ca294cba2       50000   694.444444444444
    1418202686175   ca294cba2       50000   657.894736842105
    1418202691829   ca294cba2       50000   684.931506849315

It's still in its infancy, but for two spare-time-in-the-evening's worth of hacking I'm happy with the progress so far. It already produces data useful for charting and is handy for running tests from different git branches to see how they stack up next to each other.
I need to add annotation, row deletion, dataset windowing (limit/offset, basically) and a few other things to the hawd tool before it is truly useful, and maybe one day it will even grow the ability to generate nice graphical reports all on its own. Who knows.
Extending the hawd command line tool is easy enough, thankfully. Each bit of syntax is added as a module that registers some lambdas:

Print::Print()
    : Module()
{
    Syntax top("print", &Print::print);
    setDescription(QObject::tr("Prints a table from a dataset; you can provide a list of rows to output"));
}

Syntax is a tree structure, so if a module has syntax that allows for specializations (e.g. "print table", "print xml"), it just needs to add a Syntax object for each of these to the "top" Syntax and give it a lambda to run. It is even nifty enough to allow "print foo" to call the top lambda in the above example (since "foo" doesn't match "table" or "xml"), and you can have syntax without a lambda, which allows creating nodes in the syntax tree that require further arguments. The lambdas are handed a QStringList of everything left on the command line and a HAWD::State object for easy access to datasets, git hashes, etc.
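The syntax-tree dispatch idea can be sketched in a few lines of plain C++. This is an illustration, not hawd's actual code: the Syntax struct, the run() helper and their signatures are invented here, and std::function/std::vector stand in for the Qt types the real tool uses.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// One node in the syntax tree: a keyword, an optional handler lambda,
// and child nodes for specializations such as "print table" or "print xml".
struct Syntax {
    using Handler = std::function<int(const std::vector<std::string>&)>;

    std::string keyword;
    Handler handler;               // may be empty: node then only routes to children
    std::vector<Syntax> children;
};

// Walk the tree: if the next word matches a child keyword, descend into it;
// otherwise run this node's handler, passing it everything left on the
// command line. That is how "print foo" falls back to the "print" lambda.
int run(const Syntax& node, std::vector<std::string> args)
{
    for (const Syntax& child : node.children) {
        if (!args.empty() && args.front() == child.keyword) {
            args.erase(args.begin());
            return run(child, args);
        }
    }
    if (node.handler) {
        return node.handler(args);
    }
    return -1;  // a node without a handler requires further arguments
}
```

With a "print" node that has an "xml" child, running `{"xml"}` dispatches to the child, while an unknown word like `{"foo"}` falls through to the top-level handler with the word still in its argument list.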
Currently hawd sits in a branch of the akonadinext repository where we are experimenting with what is possible with Akonadi. (I hope to merge it into master once Christian has had a peek at it.) It also uses the Storage class we've been toying around with, mostly because I wanted to have another use case with which to bang on it some more, which is already helping me to understand how the API ought to look. (Currently Storage is quite ugly, but that's because it isn't even a draft API at this point.)
If you would like to poke at hawd, feel free to grab it from the akonadinext repository; you'll find it under tests/hawd/ at the moment. It requires Qt5, lmdb and a fairly recent libgit2 for the git hashes (though that dependency is optional). Patches are also more than welcome.
p.s. On the topic of performance, Milian Wolff's heaptrack tool that he announced recently is quite exciting. It's a rather excellent tool for measuring heap usage in applications without grinding your system to a halt. If you haven't seen it, check it out. Development of it is moving nicely forward as well, with things like attaching to running processes having been added just the other day.

Categories: FLOSS Project Planets

Trusty Old Router

Tue, 2014-12-09 18:10

I decommissioned a fine piece of hardware today. This access point brought the first wireless connectivity to my place. It’s been in service for more than 11 years, and is still fully functional.

In the past years, the device has been running OpenWRT, which is a really nice and very powerful little Linux distribution specifically for this kind of router. OpenWRT actually sprang from the original firmware for this device, and was extended, updated, improved and made available for a wide range of hardware. It is OpenWRT that has kept this piece of hardware useful lately, and I'm really thankful for that. It also shows how much value releasing firmware under an Open Source license can add to a product. Aside from the long-term support effect of releasing the firmware, the updated firmware added features to the router which were otherwise only available in much more expensive hardware.

The first custom firmware I ran on this device was Sveasoft. In the long run, this ended up not being such a good option, since the company producing the software really stretched the meaning of the GPL — while you were technically allowed to share software with others, doing so would end your support contract with the company — no updates for you. LWN has a good write-up about this story.

Bitter-sweet gadget-melancholy aside, the replacement access point brings a 4 times speed increase to the wifi in my home office: less finger-twiddling, more coding. :)

Categories: FLOSS Project Planets

Heaptrack - Attaching to Running Process

Tue, 2014-12-09 16:02

Hello all,

I’m happy to be back so soon with a status update on heaptrack: It is now possible to attach to an already running process!

Thanks to the great help from Celelibi on StackOverflow, I managed to achieve this important goal. Once you know what to do, it is actually extremely simple to patch a running process. I use GDB to attach to the process, then call dlopen to load a special heaptrack library for runtime injection. Then I call an initialization function which takes the desired output file as a parameter, and detach GDB. To actually overwrite malloc & friends, one can leverage dl_iterate_phdr and the public ELF API on Linux systems to find dynamic sections that reference one of our target symbols in their global offset table (GOT). Those GOT entries can then be rewritten to point to our custom hooks. After some refactoring, which stabilized the shutdown sequence to allow multiple heaptrack attach/detach cycles, we can now do this:
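The discovery half of that technique can be sketched as follows. This is a Linux-specific illustration, not heaptrack's actual code; the findGots/countPltGots names are invented here, and the actual slot rewrite (which also needs the object's DT_JMPREL relocations and symbol table to find which slot belongs to malloc) is only described in comments.

```cpp
#include <cassert>
#include <elf.h>
#include <link.h>   // dl_iterate_phdr, struct dl_phdr_info, ElfW()

// Walk every loaded ELF object, find its PT_DYNAMIC segment and, inside it,
// the DT_PLTGOT entry: the global offset table whose slots a hooking tool
// rewrites to point at its own malloc/free replacements.
static int findGots(struct dl_phdr_info* info, size_t, void* data)
{
    int* gotCount = static_cast<int*>(data);
    for (int i = 0; i < info->dlpi_phnum; ++i) {
        if (info->dlpi_phdr[i].p_type != PT_DYNAMIC)
            continue;
        // p_vaddr is relative to the object's load address.
        auto* dyn = reinterpret_cast<ElfW(Dyn)*>(
            info->dlpi_addr + info->dlpi_phdr[i].p_vaddr);
        for (; dyn->d_tag != DT_NULL; ++dyn) {
            if (dyn->d_tag == DT_PLTGOT) {
                ++*gotCount;
                // A tool like heaptrack would now scan this object's
                // DT_JMPREL relocations for symbols named "malloc" etc.
                // and overwrite the matching GOT slots with pointers to
                // its own hook functions.
            }
        }
    }
    return 0;  // keep iterating over the remaining loaded objects
}

// Count how many loaded objects expose a PLT GOT we could patch.
int countPltGots()
{
    int gotCount = 0;
    dl_iterate_phdr(&findGots, &gotCount);
    return gotCount;
}
```

Any dynamically linked process (the executable itself plus libc and friends) exposes at least one such GOT, which is why the runtime injection works without restarting the target.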

  1. heaptrack -p $(pidof <yourapp>)
  2. # wait
  3. ^C
  4. heaptrack_print heaptrack.<yourapp>.$$.gz | less

This is a great help when you want to investigate why the memory consumption of your application suddenly rises. No need to restart the app: just attach heaptrack, wait a bit, then detach it and run heaptrack_print on the output file.

Please try this new feature and send me bug reports and feedback.


Categories: FLOSS Project Planets

Keeping our eyes on the big picture – High Priority Projects

Tue, 2014-12-09 12:23

Free Software has made great strides in all kinds of areas and improved our lives. Nonetheless there are still many areas where people don’t have the freedom to use, study, modify and redistribute software that is important to them. The Free Software Foundation has a list of projects where it is especially important to provide a new or better Free Software solution. I am very happy to see that the process for maintaining this list has been opened up now. The list is going to be renewed by a committee (that I am a part of). Our movement needs to keep the big picture in mind and attract new people for important areas if we want to make further progress on giving more people more control over more parts of their digital lives. But what should be on this list in the future? Where does Free Software need to make a difference? We need your input. For further details please see the announcement by the FSF.

Categories: FLOSS Project Planets