Planet KDE


GSoC 2015 (Mid Term Evaluation)

Fri, 2015-07-03 13:51
The midterm evaluation week has almost come to an end, and the midterm evaluation deadline is today. This post describes what I have achieved in my project "Integration of Cantor with LabPlot" and what I plan to do next.

Below are the screenshots of LabPlot.

As you can see in the screenshots above, Cantor's session is integrated. The variable manager, print, print preview and all other relevant Cantor actions have also been implemented in LabPlot. With all of these in place, I have successfully reached my midterm evaluation target. I have also been improving the code written so far and incorporating my mentor's suggestions into it.
I will now move on to extracting variables from Cantor's session so that we can use them to create new plots inside LabPlot. After that I will work on saving Cantor's data along with LabPlot's data, so that users can save and load both worksheets.
I have learned a lot during this journey so far. I have learned how discussion plays an important role in the development of software, and about some of the best practices that should be followed and their importance in real-world code.
That is not all! The upcoming weeks will see more coding, and I am prepared to learn even more during that time.
Categories: FLOSS Project Planets

KDEPIM without Akonadi

Fri, 2015-07-03 10:16

As you know, Gentoo is all about flexibility. You can run bleeding-edge code (portage, our package manager, even provides you with installation from git master KF5 and friends) or you can focus on stability and trusted code. This is why, for the last years, we have been offering our users KDEPIM 4.4 (the version where KMail's e-mail storage was not yet integrated with Akonadi, also known as KMail1) as a drop-in replacement for the newer versions.
Recently the Nepomuk search framework has been replaced by Baloo, and after some discussion we decided that it is now time for the Nepomuk-related packages to go. The problem is that the old KDEPIM packages still depend on it via their Akonadi version. This is why, for those of our users who prefer to run KDEPIM 4.4 / KMail1, we've decided to switch to Pali Rohár's kdepim-noakonadi fork (see also his 2013 blog post and the code). The packages are right now in the KDE overlay, but will move to the main tree after a few days of testing and be treated as an update of KDEPIM.
The fork is essentially KDEPIM 4.4 with some additional bugfixes from the KDE/4.4 git branch, with KAddressBook patched back to its KDEPIM 4.3 state and references to Akonadi removed elsewhere. This is in some ways a functionality regression, since the integration of e.g. different calendar types is lost; however, in that version it never really worked perfectly anyway.

For now, you will still need the akonadi-server package, since kdepimlibs (outside kdepim and now at version 4.14.9) requires it to build, but you'll never need to start the Akonadi server. As a consequence, Nepomuk support can be disabled everywhere, and the Nepomuk core, Nepomuk client and Akonadi client packages can be removed by the package manager (--depclean; make sure to first globally disable the nepomuk USE flag and rebuild accordingly).
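The cleanup described above can be sketched roughly as follows; this is an outline of the usual Portage workflow rather than an exact recipe, so adapt the flags and file locations to your own setup:

```shell
# 1. Globally disable the nepomuk USE flag, e.g. in /etc/portage/make.conf:
#      USE="... -nepomuk"

# 2. Rebuild the packages whose USE flags changed:
emerge --ask --changed-use --deep @world

# 3. Let Portage remove the now-unneeded Nepomuk and Akonadi client packages:
emerge --ask --depclean
```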

You might ask "Why are you still doing this?"... Well, I've been told that Akonadi and Baloo work very nicely, and once again I considered upgrading all my installations... but then, on my work desktop where I am using the newest and greatest KDE 4 PIM, bug 338658 pops up regularly and stops the syncing of important folders. I just don't have the time to pointlessly dig deep into the Akonadi database every few days. So KMail1 it is, and I'd rather spend some time occasionally picking and backporting bugfixes.

Categories: FLOSS Project Planets

Looks as if Wily got Plasma 5.3.2.

Fri, 2015-07-03 10:06


No backports PPA required.

Plasma 5.3.2.

Daily Wily Images.

Categories: FLOSS Project Planets

GSoC 2015 midterm update

Fri, 2015-07-03 09:28

I'm working on the project Improve Marble's OSM vector rendering and printing support. The first part of the project is to polish and improve the rendering of the OSM vector tiles in Marble. In my previous post you could see what it looks like without the improvements I have in mind, and now I want to share some pictures and videos of the progress and results so far.

Adding outlines for OSM ways

One of the most glaring issues was the lack of outlines for streets and highways, and my first task was to overcome this problem somehow. These lines are drawn as QPainter lines and thus have no style setting for outlines, so the best I could do is draw each line twice: first a slightly thicker line (2 pixels wider) in the outline color, then a second one over the top of it in the fill color. This approach alone is not enough, because the streets are not rendered in the order in which we want them. That results in something like this (which I showed you in my previous blog post):

The solution is to sort the lines (and every other object) by their z-value, so they are rendered in the correct order. As this sorting is a little resource-hungry, the outlines (and, further on, the decorations) are not rendered below the Normal quality setting. I also changed the style of almost every OSM item and added some new, previously unimplemented items to the list (like cycleways and pedestrian ways), so they match the standard color theme of the openstreetmap website even better. The end result looks good, I think.

After the improvements
Before the improvements
Decoration and "Fake 3D"

Implementing this feature led us to the idea of creating a more robust API for adding such decorations to the GeoGraphicsItem classes used in the rendering process (e.g. the class used to describe the streets is GeoLineStringGraphicsItem). By overriding a single virtual function, one can create custom decorations for every GeoGraphicsItem-derived class. To demonstrate this functionality, the outlines are implemented this way, and I added a simple 3D effect (a shadow) for the buildings. It takes almost the same approach as the outline rendering, but the shadow is rendered with a small offset based on the focus point of the map (the little cross at the center). I think this is a pretty good example of what such a decoration system is capable of.

Struggle with the street labels

As you can see in the first pictures, the street labels aren't aligned to the streets at all, even though this is a crucial point of urban maps. I thought this would be an easier task than the decorations, but I was wrong. Properly aligning the street labels with the right color, size, font, direction etc. is more of an art than a simple task. One problem is that this needs to be done in real time, so I can't use a complex algorithm to calculate these properties. On top of that, Qt doesn't provide a proper way to draw text along a (curved) path. The best I could do is try to avoid the curvatures of the streets when drawing the labels, so they render properly most of the time. A quick demonstration of what I have achieved so far:

Overall

Due to my exams I couldn't give GSoC 100% priority, but the progress so far meets my expectations, and from now on I can even double the time spent working, programming and experimenting.
I'll post a smaller status update when I start working on the printing support improvements; until then I'm going to polish the work I have done so far.
In the end, here is a video demonstrating these improvements:

Categories: FLOSS Project Planets

Roundcube Next: The Next Steps

Fri, 2015-07-03 09:17

The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core, giving it a secure future, has just wrapped up. We managed to raise $103,531 from 870 people, surpassing our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now we begin the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.


The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek at what the shirts might look like. We're still working out the details, so they may look a bit different from this once they come off the presses, but it should give you an idea. We'll be in touch with people about shirt sizes, color options, etc. in the coming week.

Those who elected for the Kolaborator perk will be notified by email about how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application-credits mention will get that in due time as well. We've got you all covered! :)

Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait for that before confirming our orders and shipping the physical perk items.

Roundcube Backstage

We'll be opening the Roundcube Backstage area in the two weeks or so after wrap-up completes next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, and contemplating the amazing future that lies ahead of us.

The usual channels of Roundcube blogging, forums and mailing lists will of course remain in use, but the Backstage will see all sorts of extras and closer direct interaction with the developers. If you picked up the Backstage perk, you will get an email next week with information on when and where you can activate your account.

Advisory Committee

The advisory committee members will also be getting an email next week with a welcome note. You'll be asked to confirm who the contact person should be, and they'll get a welcome package with further information. We'll also want some information for use in the credits badge: a logo we can use, a short description you'd like to see with that logo describing your group/company, and the web address we should point people to.

The Actual Project!

The funds we raised will cover getting the new core in place with basic email, contacts and settings apps. We will be able to adopt JMAP in this and build the foundations we so desperately need. The responsive UI that works on phones, tablets and desktop/laptop systems will come as a result of this work as well, something we are all really looking forward to.

Today we had an all-hands meeting to take our current requirements, mock-ups and design docs and reflect on how the feedback we received during the campaign should influence them. We are now putting all this together in a clear and concise form that we can share with everyone, particularly our Advisory Committee members as well as in the Backstage area. This will form the basis for our first round of stakeholder feedback, which I am really looking forward to.

We are committed to building the most productive and collaborative community around any webmail system out there, and these are just our first steps. That we have the opportunity here to work with the likes of Fastmail and Mailpile, two entities that one may have thought of as competitors rather than possible collaborators, really shows our direction in terms of inclusivity and looking for opportunities to collaborate.

Though we are at the end of this crowdfunding phase, this is really just the beginning, and the entire team here isn't waiting a moment to get rolling! Mostly because we're too excited to do anything else ;)

Categories: FLOSS Project Planets

GSoC ’15 Post #3: Install-ed!

Fri, 2015-07-03 03:07

After familiarising myself with PackageKit-Qt last week, I started working on a small application that uses it this week. The aim was simple – to create an application that uses PackageKit to install packages. Thanks to the detailed guides here (PackageKit reference) and here (PackageKit-Qt API docs), pointed out to me by ximion, I was able to build a KF5 application that takes user input, asks for a password and installs the package the user typed in. The application can be found here (git).

The Application

The application has a simple interface – a lineEdit and two pushButtons.
Once the user input has been stored in a QString variable (this is the package name), the next step is to resolve the name to a package ID. The package ID is basically the package name plus some more data (related to the system on which the package is being installed). For example, the package ID for the package geany turns out to be

geany;1.24.1+dfsg-1build1;amd64;vivid
geany;1.24.1+dfsg-1build1;i386;vivid

To resolve package names to package IDs, PackageKit provides a function named resolve. resolve emits package IDs which, when fed to the function packageInstall, install the packages on the system. That's it – all you need to know to build your application is the functions and what they emit.

Next Up

Now, I have all the tools ready to start working on the applications.

Next, I’m working on Dolphin to integrate PackageKit into it to install Samba. I have run into some building issues, but hopefully they’ll be solved soon and once that’s done, I’ll just have to replicate what I did in the above application there.

Categories: FLOSS Project Planets

Pointing devices KCM: update #2

Thu, 2015-07-02 19:00

For general information about the project, look at this post

Originally I planned to work on the KCM UI at this time, but as I am unsure how it should look, I started a discussion on the VDG forum and decided to switch to other tasks.

Currently the KCM looks like this:

Don’t worry, it isn’t the final UI, just a minimal demo :)

The KDED module is, I think, almost complete. It can apply settings from the configuration file, and it has a method exported to D-Bus to reload the configuration for all devices or for one specific device. Of course, it also applies settings immediately when a device is plugged in. The only thing missing is the auto-disabling of some devices (like disabling the touchpad when an external mouse is present).

As usual, here is a link to the repository.

Also, I started working on a D-Bus API for KWin. The API will expose most of libinput's configuration settings. Currently it lists all available devices and some of their most important read-only properties (like name, hardware IDs and capabilities), and it allows enabling/disabling tap-to-click as an example of a writable property. As I already said, the KCM isn't ready yet, but I was able to enable tap-to-click on my touchpad using qdbusviewer.

My KWin repo clone is here, in the branch libinput-dbusconfig.

Categories: FLOSS Project Planets

Fiber UI Experiments – Conclusion?

Thu, 2015-07-02 17:13

It’s been one heckuva road, but I think the dust is starting to settle on the UI design for Fiber, a new web browser which I’m developing for KDE. After some back-and-forth from previous revisions, there are some exciting new ideas in this iteration! Please note that this post is about design experiments – the development status of the browser is still very low-level and won’t reach the UI stage for some time. These experiments are being done now so I can better understand the structure of the browser as I program around a heavily extension-based UI, so that when I do solidify the APIs we have a rock-solid foundation.

Just as an aside before I get started: just about any time I mention “QML”, there is a chance that whatever is being driven could alternatively use HTML. I’m looking into this, but I make no guarantees.

As a recap of the previous experiments, one of the biggest things that became very clear from feedback was that the address bar isn’t going away, and I’m not going to hide it. I was a sad panda, but there are important things the address bar provides which I just couldn’t work around. Luckily, I found some ways to improve upon the existing address bar ideology via aggressive use of extensions, and slightly different usage compared to how contemporary browsers embed extensions into the input field – so let’s take a look at the current designs;

By default, Fiber will have either “Tabs on Side” or “Tabs on Bottom”; this hasn’t been decided yet, but there will also be a “Tabs on Top” option (which I have decided will not be the default, for a few reasons). Gone is the search box as it was in previous attempts – replaced with a proper address bar which I’m calling “Multitool” – and here’s why I’m a little excited about it;


Fiber is going to be an extensions-based browser. Almost everything will be an extension, from basic navigational elements (back, forward) to the bookmarks system – and all will be either disable-able or replaceable. This means every button, every option, every utility will be configurable. I’ve studied how other browsers embed extensions in the address bar, and none of them really integrate with it in a meaningful and clearly defined way. Multitool is instead getting a well-defined interface for extensions which make use of the bar;

Extensions which have searchable or traversable content will be candidates for extending into the Multitool, which includes URL entry, search, history, bookmarks, downloads, and other things. Since these are extensions with a well-defined API you will be able to ruthlessly configure what you want or don’t want to show up, and only the URL entry will be set in stone. Multitool extensions will have 3 modes which you can pick from: background, button, and separate.

Background extensions will simply provide additional results when typing into the address bar. By default, this will be the behaviour of things like current tabs, history, and shortcut-enabled search. Button extensions in Multitool will provide a clickable option which takes over the Multitool when clicked, offering a focused text input and an optional QML-based “home popout”. Lastly, “separate” extensions will split the text input, offering something similar to a separate text widget – only integrated into the address bar.

The modes and buttons will be easily configurable, and so you can choose to have extensions simply be active in the background, or you could turn on the buttons, or disable them entirely. Think of this as applying KRunner logic to a browser address bar, only with the additional ability to perform “focused searches”.

Shown on the right side of the Multitool are the two extensions with dedicated buttons, bookmarks and search, which will be the default rollout. When you click on those embedded buttons they will take over the address bar and you may begin your search. These tools will also be able to specify an optional QML file for their “home” popout: for example, the Bookmarks home popout could be a speed-dial UI, and History could be a time-machine-esque scrollthrough. Seen above is a speed-dial popout. With Bookmarks and Search in button mode by default, just about everything else that performs local searches will be in background mode, except keyword-based searches, which will be enabled but will require configuration. Generally, the address portion of the Multitool will NOT, out of the box, beam what you type to a third party, but the search extension will. I have not selected search providers.

We also get a two-for-one deal for fast filtering, since the user is already aware they have clicked on a text entry. Once you pick a selection from a focused search or cancel, the bar will snap back into address mode. If you hit “enter” while doing a focused search, it will simply open a tab with the results of that search.

Aside from buttons, all the protocol and security information relevant to the page (the highlighted areas on the left) will also be extension-driven. Ideally, this will let you highly customise what warnings you get, and will also let extensions tie any content-altering behaviour into proper warnings. For example, the ad-blocker may broadcast the number of zapped ads. When clicked, these extensions will use QML-driven popouts.

Finally, the address itself (and any focused extension searches) will have extension-driven syntax highlighting. Right now I’m thinking of using a monospace font so we can drive things like bold fonts without offsetting text.


Tab placement was a big deal to people; some loved the single-row approach, others wanted a more traditional layout. The solution to the commotion was the fact that there isn’t a single solution. Tabs will have previews and simple information (as seen in the previous round of designs), so by default tabs will be on the bottom or side so the previews don’t obstruct unnecessary amounts of UI.

Fiber will have 3 tabbing options: tabs on top, tabs on bottom, and tabs on side. When tabs are “on side”, the UI is reduced to one toolbar, with the tabs placed on the same row as the Multitool; this should also trigger a “compressed” layout for the Multitool.

There will be the usual “app tab” support of pinning tabs, but not shown here will be tab-extensions. Tab extensions will behave like either app tabs or traditional tabs, and will be QML-powered pages from extensions. These special tabs will also be home-screen or new-tab options, and that is, largely, their purpose; but clever developers may find a use in having extension-based pages.

Tabs can also embed simple toggle-buttons, as usual, powered by extensions. Main candidates for these will be mute buttons or reader-mode buttons. There won’t be much to these buttons, but they will be content-sensitive and extensions will be required to provide the logic for when these buttons should be shown. For example, “reader mode” won’t be shown on pages without articles, and “mute” won’t be shown on pages without sound.

Current Progress

The current focus in Fiber is Profiles, Manifest files, and startup. Profiles will be the same as Firefox profiles, where you can have separate profiles with separate configurations. When in an activities-enabled environment, Fiber Profiles will attempt to keep in sync with the current activity – otherwise they will fall back to having users open a profile tool.

The manifest files are a big deal, since they define how extensions will interact with the browser. Fiber manifest files were originally based on a slimmed-down Chrome manifest with a more “Qt-ish” syntax (like CamelCase), but with the more extensive extension plans and placement options there’s more going on with interaction points. There’s a decent manifest class, and it provides a reliable interface to read from, including things like providing missing defaults and offering some debugging info which will be used in Fiber’s extension development tools.

I’m using D-Bus for Fiber to check a few things on startup; Fiber will be a “kind of” single-instance application, but individual profiles will be separate processes. D-Bus is being used to speak with running instances to figure out what it should do. The idea behind this setup is to keep instances on separate activities from spiking each other, but to still allow easier communication between windows of a single instance – this should help things like tab dragging between windows immensely. It also gives the benefit that you could run “unstable” extensions in a separate instance, which will be good for development purposes.

I wish I could say development is going quickly, but right now my time is a bit crunched; either way things are going smoothly, and I’d rather be slow and steady than fast and sloppy.

Development builds will be released in the future (still a long way away) which I’ll be calling “Copper” builds. Copper builds will mostly be a rough and dirty way for me to test the UI, and will not be stable or robust browsers. Mostly, they’ll be for the purpose of identifying annoying UI patterns and nipping them before they get written into extensions.
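No manifest format has been published for Fiber yet, so purely as a hypothetical illustration (every key below is invented), a "slimmed-down Chrome manifest with Qt-ish CamelCase syntax" for a Multitool extension might look something like:

```json
{
    "Name": "Bookmarks",
    "Version": "0.1.0",
    "EmbedPoints": {
        "Multitool": {
            "Mode": "button",
            "HomePopout": "qml/SpeedDial.qml"
        }
    },
    "Permissions": ["bookmarks", "history"]
}
```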

Categories: FLOSS Project Planets

Joining the press – Which topic would you like to read about?

Thu, 2015-07-02 16:26

When I saw an ad on Linux Veda saying that they are looking for new contributors to their site, I thought: “Hey, why shouldn’t I write for them?”. Linux Veda (formerly muktware) is a site that offers Free Software news, how-tos, opinions, reviews and interviews. Since its founder Swapnil Bhartiya is personally a big KDE fan, the site has a track record of covering our software extensively in its news and reviews, and it has already worked with us in the past to make sure its articles about our software or community were factually correct (and yes, it was only fact-checking – we never redacted their articles or anything).

Therefore, I thought that a closer collaboration with Linux Veda could be mutually beneficial: Getting exclusive insights directly from a core KDE contributor could give their popularity an additional boost, while my articles could get an extended audience including people who are currently interested in Linux and FOSS, but not necessarily too much interested in KDE yet.

I asked Swapnil if I could write for him. He said it would be an honor to work with me, which I must admit made me feel a little flattered. So I joined Linux Veda as a freelance contributor.

My first article actually isn’t about anything KDE-related, but a how-to for getting 1080p videos to work in Youtube’s HTML5 player in Firefox on Linux, mainly because I had just explained it to someone and felt it might benefit others as well if I wrote it up.

In the future you will mainly read articles about KDE-related topics from me there. Since I’m not sure which topics people would be most interested in, I thought I’d ask you, my dear readers. You can choose which of three topics that came to my mind I should write about, or add your own ideas. I’m excited to see which topic will win!

Take Our Poll

Of course this doesn’t mean I won’t write anything in my blog here anymore. I’ll decide on a case-by-case basis if an article would make more sense here or over at Linux Veda. I hope you’ll find my articles there interesting and also read some of the other things they have on offer, you’ll find many well-written and interesting articles there!

Filed under: KDE
Categories: FLOSS Project Planets

KDEPIM report (week 26)

Thu, 2015-07-02 15:41

My focus was KAddressBook last week.


It’s not a complicated application, so I didn’t find a lot of bugs. But indeed as I maintain it I already fixed critical bugs.

Some fixes:

  • I improved Gravatar support.
  • I added a “Server Side Subscription” action. Before, it was necessary to go to KMail or to the resource settings and click on the “Server Side Subscription” button, which was not user-friendly.
  • I continued to clean up the code, and I now use the new Qt5 connect API.
  • I fixed a lot of bugs (layout bugs etc.).

Other works in KDEPIM:

  • I fixed LDAP support, which was broken when we ported it to QUrl; now it is usable again, which is great.
  • I fixed translations in KOrganizer.
  • The AkonadiSearch D-Bus interface name was changed, so we couldn’t get email indexing information. Fixed.
  • I fixed the IMAP resource interface too.
  • A lot of bugs were fixed in KOrganizer, KMail, SieveEditor etc.


Next week I will focus on KNotes and Kleopatra.

Other info:

Dan merged his work on replacing the text protocol with a binary protocol for Akonadi. I can confirm it’s faster.

He has worked a lot on improving speed – KMail is now very fast.

I hope he will add more speed patches.

Sergio continued to add optimizations in kdepimlibs/kdepim/akonadi!

Categories: FLOSS Project Planets

KStars Observers Management patched

Thu, 2015-07-02 08:29

This update is a little break from my current GSoC project, so I won’t talk about my progress just yet. Instead, I will talk about the observers management dialog that is currently active in KStars. Basically, an observation session requires observer information such as first name, last name and contact. Until now, an observer could only be added from the settings menu, so I thought it would be more intuitive if this functionality was placed somewhere more appropriate, with a proper GUI for a better user experience.

This is how the new observers management dialog looks:

Now the user has a heads-up display of how many observers are currently in the database, and has the possibility of managing that information.

Regarding GSoC, I am now working on the main Scheduler logic. I will come back with an update as soon as possible. Stay tuned :D

Categories: FLOSS Project Planets

Convergence through Divergence

Wed, 2015-07-01 18:53

It’s that time of the year again, it seems: I’m working on KPluginMetaData improvements.

In this article, I am describing a new feature that allows developers to filter applications and plugins depending on the target device they are used on. The article targets developers and device integrators and is of a very technical nature.

Different apps per device

This time around, I’m adding a mechanism that allows us to list plugins, applications (and the general “service”) specific for a given form factor. In normal-people-language, that means that I want to make it possible to specify whether an application or plugin should be shown in the user interface of a given device. Let’s look at an example: KMail. KMail has two user interfaces, the desktop version, a traditional fat client offering all the features that an email client could possibly have, and a touch-friendly version that works well on devices such as smart phones and tablets. If both are installed, which should be shown in the user interface, for example the launcher? The answer is, unfortunately: we can’t really tell as there currently is no scheme to derive this information from in a reliable way. With the current functionality that is offered by KDE Frameworks and Plasma, we’d simply list both applications, they’re both installed and there is no metadata that could possibly tell us the difference.

Now the same problem applies not only to applications, but also, for example, to settings modules. A settings module (in Frameworks terms, a “KCM”) can be useful on the desktop but irrelevant on a media center. There may also be modules which provide similar functionality, but for a different use case. We don’t want to create a mess of overlapping modules, however, so again we need some kind of filtering.

Metadata to the rescue

Enter KPluginMetaData. KPluginMetaData gives information about an application, a plugin or something like this. It lists name, icon, author, license and a whole bunch of other things, and it lies at the base of things such as the Kickoff application launcher, KWin’s desktop effects listing, and basically everything that’s extensible or uses plugins.

I have just merged a change to KPluginMetaData that allows all these things to specify what form factors they’re relevant and useful for. This means that you can install, for example, KDevelop on a system that can be either a laptop or a media center, and an application listing can be adapted to only show KDevelop when in desktop mode and to skip it in media center mode. This is of great value when you want to unclutter the UI by filtering out irrelevant “stuff”. As this mechanism is implemented at the base level, KPluginMetaData, it’s available everywhere, using the exact same mechanism. When listing or loading “something”, you simply check if your current form factor is among the suggested useful ones for an app or plugin, and based on that you make a decision whether to list it or skip it.

With increasing convergence between user interfaces, this mechanism allows us to adapt the user interface and its functionality in a fully dynamic way, and reduces clutter.

Getting down and dirty

So, how does this look exactly? Let’s take KMail as an example, and assume for the sake of this example that we have two executables, kmail and kmail-touch. Two desktop files are installed, which I’ll list here in short form.

For the desktop fat client:

[Desktop Entry]
Name=Email
Comment=Fat client for your email
Exec=kmail
FormFactors=desktop

For the touch-friendly version:

[Desktop Entry]
Name=Email
Comment=Touch-friendly email client
Exec=kmail-touch
FormFactors=handset,tablet

Note that the “FormFactors” key does not just take one fixed value, but allows specifying a list of values, since an application may support more than one form factor. This is reflected throughout the API, where the plural form is used. Now the only thing the application launcher has to do is check whether the current form factor is among the supplied ones, for example like this:

foreach (const KPluginMetaData &app, allApps) {
    if (app.formFactors().count() == 0 || app.formFactors().contains("desktop")) {
        shownAppsList.append(app);
    }
}

In this example, we check if the plugin metadata does specify the form-factor by counting the elements, and if it does, we check whether “desktop” is among them. For the above mentioned example files, it would mean that the fat client will be added to the list, and the touch-friendly one won’t. I’ll leave it as an exercise to the reader how one could filter only applications that are specifically suitable for example for a tablet device.
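For illustration only, the same decision logic can be sketched without any Qt types; the dictionaries and helper function below are made up for this example and are not part of the KPluginMetaData API:

```python
# Illustrative sketch: filter application metadata by form factor,
# mirroring the C++ loop above. An empty "FormFactors" list means
# "no preference", so the app is listed everywhere.
def filter_apps(all_apps, current_form_factor):
    shown = []
    for app in all_apps:
        factors = app.get("FormFactors", [])
        if not factors or current_form_factor in factors:
            shown.append(app)
    return shown

apps = [
    {"Name": "Email (desktop)", "FormFactors": ["desktop"]},
    {"Name": "Email (touch)", "FormFactors": ["handset", "tablet"]},
    {"Name": "Legacy app"},  # no metadata: listed everywhere
]

print([a["Name"] for a in filter_apps(apps, "desktop")])
# prints ['Email (desktop)', 'Legacy app']
```

A tablet launcher would instead pass "tablet" and get the touch client plus the unannotated legacy app.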

What devices are supported?

KPluginMetaData does not itself check if any of the values make sense. This is done by design because we want to allow for a wide range of form-factors, and we simply don’t know yet which devices this mechanism will be used on in the future. As such, the values are free-form and part of the contract between the “reader” (for example a launcher or a plugin listing) and the plugins themselves. There are a few commonly used values already (desktop, mediacenter, tablet, handset), but in principle, adding new form-factors (such as smartwatches, toasters, spaceships or frobulators) is possible, and part of its design.

For application developers

Application developers are encouraged to add this metadata to their .desktop files. Simply adding a line like the FormFactors one in the above examples will help to offer the application on different devices. If your application is desktop-only, this is not really urgent: in the case of the desktop launchers (Kickoff, Kicker, KRunner and friends), we’ll likely use a mechanism like the above, where no form factors specified means “list it”. For devices where most of the installed applications will likely not work, marking your app with a specific form factor will increase the chances of it being found. As applications are adapted to respect the form-factor metadata, its usefulness will increase. So if you know your app works well with a remote control, add “mediacenter”; if you know it works well on touch devices with a reasonably sized display, add “tablet”; and so on.


We now have basic API, but nobody uses it yet (a chicken-and-egg situation, really). I expect that one of the first users will be Plasma Mediacenter. Bhushan is currently working on the integration of Plasma widgets into its user interface, and he has already expressed interest in using this exact mechanism. As KDE software moves onto a wider range of devices, this functionality will be one of the cornerstones of the device-adaptable user interface. If we want to use device UIs to their full potential, we do not just need converging code; we also need diverging features that let us benefit from the differences between devices.

Categories: FLOSS Project Planets

Hello Red Hat

Wed, 2015-07-01 18:35

As I mentioned in my last post, I left my previous employer after quite some years – since July 1st I have been working for Red Hat.

In my new position I will be a Solutions Architect – basically a sales engineer, the one talking to customers on a more technical level, providing details or proofs of concept where they are needed.

Since it’s my first day I don’t really know how it will be – but I’m very much looking forward to it, it’s an amazing opportunity! =)

Filed under: Business, Fedora, Linux, Politics, Technology, Thoughts
Categories: FLOSS Project Planets

The Kubuntu Podcast Team is on a roll

Wed, 2015-07-01 16:39

Building on their UOS Hangout, the Kubuntu Podcast Team has created their second Hangout, featuring Ovidiu-Florin Bogdan, Aaron Honeycutt, and Rick Timmis, discussing What is Kubuntu?

Categories: FLOSS Project Planets

The Earth, on Android

Wed, 2015-07-01 14:36

In the previous month I worked on compiling Marble widget to Android. It was a long and hard road but it is here:

(I shot this screenshot on my phone)
The globe can be rotated, and the user can zoom with the usual zooming gesture. Here is a short video example:

The hardest part was figuring out how to compile everything with CMake instead of qmake and Qt Creator. There are some very basic things that can sabotage your successfully packaged and deployed app: for example, not setting a version number in CMake for your library...
As you may know, Marble also uses some elements of QtWebKit, which is not supported on Android, so I introduced some dummy classes to substitute for them (not in their usability, of course) so that Marble can be compiled for Android.
You can find step-by-step instructions on how to compile Marble Maps for Android here:
The next steps: We have decided to separate Marble's functionality into two separate apps. Let me introduce Marble Maps and Marble Globe. As their names suggest, Marble Maps will essentially be a map application with navigation, while Marble Globe will be an app where you can switch to other planets, view historical maps, etc., which can also be used for teaching purposes.
The main goal for the summer is to bring Marble Maps to life. But if everything goes well, Marble Globe can be expected too.
To close this article, here are some screenshots:

Categories: FLOSS Project Planets

Road so far

Wed, 2015-07-01 12:41

As GSoC's mid-term is closing in, I thought I'd share what's been done so far! In case you haven't seen my earlier posts, here's a quick reminder of what I'm working on: implementing an OpenStreetMap (OSM) editor for Marble that allows the user to import ".osm" files, edit them with OSM-specific tools, and finally export them into ready-for-upload files. All that inside Marble's existing Annotate Plugin (an editor for ".kml" maps).

What's been done so far? As one would imagine, OSM has noticeable differences from KML, the schema upon which Marble is built. These differences, from an OSM perspective, mainly consist of server-generated data such as id, changeset, timestamp, etc., but also of core data elements such as the <relation> and <tag> tags.
Up until now, I've developed a way to store this server-generated data, mainly by saving it as KML's ExtendedData. Exporting to ".osm" files is now possible as well, so that pretty much makes Marble a KML-to-OSM (and vice versa) translator at the moment (it has some drawbacks, of course).

What was the main challenge? Not everything can be translated perfectly from OSM to KML and vice versa, so while translating, I had to ensure that as little data as possible is lost.

Since data parsing isn't a really picture-worthy topic, here is an example of a map's journey through Marble's editor:

The OSM version of the highway "sample highway":

<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="Marble 0.21.23 (0.22 development version)">
    <node lat="-23.7082750358" lon="-4.4577696853" id="-1" action="modify" visible="false"/>
    <node lat="-21.0946495732" lon="-11.9900406335" id="-2" action="modify" visible="false"/>
    <node lat="-16.6010784801" lon="-6.7785258299" id="-3" action="modify" visible="true">
        <tag k="name" v="sample placemark"/>
    </node>
    <way id="-75891" action="modify" visible="true">
        <tag k="name" v="sample highway"/>
        <tag k="highway" v="residential"/>
        <nd ref="-1"/>
        <nd ref="-2"/>
    </way>
</osm>
The KML version of it after going through Marble's editor (the OSM data that is irrelevant from a KML perspective is stored within an ExtendedData block):

<Placemark>
    <name>sample highway</name>
    <ExtendedData xmlns:osm_data="Marble/temporary/namespace">
        <osm_data:OsmDataSnippet id="-75891" visible="true">
            <osm_data:tag k="highway" v="residential"/>
            <osm_data:nd count="0">
                <osm_data:OsmDataSnippet id="-1" visible="false" action="modify"/>
            </osm_data:nd>
            <osm_data:nd count="1">
                <osm_data:OsmDataSnippet id="-2" visible="false" action="modify"/>
            </osm_data:nd>
        </osm_data:OsmDataSnippet>
    </ExtendedData>
    <coordinates>-4.457769,-23.708275 -11.990040,-21.094649</coordinates>
</Placemark>
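To give a feel for what such a translation step involves, here is a toy version in Python using xml.etree; the element names follow the example output above (namespace prefixes omitted for brevity), but this is a simplified sketch, not the actual C++ Marble code:

```python
import xml.etree.ElementTree as ET

def way_to_placemark(way, name):
    """Translate an OSM <way> element into a KML-like <Placemark>,
    stashing the OSM-specific attributes in an ExtendedData block."""
    placemark = ET.Element("Placemark")
    ET.SubElement(placemark, "name").text = name
    ext = ET.SubElement(placemark, "ExtendedData")
    snippet = ET.SubElement(ext, "OsmDataSnippet",
                            id=way.get("id"), visible=way.get("visible"))
    for tag in way.findall("tag"):
        if tag.get("k") != "name":  # the name already went into <name>
            ET.SubElement(snippet, "tag", k=tag.get("k"), v=tag.get("v"))
    return placemark

way = ET.fromstring(
    '<way id="-75891" action="modify" visible="true">'
    '<tag k="name" v="sample highway"/>'
    '<tag k="highway" v="residential"/>'
    '<nd ref="-1"/><nd ref="-2"/></way>')
pm = way_to_placemark(way, "sample highway")
print(ET.tostring(pm, encoding="unicode"))
```

The real editor also has to carry the node references and coordinates across, which is where most of the lossiness questions come up.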

Categories: FLOSS Project Planets

Reproducible testing with docker

Wed, 2015-07-01 11:22

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we’re unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately we now have a lightweight virtualization technology available in the form of Linux containers, and docker makes them fairly trivial to use.


Docker allows us to create, start and stop containers very easily based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image content, plus a process running in it. Let that process be bash and you have pretty much a fully functional Linux system.

The nice thing about this is that it is possible to run a Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I’m doing in the container is not affected by what happens on the host system. So, for example, upgrading the host system does not affect the container.

Also, starting a container is a matter of a second.

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose kdesrc-build, so building all the necessary repositories takes the least amount of effort.
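For the curious, a minimal kdesrc-build configuration along these lines could look roughly like this; the paths and module names are placeholders, so check the kdesrc-build documentation for the options that apply to your setup:

```
global
    source-dir /src
    build-dir /build
    kdedir /install
    make-options -j4
end global

module-set kdepim-set
    repository kde-projects
    use-modules kdepimlibs kdepim
end module-set
```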

Because I’m still editing the code from outside of the docker container (where my editor runs), I’m simply mounting the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.

Further I’m also mounting the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while I keep my builds fast and incremental. This is not about packaging after all.

Reproducible testing

Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.

After a bit of hackery to reuse the host’s X11 socket, it’s possible to run graphical applications inside a properly set up container.

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).

..with a server

Because I’m typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have, for instance, a John Doe account set up, for which the account and credentials are already configured in the client container), and the server is completely fresh on every start.

Wrapping it all up

Because a bunch of commands is involved, it’s worthwhile writing a couple of scripts to make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”

When starting the environment the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for invitation handling testing and such.

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.
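A heavily stripped-down sketch of such a wrapper is shown below; the image names, mount paths and seeded-image naming scheme are invented stand-ins for my actual setup, not a published tool:

```python
#!/usr/bin/env python
# Sketch of a "devenv" wrapper around docker (names and paths are examples).
import os
import subprocess
import sys

SRC, BUILD, INSTALL = "/home/dev/src", "/home/dev/build", "/home/dev/install"

def docker_run_args(image, command):
    """Assemble the docker invocation: mount the source, build and install
    directories from the host, and pass the X11 socket through so that
    graphical clients such as Kontact show up on the host display."""
    return (["docker", "run", "--rm",
             "-v", SRC + ":/src",
             "-v", BUILD + ":/build",
             "-v", INSTALL + ":/install",
             "-v", "/tmp/.X11-unix:/tmp/.X11-unix",
             "-e", "DISPLAY=" + os.environ.get("DISPLAY", ":0"),
             image] + command)

def main(argv):
    if argv[1:3] == ["srcbuild", "install"]:
        # e.g. "devenv srcbuild install kdepim"
        args = docker_run_args("devenv/build", ["kdesrc-build", argv[3]])
    elif argv[1:2] == ["start"]:
        # e.g. "devenv start set1 john": the dataset selects the server seed,
        # the user selects the pre-seeded client image
        dataset, user = argv[2], argv[3]
        args = docker_run_args("devenv/client-%s-%s" % (dataset, user), ["kontact"])
    else:
        sys.exit("usage: devenv srcbuild install <module> | devenv start <set> <user>")
    subprocess.call(args)

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv)
```

The point of funnelling everything through one argv-builder is that the mounts and X11 forwarding stay identical for every subcommand, which is exactly what makes the runs reproducible.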

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.

While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I’m very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did without that setup; at least not without a bunch of dedicated machines just for that. I’m likely to invest more in it.

In any case, sources can be found here:

Categories: FLOSS Project Planets

Web Open Font Format (WOFF) for Web Documents

Wed, 2015-07-01 10:55

The Web Open Font Format (short: WOFF; here using the Aladin font) is several years old. Still, it took some time to get to a point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType Font (.ttf) format. TTF fonts are widely known and used on the Windows platform. These feature-rich fonts are used for high-quality font display by the system and in local office and design documents. WOFF aims to close the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking fonts on paper and in web presentations in almost the same way.

In order to make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape has used HarfBuzz for text inside SVG web graphics at least since version 0.91.1. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
To use them inside Inkscape, one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ directory and run

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to using it in HTML documents. However, simply uploading an Inkscape SVG to the web as-is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed ones. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. To do so, open the SVG file in a text editor and place a CSS font-face reference right after the <svg> element, like:

<style type="text/css">
@font-face {
  font-family: "Aladin";
  src: url("fonts/Aladin-Regular.woff") format("woff");
}
</style>

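Once the font-face rule is in place, a text element in the same SVG simply refers to the family by name; a hedged sketch (the coordinates and sizes are arbitrary, and the file path must match your upload):

```xml
<!-- inside the same <svg> element, below the style block -->
<text x="20" y="40" style="font-family: 'Aladin'; font-size: 24px;">
  Rendered with the Aladin web font
</text>
```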
How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape’s file menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.

In case your preferred software is not yet WOFF-ready, try the woff2otf python script to convert to the older TTF format.

Hope this small post gets some of you on the font fun path.

Categories: FLOSS Project Planets

Qt3D Technology Preview Released with Qt 5.5.0

Wed, 2015-07-01 09:02

KDAB is pleased to announce that the Qt 5.5.0 release includes a Technology Preview of the Qt3D module. Qt3D provides a high-level framework that allows developers to easily add 3D content to Qt applications using either QML or C++ APIs. The Qt3D module is released with Technology Preview status. This means that Qt3D will continue to see improvements across API design, supported features and performance before its final release. It is provided to start collecting feedback from users and to give a taste of what is coming with Qt3D in the future. Please grab a copy of the Qt 5.5.0 release, give Qt3D a test drive, and report bugs and feature requests.

Qt3D provides much of the functionality needed for modern 3D rendering, backed by the performance of OpenGL, across the platforms supported by Qt with the exception of iOS. Work is under way to support Qt3D on iOS as well, and we expect this to be available very shortly. Qt3D allows developers not only to show 3D content easily, but also to fully customise the appearance of objects by using the built-in materials or by providing custom GLSL shaders. Moreover, Qt3D gives control over how the scene is rendered in a data-driven manner, which allows rapid prototyping of new or custom rendering algorithms. Integration of Qt3D and Qt Quick 2 content is enabled by the Scene3D Qt Quick item. The features currently supported by the Qt3D Technology Preview are:

  • A flexible and extensible Entity Component System with a highly threaded and scalable architecture
  • Loading of custom geometry (using built in OBJ parser or assimp if available)
  • Comprehensive material, effect, render pass system to customise appearance
  • Data-driven renderer configuration – change how your scene is rendered without touching C++
  • Support for many rendering techniques – forward, deferred, early z-fill, shadow mapping etc.
  • Support for all GLSL shader stages (excluding compute at present)
  • Good support for textures and render targets including high-dynamic range
  • Support for uniform buffer objects where available
  • Out of the box support for simple geometric primitives and materials
  • Keyboard input and simple camera mouse control
  • Integration with Qt Quick 2 user interfaces

Beyond rendering, Qt3D also provides a framework for adding additional functionality in the future for areas such as:

  • Physics simulation
  • Skeletal and morph target animation
  • 3D positional audio
  • Stereoscopic rendering
  • Artificial intelligence
  • Advanced input mechanisms

To learn more about the architecture and features of Qt3D, please read KDAB’s series of blogs and the Qt3D documentation.

KDAB and The Qt Company will continue to improve Qt3D over the coming months, adding support for more platforms, input handling and picking, import of additional 3D formats, instanced rendering, more materials and better integration points with the rest of Qt. If you wish to contribute, whether with code, examples, documentation or time, please contact us on the #qt-3d channel on freenode IRC or via the mailing lists.

The post Qt3D Technology Preview Released with Qt 5.5.0 appeared first on KDAB.

Categories: FLOSS Project Planets

Sponsor our digiKam team for Randa meeting

Wed, 2015-07-01 04:57

Dear digiKam Users

One of our digiKam developers requires your kind support to organize his trip to the "digiKam developers meetup 2015", to be held in Randa, Switzerland from 6th to 13th September 2015. He seeks to raise 639 euros through fundraising to cover his travel expenses to Randa.

read more

Categories: FLOSS Project Planets