Almost a year has passed since the last release of the Wacom Tablet KCM. A lot has happened on my side but I finally found the time to do some hacking on the code again.
This beta tackles a few issues from the bug tracker.
* Support for more than one tablet at the same time
Now you can connect as many tablets as you want and switch between them in the KCM. For each tablet the correct profile will be applied, while the global shortcuts work on all connected tablets.
* Support for profile rotation and status LEDs (Intuos/Cintiq)
New global shortcuts allow you to rotate through a list of profiles (for each individual device). You can map this onto the tablet (for example, button 1 on the Intuos) for fast profile switching.
In addition the LEDs should tell you which profile in the rotation list is currently active.
(The LED feature is highly experimental and might not actually work, as I do not own an Intuos.)
* New Tablet finder application to detect unknown tablets or change the current db entries
In case new tablets are sold which are not in our own tablet database yet, you can go through the process of detecting and specifying them with the help of this application. The result will be saved in a local database that will be checked first when the tablet is connected. This way you can also override the existing tablet database with your own changes.
* Fix profile loading error in some cases
* Fix white text on white background in the Plasma applet
You can find the source archive on kde-apps.org or in the releng2.1 branch.
Wikimania 2014 is now over and that is a good excuse to write updates about the MediaWiki Translate extension and translatewiki.net.
I’ll start with an update related to our YAML format support, which has always been a bit shaky. Translate supports different libraries (we call them drivers) to parse and generate YAML files. Over time the Translate extension has supported four different drivers:
- spyc uses spyc, a pure PHP library bundled with the Translate extension,
- syck uses libsyck which is a C library (hard to find any details) which we call by shelling out to Perl,
- syck-pecl uses libsyck via a PHP extension,
- phpyaml uses the libyaml C library via a PHP extension.
The latest change is that I dropped syck-pecl because it does not seem to compile with PHP 5.5 anymore, and I added phpyaml. We tried to use spyc a bit, but the output it produced for localisation files was not compatible with Ruby projects: after complaints, I had to find an alternative solution.
Joel Sahleen let me know of phpyaml, which I somehow had not found before: thanks to him we now use the same libyaml library that Ruby projects use, so we should be fully compatible. It is also the fastest driver of the four. I highly recommend the phpyaml driver to anyone generating YAML files with Translate. I have not checked how phpyaml works with HHVM, but I was told that HHVM ships with a built-in yaml extension.
Speaking of HHVM, the long standing bug which causes HHVM to stop processing requests is still unsolved, but I was able to contribute some information upstream. In further testing we also discovered that emails sent via the MediaWiki JobQueue were not delivered, so there is some issue in command line mode. I have not yet had time to investigate this, so HHVM is currently disabled for web requests and command line.
I have a couple of refactoring projects for Translate going on. The first is about simplifying the StringMangler interface. This has no user visible changes, but the end goal is to make the code more testable and reduce coupling. For example the file format handler classes only need to know their own keys, not how those are converted to MediaWiki titles. The other refactoring I have just started is to split the current MessageCollection. Currently it manages a set of messages, handles message data loading and filters the collection. This might also bring performance improvements: we can be more intelligent and only load data we need.
Finally, at Wikimania I had a chance to talk about the future of our translation memory with Nik Everett and David Chan. In the short term, Nik is working on implementing in ElasticSearch an algorithm to sort all search results by edit distance. This should bring translation memory performance on par with the old Solr implementation. After that is done, we can finally retire Solr at Wikimedia Foundation, which is much wanted, especially as there are signs that Solr is having problems.
Together with David, I laid out some plans on how to go beyond simply comparing entire paragraphs by edit distance. One of his suggestions is to try doing edit distance over words instead of characters. When dealing with the 300 or so languages of Wikimedia, what is a word is less obvious than what is a character (even that is quite complicated), but I am planning to do some research in this area keeping the needs of the content translation extension in mind.
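As an illustration of the idea, here is a minimal sketch (my own, not the planned ElasticSearch implementation) of the same Levenshtein algorithm applied to word tokens instead of characters. The whitespace `split()` is a deliberately naive tokenizer; as noted above, deciding what a word is across 300 languages is the hard part.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance over arbitrary sequences
    (strings give character-level, lists of tokens give word-level)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

s1 = "the quick brown fox"
s2 = "the slow brown fox jumps"

print(edit_distance(s1, s2))                  # character-level distance
print(edit_distance(s1.split(), s2.split()))  # word-level distance: 2
```

Sorting translation memory candidates would then simply mean ordering them by this score against the source paragraph.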
We start the next in our little test series of different icon sets. Please, again, participate in our little game and help us to learn more about the usability of icon design.
Keep on reading: Understanding Icons: Participate in fantastic fourth survey
...and, luckily, how I restored it!
Let me say this before you start reading: back up your data NOW!!!
Really, do it. I postponed this for so long and, as a result, I had a dramatic weekend.
Last Friday I had the wonderful idea to update my Ghost setup to the newer 0.5. I did this from my summer house via SSH, but the network isn't the culprit here.
You have to know that some months ago, maybe more, I switched from a package installation, through this PKGBUILD, to an installation via npm. So, as soon as I typed npm update, all my node_modules/ghost content was gone. Yep, I must be dumb.
After a few minutes, which helped me to better understand the situation, I immediately shut down the BeagleBone Black.
The day after I went home, I installed Arch Linux ARM on a microSD and obviously the super TestDisk, which has had SQLite support for a while now. Cool!
This way I restored the Ghost database, BUT it was corrupted. However, a StackOverflow search pointed me to this command (dump the database, drop the trailing ROLLBACK, append a COMMIT, and replay the dump into a fresh file):

cat <( sqlite3 ghost.db .dump | grep "^ROLLBACK" -v ) <( echo "COMMIT;" ) | sqlite3 ghost-fixed.db
After that, I was able to open the database and to restore 14 of 40 posts.
My second attempt has been to use the Google cache. Using this method I recovered about 10 posts. Nice, I already had more than 50% of the total content! I was feeling optimistic.
The Arch Linux Planet let me recover 3 more posts, which however I could have recovered anyway using Bartle Doo; I had never heard of this website before, but thanks to it I recovered some posts by searching for my first and last name.
I was almost there. About 10 posts missing, but how to recover them? I didn't remember the titles, and googling without specific keywords didn't help either.
I went back to the broken SQLite database; Vim can open it, so I looked through it for some data. Bingo! The missing posts' titles were still there!
And then I started googling again, this time for specific titles, which pointed me to websites mirroring my posts' content.
At the end of this step I had 38 of 40 posts!
I can't stop now; it's more than a challenge at this point.
I went back again to the broken database, where the posts' content is corrupted: there's some text, then symbols, and then more text which makes no sense combined with the first part. This looks like a tedious job. This Saturday can end here.
It's Sunday; I'm motivated and I can't lose those 2 posts because of my laziness.
I have the missing posts' titles and I now remember their content, so I started to look for their phrases in the database and, to my surprise and with a lot of patience, I recovered their content!
This is mainly because Ghost keeps both the Markdown and the HTML text in the database, so the post content is duplicated, which decreases the chance of corruption hitting the same phrase in both copies.
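The intuition can be made concrete with a toy model. Assuming, purely for illustration, that a given phrase is corrupted independently in each of the two stored copies with the same probability (my assumption, not Ghost's design rationale), the chance of losing it from both copies is the square of the single-copy chance:

```python
# Toy model: each copy of a phrase is corrupted independently
# with probability p (p chosen so the arithmetic is exact in floats).
p = 0.5
single_copy_loss = p         # phrase lost if the only copy is hit
dual_copy_loss = p * p       # phrase lost only if BOTH copies are hit
print(single_copy_loss, dual_copy_loss)  # 0.5 0.25
```

With two copies the loss probability drops quadratically, which matches the experience above of always finding at least one readable copy of each phrase.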
Another summer, another Linux survival experience (that I'm pleased to link to!).
Packages for the release of KDE SC 4.14 are available for Kubuntu 14.04 LTS and our development release. You can get them from the Kubuntu Backports PPA. It includes an update of Plasma Desktop to 4.11.11.
In preparation for Akademy I wanted to swap out the drive from my laptop — which is full of work-work things — and drop in a new one with stuff I actually want to have with me at Akademy, like git clones of various repositories. I spent a few hours wrestling with my Lenovo x121e (AMD) laptop and FreeBSD, which taught me the following:
- You can update the BIOS from a USB stick using only Linux tools, and
- FreeBSD does not like it when the SATA controller is in compatibility mode, and either hangs or fails to find the hard drive at all; in AHCI mode things are fine, but
- Even the updated BIOS cannot boot from GPT partitions, so I had to be careful during installation to manually do an MBR / fdisk-based installation (this seems to preclude ZFS as well), and then
- Wireless isn’t automatically detected (but the WAN modem is), and suspend-resume doesn’t resume.
This makes for less-than-stellar performance for a conference laptop; I’ll fiddle with it a little before departing for Berlin in two weeks' time (isn’t Akademy in Brno? Yes, it is, but the most effective train journey takes me to Berlin first to catch up with the trainful of KDE people at 12:46 from HBf), so I may end up being a FreeBSD person sporting an OpenSUSE laptop.

For development purposes — sort of as a quick counterpart to the FreeBSD VM where I’m doing Qt5-based things for KDE applications — I installed a Project Neon VM. This way, too, I can check that I’m not breaking anything on non-FreeBSD systems. What I’m seeing on the desktop in that VM is not very encouraging to me, though. As used as I am to the current KDE software on OpenSUSE or FreeBSD (4.12 or whatever), the newer software feels weird and arbitrarily changed and oddly slow. That last bit might be due to VirtualBox, I don’t really know. I’ll have to attend some of the VDG or HCI topics to get a better feeling for the (visual) changes already made.
Recently I blogged about how we have been reducing the number of databases in Plasma. We have been doing this by using the file system to store additional information such as tags, ratings and comments.
Unfortunately, extended attributes are not supported on all file systems. The most notable ones are “FAT” and any network file share. With Plasma 5.1, Baloo will not support tags, ratings or comments on these file systems.

Why was this done?
With the previous code base, we had to maintain multiple code paths - one for xattr-supported systems and one for the others. This complicated the code and, more importantly, it provided a false sense of “feature completeness”. We supported tags on these other file systems, but they were:
- Stored in a custom database
- Not portable
- Lacking any user-visible mechanism to back up or restore the tags
- Broken over network file systems, as the tags would be stored locally
For these reasons, we decided to drop support for them completely. It’s better not to support something than to support it this poorly.

Advantages?
Not only does this greatly simplify our code base, it also allowed us to easily restructure our architecture so that we can react to xattr changes.
Before 5.1, when an application changed the tags, it would also need to update the Baloo-specific index, which meant that the user could not, in practice, use a non-KDE application to write tags.
With 5.1, we now monitor for xattr changes. This means that applications no longer interact with Baloo when reading or writing tags; they interact directly with the file system. This makes it really fast, and simplifies the code.
The baloo_file process monitors the file system and updates its index whenever the tags change.
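To make the mechanism concrete, here is a small Python sketch of what tagging via extended attributes can look like. The attribute name `user.xdg.tags` and the comma-separated encoding are assumptions based on the freedesktop.org convention, not necessarily Baloo's exact on-disk format, and `os.setxattr`/`os.getxattr` only exist on Linux.

```python
import os

XATTR_TAGS = "user.xdg.tags"  # assumed freedesktop-style attribute name

def encode_tags(tags):
    """Serialize a list of tags into a comma-separated xattr value."""
    return ",".join(tags).encode("utf-8")

def decode_tags(raw):
    """Parse the comma-separated xattr value back into a list."""
    return raw.decode("utf-8").split(",") if raw else []

def set_tags(path, tags):
    os.setxattr(path, XATTR_TAGS, encode_tags(tags))

def get_tags(path):
    try:
        return decode_tags(os.getxattr(path, XATTR_TAGS))
    except OSError:  # attribute missing, or file system without xattrs
        return []
```

Any application, KDE or not, can use the same calls; baloo_file only has to watch for the resulting changes and update its index.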
Today Qt announced some changes to their licence. The KDE Free Qt team have been working behind the scenes to make these happen and we should be very thankful for the work they put in. Qt code was LGPLv2.1 or GPLv3 (this also allows GPLv2). Existing modules will add LGPLv3 to that. This means I can get rid of the part of the KDE Licensing Policy which says "Note: code may not be copied from Qt into KDE Platform as Qt is LGPLv2.1 only which would prevent it being used under LGPL 3".
New modules, starting with the new web module QtWebEngine (which uses Blink), will be LGPLv3 or GPLv2. Getting rid of LGPLv2.1 means better preserving our freedoms (can't use patents to restrict, must allow reverse engineering, must allow replacing Qt, etc). It's not a problem for the new Qt modules to link to LGPLv2 or LGPLv2+ libraries or applications of any licence (as long as they allow the freedoms needed, such as those listed above). One problem with LGPLv3 is that you can't link a GPLv2-only application to it (not because LGPLv3 prevents it but because GPLv2 prevents it); this is not a problem here because the modules will be dual licenced as GPLv2 alongside.
The main action this prevents is directly copying code from the new Qt modules into Frameworks, but as noted above we forbid doing that anyway.
With the news that Qt moved to Digia and a new company being spun out, I had been slightly worried that the new modules would be restricted further to encourage more commercial licences of Qt. This is indeed the case, and it's being done in the best possible way, thanks Digia.
What is the “KDE Free Qt Foundation”?
The KDE Free Qt Foundation is a legal entity, set up by KDE e.V. and Trolltech, the company originally developing Qt. It aims to safeguard the availability of Qt as Free Software and already fulfilled an important role. Trolltech was bought by Nokia, who sold Qt later to Digia. The contracts stayed valid during all these transitions.
The foundation has four voting board members (two from Digia, two from KDE e.V.) and two non-voting advisory board members (the Trolltech founders). In case of a tie, KDE e.V.’s board members have an extra vote.
Through a contract with Digia, the KDE Free Qt Foundation receives rights to all Free Qt releases “for the KDE Windowing System” (currently defined as X11 – we plan to extend this to Wayland) and for Android. As long as Digia keeps the contract, the KDE Free Qt Foundation will never make use of these rights.
How can it be made even better?
Today Digia announced a license update from LGPL version 2.1 to version 3. Compatibility with version 2.1 will be kept for all existing Qt modules. LGPL v3 is an improved version of LGPL v2.1 that does a better job of defending the freedom of the code (updated patent language, Tivoization).
Lars Knoll from Digia explains this in his blog post.
At the same time, Digia has agreed to work with us on three improvements for our contract:
- The contract will include the other desktop and mobile platforms (Windows, Mac, iOS and WinRT) for as long as Qt has support for running on them.
- Qt will be released under the “GPL v2 or later” in addition to LGPL v3, which helps to safeguard us for the future. (LGPL v2.1 itself contains a similar clause which was dropped in LGPL v3.)
- The KDE Free Qt Foundation will receive rights to Qt-project.org contributions that were not yet released.
I consider these to be major improvements to our contract.
When will new license combination take effect?
For the upcoming Qt 5.4 release, Digia will already release their new web engine as separate add-ons under the LGPL v3 (plus GPL v2 or later). The license of all other Qt modules will keep LGPL v2.1 and only add LGPL v3. In the KDE Free Qt Foundation, we have approved this approach for Qt 5.4.
We plan to update our legal agreement with Digia so that LGPLv3-licensed modules can be formally part of Qt – in combination with the other improvements mentioned above. (This depends on the KDE e.V. membership not objecting to the plan, see below.)
What does the license update mean for other existing GPL or LGPL software?
- All Qt modules already contained in Qt 5.3 will stay with LGPL v2.1 for the time being.
- Compatibility with GPL v2-software will be kept also for the new modules. The total license combination will be “LGPL v3, GPL v2 or any later version of the GPL”.
- GPL-licensed applications can additionally use the new modules without problems since “GPLv2 or later” is contained in the new license combination. (But within KDE, we have already started to move from GPL v2 to GPL v3 anyway for some applications.)
- Libraries under “LGPL v2.1 or later” can be updated to “LGPL v3 or later” to use the new modules. (It is possible to link to both LGPL v2.1 and LGPL v3, as long as the license of the application itself allows this, e.g. GPL v3.) The KDE Licensing Policy already requires compatibility with LGPL v3. To drop LGPL v2.1, however, a change of the policy will be needed.
- Libraries under “LGPL v2.1” only can be updated to the GPL using a clause of LGPL v2.1 to use the new modules.
Please inform me if you hear of a well-maintained Qt-based library that would use the new modules and cannot migrate to LGPL v3.
What does the license change mean for proprietary software?
Proprietary software can use LGPL code if a number of conditions are met (e.g. allowing reverse engineering, passing on the license rights to Qt itself, naming the Qt authors and allowing end users to use a different Qt version). Companies that do not like these conditions can buy an enterprise license of Qt (and thereby fund the development of new Qt versions).
The conditions are worded slightly differently in v2.1 and in v3. LGPLv3 tends to have more legal “teeth”, but the principles are the same.
What are the next steps?
If there are no objections from the KDE e.V. membership, then we will work with Digia and with our own lawyer to implement the improvements of the contract and update it to “GPL v2 or later or LGPL v3”.
If people make me aware of problems with the new license combination, then we will take these into account.
If the KDE e.V. membership should decide against the license update plan, then we will not pursue an update of the contract with Digia and will discuss how to handle the web engine in the future with both Digia and the KDE e.V. membership. Not updating the agreements would mean not getting some major improvements, so I would prefer to solve any problems that might appear in a constructive way.
I am very happy about the planned improvements to the legal framework around Qt. But of course I might have missed something. If you have critical questions, constructive feedback or enquiries for clarification, then please leave a comment or contact me by email.
Finally I committed a very old script.
I wrote it for KDE 4.0, but it was not perfect. I took the time to fix it last weekend.
What does it do? It removes unnecessary forward declarations. It’s very useful during the KF5 migration because we change a lot of code, so sometimes we keep a “class foo;” which will not create a compile error, but leaves an unused line of code.
As usual it’s not perfect: you can’t just launch this script and commit directly, but it works fine by default. (If you find a use case where it fails, send me an email and I will try to fix it.)
It’s just a shell script, so go to the directory and launch it. There is no need to specify a file name.
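The real script is shell-based; purely as an illustration of the core check, here is a hypothetical Python sketch of the idea: a "class Foo;" forward declaration is removable when the class name appears nowhere else in the file.

```python
import re

# Matches a bare forward declaration like "class Foo;"
FWD_DECL = re.compile(r'^\s*class\s+(\w+)\s*;\s*$')

def unused_forward_declarations(source):
    """Return class names that are forward-declared but never used."""
    lines = source.splitlines()
    unused = []
    for i, line in enumerate(lines):
        m = FWD_DECL.match(line)
        if not m:
            continue
        name = m.group(1)
        rest = "\n".join(lines[:i] + lines[i + 1:])
        # \b avoids matching the name inside a longer identifier
        if not re.search(r'\b%s\b' % re.escape(name), rest):
            unused.append(name)
    return unused

src = """class Foo;
class Bar;
void f(Foo *p);
"""
print(unused_forward_declarations(src))  # ['Bar']
```

A real tool also has to cope with comments, preprocessor conditionals and namespaces, which is why the script's output should still be reviewed before committing.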
Other porting script improvements:
- I added “search-kdelibs4support-header.sh”, another shell script which finds all kdelibs4support class header files in a directory. It will help you remove kdelibs4support by showing what you need to remove.
- I continued improving convert-kdialog.pl (I fixed some bugs and added new porting code).
- convert-kmimetype.pl: I cleaned it up.
- convert-kfiledialog.pl: I added new KFileDialog:: porting. This script is not finished yet.
- convert-to-new-signal-slot-signal.pl: Improve++. Now it can search local variables and use them to convert to the new connect API.
- convert-kurl.pl: Added new conversion code.
I hope you will use them, and that these scripts help you during porting to KF5.
Sadly, the official coding time for Google Summer of Code has come to an end. :( It was wonderful working with my mentor Jigar. Over the summer I coded three features for Calligra Sheets.
1. View Splitter -
-> I have pushed the code in sheets-vs-mrupanjana.
The feature enables a particular sheet view to be split vertically into two portions. The cursors are not synchronized, and different input data can be given in the two portions. We often do not need to work with lots of columns, so the sheet view is at times wider than we need; it can easily be split, and we can continue our work in both portions.
2. Highlighting changes in a cell -
-> Code is pushed in sheets-hc-mrupanjana
This is a really interesting one and is absent in other similar applications. The user begins a session, feeds some data into the sheet, and makes changes in cells that already contain data. The cells which have undergone changes are highlighted in dark blue. This enhances readability, and the user will be aware of the changes made in the present session.
3. Autocorrection of function name -
-> Code has been pushed in sheets-fName-mrupanjana
Often the user forgets the exact function name for calculating the cosine of an angle or the absolute value of a number, and has to guess. The user makes a guess and inputs a function name which is probably wrong. As he or she presses Enter, it gets automatically corrected. If the user does not want the name changed, he or she can escape to the next cell using Tab.
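How Sheets picks the correction is not described above; purely as an illustration, a fuzzy string match such as Python's difflib approximates the behaviour. The function list is a made-up subset.

```python
import difflib

# Hypothetical subset of the function names known to the spreadsheet
FUNCTIONS = ["COS", "SIN", "TAN", "ABS", "SUM", "AVERAGE", "SQRT"]

def autocorrect(name):
    """Return the closest known function name, or the input unchanged."""
    matches = difflib.get_close_matches(name.upper(), FUNCTIONS,
                                        n=1, cutoff=0.6)
    return matches[0] if matches else name

print(autocorrect("COSS"))    # COS
print(autocorrect("AVRAGE"))  # AVERAGE
```

The cutoff keeps wildly wrong guesses untouched, which matches the described behaviour of letting the user keep the original name.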
I have coded the storage and the implementation of all the three features. Hope to see them merged soon. :)
I have been trying to write this blog post for quite a long time, and it became so, so big that I’ll have to split it into three posts. It is like a ‘People of KDE’ but different: the focus is not to show someone who works on KDE, but someone who tried to use KDE to work, being a non-tech person. Since I spend most of my days helping people who are struggling with Free Software to get past the hate feeling, I feel that I have lots of things to say, given that I’m actively maintaining over 5 laptops for different friends who live in different states.
I also like to study humans, these very strange animals with so many different ways of expressing themselves that it’s so, so hard to get it right.
First case study:
Cléo Martins, professional chef, vegan, who “wanted to try Linux because she liked the concept”. That’s a bold move.
Cléo is a chef, and as such she knows how to cook and is not a technological person at all. She prefers being in the wild, planting her own organic food, cooking, tasting and being awesome. She’s also presenting an online course on vegan food. All of a sudden she removed Windows from all the computers at her school and switched to Ubuntu, because someone over the internet told her that it was the best Linux there was and ever would be; then this person vanished. A lovely thing to do when someone needs a hand to hold for a bit while they understand the system they are using.
Cléo had a *huge* amount of problems with her Linux install and tried a few people over the internet; some came to help, some sent her snippets of code to paste into the terminal, and that was driving her insane. “Everyone tells me to do something differently,” she cried.
The problems on her computers, and the fix:
- Bad translations in LibreOffice
  - The correct locale was not being set on the system in /etc/locale, but it was being set in gnome-settings.
- Printer printing black and white when she wanted colored output
  - Buggy drivers from HPLIP; if I converted her files to PDF and printed, things worked, but not from LibreOffice.
- Strange warnings regarding memory on the disk
  - The dude who installed Ubuntu for her used a 6 GB partition for root and it was full; it was impossible to update without learning about Unix file systems and linking /var/cache/apt to the other, big partition.
- Laptop seems to “die” after a few minutes of idle
  - The laptop was on with brightness set to minimum, and even if you moved the mouse it wouldn’t get brighter; you needed to unlock the screen and *then* move the mouse, but remember that the screen was pitch black.
- Email in Thunderbird was *much* slower than on Windows
  - Her account was configured only for online mode.
So finally she reached me. I went to her house and tried to fix all the problems that I could; some I couldn’t, because of my lack of knowledge of Unity / Ubuntu. I asked her if she was willing to try another thing, installed and configured KDE correctly, removed the Unity and Gnome stuff that was there, and she was much happier, but still hating Linux: “It seems to me that all of those Linux guys are just kids playing with computers. Those things should work, and they don’t, and when they do it’s because you spent a lot of time configuring stuff on my computer to make them work.” I couldn’t disagree. It’s sad, but it’s the truth: we like to configure our Linux boxes, but the average user just wants to use the computer, and it should work reliably.
Today Cléo changed her distribution, because all of the help she could get on Ubuntu only made her computer worse, and she is using the same one as I use, since I know I’ll have the time to help her when she needs it. I spent almost a day configuring it for her so she wouldn’t need to worry about repositories, packages and whatnot, and gave her a brief introduction on what she could touch without breaking the system and what she couldn’t. She’s now a Linux user, not a happy one yet, because HPLIP is still giving her headaches. Overall, not a very good experience for her, but we can do so much better in the future.
As the title says, this is my last report regarding my project during the Google Summer of Code program. I’m saying during GSoC because this is certainly not my last contribution to the plugin I’ve been working on all summer. In fact, not only will I give my best during the next days to get it to a deployable shape for the next KDE release, but I’m also planning to continue contributing to Marble in the long term. But first, let me present the changes the plugin has undergone since my last post.
Because I polished many features and added a couple of new ones as well, I’ll only discuss each of the most important ones a little. The first new thingies I implemented are the Cut/Copy/Paste actions on graphic items (polygons, placemarks, polylines), which allow easier duplication, in case one wants to set a style on a placemark and then use it for others, only changing the description, for instance. They also increase consistency, since they can be performed on all available graphic items. The second new feature I added is the possibility of drawing and customizing paths (polylines), which used to be a real hole in our Editing Mode in Marble. Now one can easily go to osm.org, export an .osm file and load it in Marble. The actions available on polylines are identical to those for polygons since, obviously, these paths are the same thing as polygons except that they are not closed. Last, but maybe one of the most important changes, is the introduction of the ‘Focus Item’ concept to our Editing Mode. This means that there is only one item at a time with which the user interacts. This approach is much more intuitive for users and also makes the code easier to understand (so developers benefit from this too). It also allowed us to easily adjust (enable and disable) the available actions depending on the Focus Item. My work has also included fixing bugs and making some optimizations, especially when interacting with polygons and paths, since the data they store can become really huge as the number of nodes increases. I tried to cover all these new features in the following screencast.
I also tried through this screencast to give you a hint of what the Annotate Plugin feels like overall and what you can do so far using its available features. The plugin still needs a lot of effort put into it before I’ll be completely satisfied with it, but until then, this is most of what I managed to do during this summer, Google’s SUMMER of Code.
I can say without doubt that it has been a great summer, during which I learnt a lot from some of the best programmers and community people I’ve ever met. I want to thank everyone who made it possible, but especially: Google, who came up with this program; KDE, for their friendliness and the passionate people I had the chance to meet; Torsten Rahn and Dennis Nienhüser, my mentors, who were always ready to help and guide us and who deeply influenced my way of thinking; my GSoC colleagues, Sanjiban and Abhinav, for making me feel more competitive; and last, but not least, all Marble developers for contributing to the development of such a great application.
My journey with Marble and KDE has just started.
GSoC 2014 just ended (today was the firm pencils down date) and I thought it would be great to blog about the current status of my project.
AkonadiClient is an application that allows power users and system administrators to manage Akonadi from the command line.
As per my proposal I had to improve upon and add commands to the already existing prototype of the client (developed by my mentors Kevin Krammer and Jonathan Marten).
I recently finished adding documentation (man pages) to the project and also finished improving the add command and its test cases. I believe this completes all the tasks that I had planned for in my GSoC proposal.

Features of AkonadiClient
For those who are interested, following are the commands supported by AkonadiClient after this summer's work (short descriptions included):
- add – Add an item to a collection
- list – List collections
- rename – Rename collections
- move – Move a collection
- copy – Copy a collection
- create – Create new collections
- delete – Delete an item or a collection
- show – Show the raw payload of an item
- update – Update an item's payload
- edit – Open an item's payload in $EDITOR for editing
- agents – Manage running Akonadi agents
- export – Export a collection to XML
- import – Import a collection from XML
- expand – Expand a contact group item
- info – Display information about a collection or an item
- tags – List all known tags
In addition to these commands, a command shell / interpreter has also been created, which is invoked when no arguments are passed to akonadiclient. It can be used to run multiple commands without launching a separate instance of the application.

Future Plans
I would love to continue working on the project and add bug fixes and/or enhancements (and more thorough test cases :)). I would also love to work on other parts of KDE, like the other PIM applications and Plasma.

Shoutouts
I would like to thank my mentors Jonathan Marten and Kevin Krammer for their support and guidance. I really appreciate the time and effort you put into reviewing my patches, Kevin; I’ve learnt a lot working on this project with you (especially the need for properly formatted code :)). And thanks, Jonathan, for the invaluable suggestions you gave at the start of the project and for the implementation of the tags command.
I would also like to thank Daniel Vrátil (KDE PIM) for suggesting the proper way to test akonadiclient using Akonadi’s isolated testing environment, Luigi Toscano (Documentation Team) for pointing me in the right direction pertaining to KDE’s standard way of writing man pages and Yuri Chornoivan (Documentation Team) for reviewing the documentation that I’ve written for the project, much appreciated guys!
This has been a really fun summer! Now I can finally go and get a full night's sleep.
With a series of icon tests we are currently studying effects of icon design on usability. This article, however, does not focus on these general design effects but presents findings specific to the Oxygen icon set.
Keep on reading: Intermediate results of the icon tests: Oxygen
I recently purchased a new laptop because my old one gave up. I harvested the hard disk, with the rest going to be recycled, and started looking for a suitable replacement. In the process of doing so it became very evident that a significant shift is underway.
A few years back "transformable" laptops with touch screens that swiveled around to become a rather brick-like "tablet" first hit the market. I hated the hinge design and the screens were horrible, but there they were and some people bought them. Fast forward to today and Microsoft is mandating touch screens in laptops as part of their new-style Windows UI push. More and more laptops are coming with touch screens not as an exotic add-on, but as a matter-of-course feature.
The laptops that convert into tablets today are extremely sleek little things. Some are hefty tablets with a detachable keyboard; definitely more towards tablet than laptop, but certainly still a laptop. Some are more traditional clamshell laptops with a screen that flips all the way back, which turns off the attached keyboard, leaving you with a rather large-screen tablet that is not much thicker than an "actual" tablet. Of course, for a couple of years now Bluetooth keyboards built into tablet carrying cases have been a small rage, allowing people to turn their tablet into something more like a laptop.
The trend is clear: laptops are not-so-slowly picking up the features we traditionally associate with mobile devices. Touch screens, all solid state components, lightweight... even pretty. The difference between a tablet and a laptop is growing slimmer with each product cycle.
KDE should pause and think deeply about this. The laptop was adopted over a decade ago as a target platform for KDE software, in addition to the traditional desktop tower; now the laptop is morphing and pulling the tablet form factor into focus.
In fact, many early adopters are using tablets as their laptops. I remember a blog series on this by Henri Bergius last year (or was it early this year?).
As for me and my touchscreen laptop, I actually find myself using the touchscreen a lot more than I thought I would. Scrolling maps is far more comfortable with a finger, for instance. But you know what? It is still a laptop. That thing KDE has done so well in targeting all these years.
Building on this, along with the last two blog entries, I will offer my answer to the question, "What is the desktop?" in the next blog entry.
For the first time, Krita has been present at Siggraph! Siggraph is the largest conference on computer graphics and interactive techniques and it has a big trade show as well as presentations, posters, book shops and animations. While Krita has been presented before at the Mobile World Congress, Siggraph really is where Krita belongs!
On Monday, we started building our booth — as you can see, we had a beautiful backdrop made with the splashscreen graphic Tyson Tan made for our upcoming, amazing 2.9 release! We also had a big screen with a running demo that drew a lot of attention. Plus, there was a demo workstation, a huge Z1 sponsored by Hewlett-Packard, a Dell XPS12 showing off Krita Gemini, and two more laptops for giving in-depth demos. We met students, teachers, illustrators, matte painters, professionals and hobbyists. We met Kickstarter backers, too!
In short, it was an awesome experience!
We handed out a huge number of leaflets, stickers, postcards and we even had very stylish totebags made — black, with the Krita logo and slogan. There are still a number of them left, and you can get them, too! Just ten euros, postage included — the finest quality cotton totebag, as seen around Timothee’s neck in the picture above:
(Black Krita cloth totebag, 10 euros)
In the end, there's only one conclusion possible: Krita is really ready for something like Siggraph. It's a good thing for an open source project to go out and reach for users outside the free software community. We weren't the only open source project at Siggraph either: the booth next to us was occupied by Blender, and in fact, Ton Roosendaal from Blender helped us with a host of the practical details. Thanks! Another open source project that had a stand was the Natron compositor application, in France's big booth.

There was a big open source "beer of a feather" do in one of the posher bars on Wednesday night, and despite the horrible overcrowding, some useful conversations happened, and Krita already has improvements to the HDR painting feature thanks to it.
The actual organization of all the details was done by Irina Rempt. Thanks! And another big, big thank-you to Intel for their sponsorship! Without them, we wouldn’t have made it.
Although a couple of days have passed since I returned from Randa, Switzerland, I had in mind long before to make a small blog post about my experience there.
For those who don’t know, the Randa Meetings are yearly events where KDE developers gather together to hack on their projects under the same roof and surrounded by the wonderful Swiss Alps.
This was my first time at Randa and I can say that it was the greatest week of this summer. I met a bunch of awesome folks who are not only great programmers but also wonderful people to spend time with. I had a great time coding on Marble alongside my GSoC mentors, Torsten Rahn and Dennis Nienhüser, my GSoC colleague Sanjiban and the other KDEdu guys. I also enjoyed talking to developers from other KDE projects that were present at Randa, such as KDE Books, KDE SDK, the KDE Frameworks 5 port, etc.
Other cool things? There were A LOT and I’m sure I will miss something, but I’ll mention some of them: the daily walks which gave us the opportunity to admire the amazing view, the table football, the internet connection (haha, oh, it’s fine, we use git :) ), the food (oh, yeah!), the beer (FreeBeer is one of the greatest beers I have ever drunk) and, of course, the Swiss chocolate.
I want to thank Mario Fux very much; he has organized these meetings since 2009 and contributed a lot to my participation: I heard about this year's meeting only after registration had closed, and he still offered to organize the accommodation for me. So, thanks once again, Mario. I'd also like to thank everyone present at Randa this year for this great experience. This was my first participation in a KDE event, but surely not the last :).
This is a message from Christian, who is working on online SEPA transfers for KMyMoney. We need help to find bugs before this new feature can be merged into our master branch.
I have been working on the online banking branch for a long time now. It is close to being finished, even if a lot still has to be done. But have a look at what has been achieved already:
• It is possible to create, edit and send SEPA credit transfers. I have used KMyMoney for *many* SEPA credit transfers already.
• You can save and edit account numbers (<- plural!) for your payees.
• Actually you should be able to create national transfers as well. But my banks have disabled that feature, so KMyMoney automatically deactivated it too (smart, right?). So I cannot test national transfers anymore.
So it is time for a break. I hope I can continue in about a month. A couple of features are still missing before it can be merged into master (e.g. database support). So expect a 4.x release before the merge; I won't be ready before October anyway.
Also, I need more testers before it can be released. Jos had some good points in his blog about why I should not test alone.
So if YOU are interested in HBCI online banking and are willing to compile KMyMoney yourself, please clone the add-onlinebanking branch and test it.
Then write all glitches and issues, but also ideas and any feedback, to this list (maybe with [onlinebanking] in the subject). You can even write a todo list for bugs, so it is easier for me to fix that stuff.
There are some design decisions to make, where I could use some help from people who have already used the online banking features.
Please note that using online banking with a database is not possible at the moment! But there is already some work on my computer regarding this.
Every tester helps me a lot to finish the work! Feedback is also very motivating, which is important during bug hunting.
So GSoC 2014 is ending and I was hurrying to introduce more features to the outliner (read more). My project was to implement an outliner for Calligra Author. This app is based on Words and should be an ideal tool for writing books. It supports exporting your creation to different mobile formats, like EPUB. But there was no way to write a plan for your work in the app. For example, novelists need to add descriptions of the story's characters and refer to them during writing. That is why Author needs an outliner.
The biggest problems I tackled in the last stage of work were the RDF implementation in Calligra and my own understanding of RDF.
At first I was struggling with the XML-style serialization of objects. This way of storing RDF easily hides the actual RDF triples it contains. For example:
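A typed-node snippet roughly like this (reconstructed from the two triples listed below; the URIs and prefixes are placeholders, not necessarily Calligra's actual namespaces):

```xml
<cau:Section rdf:about="someuri">
  <cau:descr>Some description</cau:descr>
</cau:Section>
```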
This hides 2 triples (the one with rdf:type wasn't obvious to me):
<someuri> rdf:type cau:Section
<someuri> cau:descr "Some description"
Maybe it doesn't look too complicated, but if you're a newbie to RDF I recommend reading all the basic documentation for RDF that is available on the Internet; the RDF/XML Syntax specification in particular helped me a lot.
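To see for yourself how a typed-node element expands into triples, here is a minimal sketch using only Python's standard library. The namespace URIs and the `cau` prefix are placeholders for illustration, not Calligra's real ones:

```python
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

# Hypothetical RDF/XML with a typed node, as in the example above.
RDF_XML = """\
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:cau="http://example.org/cau#">
  <cau:Section rdf:about="someuri">
    <cau:descr>Some description</cau:descr>
  </cau:Section>
</rdf:RDF>"""

def expand(tag):
    """Turn ElementTree's '{namespace}local' tag form into a plain URI."""
    return tag[1:].replace("}", "") if tag.startswith("{") else tag

def triples(rdf_xml):
    """List the (subject, predicate, object) triples hidden in typed-node RDF/XML."""
    result = []
    for node in ET.fromstring(rdf_xml):          # children of rdf:RDF are subject nodes
        subject = node.get("{%s}about" % RDF_NS)
        # the element name of a typed node is an implicit rdf:type triple
        result.append((subject, RDF_NS + "type", expand(node.tag)))
        for prop in node:                        # nested elements are property triples
            result.append((subject, expand(prop.tag), prop.text))
    return result

for s, p, o in triples(RDF_XML):
    print(s, p, o)
```

Running this prints both triples, including the implicit rdf:type one that the XML form hides.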
And if you want to register a custom file to be saved inside the ODT package, you can add triples like these to manifest.rdf (all of this is done through the KoDocumentRdf class in Calligra, using a special manifest context that you can retrieve from this class):
<filenode> rdf:type odf:MetaDataFile
<filenode> pkg:path "filename.rdf"
and then use a resource node with the URL:
KoDocumentRdf::rdfPathContextPrefix() + "filename.rdf"
as the context for the triples you want to put in this file. And don't forget that modifying the RDF doesn't make Author or Words mark your document as changed, so it is possible that changes will be lost. It is therefore necessary to set this flag from code (see the KWDocument::setModified(bool) method).
Now I have a full understanding of all the technical parts of storing metadata for the outliner. As I said, the plan was to save all the notes and descriptions created during planning as RDF metadata. RDF is an open format and OpenDocument supports it, so it will be possible to open any ODT file with Author to work on it, and the saved version can then be used outside Author (provided the other app supports RDF and does not remove Author's metadata from the package).
For now you can edit a Section's data: add descriptions and change its state (draft, edit or finished).
I can't say that the outliner is finished, but I have done a lot of work improving sections support (which I wasn't planning at the beginning of GSoC) that is needed to implement the outliner. While working on the outliner, I found that some aspects of the sections implementation should be improved (I want to introduce a special section model for easy integration into any view). So there is a lot of work to do, and I definitely won't stop with GSoC: I will continue working on Calligra. And I would be glad to work with the Calligra team in GSoC 2015.