FLOSS Project Planets

My Akademy schedule

Planet KDE - Sun, 2014-08-17 16:07

I for one applaud the decision to put Akademy-badges on the KDE community wiki. I was afraid I was going to have to apply my awesome Kolourpaint skills again. Albert’s reminder has caused me to figure out my reasons for attending Akademy this year. So my agenda is four-or-fivefold:

  1. attend the AGM of KDE e.V.
  2. talk to Paul Adams about measuring community health. Technical aspects of measurement aside, I have some serious methodological misgivings about what he’s measuring and how he’s presenting it. This will require several beer mats of exposition.
  3. see if there’s any other FreeBSD users about. Or, for that matter, OpenSolaris people or anything else non-Linux.
  4. talk to Lydia and Valorie and other folks with knowledge of techbase and community to see if I can contribute there. There’s a lot of stuff that is undergoing a complete rewrite — but has been undergoing that for a long time.
  5. hear from the FrogLogic folks how their test-tools have evolved. It was a long time ago that we had some ideas of doing KDE HIG checks on the EBN.

There is of course also an implied 6(a) meet new people in the KDE community and 6(b) drink beer with them.

Categories: FLOSS Project Planets

The two sides of freedom

Planet KDE - Sun, 2014-08-17 16:04

Today, I am launching my new blog.
In this first post, I will discuss the concept of freedom.
I start with general thoughts, and then I apply them to Free Software licenses and especially to KDE as a Free Software umbrella.

Throughout history, the word “freedom” has been used to mean many different things. It has been the central word for revolutions, for declarations of independence, for human rights movements, for philosophers, for theologians, for the Free Software movement. It is used in marketing slogans for phone companies, for KDE, for both cigarettes and non-smoking initiatives. The word has also been used to justify military occupations, torture and mass surveillance.

In many cases, “more freedom” for one side of a conflict means “less freedom” for the other side. But if you consider the debates around freedom for a while, then you can see a deeper pattern within the concept of freedom itself:

How do you protect freedom without destroying it?

You can spot this basic question in many debates. I will give two general examples and two more specific examples from the world of software.

  • Democracy can only function if the voters do not need to fear harassment for their political decisions, if sufficient privacy is safeguarded. And democracy can only function properly where there are free media, where transparency is safeguarded. But how exactly should we draw the line between transparency on the one hand and the right to privacy on the other hand? And should the state protect the privacy of citizens – by reducing the freedom of companies to earn money with personal data?
    My observation is that many politicians only see the crime potential of the internet while forgetting its other important aspects.
  • Criminal activity (such as organised crime or terrorism) can severely endanger and destroy the freedom of the victims to live in peace and in freedom from oppression and violence. If a state aims to prevent all criminal or terrorist behaviour, then this is only possible with extreme surveillance of the population. But extreme surveillance also harms freedom. Citizens need the freedom to critically assess government actions. They need to have the option of discussing with other people – without fearing that their conversation will be recorded and stored in a government database forever. How do we reach the right balance?
    I consider this topic to be extremely important, especially since the surveillance itself is already preventing many journalists from reporting on the problem.
  • People with disabilities are only free to use the computer programmes of their choice if these programmes are sufficiently accessible. But forcing all developers to take accessibility perfectly into account would limit the freedom of the developers – and lead to fewer applications overall. Should Free Software communities (such as KDE) put a higher emphasis on accessibility standards?
    Probably – but this is very difficult to achieve without alienating crucial contributors. It is also necessary to find a good balance between reliable accessibility standards on the one hand and the danger of stifling innovation – even in the field of accessibility itself – on the other hand.
  • There are two types of Free Software licenses: Share-Alike or Copyleft licenses allow developers to use the code in their own software – as long as this dependent software is also made available under the free license. Permissive licenses do not contain this restriction and allow the developers to license the derived software under “unfree”, “all rights reserved” terms. One approach protects the freedom of the code. The other offers more freedom of choice for developers using the code. Which is better?
    I personally like the legal set-up of the Qt framework (which is not really applicable for other Free Software cases). The library is licensed under a (weak) copyleft license: LGPL. At the same time, companies that do not wish to accept the terms of this license can also buy a proprietary license from Digia, thereby funding the development of Qt. Of course this is only possible because all contributors to Qt grant Digia the necessary rights. They can do this without fearing that Digia might suddenly close down the Free Software version – because KDE has a very good contract that safeguards the Free Software terms (KDE Free Qt Foundation). I will post more details on this topic soon.
Categories: FLOSS Project Planets

Andrew Pollock: [tech] Solar follow up

Planet Debian - Sun, 2014-08-17 13:32

Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.

Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter had a value 26 kWh lower than when I started.

I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.

Now that I have the new digital meter things are less exciting. It has a meter measuring how much power I'm buying from the grid, and how much excess power I'm exporting back to the grid. So far, I've bought 32 kWh and exported 53 kWh excess energy. Ideally I want to minimise the excess because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to try and shift around my consumption as much as possible to the daylight hours so that I'm using it rather than exporting it.
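To make that asymmetry concrete, here is a rough sketch of the economics in Python (the per-kWh rates are hypothetical placeholders; the post only says the feed-in rate is about a third of the purchase price):

```python
# Hypothetical tariffs: the post only states the feed-in rate is
# roughly a third of the grid purchase price, so these numbers are
# placeholders, not real Energex rates.
buy_rate_cents = 30.0                     # assumed price per kWh bought
feed_in_rate_cents = buy_rate_cents / 3   # "about a third" of the buy price

bought_kwh = 32     # kWh bought from the grid so far (from the post)
exported_kwh = 53   # kWh exported to the grid so far (from the post)

cost = bought_kwh * buy_rate_cents
credit = exported_kwh * feed_in_rate_cents
print(f"cost: {cost:.0f}c, credit: {credit:.0f}c, net: {cost - credit:.0f}c")
```

With rates like these, exporting 53 kWh earns less than buying 32 kWh costs, which is exactly why shifting consumption into the daylight hours pays off.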

On a good day, it seems I'm generating about 10 kWh of energy.

I'm still impatiently waiting for PowerOne to release their WiFi data logger card. Then I'm hoping I can set up something automated to submit my daily production to PVOutput for added geekery.

Categories: FLOSS Project Planets

Randa Report: Hacking on KDE and meeting friends

Planet KDE - Sun, 2014-08-17 12:50

Hey there,

I'm already back home and would now like to let you know what I've been doing during the last week at the Randa Sprint in the Swiss Alps.

Quick summary: It has been an immense event!

View from our hacking room, in Randa, Switzerland

Last week I was mostly occupied with porting KDevelop to KF5 and, as part of that, succeeded in making it compile and run on Windows as well. In between the hacking sessions, we had a lot of fruitful discussions concerning the future of KDevelop and the KDE SDK as a whole. Let me break that down into the individual tasks accomplished throughout the week.

Report Day 1 (Sat, 9th August)

Arrived in Randa, Switzerland in the afternoon. Met up with all the KDE fellows scattered around the house.

Clang integration: C++ Macro navigation support

In the afternoon I worked a bit on tasks from kdev-clang (my GSoC project this year), which I wanted to get fixed first. I managed to bring back basic C++ macro navigation support in our plugin (I'll explain the implementation details and shortcomings in another blog post).

KDevelop showing the uses of a macro definition when hovering the macro identifier and pressing "Show Uses"

Resolving inter-library dependencies in kdevplatform

Another largish patch landed in kdevplatform.git that evening: splitting up the huge kdevplatformlanguage library (which contains all the language-agnostic Definition-Use-Chain logic) into more manageable parts. I factored out the part that provides the infrastructure for serializing the DUChain items and created kdevplatformserialization, which now contains all the classes required for writing your own (serializable) item repository.

This change resolved a few issues with inter-library dependencies we had in kdevplatform. It also helped make kdevplatform compile on Windows in the end.

Day 2 (Sun, 10th August)

More porting of kdevplatform to KDE Frameworks. Mostly fixing up CMake-related code in order to be able to compile tests using new CMake macros (ecm_add_test). Pushing a few compilation fixes for Windows for both GCC and MSVC.

Day 3 (Mon, 11th August)

Fixing up unit tests in kdevelop/kdevplatform using the frameworks branch. Also fixing a few crashes that popped up due to changed behavior in Qt5 (mostly event-handling related).

Day 4 (Tue, 12th August)

Switch Declaration/Definition feature in kdevplatform

Moving support for the "Switch Declaration/Definition" feature (something you can trigger via the menu or via the "Ctrl+Shift+C" shortcut) to kdevplatform. That in turn means that any language (well, any language which distinguishes between definitions and declarations) can make use of this feature without further work. Of course, the main motivation was to get this working for kdev-clang. Review request here: https://git.reviewboard.kde.org/r/119648/

Basic Objective-C support in kdev-clang

Later that night, Alex Fiestas and I got into philosophical discussions about the Clang integration in KDevelop, and suddenly the question of Objective-C support popped up. Realizing that we had never looked into it (despite being well aware that Clang supports it), I decided to spend an hour on it in kdev-clang to see some first-hand results.

You can see the patch here. As you can see, this is mostly about making our Clang integration aware of Objective-C entities in the Clang AST. It's that trivial.

And here the result:

KDevelop providing basic support for the Objective-C language

Note: If someone is interested in driving this further, that'd be greatly appreciated. Personally I won't have time in the near future to extend the Objective-C support (also, I don't do Objective-C these days, so I don't have a use for it myself).

Day 5 (Wed, 13th August)

Fixing up the Grep Dialog in KDevelop (frameworks branch). There were some issues with event handling in KComboBox and with the default selection inside the dialog button box. In the end, I decided to port it over to QDialog and QComboBox right away and fixed both issues in the process.

Another major issue for KDevelop on Windows got fixed this day: Windows path handling in our custom QUrl replacement class KDevelop::Path. We now also support Windows' single-drive-letter file schemes (e.g. C:\tmp) here. That fixed include paths / include lookup on Windows.

Day 6 (Thu, 14th August)

Attempting to fix hang-on-exit issue in KDevelop

This day I mostly spent (read: wasted) time attempting to fix the most apparent issue in KDevelop (frameworks branch): KDevelop not exiting cleanly when asked to shut down. I'm still not exactly sure what's wrong, but it seems some object is not calling QCoreApplicationPrivate::deref, and hence the event loop is not quit when the last window is closed (because QCoreApplication still assumes it is in use, i.e. the refcount is non-zero).

tl;dr: I'll keep you posted as soon as I find out what's wrong here.

Daytrip time

On Thursday afternoon a whole bunch of the KDE fellows made a great day trip to get a closer look at the wonderful Matterhorn. We took a taxi from Randa and got the chance to walk around the (admittedly very touristy) town of Zermatt. After a few minutes' walk, we got this amazing view of the Matterhorn:

View of the Matterhorn from Zermatt, Switzerland

Day 7 (Fri, 15th August)

After a good week of productive hacking and meeting friends in the Swiss Alps, I left Randa very early in the morning by train towards Zurich, for my flight back to Berlin.


It has been a highly productive week this time. The team had a lot to discuss regarding future ideas that concern both KDevelop and the KDE SDK as a whole.

Interesting topics we touched on that were directly related to KDevelop:

Improving the dashboards inside KDevelop: We'd like to introduce a session-specific dashboard that shows information about the currently loaded projects, such as recent commits, recent bug entries, latest mailing list entries, etc.

Reducing the ramp-up time needed for newcomers: We want to make it easier to download/build/run applications, and easier for newcomers to contribute patches back to Reviewboard. As part of that, we'd like to make it easier to fetch the dependencies of a particular KDE project, so developers really don't need to worry too much about setting up their environment. We (the KDE SDK team) plan to improve kdesrc-build so it can be used as a backend for handling all this.

We also had a bunch of non-KDevelop-related discussions; let me briefly summarize them:

  • The KF5 book is coming along nicely, and lots of people have been involved in making it shine
  • The KDE apidocs site got some love and looks much better now
  • (a lot more)

Thanks for making the event happen, and thanks to all the donors of this year's Randa fundraiser! As always, it's been a terrific event, and a super productive week altogether.

Thanks a lot to Mario Fux and his family/friends for organizing and keeping us happy during the event.

Categories: FLOSS Project Planets

Graham Dumpleton: Transparent object proxies in Python.

Planet Python - Sun, 2014-08-17 11:57
This is a quick rejoinder to a specific comment made in Armin Ronacher's recent blog post titled 'The Python I Would Like To See'. In that post Armin gives the following example of something that is possible with old-style Python classes.

>>> original = 42
>>> class FooProxy:
...     def __getattr__(self, x):
...         return getattr(original, x)
...
>>> proxy = FooProxy()
>>> proxy
42
>>> 1 + proxy
43
>>> proxy +
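The truncated example above relies on old-style class behaviour; with new-style classes (and thus in Python 3), implicit special-method lookup bypasses __getattr__, which is part of Armin's point. A small sketch (mine, not from the post) showing the difference:

```python
original = 42

class FooProxy:  # a new-style class in Python 3
    def __getattr__(self, name):
        # Delegates ordinary attribute access to `original`...
        return getattr(original, name)

proxy = FooProxy()
print(proxy.bit_length())  # delegated: prints 6, same as (42).bit_length()

# ...but operators look up special methods on the *type*, not the
# instance, so __getattr__ is never consulted here:
try:
    print(1 + proxy)
except TypeError:
    print("1 + proxy fails: __radd__ is not found via __getattr__")
```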
Categories: FLOSS Project Planets

Jamie McClelland: Getting to know systemd

Planet Debian - Sun, 2014-08-17 11:42

Somehow both my regular work laptop and home entertainment computers (both running Debian Jessie) were switched to systemd without me noticing. Judging from my dpkg.log it may have happened quite a while ago. I'm pretty sure that's a compliment to the backwards compatibility efforts made by the systemd developers and a criticism of me (I should be paying closer attention to what's happening on my own machines!).

In any event, I've started trying to pay more attention now - particularly learning how to take advantage of this new software. I'll try to keep this blog updated as I learn. For now, I have made a few changes and discoveries.

First - I have a convenient bash wrapper I use that both starts my OpenVPN client to a samba server and then mounts the drive. I only connect when I need to and rarely do I properly disconnect (the OpenVPN connection does not start automatically). So, I've written the script to carefully check if my openvpn connection is present and either restart or start depending on the results.

I had something like this:

if ps -eFH | egrep [o]penvpn; then
  sudo /etc/init.d/openvpn restart
else
  sudo /etc/init.d/openvpn start
fi

One of the advertised advantages of systemd is the ability to more accurately detect if a service is running. So, first I changed this to:

if systemctl -q is-active openvpn.service; then
  sudo systemctl restart openvpn.service
else
  sudo systemctl start openvpn.service
fi

However, after reviewing the man page I realized I can shorten it further to simply:

sudo systemctl restart openvpn.service

According to the man page, restart means:

Restart one or more units specified on the command line. If the units are not running yet, they will be started.

After discovering this meaning for "restart" in systemd, I tested and realized that "restart" works the same way for openvpn using the old sysv style init system. Oh well. At least there's a man page and a stronger guarantee that it will work with all services, not just the ones that happen to respect that convention in their init.d scripts.

The next step was to disable openvpn on start up. I confess, I never bothered to really learn update-rc.d. Every time I read the manpage I ended up throwing up my hands and renaming symlinks by hand. In the case of openvpn I had previously edited /etc/default/openvpn to indicate that "none" of the virtual private networks should be started.

Now, I've returned that file to the default configuration and instead I ran:

systemctl disable openvpn.service
Categories: FLOSS Project Planets

Nigel Babu: OKFestival Fringe Events

Planet Python - Sun, 2014-08-17 11:00

The writeup of the OKFestival is very incomplete, because I haven’t mentioned the fringe events! I attended two fringe events and they both were very good.

First, I attended CKANCon right before OKFestival. It was informal and co-located with CSVConf. My best takeaway has been talking to people from the wider community around CKAN. I often feel blind-sided because we don’t have a good view of CKAN. I want to know how a user of a portal built on CKAN feels about the UX. After all, the actual users of open data portals are citizens who get data that they can do awesome things with. I had a good conversation with folks from DKAN about their work and I’ve been thinking about how we can make that better.

I finally met Max! (And I was disappointed he didn't have a meatspace sticker. :P)

The other event I attended was Write the Docs. Ali and Florian came to Berlin to attend the event. It was a total surprise running into them at the Mozilla Berlin office. The discussions at the event were spectacular. The talks by Paul Adams and Jessica Rose were great and a huge learning experience. I missed parts of oncletom's talk, but the bit I did catch sounded very different from my normal view of documentation.

We had a few discussions around localization and QA of docs which were pretty eye opening. At one of the sessions, Paul, Ali, Fabian and I discussed rules of documentation, which turned out pretty good! It was an exercise in patience narrowing them down!

I was nearly exhausted and unable to think clearly by the time Write the Docs started, but managed to power through it! Huge thanks to (among others) Mikey and Kristof for organizing the event!

Categories: FLOSS Project Planets

Victor Kane: Super simple example of local drush alias configuration

Planet Drupal - Sun, 2014-08-17 10:12

So I have a folder for drush scripts _above_ several doc root folders on a dev user's server. And I want to run status or whatever and my own custom drush scripts on _different_ Drupal web app instances. Drush has alias capability for different site instances, so you can do:

$ drush @site1 status

So, how to set up an aliases file?

(I'm on Ubuntu with Drush 6.2.0 installed with PEAR as per this great d.o. doc page Installing Drush on Any Linux Server Out There (Kalamuna people, wouldn't you know it?)).

Careful reading of the excellent drush documentation points you to a Drush Shell Aliases doc page, and from there to the actual example aliases file that comes with every drush installation.

So to be able to run drush commands for a few of my local Drupal instances, I did this:

  • In my Linux user directory, I created the file ~/.drush/aliases.drushrc.php
  • Contents:
<?php
$aliases['site1'] = array(
  'root' => '/home/thevictor/site1/drupal-yii',
  'uri' => 'drupal-yii.example.com',
);
$aliases['site2'] = array(
  'root' => '/home/thevictor/site2',
  'uri' => 'site2.example.com',
);

Then I can do, from anywhere as long as I am logged in as that user:

$ cd /tmp
$ drush @site1 status
$ drush @site2 status

and lots of other good stuff. Have a nice weekend.


Categories: FLOSS Project Planets

Francesca Ciceri: Adventures in Mozillaland #4

Planet Debian - Sun, 2014-08-17 05:36

Yet another update from my internship at Mozilla, as part of the OPW.

An online triage workshop

One of the most interesting things I've done during the last weeks has been to hold an online Bug Triage Workshop in the #testday channel on irc.mozilla.org.
That was a first time for me: I had been a moderator for a series of training sessions on IRC organized by Debian Women, but never a "speaker".
The experience turned out to be a good one: creating the material for the workshop had me summarize (not too much, I'm way too verbose!) everything I've learned these past months about triaging at Mozilla, and speaking about it on IRC was a sort of challenge to my usual shyness.

And I was so very lucky that a participant was able to reproduce the bug I picked as example, thus confirming it! How cool is that? ;)

The workshop was about the very basics of triaging for Firefox, and we mostly focused on a simplified lifecycle of bugs, a guided tour of bugzilla (including the quicksearch and the advanced one, the list view, the individual bug view) and an explanation of the workflow of the triager. I still have my notes, and I plan to upload them to the wiki, sooner or later.

I'm pretty satisfied with the outcome: my only regret is that the promotion wasn't enough, so we had few participants.
Will try to promote it better next time! :)


Crash reports

Another thing that had me quite busy in the last weeks was learning more about crashes and stability in general.
If you are unfortunate enough to experience a crash with Firefox, you're probably familiar with the Mozilla Crash Reporter dialog box asking you to submit the crash report.

But how does it work?

On the client side, Mozilla uses Breakpad as the set of libraries for crash reporting. The Mozilla-specific implementation adds to that a crash-reporting UI, a server to collect and process crash report data (and in particular to convert raw dumps into readable stack traces), and a web interface, Socorro, to view and parse crash reports.

Curious about your crashes? The about:crashes page will show you a list of the submitted and unsubmitted crash reports. (And by the way, try to type about:about in the location bar, to find all the super-secret about pages!)

For the submitted ones, clicking on the CrashID will take you to the crash report on crash-stats, the website where the reports are stored and analyzed. The individual crash report page on crash-stats is awesome: it shows you the reported bug numbers if any bug summaries match the crash signature, as well as lots of other information. If crash-stats does not show a bug number, you really should file one!

The CrashKill team works on these reports, tracking the general stability of the various channels, triaging the top crashes, and ensuring that the crash bugs have enough information and are reproducible and actionable by the devs.
The crash-stats site is a mine of information: take a look at the Top Crashes for Firefox 34.0a1.
If you click on an individual crash, you will see lots of details about it: just on the first tab ("Signature Summary") you can find a breakdown of the crashes by OS, by graphics vendor or chip, or even by uptime range.
A very useful one is the number of crashes per install, which tells you how widespread the crashing is for that particular signature. You can also check the comments users have submitted with the crash report, on the "Comments" tab.

One and Done tasks review

Last week I helped the awesome group of One and Done developers, doing some reviewing of the tasks pages.

One and Done is a brilliant idea to help people contribute to the QA Mozilla teams.
It's a website proposing to the user a series of tasks of different difficulty and on different topics, as ways to contribute to Mozilla. Each task is self-contained and can last a few minutes or be a bit more challenging. The team has worked hard on developing it and they have definitely done an awesome job! :)

I'm not a coding person, so I just know that they're using Django for it, but if you are interested in all the dirty details take a look at the project repository. My job has been only to check all the existing tasks and verify that the descriptions and instructions are correct, that the tasks are properly tagged, and so on. My impression is that this is an awesome tool, well written and well thought out, with a lot of potential for helping people take their first steps into Mozilla. Something that other projects should definitely imitate (cough Debian cough).

What's next?

Next week I'll be back to working on bugs. I kind of love bugs, I have to admit it. And not by squashing them: not being a coder makes me less of a violent person toward digital insects. Herding them is enough for me. I'm feeling extremely non-violent toward bugs.

I'll try to help Liz with the Test Plan for Firefox 34, on the triaging/verifying bugs part.
I'll also try to triage/reproduce some accessibility bugs (thanks Mario for the suggestion!).

Categories: FLOSS Project Planets

Krita booth at Siggraph 2014

Planet KDE - Sun, 2014-08-17 05:05

This year, for the first time, we had a Krita booth at Siggraph. If you don't know about it, Siggraph is the biggest yearly computer graphics conference, held this year in Vancouver.
We were four people holding the booth:

  • Boudewijn Rempt (the maintainer of the Krita project)
  • Vera Lukman (the original author of our popup palette)
  • Oscar Baechler (a cool Krita and Blender user)
  • and me (spreading the word about Krita training; more about this in a future post…)

Together with Oscar and Vera, we’ve been doing live demos of Krita’s coolest and most impressive features.

We were right next to the Blender booth, which made for a nice free and open-source software area. It was a good occasion for me to meet more people from the Blender team.

People were all really impressed, from those who discovered Krita for the first time to those who already knew about it or even already used it.
As we have already started working hard on integrating Krita into VFX workflows, with support for high-bit-depth painting on OpenEXR files, OpenColorIO color management, and even animation support, it was a good occasion to showcase these features and get appropriate feedback.
Many studios expressed their interest in integrating Krita into their production pipeline, replacing the less ideal solutions they are currently using…
And of course we met lots of digital painters like illustrators, concept artists, storyboarders or texture artists who want to use Krita now.
Reaching such kinds of users was really our goal, and I think it was a success.

There was also a birds-of-a-feather event with all the open-source projects related to VFX that were present, which was full of great encounters.
I even got to meet the guy who is looking into fixing the OCIO bug that I had reported a few days before. That was awesome!

So hopefully we'll see some great users coming to Krita in the next weeks and months. As usual, stay tuned!

*Almost all photos here by Oscar Baechler; many more photos here or here.

Categories: FLOSS Project Planets

One place to collect all Qt-based libraries

Planet KDE - Sun, 2014-08-17 04:34

A few weeks ago, during SUSE Hack Week 10 and the Berlin Qt Dev Days 2013, I started to look for Qt-based libraries, set myself the goal of creating one place to collect all Qt-based libraries, and made some good progress. The idea had come up when a couple of KDE people got together in the Swiss mountains for some intensive hacking; that is where Inqlude, the Qt library archive, was born. We were thinking of something like CPAN for Qt back then. Since then there has been a little bit of progress here and there, but my goal for the Hack Week was to complete the data to cover all relevant Qt-based libraries out there.

The mission is accomplished so far. Thanks to the help of lots of people who contributed pointers, meta data, feedback, and help, we have a pretty comprehensive list of Qt libraries now. Some nuts and bolts are still missing in the infrastructure, which are required to put everything on the web site, and I'm sure we'll discover some hidden gems of Qt libraries later, but what is there is useful and up to date. Where some pieces are not yet up to date, contributions are more than welcome.

Many thanks as well to the people at the Qt Dev Days, who gave me the opportunity to present the project to the awesome audience of the Qt user and developer community.

The first key component of the project is the format for describing a Qt-based library. It's a JSON format, which is quite straightforward. That makes it easy to be handled programmatically by tools and other software, but is also still quite friendly to the human eye and a text editor.

The schema describes the meta data of a library and its releases, like name, description, release date and version, links to documentation and packages, etc. The data for Inqlude is centrally collected in a git repository using this schema, and the tools and the web site make use of it to provide nice and easy access to users.
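For illustration, a manifest in a schema like this might look as follows (the field names and values here are made up to match the description above, not copied from the actual Inqlude schema):

```json
{
  "name": "examplelib",
  "summary": "A hypothetical Qt-based utility library",
  "urls": {
    "homepage": "http://example.com/examplelib",
    "download": "http://example.com/examplelib/releases",
    "documentation": "http://example.com/examplelib/docs"
  },
  "licenses": ["LGPLv2.1"],
  "version": "1.0.0",
  "release_date": "2014-08-01"
}
```

A structured entry like this is what lets the command line tool validate the data and generate the web site from it.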

The second key component is the tooling around the format. The big advantage of having a structured format to describe the data is that it makes it easy to write tools to deal with the data. We have a command line client, which currently is mostly used to validate and process the data, for example for generation of the web site, but is also meant to help users with installing and downloading libraries. It's not meant to replace a native package manager, but integrate with whatever your platform provides. This area needs some more work, though.

In the future it would be nice to have some more tools. I would like to see a graphical client for managing libraries, and integration with IDEs, such as Qt Creator or KDevelop would also be awesome.

Web site
The third key component is the web site. This is the central place for users to find and browse libraries, to read about details, and to have all links to what you need to use them in one place.

The web site currently is a simple static site with all its HTML and CSS generated from the meta data by the inqlude command line tool. Contributing data is still quite easy by providing patches to the data in the git repository. With GitHub's web interface you can even do that just using your web browser.

There are a few things worth pointing out explicitly as I got similar questions about these from various people.

The first thing is that Inqlude is meant to be a collection of pointers to releases, web sites, documentation, packages. It's not meant to host the actual code, tar balls, or any other content belonging to the libraries. There are plenty of ways how to do that in a better way, and all the projects out there already have something. Inqlude is just meant to be the central hub, where to find them all.

Another thing which came up from time to time is the question of dependencies. We don't want to implement yet another package management system or another dependency resolver, so we rely on integration with the native tools and mechanisms of the platforms we run on. Still, it would be nice to express dependencies in the meta data somehow, so that you have an easy way to judge what you will need to run a given library. We will need to find the best way to do that; maybe a tier concept like the one KDE Frameworks 5 uses would do the trick.

Finally I would like to stress that Inqlude is open to proprietary libraries as well. The infrastructure and the tooling are all free software, but Inqlude is meant as an open project to collect all libraries which are valuable for users, on the same terms. The license is part of the meta data, so it's transparent to users under which terms a library can be used, and this also allows categorizing libraries on the web site according to these terms. There is still a little bit of work missing to do that in a nice way, but that will be done soon. Free software libraries of course have the advantage that all information, code, and packages are directly available and can be accessed immediately.

There are a couple of short term goals I have for Inqlude, mostly to clean up loose ends from the work which happened during the last couple of weeks:

  • Collect and accurately present generic information about libraries, which is not tied to a release. This is particularly relevant for providing a place for libraries, which are under development and haven't seen a formal release yet.
  • As said above, the listing of proprietary libraries needs some work to categorize the data according to the license. Then we can display libraries of all licenses nicely on the web site.
  • Currently we have one big entry for the Qt base libraries. It would be nice to split this up and list the main modules of Qt separately, so it's easier to get an overview of their functionality, and use them in a modular way.

There also are a number of longer term goals. Some of them include:
  • Integration with Qt Designer, so that available libraries can be listed from within the IDE and used in your own development, without having to deal with external tools, separate downloads, or stuff like that.
  • Build packages in the Open Build Service, so that ready-to-use binary packages are available for all the major Linux distributions. This could possibly be automated, so that ideally putting up the meta data on Inqlude would be all it takes to generate native packages for openSUSE, Fedora, Ubuntu, etc.
  • Integration with distributions, so that libraries can be installed from Inqlude via the native package management systems. This already works for openSUSE, provided the meta data is there, but it would be nice to expand this support to other systems as well.
  • Upstream the meta data, so that it can be maintained where it's most natural. To keep the data up to date, it would be best if the upstream developers maintained it at the same place and in the same way as they maintain the library itself. This needs a little bit of thought and tooling to make it convenient and practical, and it's probably something we only want to do once the format has settled down and is stable enough to not change frequently anymore.

There might be more things you would like to see to happen in Inqlude. I'm always happy about feedback, so let me know.

This was and is a fun side project for me. It's amazing what you can achieve with the help of the community and by putting together mostly existing bits and pieces.

Categories: FLOSS Project Planets

Announcing first Inqlude alpha release

Planet KDE - Sun, 2014-08-17 04:27

Three years ago, at Randa 2011, the idea and first implementation of Inqlude, the Qt library archive, were born. So I'm particularly happy today to announce the first alpha release of the Inqlude tool, live from Randa 2014.

Picture by Bruno Friedmann
I have been using the tool to create the web site for quite a while, and it works nicely for that. It can also create and check manifest files, which is handy when you are creating or updating these for publication on the web site.

The handling of downloading and installing packages of libraries listed on Inqlude is not ready yet. There is some basic implementation, but the meta data needed for it is not there yet. This is something for a future release.

I put down the plan for the future into a roadmap. This release 0.7 is the first alpha. The second alpha 0.8 will mostly come with some more documentation about how to contribute. Then there will be a beta 0.9, which marks the point where we will keep the schema for the manifest stable. Release 1.0 will then be the point where the Inqlude tool will come with support for managing local packages, so that it's useful for developers writing Qt applications as end users. This plan is not set in stone, but it should provide a good starting point. Longer term I intend to have frequent releases to address the needs reported by users.

You will hear more in my lightning talk Everything Qt at Akademy.

Inqlude is one part of the story of making the libraries created by the KDE community more accessible to Qt developers. With the recent first stable release of Frameworks 5, we have made a huge step towards that goal, and we just released the first update. A lot of porting of applications is going on here at the meeting, and we are having discussions about various other aspects of how to get there, such as a KDE SDK, how to address 3rd-party developers, documentation of frameworks, and more. This will continue to be an interesting ride.

Categories: FLOSS Project Planets

GNUnet News: Talk @ GHM: Knocking down the HACIENDA

GNU Planet! - Sun, 2014-08-17 03:59

On August 15th, 2014, our student Julian Kirsch gave a talk on "Knocking down the HACIENDA", defending his Master's Thesis at GHM 2014, hosted at TUM. You can now find the video below.

Categories: FLOSS Project Planets

Andreas Metzler: progress

Planet Debian - Sun, 2014-08-17 02:03

The GnuTLS28 transition is making progress, more than 60% done:

Thanks to a national holiday combined with rainy weather this should look even better soon:
ametzler@argenau:~$ w3m -dump https://ftp-master.debian.org/deferred.html | grep changes | wc -l
ametzler@argenau:~$ w3m -dump https://ftp-master.debian.org/deferred.html | grep Metzler | wc -l

Categories: FLOSS Project Planets

Nick Coghlan: Why Python 4.0 won't be like Python 3.0

Planet Python - Sun, 2014-08-17 01:30

Newcomers to python-ideas occasionally make reference to the idea of "Python 4000" when proposing backwards incompatible changes that don't offer a clear migration path from currently legal Python 3 code. After all, we allowed that kind of change for Python 3.0, so why wouldn't we allow it for Python 4.0?

I've heard that question enough times now (including the more concerned phrasing "You made a big backwards compatibility break once, how do I know you won't do it again?"), that I figured I'd record my answer here, so I'd be able to refer people back to it in the future.

What are the current expectations for Python 4.0?

My current expectation is that Python 4.0 will merely be "the release that comes after Python 3.9". That's it. No profound changes to the language, no major backwards compatibility breaks - going from Python 3.9 to 4.0 should be as uneventful as going from Python 3.3 to 3.4 (or from 2.6 to 2.7). I even expect the stable Application Binary Interface (as first defined in PEP 384) to be preserved across the boundary.

At the current rate of language feature releases (roughly every 18 months), that means we would likely see Python 4.0 some time in 2023, rather than seeing Python 3.10.

So how will Python continue to evolve?

First and foremost, nothing has changed about the Python Enhancement Proposal process - backwards compatible changes are still proposed all the time, with new modules (like asyncio) and language features (like yield from) being added to enhance the capabilities available to Python applications. As time goes by, Python 3 will continue to pull further ahead of Python 2 in terms of the capabilities it offers by default, even if Python 2 users have access to equivalent capabilities through third party modules or backports from Python 3.
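For instance, the `yield from` feature mentioned above (generator delegation, added in Python 3.3 by PEP 380) lets one generator hand control to another:

```python
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    yield from inner()  # delegate to the subgenerator (PEP 380)
    yield 3

# The delegating generator yields the subgenerator's values in place.
assert list(outer()) == [0, 1, 2, 3]
```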

Competing interpreter implementations and extensions will also continue to explore different ways of enhancing Python, including PyPy's exploration of JIT-compiler generation and software transactional memory, and the scientific and data analysis community's exploration of array oriented programming that takes full advantage of the vectorisation capabilities offered by modern CPUs and GPUs. Integration with other virtual machine runtimes (like the JVM and CLR) is also expected to improve with time, especially as the inroads Python is making in the education sector are likely to make it ever more popular as an embedded scripting language in larger applications running in those environments.

For backwards incompatible changes, PEP 387 provides a reasonable overview of the approach that was used for years in the Python 2 series, and still applies today: if a feature is identified as being excessively problematic, then it may be deprecated and eventually removed.
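In practice, the deprecation stage usually means the old entry point keeps working but emits a `DeprecationWarning`. A minimal sketch of the pattern (the function names here are made up for illustration):

```python
import warnings

def new_api():
    return 42

def old_api():
    """Deprecated wrapper kept for backwards compatibility."""
    warnings.warn(
        "old_api() is deprecated; use new_api() instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_api()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ensure the warning is not filtered out
    result = old_api()

assert result == 42
assert caught[0].category is DeprecationWarning
```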

However, a number of other changes have been made to the development and release process that make it less likely that such deprecations will be needed within the Python 3 series:

  • the greater emphasis on the Python Package Index, as indicated by the collaboration between the CPython core development team and the Python Packaging Authority, as well as the bundling of the pip installer with Python 3.4+, reduces the pressure to add modules to the standard library before they're sufficiently stable to accommodate the relatively slow language update cycle
  • the "provisional API" concept (introduced in PEP 411) makes it possible to apply a "settling in" period to libraries and APIs that are judged likely to benefit from broader feedback before offering the standard backwards compatibility guarantees
  • a lot of accumulated legacy behaviour really was cleared out in the Python 3 transition, and the requirements for new additions to Python and the standard library are much stricter now than they were in the Python 1.x and Python 2.x days
  • the widespread development of "single source" Python 2/3 libraries and frameworks strongly encourages the use of "documented deprecation" in Python 3, even when features are replaced with newer, preferred, alternatives. In these cases, a deprecation notice is placed in the documentation, suggesting the approach that is preferred for new code, but no programmatic deprecation warning is added. This allows existing code, including code supporting both Python 2 and Python 3, to be left unchanged (at the expense of new users potentially having slightly more to learn when tasked with maintaining existing code bases).
From (mostly) English to all written languages

It's also worth noting that Python 3 wasn't expected to be as disruptive as it turned out to be. Of all the backwards incompatible changes in Python 3, many of the serious barriers to migration can be laid at the feet of one little bullet point in PEP 3100:

  • Make all strings be Unicode, and have a separate bytes() type. The new string type will be called 'str'.

PEP 3100 was the home for Python 3 changes that were considered sufficiently non-controversial that no separate PEP was considered necessary. The reason this particular change was considered non-controversial was because our experience with Python 2 had shown that the authors of web and GUI frameworks were right: dealing sensibly with Unicode as an application developer means ensuring all text data is converted from binary as close to the system boundary as possible, manipulated as text, and then converted back to binary for output purposes.

Unfortunately, Python 2 doesn't encourage developers to write programs that way - it blurs the boundaries between binary data and text extensively, and makes it difficult for developers to keep the two separate in their heads, let alone in their code. So web and GUI framework authors have to tell their Python 2 users "always use Unicode text. If you don't, you may suffer from obscure and hard to track down bugs when dealing with Unicode input".

Python 3 is different: it imposes a much greater separation between the "binary domain" and the "text domain", making it easier to write normal application code, while making it a bit harder to write code that works with system boundaries where the distinction between binary and text data can be substantially less clear. I've written in more detail elsewhere regarding what actually changed in the text model between Python 2 and Python 3.
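A small concrete example of that separation: in Python 3, conversion between the two domains is always explicit, and implicit mixing is an error.

```python
text = "café"                 # str: a sequence of Unicode code points
data = text.encode("utf-8")   # bytes: the encoded form used at system boundaries

assert isinstance(text, str)
assert data == b"caf\xc3\xa9"        # 'é' encodes to two bytes in UTF-8
assert data.decode("utf-8") == text  # decode back at the input boundary

# Implicitly mixing the binary and text domains raises TypeError in Python 3;
# in Python 2 the equivalent concatenation of two str values would just work,
# blurring the boundary.
mixed = True
try:
    b"abc" + "abc"
except TypeError:
    mixed = False
assert mixed is False
```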

This revolution in Python's Unicode support is taking place against a larger background migration of computational text manipulation from the English-only ASCII (officially defined in 1963), through the complexity of the "binary data + encoding declaration" model (including the C/POSIX locale and Windows code page systems introduced in the late 1980's) and the initial 16-bit only version of the Unicode standard (released in 1991) to the relatively comprehensive modern Unicode code point system (first defined in 1996, with new major updates released every few years).

Why mention this point? Because this switch to "Unicode by default" is the most disruptive of the backwards incompatible changes in Python 3 and unlike the others (which were more language specific), it is one small part of a much larger industry wide change in how text data is represented and manipulated. With the language specific issues cleared out by the Python 3 transition, a much higher barrier to entry for new language features compared to the early days of Python and no other industry wide migrations on the scale of switching from "binary data with an encoding" to Unicode for text modelling currently in progress, I can't see any kind of change coming up that would require a Python 3 style backwards compatibility break and parallel support period. Instead, I expect we'll be able to accommodate any future language evolution within the normal change management processes, and any proposal that can't be handled that way will just get rejected as imposing an unacceptably high cost on the community and the core development team.

Categories: FLOSS Project Planets

Jean-Baptiste Onofré: Apache Syncope backend with Apache Karaf

Planet Apache - Sun, 2014-08-17 01:24

Apache Syncope is an identity manager (IdM). It comes with a web console where you can manage users, attributes, roles, etc.
It also comes with a REST API that allows integration with other applications.

By default, Syncope has its own database, but it can also “façade” another backend (LDAP, ActiveDirectory, JDBC) by using ConnId.

In the next releases (4.0.0, 3.0.2, 2.4.0, and 2.3.7), Karaf provides (by default) a SyncopeLoginModule, allowing you to use Syncope as the backend for users and roles.

This blog introduces this new feature and explains how to configure and use it.

Installing Apache Syncope

The easiest way to start with Syncope is to use the Syncope standalone distribution. It comes with an Apache Tomcat instance with the different Syncope modules already installed.

You can download the Syncope standalone distribution archive from http://www.apache.org/dyn/closer.cgi/syncope/1.1.8/syncope-standalone-1.1.8-distribution.zip.

Uncompress the distribution in the directory of your choice:

$ unzip syncope-standalone-1.1.8-distribution.zip

You can find the ready-to-use Tomcat instance in the extracted distribution directory. We can start Tomcat:

$ cd syncope-standalone-1.1.8
$ cd apache-tomcat-7.0.54
$ bin/startup.sh

The Tomcat instance listens on port 9080.

You can access the Syncope console by pointing your browser at http://localhost:9080/syncope-console.

The default admin username is “admin”, and password is “password”.

The Syncope REST API is available at http://localhost:9080/syncope/cxf.
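Clients talk to that API over HTTP with Basic authentication. As a sketch using only the Python standard library — note that the `/users` path below is a hypothetical placeholder; check the Syncope REST documentation for the actual resource paths:

```python
import base64
from urllib.request import Request

BASE = "http://localhost:9080/syncope/cxf"

def authed_request(path, user="admin", password="password"):
    """Build a request against the Syncope REST API with HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    req = Request(BASE + path)
    req.add_header("Authorization", "Basic " + token)
    return req

# "/users" is a hypothetical path used purely for illustration.
req = authed_request("/users")
assert req.get_header("Authorization") == "Basic YWRtaW46cGFzc3dvcmQ="
```

Sending the request (e.g. with urllib.request.urlopen) would then authenticate against Syncope as the configured admin user.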

The purpose is to use Syncope as the backend for Karaf users and roles (in the default "karaf" security realm).
So, first, we create the "admin" role in Syncope:

Now, we can create a user of our choice, let's say "myuser" with "myuser01" as password.

As we want “myuser” as Karaf administrator, we define the “admin” role for “myuser”.

“myuser” has been created.

Syncope is now ready to be used by Karaf (including users and roles).

Karaf SyncopeLoginModule

Karaf provides a complete security framework allowing you to use JAAS in an OSGi-compliant way.

Karaf itself uses a realm named “karaf”: it’s the one used by SSH, JMX, WebConsole by default.

By default, Karaf uses two login modules for the “karaf” realm:

  • the PropertiesLoginModule uses the etc/users.properties as storage for users and roles (with user password)
  • the PublickeyLoginModule uses the etc/keys.properties as storage for users and roles (with user public key)

In the coming Karaf versions (3.0.2, 2.4.0, 2.3.7), a new login module is available: the SyncopeLoginModule.

To enable the SyncopeLoginModule, we just create a blueprint descriptor that we drop into the deploy folder. The configuration of the Syncope login module is pretty simple: it just requires the address of the Syncope REST API:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

  <jaas:config name="karaf" rank="1">
    <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                 flags="required">
      address=http://localhost:9080/syncope/cxf
    </jaas:module>
  </jaas:config>

</blueprint>

You can see that the login module is enabled for the “karaf” realm using the jaas:realm-list command:

karaf@root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule

We can now log in via SSH using "myuser", which is configured in Syncope:

~$ ssh -p 8101 myuser@localhost
The authenticity of host '[localhost]:8101 ([]:8101)' can't be established.
DSA key fingerprint is b3:4a:57:0e:b8:2c:7e:e6:1c:f1:e2:88:dc:bf:f9:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:8101' (DSA) to the list of known hosts.
Password authentication
Password:
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / /  / /_/ / __/
    /_/ |_|\__,_/_/   \__,_/_/

  Apache Karaf (4.0.0-SNAPSHOT)

Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.

myuser@root()>

Our Karaf instance now uses Syncope for users and roles.

Karaf SyncopeBackendEngine

In addition to the login module, Karaf also ships a SyncopeBackendEngine. The purpose of the Syncope backend engine is to manipulate the users and roles backend directly from Karaf. Thanks to the backend engine, you can list the users, add a new user, etc., directly from Karaf.

However, for security reasons and consistency, the SyncopeBackendEngine only supports listing the users and roles defined in Syncope: the creation/deletion of a user or role directly from Karaf is disabled, as those operations should be performed directly from the Syncope console.

To enable the Syncope backend engine, you have to register the backend engine as an OSGi service. Moreover, the SyncopeBackendEngine requires two additional options on the login module: admin.user and admin.password, corresponding to a Syncope admin user.

We have to update the blueprint descriptor like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

  <jaas:config name="karaf" rank="5">
    <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                 flags="required">
      address=http://localhost:9080/syncope/cxf
      admin.user=admin
      admin.password=password
    </jaas:module>
  </jaas:config>

  <service interface="org.apache.karaf.jaas.modules.BackingEngineFactory">
    <bean class="org.apache.karaf.jaas.modules.syncope.SyncopeBackingEngineFactory"/>
  </service>

</blueprint>

With the SyncopeBackingEngineFactory registered as an OSGi service, we can, for instance, list the users (and their roles) defined in Syncope.

To do it, we can use the jaas:user-list command:

myuser@root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule
myuser@root()> jaas:realm-manage --index 1
myuser@root()> jaas:user-list
User Name | Group | Role
------------------------------------
rossini   |       | root
rossini   |       | otherchild
verdi     |       | root
verdi     |       | child
verdi     |       | citizen
vivaldi   |       |
bellini   |       | managingDirector
puccini   |       | artDirector
myuser    |       | admin

We can see all the users and roles defined in Syncope, including our “myuser” with our “admin” role.

Using Karaf JAAS realms

In Karaf, you can create any number of JAAS realms that you want.
It means that existing applications or your own applications can directly use a realm to delegate authentication and authorization.

For instance, Apache CXF provides a JAASLoginInterceptor allowing you to add authentication by configuration. The following Spring or Blueprint snippet shows how to use the “karaf” JAAS realm:

<jaxws:endpoint address="/service">
  <jaxws:inInterceptors>
    <ref bean="authenticationInterceptor"/>
  </jaxws:inInterceptors>
</jaxws:endpoint>

<bean id="authenticationInterceptor" class="org.apache.cxf.interceptor.security.JAASLoginInterceptor">
  <property name="contextName" value="karaf"/>
</bean>

The same configuration can be applied for jaxrs endpoint instead of jaxws endpoint.

As Pax Web leverages Jetty, you can also define your Jetty security configuration in your web application.
For instance, in the META-INF/spring/jetty-security.xml of your application, you can define the security constraints:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="loginService" class="org.eclipse.jetty.plus.jaas.JAASLoginService">
    <property name="name" value="karaf" />
    <property name="loginModuleName" value="karaf" />
  </bean>

  <bean id="constraint" class="org.eclipse.jetty.util.security.Constraint">
    <property name="name" value="BASIC"/>
    <property name="roles" value="user"/>
    <property name="authenticate" value="true"/>
  </bean>

  <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping">
    <property name="constraint" ref="constraint"/>
    <property name="pathSpec" value="/*"/>
  </bean>

  <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
    <property name="authenticator">
      <bean class="org.eclipse.jetty.security.authentication.BasicAuthenticator"/>
    </property>
    <property name="constraintMappings">
      <list>
        <ref bean="constraintMapping"/>
      </list>
    </property>
    <property name="loginService" ref="loginService" />
    <property name="strict" value="false" />
  </bean>

</beans>

We can link the security constraint in the web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

  <display-name>example_application</display-name>

  <welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
  </welcome-file-list>

  <security-constraint>
    <display-name>authenticated</display-name>
    <web-resource-collection>
      <web-resource-name>All files</web-resource-name>
      <description/>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <description/>
      <role-name>user</role-name>
    </auth-constraint>
  </security-constraint>

  <login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>karaf</realm-name>
  </login-config>

  <security-role>
    <description/>
    <role-name>user</role-name>
  </security-role>

</web-app>

Thanks to that, your web application will use the "karaf" JAAS realm, which can delegate the storage of users and roles to Syncope.

Thanks to the Syncope login module, Karaf becomes even more flexible for the authentication and authorization of users, as the users/roles backend doesn't have to be embedded in Karaf itself (as with the PropertiesLoginModule): Karaf can delegate to Syncope, which is able to façade a lot of different actual backends.

Categories: FLOSS Project Planets

Matt Brown: GPG Key Transition

Planet Debian - Sat, 2014-08-16 22:45

Firstly, thanks to all who responded to my previous rant. It turns out exactly what I wanted does exist, in the form of an ID-000 format smartcard combined with a USB reader. Perfect. No idea why I couldn't find that on my own prior to ranting, but very happy to have found it now.

Secondly, now that I’ve got my keys and management practices in order, it is time to begin transitioning to my new key.

Click this link to find the properly signed, full transition statement.

I’m not going to paste the full statement into this post, but my new key is:

pub   4096R/A48F065A 2014-07-27 [expires: 2016-07-26]
      Key fingerprint = DBB7 64A1 797D 2E21 C799 3CD7 A628 CB5F A48F 065A
uid                   Matthew Brown <matt@mattb.net.nz>
uid                   Matthew Brown <mattb@debian.org>
sub   4096R/1937883F 2014-07-27 [expires: 2016-07-26]

If you signed my old key, I’d very much appreciate a signature on my new key, details and instructions in the transition statement. I’m happy to reciprocate if you have a similarly signed transition statement to present.

Categories: FLOSS Project Planets

lightning @ Savannah: GNU lightning 2.0.5 released!

GNU Planet! - Sat, 2014-08-16 20:00

GNU lightning is a library to aid in making portable programs
that compile assembly code at run time.


Download release:

2.0.5 comes with a new port to the Alpha architecture. Thanks
to Trent Nelson from snakebite.net for providing access to
an Alpha computer.


o Correct assertion on uninitialized state variables.

o Implement lightning Alpha port.

o Correct wrong table of instruction sizes in software float.
o When checking CPU features, do not get confused on Linux if /proc
is not mounted, and end up not properly checking for __ARM_PCS_VFP,
which is the best source to know if an FPU is available.

o Correct usage of wrong register in jit_bmsr, that was working
(passing all tests) by accident.

o Add consistency check on temporaries during a jump.
o Always mark return registers as live in epilog.
o Correct change of possibly wrong bitmask in jit_update.
o Convert all assertions to result in an int check.
On alpha assertions directly on a pointer or long would fail if
only checking the top 32 bits.
o Do not pass null as free, memcpy and memmove arguments.
o Remove the global but not advertised jit_progname variable.
o Add note about initialization and jit_set_memory_functions call.
o Do not export some expected to be private definitions and types in

Categories: FLOSS Project Planets

Konqueror is looking for a maintainer

Planet KDE - Sat, 2014-08-16 16:24
KDE Project:

For quite some time now (OK, many years...) I haven't had time for Konqueror.
KDE Frameworks 5 and Qt 5 have kept me quite busy.

It's time to face the facts: Konqueror needs a new maintainer.

This is a good time to publish the thoughts we had many years ago about a possible new GUI for Konqueror.

Kévin Ottens, Nuno Pinheiro and I had a meeting (at some Akademy, many years ago) where we thought about how Konqueror's GUI could be improved to be cleaner and more intuitive, while keeping the idea of Konqueror being the universal navigator, i.e. the Swiss Army knife that includes file management, web browsing, document viewing, and more. This is what makes it different from rekonq and dolphin, for instance.

So, if you're interested in Konqueror and you want to keep it alive, please consider taking over its development, whether the ideas below sound good to you or not. They are no obligation, just brainstorming that could prove useful for a future redesign of Konqueror's UI.

The first task is much simpler anyway: it needs to be ported to KF 5 and Qt 5.

1) Definition of the target user group:

  • Users who prefer to stay within the same application as much as possible
    (which includes both those who don't like major changes, and those
    who prefer a single window over new windows and apps being started
    for every type of document)
  • All web users (from one webpage to many at the same time).
  • Heavy content consumers (local and remote)

2) Workflow diagrams

2.1) Navigation to a document in a known location
Pre-requisite: a "starting point" selector is available

  • Step 1: choosing a starting point
  • Step 2: navigating to that starting point in the main view
  • Step 3: when reaching the document, its contents are displayed, still in the main view.
  • Optional step 4: to edit the document, an easily accessible button is available.

2.2) File management during navigation
During step 2 above, the user might notice files that need renaming, deleting,
moving, etc. Moving and copying are especially interesting; we made a list of
the possible ways to do that, and kept:
copy/paste, split views, dropping on a tab, and dropping on a breadcrumb,
while discarding the sidebar folder tree.

This also led to a new concept in addition, for the following workflow:

  • Step 1: Select files
  • Step 2: Trigger cut or copy action
  • Step 3: A "clipboard" window appears on the side of the current window,
    showing an icon for the copied files.
  • Optional step 4: drag more files to the clipboard, even from other
    locations, which creates more "groups of files" icons in the clipboard.
  • Step 5: Navigate to the target location
  • Step 6: Trigger the paste action to paste all the files from the clipboard,
    or drag individual groups of files from the clipboard to the intended
    location.

This has the advantage that no splitting or sidebar is necessary within konqueror.
The clipboard window (inspired by the concept of a shelf in NeXT) offers a temporary
area for holding the files while we are moving from the source location to the
target location within the main view.
Note that the clipboard would have a visible label "Copy" or "Move", depending
on the action in step 2 which made it appear. This makes it clear that the files
dropped in step 4 will also be copied or moved, and saves asking the user again
at every drop in step 4, as well as asking in the final drop in step 6.

This concept could even be made KDE-wide; whenever the user uses "copy" or "cut" in
an application, the clipboard window would appear (at a fixed location, in this idea,
not attached to the application window) and an icon for the copied or cut data would
appear in the clipboard window.
On the other hand it might be annoying when using a quick Ctrl+C Ctrl+V in a word
processor, so it should probably not appear automatically in most applications.
But for files it would be nice if it appeared automatically, because this makes the
feature very easy to discover, compared to a "Tools / Show Clipboard Window" that
nobody would use.
(Another issue is that the workspace-global window might appear on top of the
window where you wanted to copy or paste... this problem doesn't happen in the
initial idea of attaching to the konqueror window (a bit like a Mac slide panel
or whatever it's called). It could appear on whichever side there is room for it,
or the window could move to make room for it).

The nice thing is that this is actually very easy to implement: it fits exactly the
QMimeData/QClipboard concepts. But let's put this aside for now; it's a bit separate
from the main idea of this document. It was mostly a way to provide a solution for
the loss of sidebar folder tree, currently used by people who move files to other
directories. Meanwhile another solution for these users is to split and switch to
"tree view" mode, which konqueror would offer again (like the file dialog does,
and unlike dolphin). (However that's a tree of files and dirs, not a tree of dirs).
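
Put another way, the clipboard window is just a list of (action, urls) entries. Here
is a minimal, Qt-free C++ model of that idea (the class and method names are invented
for illustration; a real implementation would sit on top of QMimeData/QClipboard):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Each "copy" or "cut" action adds an entry that remembers the file URLs
// and whether a later drop should copy or move them (this is what the
// window's visible "Copy"/"Move" label would reflect).
enum class ClipAction { Copy, Move };

struct ClipEntry {
    ClipAction action;
    std::vector<std::string> urls;
};

class ClipboardWindowModel {
public:
    // Called when the user copies (Ctrl+C) or cuts (Ctrl+X) files.
    void add(ClipAction action, std::vector<std::string> urls) {
        entries_.push_back({action, std::move(urls)});
    }
    // Called when an entry is dropped onto a target directory; the stored
    // action decides whether the drop copies or moves, so the user is not
    // asked again at every drop.
    ClipAction actionFor(std::size_t index) const {
        return entries_.at(index).action;
    }
    std::size_t count() const { return entries_.size(); }
private:
    std::vector<ClipEntry> entries_;
};
```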

2.3) Previewing many files from the same directory

For fast multiple previews, konqueror currently has a very hidden but very nice
feature: split+link+lock. Since this combination is impossible to discover, these
three actions would be removed from the GUI, and a single action would be offered
for the three steps instead. A good name still has to be found, something like
"Split Preview". The workflow would be:

  • Step 1: Navigate to directory
  • Step 2: Trigger "Split Preview" action.
    The window is split vertically, and the right view shows a single label
    like "click on files to preview them here".
  • Step 3: Click on a file in the left view
    The file is embedded into the right view.

Repeat step 3 as many times as needed.
This is a single-click preview, faster than waiting for tooltips or having to
navigate back after opening each file.

For images especially, we would also like to provide the following workflow:

  • Step 1: Navigate to directory
  • Step 2: Click on first image
    It appears embedded into the main view
  • Step 3: Click on the "previous" or "next" buttons which appear on top of the image

[Technically, since gwenview has this feature already, this is just a matter
of making it available in gwenview part]

Note that the first workflow is more generic, since the user can use it to look
into text documents, HTML pages, PDFs, or anything else, not only images.

2.4) Internet search (not only Google, but also Wikipedia etc.)

Pre-requisite: a "starting point" selector is available

  • Step 1: Choose starting point "web"
    Search form appears in main view
  • Step 2: Choose type of search (if different from the default one)
  • Step 3: Type search
  • Step 4: Resulting webpage appears in main view
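
For the web starting point, the search types could map prefixes to URL templates,
much like konqueror's existing web shortcuts (e.g. "gg:" for Google). A sketch of
that resolution step, with an invented function name and illustrative URL templates
rather than the real keyword list:

```cpp
#include <cassert>
#include <map>
#include <string>

// Simplified web-shortcut resolver: a prefix such as "gg:" or "wp:" selects
// the search engine, the rest of the input is the query. Inputs without a
// known prefix fall back to the default engine. (A full implementation would
// also URL-encode the query and recognize complete URLs.)
std::string resolveShortcut(const std::string& input) {
    static const std::map<std::string, std::string> engines = {
        {"gg", "https://www.google.com/search?q="},
        {"wp", "https://en.wikipedia.org/wiki/Special:Search?search="},
    };
    auto colon = input.find(':');
    if (colon != std::string::npos) {
        auto it = engines.find(input.substr(0, colon));
        if (it != engines.end())
            return it->second + input.substr(colon + 1);
    }
    // No known prefix: fall back to the default engine.
    return engines.at("gg") + input;
}
```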

2.5) Local document search using non-spatial criteria (nepomuk)

Pre-requisite: a "starting point" selector is available

  • Step 1: Choose starting point "local search"
    Search form appears in main view
  • Step 2: Fill in search form
  • Step 3: Results appear in main view

2.6) Search using time criteria (history)

Use case: I want to open again a location that I know I visited recently,
but I forgot the exact URL for it, e.g. after clicking on a link on IRC.

Pre-requisite: a "starting point" selector is available

  • Step 1: Choose starting point "history".
    A list of recently visited locations appears in main view,
    sorted by time of last visit (most recent on top)
  • Step 2: Choose location (direct click, or using as-you-type filtering)
  • Step 3: Location is loaded in main view
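
Under the hood, this starting point is just the history sorted by last-visit time,
with an as-you-type filter on top; a minimal sketch (the types and names here are
hypothetical, for illustration only):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// One visited location, with the timestamp of its last visit.
struct HistoryEntry {
    std::string url;
    long lastVisit;  // e.g. seconds since the epoch
};

// Returns the URLs containing 'filter', most recently visited first.
// An empty filter (the as-you-type starting state) returns everything.
std::vector<std::string> recentLocations(std::vector<HistoryEntry> history,
                                         const std::string& filter = "") {
    std::sort(history.begin(), history.end(),
              [](const HistoryEntry& a, const HistoryEntry& b) {
                  return a.lastVisit > b.lastVisit;  // most recent on top
              });
    std::vector<std::string> result;
    for (const auto& e : history)
        if (e.url.find(filter) != std::string::npos)
            result.push_back(e.url);
    return result;
}
```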

General idea

It appears that all the navigation can be done in the main view, if "starting points"
are easily made available. The sidebar would therefore be removed. Most of its current
modules are covered above. The others do not appear to be useful.
We still have to check whether web sidebars still make sense or whether that
concept died.

The following four starting points would be used for navigation:

  • Places (basically: directories, local or remote)
  • Document search [how to say that in one word?]
  • Web
  • History

The current "Home" button would be replaced with four buttons, for these four starting points.
When clicking on one of them, the main view would show the corresponding part, so there is
room to make things look nice:

- "Places" would show large icons, for directories (somewhat similar to the
places model, but in two dimensions and with "groups" separated by a line, like the
grouped view in dolphin)

- "Document search" would be the opportunity for baloo to provide a
full form for searching.

- "Web" would show the web-search-lineedit (from the current konqueror), as
well as the list of bookmarks, with a lineedit filter.

- "History": as described before, list of recent locations ordered by time.


In konqueror.ui, you will find a konqueror window, shown in "wireframe" style
(as recommended by Nuno) so that nobody comments on colors and pixels.
Note how easily this can be done: just paste these two lines into the styleSheet
property of the toplevel widget:

    QLabel { border: none }
    * { border: 1px solid black; background: white }

This could probably be refined to remove some double borders, but it works well
enough for now.

In this mockup window, you will find multiple tabs, to show the various things
one can do. First: one tab per "starting point": Places, Search, Web, History.
Then one tab per location where the user could end up, after some navigation:

  • a dolphinpart directory view ("dfaure")
  • a web page ("KDE - Experience Freedom!")
  • a PDF document ("foo.pdf"), using okularpart
  • an image ("image.jpg"), using gwenviewpart

The interesting part of this mockup is that the menu items in the main
konqueror menu and the toolbar buttons in the main toolbar always stay the same, whatever the
current part is. All the part-specific actions are shown in a toolbar
inside the part, with credits for the idea to a mockup which used to be
http://oversip.net/public/konqueror-mockup/ but no longer exists.
Where the mockup currently uses an ugly combobox, it would
obviously be a toolbar button with a dropdown menu, like the right-most button in the
file dialog.
It's just that this can't be added in Qt Designer with the default set of widgets.

The button saying [View Properties] would only have the tool icon, like the
equivalent button in the file dialog.
For the other buttons, it is yet to be decided whether they should be icon-only
or icon-and-text, which is definitely more discoverable but takes more space.

The available actions for each part have been taken from the current parts mentioned above.
The case of okularpart is still a problem though: we didn't find out how to finish
grouping the actions so that the toolbar isn't too crowded. However, the concept
seems to work OK for the other parts; okularpart is really the worst one in terms
of the number of actions :)

Anyhow - if you want to help port Konqueror to KF5, this is a good opportunity to start getting to know its code.
If you're not sure you're ready for the big step of becoming maintainer, you can at least start with porting it and decide later.
Join kde-frameworks-devel (for KF5 porting questions or patches) and kfm-devel (for general konqueror / dolphin discussions), but CC me, since I stopped monitoring kfm-devel when the only activity became dolphin-related. If this blog post leads to new developers, maybe we can finally create a konqueror-devel mailing-list ;)

Categories: FLOSS Project Planets

Welcome to Ghost

Planet KDE - Sat, 2014-08-16 16:05

You're live! Nice. We've put together a little post to introduce you to the Ghost editor and get you started. You can manage your content by signing in to the admin area at <your blog URL>/ghost/. When you arrive, you can select this post from a list on the left and see a preview of it on the right. Click the little pencil icon at the top of the preview to edit this post and read the next section!

Getting Started

Ghost uses something called Markdown for writing. Essentially, it's a shorthand way to manage your post formatting as you write!

Writing in Markdown is really easy. In the left hand panel of Ghost, you simply write as you normally would. Where appropriate, you can use shortcuts to style your content. For example, a list:

  • Item number one
  • Item number two
    • A nested item
  • A final item

or with numbers!

  1. Remember to buy some milk
  2. Drink the milk
  3. Tweet that I remembered to buy the milk, and drank it
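
For example, the bulleted and numbered lists above would be typed in the left-hand
panel roughly like this (a minimal illustration of the shorthand; nested items are
indented):

```markdown
* Item number one
* Item number two
    * A nested item
* A final item

1. Remember to buy some milk
2. Drink the milk
3. Tweet that I remembered to buy the milk, and drank it
```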

Want to link to a source? No problem. If you paste in a URL, like http://ghost.org, it'll automatically be linked up. But if you want to customise your anchor text, you can do that too! Here's a link to the Ghost website. Neat.

What about Images?

Images work too! Already know the URL of the image you want to include in your article? Simply paste it in like this to make it show up:

Not sure which image you want to use yet? That's ok too. Leave yourself a descriptive placeholder and keep writing. Come back later and drag and drop the image in to upload:


Sometimes a link isn't enough, you want to quote someone on what they've said. It was probably very wisdomous. Is wisdomous a word? Find out in a future release when we introduce spellcheck! For now - it's definitely a word.

Wisdomous - it's definitely a word.

Working with Code

Got a streak of geek? We've got you covered there, too. You can write inline `code` really easily with backticks. Want to show off something more comprehensive? 4 spaces of indentation gets you there.

    .awesome-thing {
        display: block;
        width: 100%;
    }

Ready for a Break?

Throw 3 or more dashes down on any new line and you've got yourself a fancy new divider. Aw yeah.

Advanced Usage

There's one fantastic secret about Markdown. If you want, you can write plain old HTML and it'll still work! Very flexible.

That should be enough to get you started. Have fun - and let us know what you think :)

Categories: FLOSS Project Planets