FLOSS Project Planets

Mid-term eval – GSoC 2016

Planet KDE - Wed, 2016-06-22 06:02

Hi!

Today I will summarize everything I have done in my Google Summer of Code since it began on May 23rd.

My GSoC project is to work on Umbrello – the UML editor of the KDE Community – and give it a new breath of life, because there are a lot of things in it to be updated and improved.

My project is divided into these tasks:

  1. Remove deprecated code: port from Qt4/KDE4 to Qt5/KF5
    • This task is already finished; I'm just waiting for my mentor's final review so I can push the changes into the related branches.
  2. Fix issues on MS Windows
    • I have already compiled Umbrello there, but due to miscommunication between me and my mentor, the issues I need to fix weren't clear. I have since reached out to him and we are working out what needs to be done.
  3. Fix bug related to missing keywords for C++
    • This bug is already fixed. I needed to add the missing keywords for the variable types, and that was the easy part. But C++ has other special words that aren't types yet are related to them: the type qualifiers (volatile, const, mutable) and the type modifiers (* and &). For these two I needed to create new widgets to add to the Attribute dialog for the class diagram, and that opened the door to redoing almost all of Umbrello's dialogs, because the way the dialogs were written didn't let the widgets have a good layout. So I started making UIs to give me a better shot at laying out the dialogs, and since most of the widgets are used in all dialogs, I had to make a big change in Umbrello's code: the patch now in final review has 1928 lines added and 981 lines removed. I made some mistakes, since this patch is so big, but there was no other way to do it, because every change in the widgets affected at least 5 other files. With these new widgets I will also end up fixing another open Umbrello bug, which asks for a new way to organize the widgets in the attribute dialog.
  4. Create an instance/object Diagram
    • I have already started to study how to write this new diagram, but this task is on hold until tasks 2 and 3 are completed.

Also, to my happiness, my mentors have already given me the green light in the mid-term evaluation, and I will continue to do the best work that I can on my project. I have a month and a half to finish it, and I think that I can do it. =)

Thanks for reading!

That’s all folks!


Categories: FLOSS Project Planets

Andrew Cater: Why I must use Free Software - and why I tell others to do so

Planet Debian - Wed, 2016-06-22 05:43
My work colleagues know me well as a Free/Libre software zealot, constantly pointing out to them how people should behave, how FLOSS trumps commercial software and how this is the only way forward. This has been going on for the last 20-odd years. It's a strain to argue this repeatedly: at various times, I have been asked to set out more clearly why I use FLOSS, what the advantages are, and why and how to contribute to FLOSS software.

"We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.
We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.
Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here
 ...
 In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish."
[John Perry Barlow - Declaration of the independence of cyberspace  1996  https://www.eff.org/cyberspace-independence]

That's some of it right there: I was seduced by a modem and the opportunities it gave. I've lived in this world since 1994, come to appreciate it and never really had the occasion to regret it.

I'm involved in the Debian community - which is very much a "do-ocracy" - and I've lived with Debian GNU/Linux since 1995 and not had much cause to regret that either, though I do regret that force of circumstance has meant that I can't contribute as much as I'd like. Pretty much every machine I touch ends up running Debian, one way or the other, or should do if I had my way.

Digging through my emails since then on the various mailing lists: some of them are deeply technical, though fewer these days; some are Debian politics; most are trying to help people with problems, reporting successes or, occasionally, offering thanks and social chit-chat. Most people in the project have never met me - though that's not unusual in an organisation with a thousand developers spread worldwide - and so the occasional chance to talk to people in real life is invaluable.

The crucial thing is that there is common purpose and common intelligence - however crazy mailing list flame wars can get sometimes - and committed, caring people. Some of us may be crazy zealots, some picky and argumentative - Debian is what we have in common, pretty much.

It doesn't depend on physical ability. Espy (Joel Klecker) was one of our best and brightest until his death at age 21: almost nobody knew he was dying until after his death. My own physical limitations are pretty much irrelevant provided I can type.

It does depend on collaboration and the strange, dysfunctional family that is our community and the wider FLOSS community in which we share and in which some of us have multiple identities in working with different projects.
This is going to end up too long for Planet Debian - I'll end this post here and then continue with some points on how to contribute and why employers should let their employees work on FLOSS.




Categories: FLOSS Project Planets

Martin-Éric Racine: Batch photo manipulation via free software tools?

Planet Debian - Wed, 2016-06-22 04:12

I have a need for batch-processing pictures. My requirements are fairly simple:

  • Resize the image to fit Facebook's preferred 960 pixel box.
  • Insert Copyright, Byline and Bylinetitle into the EXIF data.
  • Optionally, paste my watermark onto a predefined corner of the image.
  • Optionally, adjust the white balance.
  • Rename the file according to a specific syntax.
  • Save the result to a predefined folder.

Until recently, I was using Phatch to perform all of this. Unfortunately, it cannot edit the EXIF data of my current Lumix camera, whose JPEG files it claims are MPO. I am thus forced to look for other options. Ideally, I would do this via a script inside gThumb (which is my main photo editing software), but I cannot seem to find adequate documentation on how to achieve this.

I am thus very interested in hearing about other options to achieve the same result. Ideas, anyone?
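
For comparison, here is the kind of shell script I have in mind, sketched with ImageMagick and ExifTool (the naming scheme, watermark corner and tag values are placeholders; the white-balance step is left out):

#!/bin/sh
# Batch pass: resize to fit 960px, watermark, tag, rename, save elsewhere.
OUT=./facebook-ready
mkdir -p "$OUT"
for f in *.jpg; do
    out="$OUT/$(date +%Y%m%d)_$f"                 # assumed renaming syntax
    convert "$f" -resize '960x960>' "$out"        # fit the 960 pixel box, never enlarge
    composite -gravity SouthEast watermark.png "$out" "$out"   # optional watermark
    exiftool -overwrite_original \
        -IPTC:CopyrightNotice='(c) 2016 Example' \
        -IPTC:By-line='Example Name' \
        -IPTC:By-lineTitle='Photographer' "$out"
done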

Categories: FLOSS Project Planets

Call for submissions for the 2016 Art of Krita Book

Planet KDE - Wed, 2016-06-22 02:50

The Krita Foundation is going to publish a glossy, shiny book of art created with Krita! This book will be sent out to the seventy Kickstarter backers who selected the artbook as their reward, and it will be available from the Krita shop. We’ll also try and make sure it’s available through online bookshops! It’s the very first time the Krita Foundation will publish a book, and we’re really excited about it.

And we’re sure you will be excited, too. It’s kind of a historic moment, after all, so who wouldn’t want to be in on the very first Krita Art book? Space is going to be limited, though, so we’re going to assemble a jury of seasoned Krita artists to vet all submissions and make the final selection.

But first, we need you to submit your artwork for publication!

The book will be printed professionally, either hard-cover or soft-cover (that depends on the final page count). The dimensions are 200 x 280 mm. There will be a glossy color page and a black & white page available for every artist. The black & white page contains information about you, whatever you want to tell the world. And there will be room for a black & white illustration as well.

Here’s a mock-up of what we intend:

If you want to be in on it, send your submission to foundation@krita.org.  We need the following:

  • Your color artwork at full resolution. Keep in mind that this is print, so screen-resolution-sized images won’t look good. The printer wants a minimum resolution of 150dpi but our experience is that 300dpi looks better. Keep at least 3mm bleed on the outside of your picture (making the submission size 206×286 mm) and add cutting marks if you know how (if you don’t know how, we do, and will add them). A quick way to check your file’s resolution is shown after this list.
  • An optional black & white artwork. This needs to be even higher resolution (600dpi) since the black and white pages are printed at a higher resolution. It doesn’t need to be the same physical size, since your info goes on that page as well. We can resize images for layout purposes, anyway.
  • Information about you: a short bio, contact details — whatever you want to tell about yourself, about half a page so there’s room for the B&W artwork. There is no real-names rule: you can call yourself what you want.
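
One way to sanity-check the resolution of a submission is with ImageMagick’s identify tool (the filename is a placeholder):

identify -units PixelsPerInch -format '%w x %h pixels at %x x %y dpi\n' artwork.png

At 300dpi, the 206×286 mm page (bleed included) works out to roughly 2433×3378 pixels.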

The artwork needs to conform to a few rules:

  • There is no set topic: just show off what you think you do best! But…
  • No gore or pornography. Nudity is fine, flashing swords are fine, people being strangled by their own guts, no.
  • No fan-art or characters copyrighted by others than you. We don’t want the publicity of being sued by Nintendisnami and their ilk.
  • Made using Krita: you can use other tools, too, but Krita has to be used for most of the work.

The deadline for submissions is November 1st!

Send your submission to foundation@krita.org

Categories: FLOSS Project Planets

Clint Adams: Only in San Francisco would one brag about this

Planet Debian - Wed, 2016-06-22 02:46

“I dated Appelbaum!” she said.

“I gotta go,” I said.

Categories: FLOSS Project Planets

GSoC Update 1: The Beginning

Planet KDE - Wed, 2016-06-22 01:51
I have officially started my GSoC project under the mentorship of Boudhayan Gupta and Pinak Ahuja.

The project idea's implementation has undergone some changes from what I proposed. While the essence of the project is the same, it will now no longer be dependent on Baloo and xattr. Instead, it will use a QList to hold a list of staged files with a plugin to kiod. My next milestone before the mid-term evaluation is to implement this in a KIO slave which will be compatible with the whole suite of KDE applications. 

For the last two weeks, I've been busy going through hundreds of lines of source code to understand the concept of a KIO slave. The KIO API is a very neat feature of KDE - it provides a single, consistent way to access remote and local filesystems. This is further extended by KIO slaves, which are programs based on the KIO API that allow a filesystem to be expressed in a particular way. For instance, there is a KIO slave for displaying xattr file tags as a directory, under which each file marked with a tag would be displayed. KIO slaves even extend to network protocols, allowing for remote access using slaves such as http:/, ftp:/, smb:/ (for Windows samba shares), fish:/, sftp:/, nfs:/, and webdav:/. My project requires a virtual folder constructed from URLs stored in a QList - an ideal fit for a KIO slave.

However, hacking on KIO slaves was not exactly straightforward. Prior to my GSoC selection, I had no idea how to edit CMakeLists.txt files, and it was a task to learn to make one by hand. Initially, it felt like installing the dependencies for building KIO slaves would almost certainly lead to me destroying my KDE installation, and sure enough, I did manage to ruin my installation. Most annoying. Fortunately, I managed to recover my data, and with a fresh install of Kubuntu 16.04 with all the required KDE packages, I got back to working on getting the technical equivalent of a Hello World to work with a KIO slave.

This, too, was more than a matter of just copying and pasting lines of code from the KDE tutorial. KIO slaves dropped the use of .protocol files in the KF5 transition, instead opting for JSON files to store the properties of the KIO slave. Thankfully, I had the assistance of the legendary David Faure. Under his guidance, I managed to port the KIO slave in the tutorial to a KF5-compatible KIO slave, and after a full week of frustration dealing with dependency hell, I saw the best Hello World I could ever hope for:


Baby steps. The next step was to make the KIO slave capable of displaying the contents of a specified QUrl in a file manager. The documentation for KProtocolManager made it seem like a pretty straightforward task - apparently all I needed to do was add a "listing" entry to my JSON protocol file and re-implement the listDir(const QUrl &url) method inherited from SlaveBase. Unbeknownst to me, the SlaveBase class actually didn't have any code for displaying a directory! The SlaveBase class is only meant to have its member functions re-implemented in a derived class, as I found out by going through the source code of KIO core. Learning from my mistake, I switched to the ForwardingSlaveBase class for my KIO slave, which instantly solved my problems with displaying a directory.

Fistpump
According to my timeline, the next steps in the project are
  1. Finishing off the KIO slave by the end of this month
  2. Making GUI modifications in Dolphin to accommodate the staging area
  3. Thinking of a better name for this feature? 
So far, it's been a great experience to get so much support from the KDE community. Here's to another two and a half months of KDE development!
Categories: FLOSS Project Planets

GSoC Update(?): Writing a KIO slave 101!

Planet KDE - Wed, 2016-06-22 01:50
This project has been going well. Though it was expectedly difficult in the beginning, I feel like I am on the other side of the learning curve now. I will probably make a proper update post sometime later this month. My repo for this project can be found here: https://github.com/shortstheory/staging-kioslave
For now, this is a small tutorial for writing KDE I/O slaves (KIO slaves) which can be used by a variety of KDE applications. KIO slaves are a great way of accessing files from different filesystems and protocols in a neat, uniform way across many KDE applications. Their versatility makes them integral to the KIO library. KIO slaves have changed in their structure in the transition to KF5, and this tutorial highlights some of these differences from preceding iterations.
Project Structure
For the purpose of this tutorial, your project source directory needs to have the following files.
  • kio_hello.h
  • kio_hello.cpp
  • hello.json
  • CMakeLists.txt
If you don't feel like creating these yourself, just clone it from here: https://github.com/shortstheory/kioslave-tutorial

hello.json

The .json file replaces the .protocol files used by KIO slaves pre-KF5. The .json file for the KIO slave specifies the properties the KIO slave will have, such as the executable path to the KIO slave on installation. The .json file also includes properties of the slave such as being able to read from, write to, and delete from, among many others. Fields in this .json file are specified by the KProtocolManager class. For creating a KIO slave capable of showing a directory in a file manager such as Dolphin, the listing property must be set to true. As an example, the .json file for the Hello KIO slave described in this tutorial looks like this:

{
    "KDE-KIO-Protocols" : {
        "hello": {
            "Class": ":local",
            "X-DocPath": "kioslave5/kio_hello.html",
            "exec": "kf5/kio/hello",
            "input": "none",
            "output": "filesystem",
            "protocol": "hello",
            "reading": true
        }
    }
}  
As for the CMakeLists.txt, you will need to link your KIO slave module with KF5::KIOCore. This can be seen in the project directory.

kio_hello.h

#ifndef HELLO_H
#define HELLO_H

#include <kio/slavebase.h>

/**
  This class implements a Hello World kioslave
 */
class Hello : public QObject, public KIO::SlaveBase
{
    Q_OBJECT
public:
    Hello(const QByteArray &pool, const QByteArray &app);
    void get(const QUrl &url) Q_DECL_OVERRIDE;
};

#endif

The Hello KIO slave is derived from KIO::SlaveBase. The SlaveBase class has some basic functions already implemented for the KIO slave; these can be found in the documentation. However, most of the functions of SlaveBase are virtual functions which have to be re-implemented for the KIO slave. In this case, we are re-implementing the get function to send back a short text payload when the slave is called through kioclient5.

In case you don't need special handling of the KIO slave's functions, you can derive your KIO slave class directly from KIO::ForwardingSlaveBase. Here, you would only need to re-implement the rewriteUrl function to get your KIO slave working.

kio_hello.cpp
#include "hello.h"
#include <QDebug>

class KIOPluginForMetaData : public QObject
{
    Q_OBJECT
    Q_PLUGIN_METADATA(IID "org.kde.kio.slave.hello" FILE "hello.json")
};

extern "C"
{
    int Q_DECL_EXPORT kdemain(int argc, char **argv)
    {
        qDebug() << "Launching KIO slave.";
        if (argc != 4) {
            fprintf(stderr, "Usage: kio_hello protocol domain-socket1 domain-socket2\n");
            exit(-1);
        }
        Hello slave(argv[2], argv[3]);
        slave.dispatchLoop();
        return 0;
    }
}

void Hello::get(const QUrl &url)
{
    qDebug() << "Entering function.";
    mimeType("text/plain");
    QByteArray str("Hello world!\n");
    data(str);
    finished();
    qDebug() << "Leaving function";
}

Hello::Hello(const QByteArray &pool, const QByteArray &app)
    : SlaveBase("hello", pool, app) {}

#include "hello.moc"
 
The .moc file is, of course, auto-generated at compilation time.

As mentioned earlier, the KIO Slave's .cpp file will also require a new KIOPluginForMetaData class to add the .json file. The following is used for the hello KIO slave and can be used as an example:
 class KIOPluginForMetaData : public QObject
{
    Q_OBJECT
    Q_PLUGIN_METADATA(IID "org.kde.kio.slave.hello" FILE "hello.json")
};

CMakeLists.txt
cmake_minimum_required(VERSION 3.5)
set(QT_MIN_VERSION "5.4.0")
set(KF5_MIN_VERSION "5.16.0")

find_package(ECM ${KF5_MIN_VERSION} REQUIRED NO_MODULE)
set(
    CMAKE_MODULE_PATH
        ${CMAKE_MODULE_PATH}
        ${ECM_MODULE_PATH}
        ${ECM_KDE_MODULE_DIR}
)

include(KDEInstallDirs)
include(KDECMakeSettings)
include(KDECompilerSettings NO_POLICY_SCOPE)
include(ECMSetupVersion)
include(FeatureSummary)
add_library(kio_hello MODULE hello.cpp)
find_package(KF5 ${KF5_MIN_VERSION} REQUIRED KIO)
target_link_libraries(kio_hello KF5::KIOCore)
set_target_properties(kio_hello PROPERTIES OUTPUT_NAME "hello")

install(TARGETS kio_hello DESTINATION ${KDE_INSTALL_PLUGINDIR}/kf5/kio )

Installation
Simply run the following commands in the source folder:

mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr -DKDE_INSTALL_USE_QT_SYS_PATHS=TRUE ..
make
sudo make install
kdeinit5
 
As shown above, we have to run kdeinit5 again so the new KIO slave is discovered by KLauncher and can be loaded when we run a command through an application such as kioclient5.
Testing

Run:

kioclient5 'cat' 'hello:/'

And the output should be:

Hello world!
Categories: FLOSS Project Planets

Gunnar Wolf: Answering to a CACM «Viewpoint»: on the patent review process

Planet Debian - Wed, 2016-06-22 00:40

I am submitting a comment to Wen Wen and Chris Forman's Viewpoint on the Communications of the ACM, titled Economic and business dimensions: Do patent commons and standards-setting organizations help navigate patent thickets?. I believe my comment is worth sharing a bit more openly, so here it goes. Nevertheless, please refer to the original article; it makes very interesting and valid points, and my comment should be taken as an extra note on a great text only!

I was very happy to see an article with this viewpoint published. This article, however, mentions some points I believe should be further stressed as problematic and important. Namely, still in the introduction, after mentioning that patents «are intended to provide incentives for innovation by granting to inventors temporary monopoly rights», the next paragraph continues, «The presence of patent thickets may create challenges for ICT producers. When introducing a new product, a firm must identify patents its product may infringe upon.»

The authors continue by explaining the needed process — but this simple statement should be enough to explain how the patent system is broken and needs repair.

A requisite for patenting an invention was originally the «inventive» and «non-obvious» characteristics. Anything worth being granted a patent should be inventive enough, it should be non-obvious to an expert in the field.

When we see huge bodies of awarded (and upheld) patents falling into the case the authors mention, it becomes clear that the patent applications were not thoroughly researched prior to their grant. Sadly, long gone are the days when the United States Patent and Trademark Office employed minds such as Albert Einstein's; nowadays, the office is more a rubber-stamping bureaucracy where most patents are awarded, and this very important requisite is left open to litigation: if somebody is found in breach of a patent, they might choose to argue that the patent was obvious to an expert. But, of course, that will probably cost more in legal fees than settling for an agreement with the patent holder.

The fact that in our line of work we must take care to search for patents before releasing any work speaks volumes about the process. Patents are too easily granted. They should be way stricter; the occurrence of an independent developer mistakenly (and innocently!) breaching a patent should be most unlikely, as patents should only be awarded to truly non-obvious solutions.

Categories: FLOSS Project Planets

Mark Shropshire: Type Less with Drush site-set

Planet Drupal - Tue, 2016-06-21 23:53

I use drush aliases between Drupal VM and Drupal hosting services quite a bit. It was great to learn that drush site-set allows me to set the alias to use for the current session, so I don't have to type the alias name over and over again. For instance, I can set an alias like this: $ drush site-set @drupalvm.drupal8.dev, allowing me to check the status of the site on the Drupal VM with $ drush status. To make it even easier, use is an alias for site-set. Example: $ drush use @drupalvm.drupal8.dev.
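
Putting it together, a typical session with the alias from the example above looks like this:

drush site-set @drupalvm.drupal8.dev   # remember this alias for the current session
drush status                           # now runs against the saved alias
drush use @drupalvm.drupal8.dev        # 'use' does the same as 'site-set', with less typing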

Drush site-set has some other useful options beyond setting drush aliases. Check out the options available at the link below:

https://drushcommands.com/drush-8x/core/site-set/

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20160622 ('Orlando') released

GNU Planet! - Tue, 2016-06-21 22:55

GNU Parallel 20160622 ('Orlando') has been released. It is available for download at: http://ftp.gnu.org/gnu/parallel/

Haiku of the month:

Does path on remote
not include GNU Parallel?
Try --env PATH.
-- Ole Tange

New in this release:

  • $PATH can now be exported using --env PATH. Useful if GNU Parallel is not in your path on remote machines. See the example after this list.
  • If --block is left out, --pipepart will use a block size that will result in 10 jobs per jobslot.
  • The cookie from 2016-01-04 was won by Morgan Rodgers on 2016-06-06, after 5 months.
  • Bug fixes and man page updates.
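
As an example of the new --env option, this runs a command on a remote server while exporting the local $PATH to it (the hostname and command are placeholders):

parallel --env PATH -S server.example.com my_tool {} ::: input1 input2 input3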

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
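
For example, a sequential shell loop such as:

for f in *.log; do gzip "$f"; done

can become a parallel run, with one job per CPU core by default:

parallel gzip ::: *.log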

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparallel.threadless.com/designs/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF: https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
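
For example (host, credentials, and table are placeholders):

sql mysql://user:pass@db.example.com/mydb "SELECT COUNT(*) FROM mytable;"   # run one query
sql postgresql://user@db.example.com/mydb                                   # no command: interactive shell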

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
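
For example, this runs gzip but suspends it whenever the load average climbs above the default limit (the file name is a placeholder; see the man page for soft/hard limit options):

niceload gzip bigfile.log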

Categories: FLOSS Project Planets

Django Weblog: Django 1.10 beta 1 released

Planet Python - Tue, 2016-06-21 21:16

As part of the Django 1.10 release process, today we've released Django 1.10 beta 1, a preview/testing package that represents the second stage in the 1.10 release cycle and an opportunity for you to try out the changes coming in Django 1.10.

Django 1.10 has a panoply of new features which you can read about in the in-development 1.10 release notes.

Only bugs in new features and regressions from earlier versions of Django will be fixed between now and 1.10 final (also, translations will be updated following the "string freeze" when the release candidate is issued). The current release schedule calls for a release candidate about a month from now, with the final release to follow about two weeks after that, around August 1. We'll only be able to keep this schedule if we get early and often testing from the community. Updates on the release schedule are available on the django-developers mailing list.

As with all alpha and beta packages, this is not for production use. But if you'd like to take some of the new features for a spin, or to help find and fix bugs (which should be reported to the issue tracker), you can grab a copy of the beta package from our downloads page or on PyPI.
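
If you use pip, the beta should also be installable straight from PyPI; the version string below is assumed from Django's usual pre-release numbering:

pip install --pre Django==1.10b1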

The PGP key ID used for this release is Tim Graham: 1E8ABDC773EDE252.

Categories: FLOSS Project Planets

Randa Meetings 2016 Part I: Okteta

Planet KDE - Tue, 2016-06-21 21:06

Help us keep going, at this and many other sprints!

Last Sunday the full week of Randa Meetings 2016 had passed, and it was time to find a route home, e.g. using KDE’s currently developed Marble Maps (here in the SailfishOS variant):

Home, that would be a place somewhere on our planet, like Africa, Asia, Europe, North America, or South America, where the 40-50 people who had got together that week had come from. They came into the Swiss Alps, traveling through space (not sure if also time, but at least quite many for quite some time) to collaborate in a valley deep between mountains covered by glaciers. To collaborate on bringing more of the set of applications developed in the KDE community also to other, even non-libre operating systems.

Getting Dirty With Other Operating Systems

The KDE community these days creates a large set of applications, and many of these are not bound to the one workspace for unixoid operating systems created by KDE, which is Plasma. No, these applications also run more or less fine in other (unixoid) workspaces, thanks to all the shared specifications of the freedesktop.org movement and other shared stacks/technologies (like D-Bus, X11 & soon Wayland).

Unixoid operating systems, that does not only mean Linux, but also all the BSD derivatives. Sadly the *BSD subcommunity in KDE had become quite inactive in the last years. So it was good to see that with Adriaan a *BSD veteran has reactivated himself and is now working hard to get current versions of KDE applications and also Plasma to first-class positions in the official software supply system. Next to that he also showed his real-world server acting abilities, as kitchen demon for Saturday, to everybody’s pleasure.

While now the big majority of KDE developers develop their applications with FLOSS unixoid workspaces/operating systems in mind, they often also find themselves and their target groups bound to devices controlled by non-libre operating systems, because some key apps or infrastructure in work or other parts of life are only available on those. Operating systems like Microsoft’s Windows or Apple’s OSX/macOS. Still, people needing to use these devices can liberate themselves a little by at least using FLOSS applications on them. Like Firefox as their Web browser, in place of the non-libre Internet Explorer or the non-libre Safari.

As almost all current KDE applications are based on the cross-platform library Qt and the cross-platform build system tool CMake, both of which support the platforms Windows and OSX, the extra development work needed to get KDE applications running also on those platforms should be relatively small. And indeed for many years some people have been doing that needed extra work, with more or less success (see the wiki pages on Windows and Mac). And since the move to Qt5, work has started to also extend this to Android.

Still there are many things that need more work to create serious products on release day for the average enduser. From documentation for application developers what to care for, integration into the KDE CI with platform specific builds, organized pre-release testing of packages, to providing the software products to end-users via proper distribution channels. All these are areas which had been talked about and worked on during this year’s Randa Meetings (see also Kevin’s post with a KDE on Windows status update).

Okteta on Windows: built out of the source box

Already when Okteta was started many years ago, using CMake, Qt4 & kdelibs4 made it possible to have builds of Okteta for Windows (XP) and OSX done by mainly pointing some generic build and packaging scripts at its source code, as shown with Okteta 0.1 in 2008.

Today, with CMake, ECM, Qt5, KF5, it is still the same. When asking for a Windows build, just to see what the state is, it again was just a matter of pointing the generic scripts at the sources, and there was Okteta 0.19 running on Windows 10 (thanks Kevin for builds and screenshot):

One nice side effect of cross-testing on different platforms is that forgotten issues get into the spotlight again, as visible in the screenshot:

  • Accelerator syntax shown verbatim in docker widget title bars: KAcceleratorManager from the KWidgetAddons module needs to get an idea how to properly handle QDockWidgets
  • Bad initial size of window and set of initially shown dockers: the current hack to work-around improper API control in Qt needs adaption to Qt5 and perhaps also other platforms

Running the unit tests showed a few issues, half of them due to not using platform-neutral access to the filesystem (quickly fixed), the other half because of tricks with XDG env vars which need some platform-neutral solution (still to be solved). As a result Okteta’s source code will be cleaner, which is also a win for the version for libre operating systems.

So if somebody would want Okteta on Windows (I have had a few people asking for that over the years) and/or OSX, you are welcome to help out in the packaging and testing area. I myself do not have any of those operating systems and also would not invest in that, given my own priorities. Still happy to work together.

Support us

The Randa Meetings and other sprints bring our software forward, and also to more people and more platforms. Please check out the fundraiser for the Randa Meetings, and consider to do your little contribution to get things going:


Help us keep going, at this and many other sprints!

Notes on other activites of mine at Randa Meetings 2016 in a following post.


Categories: FLOSS Project Planets

Talha Paracha: GSoC’16 – Pubkey Encrypt – Week 4 Report

Planet Drupal - Tue, 2016-06-21 20:00

I started the week by providing test coverage for functionalities I added to the module in week 3. Since the main functionality I added was the automatic generation of keys, the tests I wrote assert for these capabilities:

Categories: FLOSS Project Planets

Matthew Garrett: I've bought some more awful IoT stuff

Planet Debian - Tue, 2016-06-21 19:11
I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I've bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I'd oblige.

Today we're going to be talking about the KanKun SP3, a plug that's been around for a while. The idea here is pretty simple - there are lots of devices that you'd like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else's home.

The KanKun has all of these features and a bunch more, although when I say "features" I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn't work. I connected to the plug's network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn't created. Apparently this isn't permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn't work, but that's because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it's running. I didn't really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password ("p9z34c") and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.
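
For reference, that first look amounts to little more than this (the address is a placeholder):

nmap -O 192.168.0.50      # guess the OS from network stack fingerprints
telnet 192.168.0.50 22    # the dropbear banner answers the question anyway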

It turns out that there's a whole community of people playing with these plugs, and it's common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that's a great question and oh good lord do things start getting bad quickly at this point.

I'd grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that's surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn't find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That plus the fact that every payload length was a multiple of 16 bytes strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and encrypted with the same key. The same plaintext will always result in the same encrypted output. This implied that the packets were substantially similar and that the encryption key was static.
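
You can demonstrate this ECB property with OpenSSL: 32 bytes of repeated plaintext encrypt to two identical 16-byte ciphertext blocks (the key below is an arbitrary example):

printf 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA' |
  openssl enc -aes-128-ecb -K 00112233445566778899aabbccddeeff -nosalt |
  xxd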

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ascii and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device's IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB - since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn't have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started "wan" rather than "lan". The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That's not really a great deal of authentication. The protocol permits a password, but the app doesn't insist on it - some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn't take that long and would tell you how many of these devices are out there. If they're using the default password, that's enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution - the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn't seem to be true of the daemon that's listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that's a thing. It also downloads firmware updates over http and doesn't appear to check signatures on them, so there's the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it's in China. Sorry, Western Australia.

It's running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn't give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I've wondered is whether it's not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren't restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There's no rate-limiting on the server, so a weak password will be broken pretty quickly. It's also infringing my copyright, so I'd recommend against it on that point alone.

Categories: FLOSS Project Planets

KDE neon Press Coverage and Comments

Planet KDE - Tue, 2016-06-21 18:29

KDE neon User Edition 5.6 came out a couple of weeks ago, so let’s have a look at the commentary.

Phoronix stuck to their reputation by announcing it a day early but redeemed themselves with a follow-up article, KDE neon: The Rock & Roll Distribution. “KDE neon feels amazing. There’s simply no other way to say it.”

CIO had an exclusive interview with moi, “It is a continuously updated installable image that can be used not just for exploration and testing but as the main operating system for people enthusiastic about the latest desktop software.”

For Spanish speakers, MuyLinux wrote KDE Neon lanza su primera versión para usuarios (“KDE Neon launches its first version for users”). “La primera impresión ha sido buena.” or “The first impression was good”.

On YouTube we got a review from Jeff Linux Turner. “This thing’s actually pretty good.  I like it.” While Wooden User gives an unvoiced tour with funky music.  Riba Linux has the same but with more of an indy soundtrack.

Reddit had several threads on it including a review by luxitanium which I’ll selectively quote with “Is it ready for consumers? It is definitely getting there, oh yes“.

The award winning Spanish language KDE Blog covered Probando KDE Neon User Edition 5.6. “Estamos ante un gran avance para la Comunidad KDE” or “We are facing a breakthrough for the KDE Community“.

Meanwhile on Twitter:

@KdeNeon @kdecommunity The release model you have chosen is working well !

— Morgan Cox (@morgancox_uk) June 15, 2016

@KdeNeon @kdecommunity Perfect timing, I just received my new @system76 Lemur laptop yesterday. I will install KDE Neon on it tonight!

— Jean-François Juneau (@jfjuneau) June 9, 2016

Want to meet the genius behind the neon light? Harald is giving a talk at the opensuse conference on Thursday. Do drop by in Nürnberg.

Categories: FLOSS Project Planets

Ian Wienand: Zuul and Ansible in OpenStack CI

Planet Debian - Tue, 2016-06-21 18:16

In a prior post, I gave an overview of the OpenStack CI system and how jobs were started. In that I said

(It is a gross oversimplification, but for the purposes of OpenStack CI, Jenkins is pretty much used as a glorified ssh/scp wrapper. Zuul Version 3, under development, is working to remove the need for Jenkins to be involved at all).

Well, some recent security issues with Jenkins and other changes have led to a roll-out of what is being called Zuul 2.5, which has indeed removed Jenkins and makes extensive use of Ansible as the basis for running CI tests in OpenStack. Since I already had the diagram, it seems worth updating it for the new reality.

OpenStack CI Overview

While the previous post was really focused on the image-building components of the OpenStack CI system, the overview is the same, but here we focus more on the launchers that run the tests.

  1. The process starts when a developer uploads their code to gerrit via the git-review tool. There is no further action required on their behalf and the developer simply waits for results of their jobs.

  2. Gerrit provides a JSON-encoded "fire-hose" output of everything happening to it. New reviews, votes, updates and more all get sent out over this pipe. Zuul is the overall scheduler that subscribes itself to this information and is responsible for managing the CI jobs appropriate for each change.

  3. Zuul has a configuration that tells it what jobs to run for what projects. Zuul can do lots of interesting things, but for the purposes of this discussion we just consider that it puts the jobs it wants run into gearman for a launcher to consume. gearman is a job-server; as they explain it "[gearman] provides a generic application framework to farm out work to other machines or processes that are better suited to do the work". Zuul puts into gearman basically a tuple (job-name, node-type) for each job it wants run, specifying the unique job name to run and what type of node it should be run on. (A generic command-line illustration of the gearman queue appears after this list.)

  4. A group of Zuul launchers are subscribed to gearman as workers. It is these Zuul launchers that will consume the job requests from the queue and actually get the tests running. However, a launcher needs two things to be able to run a job — a job definition (what to actually do) and a worker node (somewhere to do it).

    The first part — what to do — is provided by job-definitions stored in external YAML files. The Zuul launcher knows how to process these files (with some help from Jenkins Job Builder, which despite the name is not outputting XML files for Jenkins to consume, but is being used to help parse templates and macros within the generically defined job definitions). Each Zuul launcher gets these definitions pushed to it constantly by Puppet, thus each launcher knows about all the jobs it can run automatically. Of course Zuul also knows about these same job definitions; this is the job-name part of the tuple we said it put into gearman.

    The second part — somewhere to run the test — takes some more explaining. To the next point...

  5. Several cloud companies donate capacity in their clouds for OpenStack to run CI tests. Overall, this capacity is managed by a customized management tool called nodepool (you can see the details of this capacity at any given time by checking the nodepool configuration). Nodepool watches the gearman queue and sees what requests are coming out of Zuul. It looks at the node-type of jobs in the queue (i.e. what platform the job has requested to run on) and decides what types of nodes need to start and which cloud providers have capacity to satisfy demand.

    Nodepool will start fresh virtual machines (from images built daily as described in the prior post), monitor their start-up and, when they're ready, put a new "assignment job" back into gearman with the details of the fresh node. One of the active Zuul launchers will pick up this assignment job and register the new node to itself.

  6. At this point, the Zuul launcher has what it needs to actually get jobs started. With a fresh node registered to it and waiting for something to do, the Zuul launcher can advertise its ability to consume one of the waiting jobs from the gearman queue. For example, if a ubuntu-trusty node is provided to the Zuul launcher, the launcher can now consume from gearman any job it knows about that is intended to run on an ubuntu-trusty node type. If you're looking at the launcher code this is driven by the NodeWorker class — you can see this being created in response to an assignment via LaunchServer.assignNode.

    To actually run the job — where the "job hits the metal" as it were — the Zuul launcher will dynamically construct an Ansible playbook to run. This playbook is a concatenation of common setup and teardown operations along with the actual test scripts the jobs wants to run. Using Ansible to run the job means all the flexibility an orchestration tool provides is now available to the launcher. For example, there is a custom console streamer library that allows us to live-stream the console output for the job over a plain TCP connection, and there is the possibility to use projects like ARA for visualisation of CI runs. In the future, Ansible will allow for better coordination when running multiple-node testing jobs — after all, this is what orchestration tools such as Ansible are made for! While the Ansible run can be fairly heavyweight (especially when you're talking about launching thousands of jobs an hour), the system scales horizontally with more launchers able to consume more work easily.

    When checking your job results on logs.openstack.org you will see a _zuul_ansible directory now which contains copies of the inventory, playbooks and other related files that the launcher used to do the test run.

  7. Eventually, the test will finish. The Zuul launcher will put the result back into gearman, which Zuul will consume (log copying is interesting but a topic for another day). The testing node will be released back to nodepool, which destroys it and starts all over again — nodes are not reused and also have no sensitive details on them, as they are essentially publicly accessible. Zuul will wait for the results of all jobs for the change and post the result back to Gerrit; it either gives a positive vote or the dreaded negative vote if required jobs failed (it also handles merges to git, but that is also a topic for another day).
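
As a generic illustration of the gearman queue mentioned in step 3, using the gearman command-line tools (the function name and payload here are made up; Zuul's real gearman function names differ):

gearman -w -f run-job -- cat                                      # terminal 1: a worker claims jobs for "run-job"
gearman -f run-job 'job-name=gate-foo node-type=ubuntu-trusty'    # terminal 2: a client submits a job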

Work will continue within OpenStack Infrastructure to further enhance Zuul; including better support for multi-node jobs and "in-project" job definitions (similar to the https://travis-ci.org/ model); for full details see the spec.

Categories: FLOSS Project Planets

Acquia Developer Center Blog: Drupal 8 Module (Distro!) of the Week: Lightning

Planet Drupal - Tue, 2016-06-21 17:29

Each day, new functionality is being created for and built with Drupal 8. At the same time, more and more Drupal 7 modules are also being migrated to the Drupal community’s latest major release. In this series, the Acquia Developer Center is profiling some of the most prominent, useful modules, projects, and tools available for Drupal 8. This week: the Drupal 8 Lightning distribution.

Tags: acquia drupal planet, lightning, distro, distribution, authoring
Categories: FLOSS Project Planets

Wayland in Plasma 5.7

Planet KDE - Tue, 2016-06-21 16:15

Last week we released the beta version of Plasma 5.7, which means we know what this release will have for better Wayland support. First of all I need to mention what didn’t make it: unfortunately I missed the freeze of Frameworks 5.23 to land support for xdg-shell. I have a working implementation, but was not yet satisfied with the API. This is a difficult interface to provide an API for, due to its unstable nature. Due to the lack of xdg-shell support, GTK applications are still going to use X11 on Wayland (like the Firefox window I’m typing this blog post in).

In the past I already blogged about a few new features in 5.7 like the improved task manager for Wayland, the virtual keyboard support, sub-surface support and improved input device support. So in this blog post I want to focus on a different topic: quality.

For Plasma 5.7 my aim was to get the Plasma session into a state where I can use it as my primary system. And since last week I have not started an X11 session any more. This means that we needed to get the whole system stable enough that neither KWin nor applications crash due to Wayland. Given that our Wayland code is quite a fair amount of new code changing lots of assumptions, there are of course bugs to be expected. We still have code which calls into X11 unconditionally, we have things which are not implemented correctly, and of course we make stupid mistakes. So for Plasma 5.7 the task was to find these issues and fix them one by one.

For Wayland it’s much easier for us to test. KWayland – our framework for Wayland support – is developed in a test driven approach making it possible to create test cases for every problem. They expose the problems and verify that they are fixed and as regression tests ensure that they won’t hit us again. Over the last release cycle we added several thousand lines of test code in KWayland alone.

Finding those issues is not always easy. If KWin crashes we don’t have DrKonqi as we normally would; it doesn’t work for Wayland (it tries to connect to a display server, but that just crashed). What I saw on my Wayland test device was that KWin sometimes randomly crashed – more often when I interacted with X11 windows. But when attaching gdb to KWin it didn’t crash. Once I finally caught it, it turned out to be an error in KWin’s handling of Xwayland windows. There are two possible code paths it can take, and one of them had a mistake. When running through a debugger it was more likely to take the correct one. So yeah, it’s not always easy.

With that problem gone we were able to find a few more, and fix some bugs which caused windows to quit. Unfortunately some of these fixes had to go into KWayland after the 5.23 release. This means the frameworks version used with Plasma 5.7 is not going to have all the fixes. If you want to give Plasma/Wayland a try, I recommend waiting not just for Plasma 5.7 but also for Frameworks 5.24.

This week I will be at the openSUSE conference in Nuremberg, where I will also give a talk about how Wayland helps us to improve our quality and our workflows. I’ll do another blog post about the content of that presentation – I don’t want to spoil it! Though if you follow our development you are already aware of it. Thanks to openSUSE for giving me the possibility to present at the conference and thanks to KDE e.V. for the support to go there.

I can hear you asking now the question of all questions: “When will it be ready?” I think that I am not objective enough to answer the question or to say that it is ready. I’m too close to the code and might just omit important problems because I don’t see them. Thus I cannot say that it is ready. It depends on your workflow and whether that workflow is already fully implemented. This is something only you can know.

Last week KDE had a very important developer sprint (in which I did not participate) and is currently running a fundraising campaign for this sprint. We need the money to send our developers to such meetings or to a conference like openSUSE conf, where I will be this week. At the moment just 107 people have participated and donated. This is something which makes me sad. I see the statistics for my blog posts and know that this one will have at least 1000 direct hits. In addition there are people reading planetkde and not directly my blog. We are trying to raise 24000 EUR; please do the math yourself to see how close we would be to it if everybody donated just 10 EUR.

Categories: FLOSS Project Planets

FSF Blogs: Building a better LibrePlanet: What we learned from the conference surveys

GNU Planet! - Tue, 2016-06-21 15:33


This work by Kori Feener is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://u.fsf.org/1u8.

Our samples are usually about sixty to seventy respondents, and self-selecting -- from their responses, we can say with confidence that LibrePlanet attendees feel we're doing a decent job organizing the conference. The questions "How much did you enjoy the sessions you attended, compared to those at other conferences you have attended?" and "How likely is it that you will return to LibrePlanet next year?" received an average of about 3.5 out of 4 each of the last three years.

Here are some more takeaways:

  • Many LibrePlaneters want a quieter space where they can socialize and hack after the conference programming ends for the day. We organize parties after sessions wrap up, and will continue to do so, but parties aren't everyone's cup of tea. We're exploring ways to provide a comfortable, after-hours space in future years.

  • Our community tends to prefer that the sessions start at 9:45, not earlier. 9:45 is a little on the late side for a conference, but many attendees like to stay up late hacking, and need their sleep.

  • We have many newcomers each year at LibrePlanet -- about half of survey respondents say they are attending for the first time. Newcomers seem to have a good experience: about two-thirds of respondents answer yes to "If you are a beginner or intermediate hacker, did you find enough activities to match your skill level?" This is encouraging, since one of the conference's core purposes is to provide a welcoming stepping-stone to our community.

  • It's important to have more coffee than we think we need. In years when we run out early, we hear about it on the survey.

Designing a good survey takes practice and, yes, people to give feedback on the survey's design. Though we keep a few core questions the same year-to-year, we adjust each survey based on things we've learned, like:

  • Don't use two-part or "double-barreled" questions. For example, one year, we tried to refine the question "If you are a beginner or intermediate hacker, did you find enough activities to match your skill level?" by rephrasing it as "Are you a newcomer to the free software community? If so, did you find enough sessions that were accessible and engaging for you?" Some people responded to the first part of that compound question, and some responded to the second part, making the results not very useful.

  • The survey has a subtler, secondary purpose: It shows attendees that we care about the LibrePlanet experience, and gives them a space to reflect on it. Easy access to the survey -- online and on paper -- helps achieve this.

It's important to us to make LibrePlanet better every year, and we appreciate your help. The community doesn't just provide feedback -- we do the planning, but you supply presentations, hallway conversations, and new projects to discuss each year.

We are working to finalize the dates for LibrePlanet 2017. Join the LibrePlanet announcements list on the conference site to receive updates. In the meantime, you can:

We hope to see you at LibrePlanet 2017!

Categories: FLOSS Project Planets
