Planet KDE

Planet KDE - http://planetKDE.org/

Kubuntu: Statement from a not so important Kubuntu Developer.

Wed, 2015-05-27 21:56

I support Jonathan Riddell

First, I hate drama, no I loathe drama.
I have refrained from much more than the occasional social share up to this point.
I do however, stand by our fearless leader (pun intended, Jonathan has never claimed to be the leader).

As I sit here packaging what has to be my millionth package, I wonder:
why do I work so hard, for free, in what has become such a hostile environment?
For the following reasons:
Jonathan: who has taught me so much and removed the barrier to entry for me (it took me well over a decade to get through that barrier), not to mention he has a heart of gold. I am having a hard time believing the accusations. I do, however, know his frustrations, as he was trying to get the information for the people affected by it.
Kubuntu team: Every single one of them I consider family. Great teachers and great friends.
Kubuntu community: Our wonderful community of users. Time to test! Extremely great bunch.

It truly saddens me to see all this FUD being thrown around, by folks that up till recently I had great respect for.
Couple things that do not sit well with me at all.
1) Absolutely zero communication to the Kubuntu Council about the “issues” with Jonathan prior to the shocking “request”.
2) The Kubuntu Council asked (repeatedly) for one thing: proof. This still has not been provided.
So what was supposed to happen here? Evidently we were meant to bow down, walk away, and happily keep working in silence.
This is NOT the open source / FLOSS way. At least not to my understanding. Perhaps I have misunderstood the meaning all these years.

The result of all of this… My motivation to dedicate every waking hour to my passion, open source software, is depleting rather quickly. At least in the corporate environment there is a paycheck at the end of the week.

I will stick by Jonathan and the rest of the team until the bitter end, but not at the capacity that I was. So with that said:
I will support our current releases with bugfix KDE releases. I have packaged 15.04.1, which is in testing, and Plasma 5.3.1 is in the works.

And yes, I will work on 4.14.3 for trusty, but it will take time as it has to be done by hand.

I also want to note that the super awesome folks at KDE are not affected by my recent woes; I will continue my Continuous Integration support!

Cheers,
Scarlett

Categories: FLOSS Project Planets

Challenges and opportunities

Wed, 2015-05-27 16:45

Challenges are a normal part of life; and seeing opportunities is a skill all of us can get better at. This past week, though, has been something new.

The Ubuntu community and philosophy have been home to me. The Ubuntu Code of Conduct is not just about individual conduct, but how we make a community. In fact, the first sentence is "Ubuntu is about showing humanity to one another: the word itself captures the spirit of being human."[1] This is my kind of place, where we not only have high ideals, but live those out in our practice. And so it has been for many years.

So it was a complete shock to get a secret email from the Community Council to me as a Kubuntu Council member announcing that Jonathan Riddell had been asked to step down from Kubuntu leadership. We (the KC) recently met with the CC, and there was no discussion of any issues they had with Jon. They never wrote to us asking for feedback or discussion.

Jonathan's questions to the CC about a legal issue and that of funds donated to the flavors were not personal, but done on behalf of the Ubuntu community, and on behalf of us, the Kubuntu Council and the Kubuntu community as a whole. We are still concerned about both these issues, but that pales in comparison to the serious breach in governance we've experienced this past week.

The Code of Conduct states: We expect participants in the project to resolve disagreements constructively. When they cannot, we escalate the matter to structures with designated leaders to arbitrate and provide clarity and direction.

The CC did not follow this basic procedure. The Community Council is full of great people; a couple of them are personal friends. The CC was established after the Kubuntu Council, and while the KC consists of members nominated and elected by the Kubuntu Members, the CC candidates are selected by Mark Shuttleworth, and then elected by the Ubuntu Members.[2] All Kubuntu Members are also Ubuntu Members. I first stated that the CC is unelected, which is incorrect.[3] I regret the error.

The fact remains that the CC did not follow the Code of Conduct in their procedure.

We have had a number of emails back and forth during the week.[4] What has stood out to me is the contrast between their approach, and our own. They have focussed on their feelings (feelings about working with Jon), whereas we continue to point out facts and ask them to follow the Code of Conduct. Naturally, we all experienced emotions about the situation, but emotion is not a basis for decision-making.

Of course, the members of the CC may perceive the situation entirely differently.

I wish I knew how this conflict will work out long-term. The Council supports Jonathan, and continues to ask for resolution of the issues he has raised with the CC on the community list. We did so formally yesterday.

Jon is the person who brought KDE to Ubuntu, and Ubuntu to KDE, and has always functioned as a bridge between the two projects and the two communities. He will continue to do this as long as he is able, and we rely on his faithfulness for the success of Kubuntu. He is the magnet who draws new developers to us, and his loss would spell the end of Kubuntu-the-project.

The CC did not follow the basic procedure and bring the issue they had with Jon to us, the Kubuntu Council. We await their return to this principle as we work to find a way forward. We are determined to find a way to make this work.

1. http://www.ubuntu.com/about/about-ubuntu/conduct
2. http://www.kubuntu.org/kubuntu-council
3. https://wiki.ubuntu.com/CommunityCouncil/Restaffing
4. https://skitterman.wordpress.com/2015/05/26/information-exchange-between-the-ubuntu-community-council-and-the-kubuntu-council/

Categories: FLOSS Project Planets

Qt on Android Episode 7

Wed, 2015-05-27 13:05

In the last two Qt on Android episodes we learned how to use basic JNI on Android and how to use an external IDE to easily manage the Java part. In this episode, it is time to move forward and focus on extending our Qt on Android Java part and also how to interact with it using JNI in a “safe way”.

In this part we are going to implement an SD-Card listener. This is quite a useful example for applications that use SD-Cards to store their data, because if the application doesn’t close all its open files immediately when it gets the notification, it will be killed by the Android OS.

As we’ve seen in Episode 5, it’s quite easy to call a Java method from C/C++ and a C/C++ function from Java, but this doesn’t work in all cases. Why not?

To understand why not, we need first to understand the Qt on Android architecture.

Architecture diagram:

A few words about the architecture diagram.

  • the left blue rectangle represents the Android UI thread
  • the right green rectangle represents the main Qt thread (where the main QEventLoop is running). Read Episode 1 if you want to learn more about Android UI & Qt threads.
  • the top (black) rectangle is the Java part of your application. As you can see the biggest part of it runs on the Android UI thread. The only case when the Java part runs on the Qt thread is when we call it from C/C++ from Qt thread (as most of the JNI calls will come from there).
  • the bottom (black) rectangle is the C/C++ (Qt) part of your application. As you can see the biggest part of it runs on the Qt thread. The only case when the C/C++ part runs on the Android UI thread is when it’s called from the Java part from Android UI (as most of the Java callbacks will be from there).

Ok… so what’s the problem? Well, the problem is that there are SOME Android APIs that MUST be called from the Android UI thread, and when we call a Java method from C/C++ we do it from the Qt thread. This means we need a way to run that code on the Android UI thread, not on the Qt thread. To make such a call, from the C/C++ Qt thread to the Java Android UI thread, we need to do 3 steps:

  1. call a Java method from the C/C++ Qt thread. The Java method will be executed in the Qt thread, so we need a way to access the Android APIs in the Android UI thread.
  2. our Java method uses Activity.runOnUiThread to post a runnable on Android UI thread. This runnable will be executed by the Android event loop on Android UI thread.
  3. the runnable accesses the Android APIs from Android UI thread.

The same problem occurs when Java calls a C/C++ function, because Java will call our C/C++ functions from the Android UI thread and we need a way to pass that notification to the Qt thread. Again there are 3 steps involved:

  1. call a C/C++ function from Android UI thread.
  2. use QMetaObject::invokeMethod to post a method call on Qt event loop.
  3. Qt event loop will execute that function on Qt thread.
Extending the Java part:

Before you start, make sure you read Episode 6 one more time, because you’ll need it to easily manage the Java files. The first step is to create a custom Activity by extending QtActivity and defining a method which will post our Runnable.

// src/com/kdab/training/MyActivity.java
package com.kdab.training;

import org.qtproject.qt5.android.bindings.QtActivity;

public class MyActivity extends QtActivity
{
    // this method is called by C++ to register the BroadcastReceiver.
    public void registerBroadcastReceiver()
    {
        // Qt is running on a different thread than Android.
        // In order to register the receiver we need to execute it in the Android UI thread
        runOnUiThread(new RegisterReceiverRunnable(this));
    }
}

The next step is to change the default activity in AndroidManifest.xml, from:

<activity ... android:name="org.qtproject.qt5.android.bindings.QtActivity" ... >

to:

<activity ... android:name="com.kdab.training.MyActivity" ... >

We need to do this to make sure that our custom Activity will be instantiated when the application starts.

The next step is to define our RegisterReceiverRunnable class. The run method of this class will be called on the Android UI thread; in it we register our SDCardReceiver listener.

// src/com/kdab/training/RegisterReceiverRunnable.java
package com.kdab.training;

import android.app.Activity;
import android.content.Intent;
import android.content.IntentFilter;

public class RegisterReceiverRunnable implements Runnable
{
    private Activity m_activity;

    public RegisterReceiverRunnable(Activity activity)
    {
        m_activity = activity;
    }

    // this method is called on Android Ui Thread
    @Override
    public void run()
    {
        IntentFilter filter = new IntentFilter();
        filter.addAction(Intent.ACTION_MEDIA_MOUNTED);
        filter.addAction(Intent.ACTION_MEDIA_UNMOUNTED);
        filter.addDataScheme("file");

        // this method must be called on Android Ui Thread
        m_activity.registerReceiver(new SDCardReceiver(), filter);
    }
}

Let’s check what SDCardReceiver class looks like:

// src/com/kdab/training/SDCardReceiver.java
package com.kdab.training;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

public class SDCardReceiver extends BroadcastReceiver
{
    @Override
    public void onReceive(Context context, Intent intent)
    {
        // call the native method when it receives a new notification
        if (intent.getAction().equals(Intent.ACTION_MEDIA_MOUNTED))
            NativeFunctions.onReceiveNativeMounted();
        else if (intent.getAction().equals(Intent.ACTION_MEDIA_UNMOUNTED))
            NativeFunctions.onReceiveNativeUnmounted();
    }
}

SDCardReceiver overrides the onReceive method and uses the declared native functions to send the notification to C/C++.

The last step is to declare the native functions that we used in SDCardReceiver:

// src/com/kdab/training/NativeFunctions.java
package com.kdab.training;

public class NativeFunctions
{
    // define the native functions
    // these functions are called by the BroadcastReceiver object
    // when it receives a new notification
    public static native void onReceiveNativeMounted();
    public static native void onReceiveNativeUnmounted();
}

Architecture diagram Java:

Let’s see the summary of the Java part calls on our architecture diagram:

Extending C/C++ part:

Now let’s see how we extend the C/C++ part. To illustrate how to do it, I’m using a simple widget application.

The first thing we need to do is call the registerBroadcastReceiver method.

// main.cpp
#include "mainwindow.h"
#include <QApplication>
#include <QtAndroid>

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    // call registerBroadcastReceiver to register the broadcast receiver
    QtAndroid::androidActivity().callMethod<void>("registerBroadcastReceiver", "()V");

    MainWindow::instance().show();
    return a.exec();
}

 

// native.cpp
#include <jni.h>
#include <QMetaObject>
#include "mainwindow.h"

// define our native static functions
// these are the functions that the Java part will call directly from the Android UI thread
static void onReceiveNativeMounted(JNIEnv * /*env*/, jobject /*obj*/)
{
    // call MainWindow::onReceiveMounted from Qt thread
    QMetaObject::invokeMethod(&MainWindow::instance(), "onReceiveMounted",
                              Qt::QueuedConnection);
}

static void onReceiveNativeUnmounted(JNIEnv * /*env*/, jobject /*obj*/)
{
    // call MainWindow::onReceiveUnmounted from Qt thread, we wait until the called function finishes
    // in this function the application should close all its opened files, otherwise it will be killed
    QMetaObject::invokeMethod(&MainWindow::instance(), "onReceiveUnmounted",
                              Qt::BlockingQueuedConnection);
}

// create a vector with all our JNINativeMethod(s)
static JNINativeMethod methods[] = {
    {"onReceiveNativeMounted", "()V", (void *)onReceiveNativeMounted},
    {"onReceiveNativeUnmounted", "()V", (void *)onReceiveNativeUnmounted},
};

// this method is called automatically by Java after the .so file is loaded
JNIEXPORT jint JNI_OnLoad(JavaVM* vm, void* /*reserved*/)
{
    JNIEnv* env;
    // get the JNIEnv pointer.
    if (vm->GetEnv(reinterpret_cast<void**>(&env), JNI_VERSION_1_6) != JNI_OK)
        return JNI_ERR;

    // search for the Java class which declares the native methods
    jclass javaClass = env->FindClass("com/kdab/training/NativeFunctions");
    if (!javaClass)
        return JNI_ERR;

    // register our native methods
    if (env->RegisterNatives(javaClass, methods,
                             sizeof(methods) / sizeof(methods[0])) < 0) {
        return JNI_ERR;
    }
    return JNI_VERSION_1_6;
}

In native.cpp we register the native functions. From our static native functions we use QMetaObject::invokeMethod to post the slot calls to the Qt thread.

 

// mainwindow.h
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    static MainWindow &instance(QWidget *parent = 0);

public slots:
    void onReceiveMounted();
    void onReceiveUnmounted();

private:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();

private:
    Ui::MainWindow *ui;
};

#endif // MAINWINDOW_H

// mainwindow.cpp
#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
}

MainWindow::~MainWindow()
{
    delete ui;
}

MainWindow &MainWindow::instance(QWidget *parent)
{
    static MainWindow mainWindow(parent);
    return mainWindow;
}

// Step 6
// Callback in Qt thread
void MainWindow::onReceiveMounted()
{
    ui->plainTextEdit->appendPlainText(QLatin1String("MEDIA_MOUNTED"));
}

void MainWindow::onReceiveUnmounted()
{
    ui->plainTextEdit->appendPlainText(QLatin1String("MEDIA_UNMOUNTED"));
}

The MainWindow class is used just to add some text to our plainText control when it gets a notification. Calling these functions from the Android thread might be very harmful to our application's health – it might lead to crashes or unexpected behavior – so they MUST be called from the Qt thread.

Architecture diagram C/C++:

This is the summary of C/C++ calls on our architecture diagram:

Architecture diagram Java & C/C++:

This is the summary of all the calls that we’ve done in C/C++ and in Java.

Here you can download the example source code.

Thank you for your time!

The post Qt on Android Episode 7 appeared first on KDAB.

Categories: FLOSS Project Planets

Google Summer of Code 2015 with KDE

Wed, 2015-05-27 10:05
Hello,
I will be using this blog primarily as a means to communicate the progress of my project, "Porting Amarok to Qt5/KF5", with the KDE organization under the GSoC 2015 program. My mentors for the project are Mark Kretschmann and Myriam Schweingruber.
Amarok has a huge codebase and I believe there will be a lot of commits involved. I will try to be as verbose as possible on the changes made to the codebase and I plan on posting frequent updates here through the summer.

I look forward to a very productive summer along with the open source community.

Cheers
Categories: FLOSS Project Planets

transactional b-trees and what-not

Wed, 2015-05-27 09:28

Over the last few months I've been reading more than the usual number of papers on a selection of software development topics that are of recent interest to me. The topics have been fairly far flung as there are a few projects I have been poking at in my free time.

By way of example, I took a couple of weeks reading about transitory trust algorithms that are resistant to manipulation. This is a pretty interesting problem with some rather elegant (partial) solutions which are actually implementable at the individual agent level, though computationally impractical if you wish to simulate a whole network – which, thankfully, was not what I was interested in. (So reasonable for implementing real-world systems with, though not for simulations or finding definitive solutions to specific problems.)

This past week I've been reading up on a variety of B-tree algorithms. These have been around since the early 1970s and are extremely common in all sorts of software, so one might expect that after 40+ years of continuous use of such a simple concept that there'd be very little to talk about, but it's quite a vast territory. In fact, each year for the last two decades Donald Knuth has held a public lecture around Christmas-time about trees. (Yes, they are Christmas Tree Lectures. ;) Some of the papers I've been reading were published in just the last few years, with quite a bit of interesting research having gone on in this area over the last decade.

The motivation for reading up on the topic is I've been looking for a tree that is well suited to storing the sorts of indexes that Akonadi Next is calling for. They need to be representable in a form that multiple processes can access simultaneously without problems with multiple readers and (at least) one writer; they also need to be able to support transactions, and in particular read transactions so that once a query is started the data being queried will remain consistent at least until the query is complete even if an update is happening concurrently. Preferably without blocking, or at least as little blocking as possible. Bonus points for being able to roll-back transactions and keeping representations of multiple historic versions of the data in certain cases.
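To make the read-transaction requirement a bit more concrete, here is a minimal copy-on-write sketch of my own (an illustration only, not the algorithm from any of the papers mentioned below): readers grab an immutable snapshot and keep reading it while a writer publishes a new version. A real multiversion B+-tree only copies the nodes along the modified path rather than the whole structure, but the consistency guarantee for readers is the same idea.

// Illustrative only: readers hold a shared_ptr to an immutable snapshot,
// writers copy, modify and atomically publish a new snapshot.
#include <map>
#include <memory>
#include <string>

struct Snapshot {
    std::map<std::string, std::string> entries; // immutable once published
};

class Store {
    std::shared_ptr<const Snapshot> current_ = std::make_shared<Snapshot>();
public:
    // "Read transaction": the snapshot stays consistent for as long as the
    // caller holds it, even if writers commit in the meantime.
    std::shared_ptr<const Snapshot> beginRead() const {
        return std::atomic_load(&current_);
    }
    // "Write transaction": copy-on-write, never blocks readers.
    void put(const std::string &key, const std::string &value) {
        auto next = std::make_shared<Snapshot>(*std::atomic_load(&current_));
        next->entries[key] = value;
        std::atomic_store(&current_, std::shared_ptr<const Snapshot>(next));
    }
};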

In the few dozen papers I downloaded onto the tablet for evening reading, I came across Transactions on the Multiversion B+-Tree which looks like it should do the trick nicely and is also (thankfully) nice and elegant. Worth a read if you're into such things.

As those who have been following Akonadi Next development know, we are using LMDB for storage and it does a very nice job of that but, unfortunately, does not provide "secondary" indexes on data, which Akonadi Next needs. Of course one can "fake" this by inserting the values to be indexed (say, the dates associated with an email or calendar event) as keys with the value being the key of the actual entry, but this is not particularly beautiful for various reasons, including:

  • this requires manually cleaning up all indexes rather than having a way to efficiently note that a given indexed key/value pair has been removed and have the indexes cleaned up for you
  • some data sets have a rather low cardinality which would be better represented with approaches such as bitmap indexes that point to buckets (themselves perhaps trees) of matching values
  • being able to index multiple boolean flags simultaneously (and efficiently) is desirable for our use cases (think: "unread mails with attachments")
  • date range queries of the sort common in calendars (e.g. "show this month", "show this week") could also benefit from specialized indexes

I could go on. It's true that these are the sorts of features that your typical SQL database server provides "for free", but in our case it ends up being anything but "free" due to overhead and constraints on design due to schema enforcement. So I have been looking at what we might be able to use to augment LMDB with the desired features, and so the hunt for a nice B+-tree design was on. :) I have no idea what this will all lead to, if anything at all even, as it is purely an evening research project for me at the moment.
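For what it's worth, the "faked" secondary index described above looks roughly like this with the plain LMDB C API. This is a hedged sketch of my own, not Akonadi Next code; the database names, function name and date format are made up, and the environment is assumed to have been opened with mdb_env_set_maxdbs() allowing at least two named databases.

#include <lmdb.h>
#include <string.h>

/* Sketch: store an entity and a (date -> entity key) index entry in one
 * transaction. Removing the entity later means remembering to delete the
 * index entry by hand -- exactly the maintenance burden mentioned above. */
static int store_with_date_index(MDB_env *env, const char *key,
                                 const char *value, const char *date)
{
    MDB_txn *txn;
    MDB_dbi entities, byDate;
    MDB_val k, v;
    int rc;

    if ((rc = mdb_txn_begin(env, NULL, 0, &txn)) != 0)
        return rc;
    mdb_dbi_open(txn, "entities", MDB_CREATE, &entities);
    mdb_dbi_open(txn, "index.date", MDB_CREATE | MDB_DUPSORT, &byDate);

    /* primary record: key -> value */
    k.mv_data = (void *)key;   k.mv_size = strlen(key);
    v.mv_data = (void *)value; v.mv_size = strlen(value);
    if ((rc = mdb_put(txn, entities, &k, &v, 0)) != 0)
        goto fail;

    /* "secondary index": date -> key of the primary record */
    k.mv_data = (void *)date;  k.mv_size = strlen(date);
    v.mv_data = (void *)key;   v.mv_size = strlen(key);
    if ((rc = mdb_put(txn, byDate, &k, &v, 0)) != 0)
        goto fail;

    return mdb_txn_commit(txn);
fail:
    mdb_txn_abort(txn);
    return rc;
}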

The application-facing query system itself in Akonadi Next is slowly making its way to something nice, but that's another topic for another day.

Categories: FLOSS Project Planets

Making Sense of the Kubuntu/Canonical Leadership Spat

Wed, 2015-05-27 00:39

By now the news has spread quite quickly; the Ubuntu Community Council (or “CC” for short) had attempted to boot Jonathan Riddell as a community leader, asking him to “take an extended break” from the Kubuntu Council (“KC” for short) citing personality conflicts and breaches of the Ubuntu code of conduct.

So, what just happened? On the various news sites and through some broken telephones there’s several misconceptions about what happened. Being an outsider the whole issue is rather complicated, I know nothing of the structure around Canonical, Ubuntu, and these councils and how all this relates to Kubuntu.

This isn’t going to be a post about the he-said-she-said arguments, but is more of an outsiders explanation into how all this fits together and what it really means.

I’d like to mention I’ve received corrections in the comments, and would like to give a thank-you to the commenters for their feedback.

What is the Community Council? How does it work?

The Community Council is the highest governing body representing the Ubuntu umbrella of projects, including its derivatives. The group is open to anyone, but as of my research only one of the seven members of the council is not a Canonical employee. Of those seven members, one is Mark Shuttleworth, who has tie-breaking votes. Update: It appears that the information I found was outdated; currently 3 of 7 members of the council are Canonical employees, including Mark Shuttleworth. Please refer to the comments section for more details.

The group manages infrastructure and communication for Canonical to allocate its resources for Ubuntu and derivatives. The group manages non-technical communication and governance of the Ubuntu project and derivatives. An important part of this event is the mandate that the council operates transparently to the wider community, the idea being that they would also serve as a bridge between the commercial arm of Canonical and the open-source community at large.

What is the Kubuntu Council?

Just like a larger governing body, the Community Council has delegated sub-councils to represent larger projects within the community. The Kubuntu Council is one such branch managing the KDE-oriented Kubuntu project.

Unlike the Community Council, the Kubuntu Council is composed of members elected by the community; the CC members are delegated by Mark Shuttleworth and later approved by the Ubuntu members.

When the system works the idea is that the Kubuntu Council will take care of project-level matters independently, and the Kubuntu Council lead will attend meetings to trade information and matters upstream with the Community Council.

So… Does Canonical Own Kubuntu?

I will note here that Canonical is not one of the active parties in this dispute – this section is only meant to clarify misconceptions I’ve seen online, and to help explain the next sections.

Canonical owns the trademark for Kubuntu – so as a ‘brand’ they own Kubuntu. Beyond that Canonical does not directly fund Kubuntu, instead they offer infrastructure in the form of repositories and servers, where Kubuntu is allowed to piggyback off the Canonical/Ubuntu project network and work more closely with upstream resources.

But Canonical does not employ the Kubuntu staff; previously they did employ staff but Blue Systems stepped in when Canonical cut funding. Blue Systems has since become a much larger part of what drives Kubuntu than Canonical. Both of these together have made Kubuntu (as a project) much more than a solely Canonical venture.

In over-simplified terms Canonical owns the franchise and Blue Systems runs the hottest ‘non-headquarters’ location.

Who is Jonathan Riddell?

Jonathan is an ex-Canonical employee who was scooped up by Blue Systems after Canonical cut funding.

Part of Canonical cutting Kubuntu funding was terminating Jonathan as an employee of Canonical. He essentially retained his position in all community aspects of Ubuntu, just without the paycheque: he is a Kubuntu Council member, has access to the Canonical infrastructure, and helps manage the Kubuntu project.

Blue Systems picked him up and he is able to work full-time in an almost identical capacity to the one he had as a Canonical employee.

What was the Ruckus?

Mainly, there are conflicts between Riddell and members of the core Community Council. Riddell had repeatedly pushed several issues which the council was unable to resolve, leading to frustration on both sides. In the end both sides showed the stress they were under, at which point the Community Council privately decided they would oust Jonathan from the Kubuntu Council.

The KC replied arguing that the decision was not made transparently, questioned how much power the Community Council should have over the community-elected Kubuntu Council roster, and was incensed by the CC not retracting the decision before a transparent conversation. The Kubuntu Council didn’t want to negotiate “with a gun to [their] heads”.

Who Ultimately Gives the Orders?

The Kubuntu Council is bound by their constitution to obey “legitimate orders” from the Community Council; if the CC makes a decision in line with the Code of Conduct and its own constitution the Kubuntu Council must obey that request. But no provisions have been made for when the two groups disagree over a decision. The Community Council may be forced to cut off Jonathan or supporters from Ubuntu support infrastructure, such as Canonical repositories and funding, and the group has already stated that he is keeping his upload rights and ability to request funding. However given the hostilities, revoking those privileges might be a hardball solution, and one that the Kubuntu Council may not have control over.

The reason Kubuntu believes it can reject an authoritative attempt is threefold: it had never happened before, so there was no ‘precedent’; there was no warning for Jonathan to correct the ‘behavioural issues’; and, the largest reason, the Kubuntu Council does not feel the decision was legitimate.

The entire issue hinges on the legitimacy of the order; the Kubuntu Council only has to obey legitimate orders, and questions whether a decision made behind closed doors, when the mandate is transparency, can be considered legitimate.

In short: yes the Community Council can remove people from its sub-councils, but it might have terrible fallout if done improperly. They can’t really tell the Kubuntu crew what to do if Kubuntu doesn’t find the orders legitimate. But if push comes to shove it is possible for the Community Council and Canonical to revoke infrastructure access if a resolution cannot be found.

What Happens Now?

Right now the Community Council is exerting control over projects using their infrastructure much like a company would manage employees; if someone isn’t in line they can be moved, removed, or suspended without public debate.

The problem with this strategy is that communities don’t like being dictated to, and in attempting to do so the CC rubbed the community the wrong way. The Community Council literally gave an order and the Kubuntu Council said “no”. So what happens now?

By removing Jonathan from his position in the Kubuntu community, it also affects his value for Blue Systems. If he were removed, it brings into question what Blue Systems and the community would do in response; Riddell is a Blue Systems employee and carries significant community favour from KDE users.

The first thing that can happen is… Nothing. Birds will sing, grass will grow, and the KC will make the CC grit their teeth a bit. Maybe Jonathan will be removed after a more transparent meeting, maybe not. If the KC doesn’t remove Jonathan, then it may force Canonical into an awkward situation where it must back the council and start cutting off infrastructure.

Second, if this is resolved, Mark and the Community Council may revise their community strategy, put in safeguards for these situations, and possibly enforce a more formal structure over the ad-hoc sub-community model, preventing other projects from entering a similar situation in the future. This would need to apply to all communities, as singling out specific projects would simply inflame the situation.

Third, instead of a split the Kubuntu crew might attempt to separate their internal governance a bit; possibly designating a separate group to work with the Community Council while the main leadership remains as-is. Ubuntu can work with their partners effectively without disturbing the leadership, but this solution complicates communication and doesn’t fix several underlying issues.

The next thing that may happen could be the start of a more gradual separation; Kubuntu as a project may slowly take on more infrastructure, growing apart and leaving the nest – maybe with Canonical's blessing and the transfer of the Kubuntu trademark. Who knows.

Lastly, both sides could calmly file into a room before sizing up chairs to throw at each other, terrible words being said about people's mothers, before forking Kubuntu into ‘Librebuntu’. This would hurt, as the Kubuntu and KDE developers already have poor relations with Canonical, meaning a fork would likely lead to a mass exodus from Kubuntu to the new project (much like the LibreOffice fork). While the freedom of not having Canonical or the Community Council dictate policy would be refreshing, the loss of infrastructure would be a certain setback.

In the End… ?

In the end, I think we all simply hope that projects, companies, communities, and benevolent dictators can all work together in relative harmony. The situation isn’t ideal, but a major part of building strong communities is occasionally finding out something doesn’t work – and fixing it; hopefully to the benefit of everyone involved.

Right now both sides are holding strong in a ‘grey zone’ with their actions – the CC seems to be meting out harsh decisions without clear policy, and the KC is refusing to listen until the CC backpedals on its position.

That’s my breakdown of the politics; I hope it helped and provided insight into this whole messy affair. I hope it all gets sorted out in the long run. If I have anything wrong, please do let me know in the comments and I’ll make the relevant corrections.


Categories: FLOSS Project Planets

Google Summer of Code 2015 Kick-Off

Tue, 2015-05-26 17:36

This year I have been accepted for the second time to the Google Summer of Code programme. The community bonding period is over and it is time to start development. This year I’m doing a project for the LabPlot application that aims to integrate the VTK library for 3D data visualization.

I hope this summer will be rich and productive and that my contribution to the LabPlot project will be valuable for its users.

And thank you Google for the sticker. I have successfully glued it to my new laptop :)

Categories: FLOSS Project Planets

Reaffirmed on the Kubuntu Council

Tue, 2015-05-26 12:24

I’d like to thank all the Kubuntu members who just voted to re-affirm me on the Kubuntu Council.

Scott Kitterman’s blog post has the juicy details of the unprecedented and astonishing move by the Ubuntu Community Council asking me to step down as Kubuntu leader.  I’ve never claimed to be a leader and never used or been given any such title, so it’s a strange request without foundation, made without following the normal documented channels of consultation or Code of Conduct reference.

I hope and expect Kubuntu will continue and plan to keep working on the 15.10 release along with the rest of the community who I love dearly.

 

Categories: FLOSS Project Planets

Interview with Andrei Rudenko

Tue, 2015-05-26 08:09

Could you tell us something about yourself?

My name is Andrei Rudenko, I’m a freelance illustrator, graduated from the Academy of Fine Arts (as a painter) in Chisinau (Moldova). I have many hobbies, I like icon/UI design, photography, learned a few programming languages and make games in my spare time, and also have about 10 releases on musical labels as 2R. For now I’m trying to improve my skills in illustration and game development.

Do you paint professionally, as a hobby artist, or both?

Both, it is good when your hobby is your job.

What genre(s) do you work in?

I like surrealism, critical realism. I don’t care about genre much, I think the taste and culture in art is more important.

Whose work inspires you most — who are your role models as an artist?

I really like the Renaissance artists, Russian Wanderers, also Jacques-Louis David, Caravaggio, Anthony van Dyck, and Roberto Ferri.

When did you try digital painting for the first time?

I think it was about 2010, trying to paint in Photoshop, but I didn’t like drawing with it, and I left it until I found Krita.

What makes you choose digital over traditional painting?

Digital painting has its advantages, speed, tools, ctrl+z. For me it is a place for experiments, which I can then use in traditional painting.­

How did you find out about Krita?

When I became interested in Linux and open source, I found Krita; it had everything that I needed for digital painting. For me it is important to recreate that feeling of painting with traditional materials.

What was your first impression?

As soon as I discovered the powerful brush engine, I realized that this was what I had been looking for for a long time.

What do you love about Krita?

I like its tools, as I have already said the brush engine, the large variety of settings. I like the team who are developing Krita, very nice people. And of course it is free.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think better vector graphics tools, for designers. Also make some fixes for pixel art artists.

What sets Krita apart from the other tools that you use?

The possibility to customize it the way you like.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Monk for the Diablo 3 contest; there is a lot of work in it and a lot that still needs to be done. But Krita gave me everything I needed to make this art.

What techniques and brushes did you use in it?

Most of the time I use the color smudge brush (with dulling), like in traditional oil painting. For details a simple circle brush, for leaves I made a special brush with scattering. Almost all brushes I made myself, and also patterns for brushes I made from my texture photos.

Where can people see more of your work?

http://andreyrudenko.deviantart.com/
https://dribbble.com/Rudenko
https://twitter.com/AndreiRudenko

Anything else you’d like to share?

Thank you for inviting me to this interview. And thank you, Krita team, for Krita.

Categories: FLOSS Project Planets

Reducing relocations with Q_STRINGTABLE

Tue, 2015-05-26 04:52

Qt is a native library at the heart. As a native (C++) library, it already outperforms most higher-level language libraries when it comes to startup performance. But if you’re using native languages, you usually do so because you need to get the most out of the available hardware and being just fast may not be fast enough. So it should come as no surprise that we at KDAB are looking into how to speed things up even more.

A Look At Dynamic Linking

One source of startup delays in native applications is the dynamic linker. You can read all about how it works on Unix in Ulrich Drepper’s excellent article, How To Write Shared Libraries. For the purposes of this article, it is sufficient to understand that the final link step of a native application is performed at startup-time. In this step, the application, as well as the libraries it uses, are loaded into memory and adapted to the specific memory location they have been loaded to. Memory locations may differ from application to application because of security reasons (address space randomisation) or simply because one application loads more libraries than another, requiring a different memory layout, esp. on 32-bit platforms.

With this — very simplified — view of things in mind, let’s look at what “adapting to the specific memory location” actually involves.

Most of the library code is compiled in position-independent code, which means that jumps, as well as data references are always expressed as an offset to the current execution position (commonly called Program Counter – PC). That offset doesn’t change when the library is loaded at different memory locations, so code, for the most part, does not need to be adapted.

But a library does not only consist of code. It also contains data, and as soon as one piece of data points to another (say, a pointer variable which references a function), the content of that pointer suddenly becomes dependent on the actual position of the library in memory. Note that the trick used in code (offsetting from the PC) doesn’t work here.

So the linker is forced to go in and patch the pointer variable to hold the actual memory location. This process is called relocation. By performing relocations, the dynamic linker changes the data from how it is stored on disk, which has several drawbacks: First, the data (actually, the whole memory page – usually 4KiB) is no longer backed on-disk, so if memory gets tight, it has to be copied to swap instead of just being dropped from memory, knowing that it can always be loaded back from disk. Second, while unmodified data is shared among processes, once the data is modified in one process, the data is copied and no longer shared (copy-on-write), and this can be a real memory waster on systems where many applications use the same library: All the library copies living in different application address spaces are duplicated instead of shared, increasing the total memory footprint of the system.

V-Tables And String Tables

If all of the above was a bit abstract for you, let’s look at some concrete examples:

In a C++ library, the virtual function call mechanism is a major source of relocations, because vtables are simply lists of function pointers, all entries of which require relocation. But short of reducing the number of virtual functions (something Trolltech originally did for Qt 4), there’s not much one can do about those.

But there is a class of relocations that are 100% avoidable, with some work: string tables. In their simplest form, they come as an array of C strings:

enum Type { NoType, TypeA, TypeB, TypeC, _NumTypes };

const char * const type2string[] = {
    "",
    "A",
    "B",
    "C",
};
static_assert(sizeof type2string / sizeof *type2string == _NumTypes);

But the above is just a short-cut for the following:

const char __string_A[2] = "A"; // ok, no relocation
const char __string_B[2] = "B"; // ditto
const char __string_C[2] = "C"; // ditto

const char * const type2string[4] = { // oops, 4 entries each requiring relocation:
    &__string_A[1], // optimisation: common suffix is shared
    &__string_A[0],
    &__string_B[0],
    &__string_C[0],
};

You can view this as a mapping between a zero-based integer and a string, with the integer implicitly encoded in the string’s position in the array. In the more complex form, the string table maps something other than a zero-based integer:

static const struct {
    QRgb color;
    const char * name;
} colorMap[] = {
    { qRgb(0xFF, 0xFF, 0xFF), "white" },
    { qRgb(0xFF, 0x00, 0x00), "red" },
    { qRgb(0x00, 0xFF, 0x00), "green" },
    // ...
};

Here, too, what we colloquially call a “string” is actually a pointer-to-const-char, and therefore in need of relocation at dynamic link time.

One Solution

So the underlying problem here is that strings are inherently reference types — they are only a pointer to the data stored elsewhere. And we learned that data referring to other data causes relocations. So it would seem that the easiest way to avoid relocations is to store the data directly, and not reference it. The two examples above could be rewritten as:

const char type2string[][2] = {
    "",
    "A",
    "B",
    "C",
}; // ok, type2string is an array of const char[2], no relocs

static const struct {
    QRgb color;
    const char name[6]; // ok, name is const char[6]
} colorMap[] = {
    // same as before
};

In both cases, the string data is now stored in-line, and no relocations are necessary anymore.

But this approach has several drawbacks. First, it wastes some space if the strings are not all of the same length. In the above examples, that waste is not very large, but consider what happens if the colorMap above gets a member whose name is “azure light blue ocean waves”. Then the name member needs to be at least of size 31. Consequently, less than two of those structs now fit into one cache line, reducing scanning performance significantly — for both lookups: by-color as well as by-name, which is the second problem.

So, this simple approach that requires no changes to the code or data except to fix the declaration of the string member works well only if the strings are of essentially the same length. In particular, just one outlier pessimises the lookup performance of the whole lookup table.

A Better Solution

Data-Oriented Design suggests that we should prefer to separate data of different type. We can apply this in the colorMap case and hold colors and names in different arrays:

static const QRgb colors[] = { qRgb(0xFF, 0xFF, 0xFF), ... };
static const char names[][6] = { "white", ... };

We still have the gaps within the names array, but at least the colors are out of the way now. We can then compress the string data the way moc has been doing since Qt 4.0:

static const QRgb colors[] = {
    qRgb(0xFF, 0xFF, 0xFF),
    qRgb(0xFF, 0x00, 0x00),
    ...
};
static const char names[] = {
    "white\0"
    "red\0"
    ...
};
static const uint nameOffsets[] = {
    0, 6, 10, ...
};
// the i-th name is names[nameOffsets[i]]

We just concatenate all strings into one, with NULs as separators, and record the start of each one in an offset table. Please take a moment to digest this. We now have reached a point where there are no relocations, not more than sizeof(uint) bytes wasted per-entry (could be reduced to sizeof(ushort) or sizeof(uchar) for smaller tables, which is less than the sizeof(const char*) with which we started out), and nicely separated lookup keys and values.
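To make the layout above a bit more tangible, here is what a naive, linear lookup over such a table could look like. This is a sketch of my own, not moc's or Qt's actual code; it assumes the colors, names and nameOffsets arrays from the previous snippet are visible.

#include <cstring>

// Sketch: look up a color by name using the flat string table above.
// No relocations are involved: everything indexes into const arrays.
static const QRgb *findColor(const char *wanted)
{
    const int count = sizeof nameOffsets / sizeof *nameOffsets;
    for (int i = 0; i < count; ++i)
        if (std::strcmp(names + nameOffsets[i], wanted) == 0)
            return &colors[i]; // keys and values are kept in sync by index
    return nullptr; // not found
}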

But we have created an unmaintainable beast. The largest such table in Qt is ca. 650 entries in size. One problem is that key and value are now separated — those two arrays better stay in sync. The even larger problem is that no-one is calculating the offset table for us!

So, while this technique of avoiding relocations is pretty well-known, it is hardly ever applied in practice because it essentially forces you to write a code generator to create these intricately-connected sets of tables from a human-readable description.

Enter Q_STRINGTABLE

The key insight now is that C++ comes with powerful code generators built-in: both Template Meta-Programming (TMP), at least in C++11, and the good ol’ C preprocessor can be used here.

Using the preprocessor, the colorMap example can be written like this:

#define COLORS \
    (("white", qRgb(0xFF, 0xFF, 0xFF))) \
    (("red", qRgb(0xFF, 0x00, 0x00))) \
    ... \
    /*end*/
Q_STRINGTABLE_DATA_UNSORTED(ColorMap, COLORS)
#undef COLORS

First, you describe the key-value pairs as a sequence of 2-tuples: ((.,.))((.,.))…, then you feed that into a magic macro (here, the one for when the strings are not sorted), and voila, you get all three tables generated for you, including a nice find() function for looking up values by string. To use:

if (const QRgb *color = ColorMap::find("red"))
    // found
else
    // not found

Obviously, if you sort the data (one of the things that’s not done automatically for you, yet), you can use Q_STRINGTABLE_SORTED instead and get an O(log N) find() method.
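To see where the O(log N) comes from, here is a rough sketch of a binary search over a sorted offset table — again my own illustration, not the actual Q_STRINGTABLE implementation; the function name and parameters are made up.

#include <cstring>

// Sketch: the keys reachable through 'offsets' are assumed to be sorted
// in strcmp order, so the table can be bisected without any relocations.
static int findIndexSorted(const char *names, const unsigned *offsets,
                           int count, const char *wanted)
{
    int lo = 0, hi = count - 1;
    while (lo <= hi) {
        const int mid = lo + (hi - lo) / 2;
        const int cmp = std::strcmp(names + offsets[mid], wanted);
        if (cmp == 0)
            return mid;      // index into the parallel value table
        if (cmp < 0)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1; // not found
}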

Next week, we’ll look at both the Q_STRINGTABLE API and implementation in more depth. This will also reveal why Q_STRINGTABLE, despite its usefulness, has not been accepted into Qt, yet. If you can’t wait to start playing with it, head over to the Qt-Project Gerrit: Long live Q_STRINGTABLE!. The header file implementing all of this has minimal dependencies (Boost.PP and <QtGlobal>).

Stay tuned!

The post Reducing relocations with Q_STRINGTABLE appeared first on KDAB.

Categories: FLOSS Project Planets

A proud Linux history – 15 years ago and the Brazilian ATM

Mon, 2015-05-25 10:06

Some time ago I passed by one of the branches of Banrisul, a bank in southern Brazil, and saw a change: the ATMs are evolving. The ATMs are moving to modern code, and I don’t know what they are using now, but it is the past that is the history itself.

Most old Linux guys remember that one of the first bank ATMs running Linux in the world (or at least the first openly shown) was made here at this bank. Here’s a picture from the wonderful Linux Journal article by John MadDog Hall (I hope he won’t mind that I’m citing him here).

 

The Banrisul “Tux” ATM, picture from John MadDog Hall

 

The story I want to share with you is how that “marble Tux” came to be. Yes, the machine you see in the picture was a production machine, and it ran all over Brazil for at least 10 years.

So, a 25-year-old boy – in this case me, the guy typing now – who was working at an ILOG graphical toolkit partner suddenly decided to look for Linux jobs. I had been out of university for a year, but had already been infected by open source and Linux for more than three years, and thought it could be done.

Lucky for me, there was a company locally in Curitiba hiring Linux guys for a short-term prototype project in C, and it was the chance I foresaw to enter the Linux job world for good. That company was Conectiva, and this prototype ended up being my first job there. At that time it was a confluence of the universe, since all the players involved – the bank, through its manager Carlos Eduardo Wagner; the corporate development manager from Conectiva, João Luis Barbosa; and the PERTO ATM company – were moving to Linux, all believing it could be done.

And then they needed the suicide guys, meaning me and Ruben Trancoso, who made the mainframe comm network stack.

To sum up: three months, four different ATMs with their original specific DOS code, one brand-new ATM designed by PERTO to be used for the first time in this project, and that’s it.

We didn’t have many requirements at that time; mostly, keep the same original face and make it work. Above everything, we got the base code ported quickly, but still, it was 2000, and the Linux graphics stack and licensing were not yet fully clarified. Qt was out of the question, and Gtk was not suitable for the older environments. Setting other toolkits aside, I decided to go with pure X11 code, which at least took one layer of code bug testing off our side, despite the inherent difficulty – and this coming from a guy already used to C++ toolkits (ILOG Views, today owned by IBM).

But it worked, and it paid off the effort. Then one day comes the moment when the manager sits down at your side and says: “We have a big meeting with the bank directors to show the prototype – is it ready?” The interface was already exactly the same as the older DOS interfaces, and that was our initial target.

The answer from me was a sound yes, from Ruben as well, but I asked if I could “pimp up” the interface a little. It was a demo anyway and didn’t need to be the final result. I just didn’t tell them what I would be doing, since I had some idea, but not THE FINAL idea.

So I opened GIMP, took the Conectiva logo, and put it in the top right, as a proud developer of his company, and to show that it was done by us, here, in Brazil. I knew this was for testing and would never go to production.

And, being as aesthetically minded as a developer can be, I felt the lower left corner was visibly empty and unbalanced; it could have something else there, but nothing too “loud” in terms of graphics, so I decided that an embossed figure would be OK-ish. As I started drumming my fingers I heard someone around the office say something …Linux…, and again the word …Linux…, so I thought it needed to be something Linux related, obviously. But there is no Linux text logo, no official one at least; the only thing was Tux. So I placed that embossed Tux, proud that at least I, my colleagues and some guys at Banrisul would see what we had achieved. Again, I knew this was for demo day, and in production the clean face would be back.

Then the day of the demo and approval came. My manager from Banrisul came back and said everyone was happy with the results; everything worked as expected, with only one single remark (which I was already expecting): the logo had to go.

The CONECTIVA logo.

Not one single remark about that embossed shadow Tux there.

And then the machine went to a bank office for a real public test; again, no remark about the Tux logo – some people outside even noticed the penguin.

The rest is history. I left Banrisul after the work and went back to Conectiva engineering and KDE, and several other Conectiva staff went there to finish the code – which they came to know better than me – polishing up or removing old DOS tidbits. And 15 years later, you can still see some Tux happily providing money and services to customers.

I remember the day John MadDog took that picture at one FISL, and I remember a crazy Miguel de Icaza jumping over the machine taking pictures as well at FISL. Banrisul was smart to place a machine right beside the stairs at the entrance of FISL, where thousands of geeks passed by daily during every conference.

Never intended, well executed

Categories: FLOSS Project Planets

Hitting the ground running

Mon, 2015-05-25 05:06

Today is officially the first day of coding for this year's Google Summer of Code. For the next three months I will be working on bringing animation to Krita. There's a lot of work ahead, but I have a solid plan to work with.

[Image: Timeline docker wireframes]

In addition to the implementation plan from our sprint, we have been discussing the user interface design with some of the animators among our users. Scott Petrovic has made some very nice wireframes based on these. The discussion is still ongoing and constructive feedback is always welcome.

Even though coding officially starts today, I am not starting everything from scratch. As mentioned in my previous post, I have a partially working prototype to build upon. One can already add, move, delete and duplicate keyframes on a paint layer, as well as play the animation in real time. The animation can also be saved and loaded, albeit in an experimental file format.

However, the code is still in a rough state. There are a number of major issues with it, including crashes and even data loss. Due to a number of technical shortcuts taken for the sake of faster prototyping, it is cumbersome and unintuitive to use in places. For instance, in order to play the animation, one must have visited each frame in order to populate the playback cache. In short, it's a minefield of bugs and missing features.

I will start this week by finishing a refactoring of the prototype towards the final design and looking into some of the major issues, especially one relating to data loss with undo/redo operations. Hopefully in a couple of weeks I can get to implementing new features. I for one am looking forward to seeing fully functional animation playback and onion skinning in Krita.

Categories: FLOSS Project Planets

Interview with Griatch

Mon, 2015-05-25 05:01

Could you tell us something about yourself?

I, Griatch, am from Sweden. When not doing artwork I am an astrophysicist, mainly doing computer modeling of astronomical objects. I also spend time writing fiction, creating my own music and being the lead developer of Evennia, an open-source, professional-quality library for creating multiplayer text games (muds). I also try to squeeze in a roleplaying game or two now and then, as well as a beer at the local pub.

Do you paint professionally, as a hobby artist, or both?

A little bit of both. I’m mainly a hobby painter, but lately I’ve also taken on professional work and I’m currently commissioned to do all artwork and technical illustration for an upcoming book on black holes (to be published in Swedish). Great fun!

What genre(s) do you work in?

I try to be pretty broad in my genres and have dabbled in anything from fantasy and horror to sci-fi, comics and still life. I mostly do fantasy, sci-fi and other fantastical imagery but I often go for the mundane aspects of those genres, portraying scenes and characters doing non-epic things. I try to experiment a lot but like to convey or hint at some sort of story in my artwork.

Whose work inspires you most — who are your role models as an artist?

There are too many to list, including many involved in the Krita project! One thing you quickly learn as an artist (and in any field, I’ve found) is that no matter how well you think you are doing for yourself, there are always others who are way better at it. Which is great since it means you can learn from them!

How did you get to try digital painting for the first time?

I did my first digital drawing with a mouse on an Amiga 500 back in the mid-nineties. I used the classical program Deluxe Paint. You worked in glorious 32 colours (64 with the “halfbrite” hardware hack) on a whopping 320×240 pixel canvas. I made fantasy pictures and a 100+ frame animation in that program, inspired by the old Amiga game Syndicate.

But even though I used the computer quite a bit for drawing, digital art was at the time something very different from analogue art – pixel art is cool but it is a completely separate style. So I kept doing most my artwork in traditional media until much later.

What made you choose digital over traditional painting?

I painted in oils since I was seven and kept doing so up until my university years. I dropped the oils when moving to a small student apartment – I didn’t have the space for the equipment nor the willingness to sleep in the smell. So I drew in charcoal and pencils for many years. I eventually got a Linux machine in the early 2000’s and whereas my first tries with GIMP were abysmal (it was not really useful for me until version 2+), I eventually made my first GIMP images, based on scanned originals. When I got myself a Wacom Graphire tablet I quickly transitioned to using the computer exclusively. With the pen I felt I could do pretty much anything I could on paper, with the added benefits of undo’s and perfect erasing. I’ve not looked back since.

How did you find out about Krita?

I’ve known about Krita for a long time, I might have first heard about it around the time I started to complement my GIMP work with MyPaint for painting. Since I exclusively draw in Linux, the open-source painting world is something I try to keep in touch with.

What was your first impression?

My first try of Krita was with an early version, before the developers stated their intention of focusing on digital painting. That impression was not very good, to be honest. The program had a very experimental feel to it and felt slow, bloated and unstable. The kind of program you made a mental note of for the future but couldn’t actually use yet. Krita has come a long way since then and today I have no stability or performance issues.

What do you love about Krita?

Being a digital painter, I guess I should list the brush engines and nice painting features first here. And these are indeed good. But the feature I find myself most endeared to is the transform tool. After all my years of using GIMP, where applying scale/rotate/flip/morph etc. is done by separate tools or even separate filters, Krita's unified transform tool is refreshing and a joy to use.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I do wish more GUI toolkits would support the GTK2 style of direct assignment of keyboard shortcuts: hover over the option in the menu, then press the keyboard shortcut you want to assign to that menu item. Fast and easy, no scrolling/searching through lists of functions deep in the keyboard shortcut settings. I would also like to see keyboard shortcuts assignable to all the favourite brushes so you can swap mid-stroke rather than having to move the pen around on the pop-up menu.

Apart from this, with the latest releases, most of my previous reservations about the program have actually melted away. Stability concerns aside, the other reason I was slow to adopt Krita in the past was that Krita seems to want to do it all. Krita has brushes, filters, even vector tools under the same umbrella. I did (and still often do) my painting in MyPaint, my image manipulation in GIMP and my vector graphics in Inkscape – each doing one aspect very well, in traditional Unix/Linux fashion. For the longest time Krita's role in this workflow was … unclear. However, the latest versions of Krita have improved the integration between its parts a lot, making it actually viable for me to stay in Krita for the entire workflow when creating a raster image.

The KDE forum and the bug-reporting infrastructure it relies on effectively hide Krita from view as one of many KDE projects. Compared to the pretty and modern Krita main website, the KDE web pages you reach once you dive deeper are bland and frankly off-putting, a generic place to which I have no particular urge to contribute. That the Krita KDE forum can't even downscale an image for you, but requires you to rescale the image yourself before uploading, is so old-fashioned that it's clear the place was never originally intended to host art. So yes, this part is an annoyance, unrelated to the program itself as it is.

What sets Krita apart from the other tools that you use?

The transform tool mentioned above and the sketch-brush engine which is great fun. The perspective tool is also a very cool addition, just to name a few things. Krita seems to have the development push, support and ambition to create a professional and polished experience. So it will be very interesting to follow its development in the future.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

“The Curious Look”. This is a fun image of a recurring character of mine. Whereas I had done images in Krita before, this one was the first I decided to make in Krita from beginning to end.

What techniques and brushes did you use in it?

This is completely hand-painted using only one of Krita's sketch brushes, which I was having great fun with!

Where can people see more of your work?

You can find my artwork on DeviantArt here: http://griatch-art.deviantart.com/
I have made many tutorials for making art in OSS programs: http://griatch-art.deviantart.com/journal/Tutorials-237116359
I also have a Youtube channel with amply commented timelapse painting videos: https://www.youtube.com/user/griatch/videos

Anything else you’d like to share?

Nothing more than wishing the Krita devs good luck with the future development of the program!

Categories: FLOSS Project Planets

A neat UNIX trick for your bash profile

Sun, 2015-05-24 11:12

Hi folks! I have been spending a lot of time with KStars lately. I will write a detailed account of the work done so far, but here's something I found interesting. This, I think, is a handy 'precautionary' trick that every newbie should adopt to avoid pushing to the wrong git repo/branch.

Here's what you do. Open up the konsole and type cd ~ (this should take you to your home directory). Now what we need to do is add one line to your .bashrc file.

$ nano .bashrc

This opens up your .bashrc file in the nano editor (you could use vim or emacs instead).

Add the line export PS1='\W$(__git_ps1 "(%s)")> ' to the part of the if block that pertains to bash completion. In my case this is how my .bashrc looks:

  if [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
    export PS1='\W$(__git_ps1 "(%s)")> '
  elif [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
  fi

What this does is change the prompt in your konsole. Whenever you enter a git repository, the prompt shows the current directory name together with the git branch you are currently on (that is what the %s format string passed to the __git_ps1 function expands to). This is how my kstars repository now looks:

~> cd Projects/kstars/
kstars(gsoc2015-constellationart)> git branch
* gsoc2015-constellationart
  master
kstars(gsoc2015-constellationart)> git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
kstars(master)>

Now you can always know what branch you are on, without typing git branch. Pretty neat!
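
One caveat: __git_ps1 comes from git's own prompt script, and not every distribution loads it into your shell by default. Below is a minimal, slightly more defensive sketch of the same idea; the git-sh-prompt path and the GIT_PS1_SHOWDIRTYSTATE option are assumptions based on a typical Debian/Ubuntu git package, so adjust them to wherever your distribution ships the file.

  # Minimal sketch (paths are assumptions): load git's prompt helper
  # if __git_ps1 is not already defined, then build the prompt.
  if ! declare -F __git_ps1 > /dev/null && [ -f /usr/lib/git-core/git-sh-prompt ]; then
    . /usr/lib/git-core/git-sh-prompt
  fi

  if declare -F __git_ps1 > /dev/null; then
    GIT_PS1_SHOWDIRTYSTATE=1                 # mark unstaged (*) and staged (+) changes
    export PS1='\W$(__git_ps1 " (%s)")> '    # directory name plus current branch
  else
    export PS1='\W> '                        # plain prompt when the helper is missing
  fi

With GIT_PS1_SHOWDIRTYSTATE set, the branch part also picks up a * or + marker when you have uncommitted changes, which helps for the same "don't push the wrong thing" reason.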

Categories: FLOSS Project Planets

Reminder: Evolving KDE survey milestone on May 31st

Sun, 2015-05-24 00:44

Evolution is a powerful concept and tool. When harnessed properly, humans have been able to tailor and adapt crops and domesticate animals. We've been able to grow the Dutch unnecessarily tall and create beautiful and consequence-free theme parks as shown in the Jurassic Park documentary series on the BBC. However, when not monitored closely or left to nature's own devices, the result is the terrifying land-based sharks that have caused such recent devastation across most of Australia.

It has already been a month since KDE launched Evolving KDE: an initiative that allows our healthy community to continue growing organically while setting goals, direction, and taking action. The digital world is only accelerating in its pace of change; will we be proactive or reactive?


It has also been eight long years since I created this image for KDE, and I firmly believe the concept to be more relevant than ever. KDE is powerful enough to respect the different backgrounds, geographies and goals of the individual while channeling that diversity as a strength.  With unity and vision, our best is yet to come.

The beauty of the Evolving KDE announcement to me came from the three distinctive voices I’ve seen post on the topic here on the Planet.

You have Lydia, who, as President of the KDE e.V., shows leadership in announcing and defining this initiative.

You have Paul, universally known for being too smart for his own good and apparently having enough time to read more than xkcd comics, showing the theory, necessity and impact of such ventures.

And finally you have Boudewijn, who actually gives a testimonial on his own experiences with Krita and the powerful results yielded from taking the time to chart a course and create a plan to reach that destination. Years ago, I distinctly remember Krita's identity crisis, lack of momentum, and the very purposeful and honest conversations on their current state, definition of goals, and the plans created to achieve them. When he writes, "Krita's evolution has gone from being a weaker, but free-as-in-freedom alternative to a proprietary application to an application that aspires to be the tool of choice, even for people who don't give a fig about free software", he may be underselling the hard work, the vision, the plan, and the metamorphosis.

So, as Lydia shared in her blog post, Evolving KDE is an ongoing journey, but it does have a specific train stop coming up quickly. On May 31st, we'll take a snapshot of the survey results entered so far. We'll summarize, review, discuss and present them at Akademy. The survey will remain open, and the questions will likely evolve over time (it would be hypocritical to remain static when asking about change, n'est-ce pas?), but May 31st is the milestone we need in order to harvest and present the results.

With one week remaining, you have plenty of time, so take the short and simple Evolving KDE Survey, and have your voice heard!



Categories: FLOSS Project Planets

GSOC 2015

Sat, 2015-05-23 09:39
I got accepted for the project "Integration of Cantor into LabPlot" in this year's Google Summer of Code (GSoC), under mentor Alexander Semke with the KDE organization. Looking forward to ..

The first phase of GSoC 2015, the community bonding period, has ended. I have tried my best to interact with my mentor and community members, but I was mostly occupied with my university examinations during that time.

Looking ahead to the coding period, I have started working on integrating Cantor's UI into LabPlot. I will push all my commits to the integrate-cantor branch[1] of LabPlot.

Looking forward to learning, coding and developing this summer!

[1] https://projects.kde.org/projects/kdereview/labplot/repository/show?rev=integrate-cantor

Categories: FLOSS Project Planets

Interview with Mary Winkler

Sat, 2015-05-23 04:00

Could you tell us something about yourself?

My name is Mary Winkler and I work under the brand Acrylicana. I love coffee, cats, pastels, neons, sunshine, and sparkles.

Do you paint professionally, as a hobby artist, or both?

Professionally mostly, but also just because I love creating. If I can make a mark, painting, drawing, crafting, etcetera, I will.

What genre(s) do you work in?

Realism, kawaii, stylized, pop art… there’s a lot of terms that define my art, and it has changed and continues to change over time.

Whose work inspires you most — who are your role models as an artist?

I adore the work of Peter Max, Macoto, Junko Mizuno, Lisa Frank, Bouguereau, and Erte, as well as artist friends Miss Kika, Anneli Olander, Zambicandy, and Brittany Ngo.

How did you get to try digital painting for the first time?

In high school my oldest brother bought me an off-brand graphics tablet, well over a decade ago. I’ve been creating art digitally ever since.

What makes you choose digital over traditional painting?

I love both mediums, actually. If I’m pressed for time, working with a client, or just don’t want a mess, digital is the way to be. Most of my work is done digitally. I do love to be able to paint up wood, canvas, or paper with acrylics or watercolors for gallery shows or small pieces to put in my shop.

How did you find out about Krita?

I was writing an article for Tuts+ covering drawing and design programs that weren't made by Adobe. I had some Twitter followers mention it, and later, when the article ran, a few people commented about the program because I had missed it. I rectified that mistake by painting for three days straight and haven't shut up about Krita since.

What was your first impression?

The program immediately detected my tablet (Wacom Cintiq) and while some larger file sizes and my machine can produce a little bit of lag, the program doesn’t freeze on me, crash unexpectedly, or cause weird jagged lines when they should be smooth. Krita’s smooth and lighter than Photoshop, and has such good painting tools!

What do you love about Krita?

LOVE the blending tools. I’m used to those of Paint Tool SAI, and finding a program whose brushes are far more customizable and can do more is digital art heaven. Especially an open source one!

What do you think needs improvement in Krita? Is there anything that really annoys you?

I know it’s small, but I’d love a zoom tool in the toolbar. I’m happy to push plus and minus, but not seeing the little magnifying glass first thing was something I missed from a new user standpoint.

What sets Krita apart from the other tools that you use?

It’s not hogging all of my RAM like Painter or Adobe products can. While it’s lighter than those programs, it’s also packed with more features than something like FireAlpaca or Paint Tool SAI. I have the ability to customize brushes and tools fantastically, and have barely done so so far thanks to kickass default tools.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

So far I've only done two: the tart piece and a poster design for an upcoming gallery show. I love them both and cannot choose. I do plan on adding hundreds of doodles done in Krita to my hard drive.

What techniques and brushes did you use in it?

So far I love the watercolor-style brushes, sparkle brushes, and the blending ones. I’ve been playing with default ones mostly to get the hang of what Krita has to offer. Simply love anything that is intuitive in its use. Immediately I could apply my painting techniques to the program without having to learn new ways to use layers or complex blending or painting styles. It’s like working with acrylics, and I love that.

Where can people see more of your work?

You can follow me on behance, instagram, facebook, twitter, and deviantart.

Anything else you’d like to share?

I write a lot of tutorials for Tuts+ (http://tutsplus.com/authors/mary-winkler) and add videos occasionally on youtube (https://www.youtube.com/user/acrylicana). I hope to add Krita to my roster of tutorials/courses/process videos soon.

Categories: FLOSS Project Planets

Second stretchgoal reached and new builds!

Fri, 2015-05-22 03:38

We've got our second stretch goal through both Kickstarter and the PayPal donations! We hope we can get many more so that you, our users, get to choose more ways for us to improve Krita. And we already have half of a third stretch goal implemented: modifier keys for selections!

Oh — and check out Wolthera’s updated brush packs! There are brush packs for inking, painting, filters (with a new heal brush!), washes, flow-normal maps, doodle brushes, experimental brushes and the awesome lace brush in the SFX brush pack!

We've had a really busy week. We already gave you an idea of our latest test build on Monday, but we had to hold back because of the revived crash file recovery wizard on Windows… which liked to crash. But it's fixed now, and we've got new builds for you!

So what is exactly new in this build? Especially interesting are all the improvements to PSD import/export support. Yesterday we learned that Katarzyna uses PSD as her working format when working with Krita – we still don’t recommend that, but it’s easier now!

Check the pass-through switch in the group layer entry in the layerbox!

  • Dmitry implemented Pass-Through mode for group layers. Note: filter, transform and transparency masks and pass-through mode don’t work together yet, but loading and saving groups from and to PSD now does! Pass-through is not a fake blending mode as in Photoshop: it is a switch on the group layer. See the screenshot!
  • We now can load and save layerstyles, with patterns from PSD files! Get out your dusty PSDs for testing!
  • Use the right Krita blending mode when a PSD image contains Color Burn.
  • Add Lighter Color and Darker Color blending modes and load them from PSD.
  • When using Krita with a translation active on Windows, the delay when starting a stroke is a bit shorter, but we're still working on eliminating that delay completely.
  • The color picker cursor now shows the currently picked and previous color.
  • Layer styles can now be used with inherit-alpha
  • Fix some issues with finding templates.
  • Work around an issue in the oxygen widget style on Linux that would crash the OpenGL-based canvas due to double initialization
  • Don’t toggle the layer options when right-clicking on a layer icon to get the context menu (patch by Victor Wåhlström)
  • Update the Window menu when a subwindow closes
  • Load newer Photoshop-generated JPG files correctly by reading the resolution information from the TIFF tags as well. (Yes, JPG resolution is stored in the EXIF metadata using TIFF tags if you save from Photoshop…)
  • Show the image name in the window menu if it hasn’t been saved yet.
  • Don’t crash when trying to apply isolate-layer on a transform mask
  • Add webp support (at least on Linux, untested on Windows)
  • Add a shortcut to edit/paste into a new image. Patch by Tiffany!
  • Fix the autosave recovery dialog on Windows for unnamed autosaves!
  • Added a warning for intel users who may still be dealing with the broken driver. If Krita works fine for you, just click okay. If not, update your drivers!

New builds for Linux are being created at the moment and will be available through the usual channels.

Linux:

Windows:

Works on Vista and up; Windows 7 and up is recommended. There is no Windows XP build. If you have a 64-bit version of Windows, don't use the 32-bit build! The zip files do not need installing, just unpacking, but they do not come with the Visual Studio C runtime that is included in the msi installer.

OSX:

(Please keep in mind that these builds are unstable and experimental. Stuff is expected not to work. We make them so we know we're not introducing build problems and to invite hackers to help us with Krita on OSX.)

Categories: FLOSS Project Planets

Updates on Kate's Rust plugin, syntax highlighting and the Rust source MIME type

Thu, 2015-05-21 19:17
KDE Project:

The other day I introduced a new Rust code completion plugin for Kate, powered by Phil Dawes' nifty Racer. Since then there's been a whole bunch of additional developments!

New location

Originally in a scratch repo of mine, the plugin has now moved into the Kate repository. That means the next Kate release will come with a Rust code completion plugin out of the box! (Though you'll still need to grab Racer yourself, at least until it finds its way into distributions.)

For now the plugin still works fine with the stable release of Kate, so if you don't want to build all of Kate from git, it's enough to run make install in addons/rustcompletion in your Kate build directory.
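
For reference, here is a rough sketch of that route. The repository URL, install prefix and directory names are assumptions about a standard CMake build of Kate from git (configuring still needs the usual KDE Frameworks development packages), so adapt them to your own setup.

  # Sketch only; URL, prefix and paths are assumptions.
  git clone git://anongit.kde.org/kate.git
  mkdir -p kate/build && cd kate/build
  cmake .. -DCMAKE_INSTALL_PREFIX=/usr       # match the prefix of your installed Kate
  cd addons/rustcompletion
  make
  sudo make install                          # installs only the completion plugin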

This also means the plugin is now on bugs.kde.org - product kate, component plugin-rustcompletion (handy pre-filled form link). And you can submit patches via ReviewBoard now.

New feature: Go to Definition

In addition to code completion popups, the plugin now also installs a Go to Definition action (in the Edit menu and the context menu, and you can configure a keyboard shortcut for it as well). It will open the document containing the definition if needed, activate its view and place the cursor at the start of the definition.

Rust syntax highlighting now bundled with Frameworks

After brainstorming with upstream, we decided together that it's best for Rust and Kate users to deprecate the old rust-lang/kate-config repository and move the syntax highlighting file into KDE's KTextEditor library (the foundation of Kate, KDevelop and several other apps) for good, where it now resides among the many other rules files. With 1.0 out the door, Rust is now stable enough that delivering the highlighting rules via distro packages becomes feasible and compelling, and moving the development location avoids having to sync multiple copies of the file.

The full contribution history of the original repo has been replayed into ktexteditor.git, preserving the record of the Rust community's work. The license remains unchanged (MIT), and external contributions remain easy via ReviewBoard or bugs.kde.org.

KTextEditor is a part of KDE's Frameworks library set. The Frameworks do monthly maintenance releases, so keeping up with the Rust release cadence will be easy, should the rules need to be amended.

It's a MIME type: text/rust

Kate plugins and syntax highlighting files preferably establish document identity by MIME type, as do many other Linux desktop applications. The desktop community therefore maintains a common database in the shared-mime-info project. With the inclusion of a patch of mine on May 18th, shared-mime-info now recognizes the text/rust type for files matching a *.rs glob pattern.

If you're searching, opening or serving Rust source files, you should be using text/rust from now on.
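
If you want to check what your own system reports, xdg-mime can query the shared MIME database directly. The expected output below assumes a shared-mime-info release new enough to contain the patch; older databases may still report a different type.

  touch hello.rs
  xdg-mime query filetype hello.rs
  # With an updated shared-mime-info this should print text/rust;
  # older databases may fall back to text/plain or another legacy type.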

That's it for today! I still have a bunch of improvements to the plugin planned, so stay tuned for future updates.

Categories: FLOSS Project Planets