Daniel Pocock: Automatically creating repackaged upstream tarballs for Debian

Planet Debian - Tue, 2014-04-22 16:34

One of the less exciting points in the day of a Debian Developer is the moment they realize they have to create a repackaged upstream source tarball.

This is often a process that they have to repeat on each new upstream release too.

Wouldn't it be useful to:

  • Scan all the existing repackaged upstream source tarballs and diff them against the real tarballs to catalog the things that have to be removed and spot patterns?
  • Operate a system that automatically produces repackaged upstream source tarballs for all tags in the upstream source repository or all new tarballs in the upstream download directory? Then the DD can take any of them and package it whenever they want, with less manual effort.
  • Apply any insights from this process to detect non-free content in the rest of the Debian archive and when somebody is early in the process of evaluating a new upstream project?
Google Summer of Code is back

One of the Google Summer of Code projects this year involves recursively building Java projects from their source. Some parts of the project, such as repackaged upstream tarballs, can be generalized for things other than Java. Web projects including minified JavaScript are a common example.

Andrew Schurman, based near Vancouver, is the student selected for this project. Over the next couple of weeks, I'll be starting to discuss the ideas in more depth with him. I keep on stumbling on situations where repackaged upstream tarballs are necessary and so I'm hoping that this is one area the community will be keen to collaborate on.

Categories: FLOSS Project Planets

gnubatch @ Savannah: GNUBatch 1.11 Released

GNU Planet! - Tue, 2014-04-22 16:30

I have just uploaded GNUBatch 1.11

The configuration file has been updated to a more recent version.

I've fixed a couple of bugs affecting the network interactions which crept in a couple of releases ago. Sorry about that!

The reference manual has been updated in a few places.

Categories: FLOSS Project Planets

Drupalize.Me: Webinar: Easily Create Maps with Leaflet

Planet Drupal - Tue, 2014-04-22 15:50

Curious about Leaflet? Join Drupalize.Me Trainer Amber Matz for a live tutorial on how to add Leaflet maps to your Drupal site during this Acquia hosted webinar on May 1, 2014 at 1:00 PM EDT.

Categories: FLOSS Project Planets

Ritesh Raj Sarraf: Basis B1

Planet Debian - Tue, 2014-04-22 15:32


Starting yesterday, I am a happy user of the Basis B1 (Carbon Edition) Smart Watch

The company recently announced that it is being acquired by Intel. Overall, I like the watch. The price is steep, but if you care for a watch like that, you may as well try Basis. In case you want to go through the details, there's a pretty comprehensive review here.

Since I've been wearing it for just over 24hrs, there's not much data to showcase a trend. But the device was impressively precise in monitoring my sleep.


Pain points: for now, sync is the core of the pain. You need either a Mac or a Windows PC. I have a Windows 7 VM with USB passthrough, but that doesn't work. There's also an option to sync over mobile (iOS and Android), but that again does not work for my Chinese mobile handset running MIUI.

Categories: FLOSS Project Planets

Dcycle: Simpletest Turbo: how I almost quadrupled the speed of my tests

Planet Drupal - Tue, 2014-04-22 15:15

My development team is using a site deployment module which, when enabled, deploys our entire website (with translations, views, content types, the default theme, etc.).

We defined about 30 tests (and counting) which are linked to Agile user stories and confirm that the site is doing what it's supposed to do. These tests are defined in Drupal's own Simpletest framework and work as follows: for every test, our site deployment module is enabled on a new database (the database is never cloned), which can take about two minutes; the test is run, and then the temporary database is destroyed.

This created the following problem: because we were deploying our site 30 times during our test run, a single test run was taking over 90 minutes. Furthermore, we are halfway into the project, and we anticipate doubling, perhaps tripling our test coverage, which would mean our tests would take over four hours to run.

Now, we have a Jenkins server which performs all the tests every time a change is detected in Git, but even so, when several people are pushing to the git repo, test results which are 90 minutes old tend to be harder to debug, and developers tend to ignore, subvert and resent the whole testing process.

We could combine tests so the site would be deployed less often during the testing process, but this causes another problem: tests which are hundreds of lines long, and which validate unrelated functionality, are harder to debug than short tests, so it is not a satisfactory solution.

When we look at what is taking so long, we notice that the majority of the processing power goes to installing (deploying) our testing environment for each test, only for it to be destroyed after a very short test.

Enter Simpletest Turbo, which provides very simple code to cache your database once the setUp() function is run, so the next test can simply reuse the same database starting point rather than recreate everything from scratch.

Although Simpletest Turbo is in the early stages of development, I have used it to almost quadruple the speed of my tests, as you can see from this Jenkins trend chart:

I know: my tests are failing more than I would like them to, but now I'm getting feedback every 25 minutes instead of every 95 minutes, so failures are easier to pinpoint and fix.

Furthermore, fairly little time is spent deploying the site: this is done once, and the following tests use a cached deployment, so we are not merely speeding up our tests (as we would if we were adding hardware): we are streamlining duplicate effort. It thus becomes relatively cheap to add new independent tests, because they are using a cached site setup.

Tags: planetblog
Categories: FLOSS Project Planets

C.J. Adams-Collier: AD Physical to Virtual conversion… Continued!

Planet Debian - Tue, 2014-04-22 14:54

So I wasn’t able to complete the earlier attempt to boot the VM. Something to do with the SATA backplane not having enough juice to keep both my 6-disk array and the w2k8 disk online at the same time. I had to dd the contents off of the w2k8 disk and send it to the SAN via nc. And it wouldn’t write at more than 5.5MB/s, so it took all day.

cjac@foxtrot:~$ sudo dd if=/dev/sdb | \
    pv -L 4M -bWearp -s 320G | \
    nc san0 4242

cjac@san0:~$ nc -l 4242 | \
    pv -L 4M -bWearp -s 320G | \
    sudo dd of=/dev/vg0/ad0

Anyway, I’ve got a /dev/vg0/ad0 logical volume all set up now which I’m exporting to the guest as USB.

Here’s the libvirt xml file: win2k8.xml

No indication as to how long this will take. But I’ll be patient. It will be nice to have the AD server back online.

[edit 20140422T172033 -0700]

… Well, that didn’t work …

[edit 20140422T204322 -0700]
Maybe if I use DISM…?

[edit 20140422T204904 -0700]

Yup. That did ‘er!

C:\>dism /image:c:\ /add-driver /driver:d:\win7\amd64\VIOSTOR.INF

Categories: FLOSS Project Planets

Axel Beckert: GNU Screen 4.2.0 in Debian Experimental

Planet Debian - Tue, 2014-04-22 14:22
About a month ago, on 20th of March, GNU Screen had its 27th anniversary.

A few days ago, Amadeusz Sławiński, GNU Screen’s new primary upstream maintainer, released the status quo of Screen development as version 4.2.0 (probably to distinguish it from all those 4.1.0 labeled development snapshots floating around in most Linux distributions nowadays).

I did something similar and uploaded the status quo of Debian’s screen package in git as 4.1.0~20120320gitdb59704-10 to Debian Sid shortly afterwards. That upload should hit Jessie soon, too, resolving the following two issues also in Testing:

  • #740301: proper systemd support – Thanks Josh Triplett for his help!
  • #735554: fix for multiuser usage – Thanks Martin von Wittich for spotting this issue!

That way I could decouple these packaging fixes/features from the new upstream release which I uploaded to Debian Experimental for now. Testers for the 4.2.0-1 package are very welcome!

Oh, and by the way: that upstream comment (or Arch Linux's corresponding announcement) about broken backwards compatibility when attaching to running sessions started with older Screen releases doesn't affect Debian, since that has already been fixed in the package which is in Wheezy. (Thanks again to Julien Cristau for the patch back then!)

While there are bigger long-term plans at upstream, Amadeusz is already working on the next 4.x release (probably named 4.2.1) which will likely incorporate some of the patches floating around in the Linux distributions’ packages. At least SuSE and Debian offered their patches explicitly for upstream inclusion.

So far already two patches found in the Debian packages have been obsoleted by upstream git commits after the 4.2.0 release. Yay!

Categories: FLOSS Project Planets

AGLOBALWAY: Drupal & Bootstrap

Planet Drupal - Tue, 2014-04-22 13:34
Here at AGLOBALWAY, we are constantly learning to take advantage of the myriad of tools available to us, whether for communication, productivity, or development. As a company dedicated to all things open source, one of the tools we employ is Twitter's Bootstrap framework. Thanks to the industriousness and generosity of companies like Twitter (or Zurb for its own Foundation framework, among others), the web community has a tremendous amount of resources to draw upon.

Drupal has been a key content management framework for us ever since the inception of our company, thanks to its flexibility, power, and configurability. Bootstrap has made an excellent companion for Drupal in several of our projects so far, so I will highlight just a few of the many ways that a Bootstrap-based theme can complement your Drupal website.

Bootstrapped Style

While some decry the design style and "look" that is quintessentially Bootstrap, there really is no need to. Yes, it has been a major influence on modern web design trends. Bootstrap has prepackaged layouts and default styles for nearly all UI elements, taking away the need to create styles for everything. But building a unique site that doesn't follow that typical Bootstrap style doesn't have to be difficult. The real undertaking is in learning the ins and outs of the preprocessor system employed by Bootstrap (Less, in this case), and how they have laid everything out.

Once familiar with the system, one will quickly realize that it's relatively straightforward to take advantage of all of the mixins and variables already given in order to generate the styles you have designed. In one .less file, we can quickly define colours, sizes, and other default settings that will appear throughout your site. Again, like the JavaScript libraries below, this is not unique to Drupal. However, being able to take advantage of these tools helps immensely to speed up the development cycle of building Drupal sites.
JavaScript Libraries

Having access to a number of common functions used throughout the web is a huge time saver. Already bundled with jQuery, a Drupal Bootstrap-based theme allows for easy integration of accordions, image carousels and more, without having to write your own JavaScript. While these libraries are certainly not exclusive to Drupal, there can be unique ways of making use of them with Drupal. For example, rendering a block inside of a modal for a login form is a snap, and CSS is all you really need to customize it once you initialize it with the proper JavaScript.

Another example would be to pair Bootstrap with the popular CKEditor module to generate templates, using Bootstrap's markup. Users may want to place an accordion inside their own managed content, so we can create a template with CKEditor's default.js file (even better, create a separate file and use that one instead), following the pattern of the templates already given. Add the Bootstrap markup with the appropriate classes, and voila! Your users now have an accordion they can insert using only the WYSIWYG editor.

Bootstrap Views

This is a Drupal module I have yet to really play around with personally, but a cursory look tells me just how easy it can be to display content using Bootstrap elements without even getting into template files or writing code. While I generally prefer to separate markup from data output, I can see the potential here for a lot of time saving, while avoiding some head-scratching at the same time. This is the whole point of Views in the first place: making it easy to display the content you want without having to dive too deep.

As we can see, integrating Drupal with Twitter Bootstrap has considerable advantages. While its heaviness is a fair criticism, I believe those advantages justify the use of Bootstrap, particularly in an Agile development environment.
Besides, we can always eliminate the JavaScript or CSS we don't use once we're done developing our site. Whether it's Bootstrap, Foundation, or your framework of choice, having such front-end tools to integrate with Drupal can only be a good thing. Many thanks to all who are dedicated to creating and maintaining these resources for the benefit of us all.

Tags: drupal, planet, Bootstrap
Categories: FLOSS Project Planets

Propeople Blog: Propeople at Stanford Drupal Camp 2014

Planet Drupal - Tue, 2014-04-22 13:26

This past weekend, some of our team had the pleasure of attending Stanford Drupal Camp, which Propeople supported as a Gold Sponsor. Stanford University is one of the biggest advocates of Drupal in higher education, and is home to an active and passionate Drupal community. Hosted on the world famous (and too-gorgeous-to-put-into-words) Stanford campus, the Stanford Drupal Camp is an annual event focused on Drupal and the state of Drupal at the university (where thousands of websites are powered by the CMS). Propeople has participated in the event for the past few years, and we've had the pleasure of working on a wide variety of Stanford projects over that same period. These include the Stanford Graduate School of Business, SLAC National Accelerator Laboratory, Stanford Student Affairs, Riverwalk Jazz, and many others.

Stanford Drupal Camp featured a great line-up of sessions and talks all day Friday and Saturday, ranging from the simple to the complex. The talks focused on a variety of topics, from site building to Agile and Scrum methodology to specific Drupal use cases in higher education. As you can expect at any Drupal camp, more casual BoFs and lightning talks were interspersed throughout the conference.

We were happy to have the packed schedule include two sessions presented by one of Propeople’s own Drupal experts, Yuriy Gerasimov, on Saturday. Yuriy’s first session was titled “CI and Other Tools for Feature Branch Development”, aimed at helping developers and organizations implement feature-branch workflow. The second was “Local Development with Vagrant”, which, as you might have guessed from the title, was all about the benefits of using Vagrant to spin up local virtual machines with the same settings on different platforms.



Overall, the many sessions, BoFs, and lightning talks provided Stanford staff, faculty, students, and developers (from the university and beyond) with plenty of great information.

In addition to the full session schedule, Stanford Drupal Camp featured plenty of opportunities for those at the event to enjoy each other’s company, catch up, and engage in some great conversation about every attendee’s favorite topic...Drupal! As a technology partner to more than a dozen Stanford departments and institutions, Propeople has learned first hand how great the Stanford community is, and it was a treat to have some of our team on campus to join in on the fun at Stanford Drupal Camp. We’ll be looking forward to next year!

Tags: Drupal, Drupal camp, Stanford, planet. Topics: Community & Events
Categories: FLOSS Project Planets

Phase2: Exploring Maps In Sass 3.3 (Part 3): Calling Variables with Variables

Planet Drupal - Tue, 2014-04-22 13:25

For this blog entry, the third in a series about Sass Maps, I am going to move away from niche applications and introduce some more practical uses of maps.

Living Style Guides

In my current project, a large Drupal media site, I wanted to have a style guide: a single static page where we could see all of the site colors along with their variable names. I collected all of my color variables and created some static markup with empty divs. Below is the loop I started to write.

<!-- The HTML for our Style Guide -->
<div class="styleguide">
  <div class="primary-color"></div>
  <div class="secondary-color"></div>
  <div class="tertiary-color"></div>
</div>

// Our site color variables
$primary-color: #111111;
$secondary-color: #222222;
$tertiary-color: #333333;

// Make a list of the colors to display
$styleguide-colors: primary-color, secondary-color, tertiary-color;

// Loop through each color name, create class name and styles
@each $color in $styleguide-colors {
  .styleguide .#{$color} {
    background-color: $#{$color}; // Spoiler alert: does not work!!
    &:after {
      content: "variable name is #{$color}";
    }
  }
}

This loop goes through each color in my $styleguide-colors list and creates a class name based on the color name. It then attempts to set the background-color by calling a variable that matches the name from the list. We also set the content of a pseudo element to the variable name, so that our styleguide automatically prints out the name of the color.

This is what we want the first loop to return:

.styleguide .primary-color {
  background-color: $primary-color; // Nope, we won't get this variable
  &:after {
    content: "variable name is primary-color";
  }
}

The problem is that we can't interpolate one variable to call another variable! $#{$color} doesn't actually work in Sass: it won't interpolate into $ + primary-color and then yield #111111 in the final CSS. This three-year-old GitHub issue points out the exact problem, and hints at how maps, introduced in Sass 3.3, would solve it: https://github.com/nex3/sass/issues/132

Make it better with maps

So now that we have maps, how can we create this color styleguide? Let's take it one step at a time.

First we need to wrap all of our colors in a map. Remember, any of these colors can be accessed like this: map-get($site-colors, primary-color)

$site-colors: (
  primary-color: #111111,
  secondary-color: #222222,
  tertiary-color: #333333,
);

Now we can create a list of the colors we want to iterate through and loop through them just like we did before.

$styleguide-colors: primary-color, secondary-color, tertiary-color;

@each $color in $styleguide-colors {
  .styleguide .#{$color} {
    background-color: map-get($site-colors, $color); // This DOES work!
    &:after {
      content: "variable name is #{$color}";
    }
  }
}

This time when we loop through our colors we get the same class name and pseudo element content, but let's look at what happens with the background color. Here is the first pass through the loop, using primary-color as $color:

.styleguide .primary-color {
  background-color: map-get($site-colors, primary-color);
  &:after {
    content: "variable name is primary-color";
  }
}

As you can see in this intermediate step, we are able to use map-get($site-colors, primary-color) to programmatically pass our color name into a function and get a value back. Without maps we'd be stuck waiting for $#{$color} to be supported (which will probably never happen), or, in the case of my project, writing all 20 site color classes out by hand!

Make it awesomer with maps

Astute readers might realize that I am still doing things the hard way. I created a map of colors, and then duplicated their names in a list called $styleguide-colors. We can skip that middle step and greatly simplify our code if we want to print out every single value in the map.

$site-colors: (
  primary-color: #111111,
  secondary-color: #222222,
  tertiary-color: #333333,
);

@each $color, $value in $site-colors {
  .styleguide .#{$color} {
    background-color: $value;
    &:after {
      content: "variable name is #{$color}";
    }
  }
}

Now, instead of passing a list into the @each loop, we pass the entire map. We can do this with the following pattern: @each $key, $value in $map. Each iteration of the loop has access to both the key primary-color AND the value #111111, so we don't even need the map-get function.

The ability to 'call variables with variables' is incredibly useful for creating these programmatic classes, and is a foundational process upon which we can start to build more complex systems. Be sure to check out parts 1 and 2 of my Sass Maps blog series!

Categories: FLOSS Project Planets

Martin Pitt: Booting Ubuntu with systemd: Test packages available

Planet Debian - Tue, 2014-04-22 12:54

On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I didn’t yet go through /etc/init/*.conf with a fine-toothed comb to check which upstart jobs need to be ported; that’s now part of the TODO list.

So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

sudo add-apt-repository ppa:pitti/systemd
sudo apt-get update
sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.
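Not from the original post, but a quick way to confirm which init system you actually booted with is to look at the name of PID 1:

```shell
# Print the command name of process 1: "systemd" if the init= option took
# effect, "init" if the machine still booted with upstart.
ps -p 1 -o comm=
```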

For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub once more.
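The /etc/default/grub edits above can be scripted. Here is a sketch, run against a sample file rather than the real one (the GRUB_CMDLINE_LINUX_DEFAULT contents are made up for the demo):

```shell
# Demo on a sample file; on a real system you would edit /etc/default/grub
# itself and then run "sudo update-grub".
cat > /tmp/default-grub <<'EOF'
#GRUB_HIDDEN_TIMEOUT=0
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
EOF
# Append init=/lib/systemd/systemd to the default kernel command line.
sed -i 's|^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"|GRUB_CMDLINE_LINUX_DEFAULT="\1 init=/lib/systemd/systemd"|' /tmp/default-grub
# Restore the hidden-timeout line that was commented out earlier.
sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^#//' /tmp/default-grub
cat /tmp/default-grub
```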

I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now.

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20140422 ('세월호') released

GNU Planet! - Tue, 2014-04-22 12:37

GNU Parallel 20140422 ('세월호') has been released. It is available for download at: http://ftp.gnu.org/gnu/parallel/

New in this release:

  • --pipepart is a highly efficient alternative to --pipe if the input is a real file and not a pipe.
  • If using --cat or --fifo with --pipe the {} in the command will be replaced with the name of a physical file and a fifo respectively containing the block from --pipe. Useful for commands that cannot read from standard input (stdin).
  • --controlmaster has gotten an overhaul and is no longer experimental.
  • --env is now copied when determining CPUs on remote system. Useful for copying $PATH if parallel is not in the normal path.
  • --results now chops the argument if the argument is longer than the allowed path length.
  • Build now survives if pod2* are not installed.
  • The git repository now contains tags of releases.
  • Bug fixes and man page updates.
About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
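As a quick illustration (my own sketch, not from the announcement), here is a shell loop and its GNU Parallel equivalents; -k keeps the output in input order:

```shell
# A sequential shell loop:
#   for name in alpha beta gamma; do echo processed "$name"; done
# The same jobs run by GNU Parallel, reading one argument per input line:
printf 'alpha\nbeta\ngamma\n' | parallel -k echo processed {}
# Or with the arguments supplied directly on the command line via :::
parallel -k echo processed {} ::: alpha beta gamma
```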

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your commandline will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.


About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

Categories: FLOSS Project Planets

Gábor Hojtsy: Drupal Developer Days 2014 Organizers Report

Planet Drupal - Tue, 2014-04-22 12:21

The organizer team is still energized after our experience putting together Drupal Dev Days Europe 2014 in Szeged, Hungary between 24 and 30 March.

Several people asked about details and we wanted to document the event for future event organizers to share what worked best for us. We prepared a report for you so if you experienced Drupal Dev Days Szeged, you can look behind the curtain a bit, or if you heard about it, you can see what we did to pull off an event like this. If you were not there and did not hear about it, we included several feedback references as well to give you an idea.

Do you want to see tweets and articles like those about your event? Read the report for our tips!

We definitely did not do everything right but we hope we can help people learn from the things we did right.

Excuse us if the report is a bit too long; we attempted to pack useful information into every single sentence to make reading it worth your time. Send questions and comments to the team.

Categories: FLOSS Project Planets

Some proposals for discussion about future releases

Planet KDE - Tue, 2014-04-22 11:04

What I wrote to the kde-community mailing list today:

Good morning

With the KDE Frameworks 5 [1] and Plasma [2] well underway, the only part of
the old KDE Software Collection without a release plan or schedule for the
Qt5/KF5 port is the KDE Applications. Beneath these big changes in a lot of
our software there are quite a few other new and exciting things:
- New Qt based software like GCompris and Kronometer
- New software for the web like Bodega and WikiFM
- Even greater hardware projects like the Improv and Vivaldi
- And other things like the new KDE Manifesto and our new Visual Design Group

Everything under the KDE umbrella and in the KDE family. I’d like to take this
time to discuss some ideas for future releases of our software. How we could
reorganize it, what we could formalize and where we need decisions. So this
email is the start of a series of proposals to discuss here and to then get an
agreement on.

The following three (or more) proposals are mostly independent:
- Proposal email One: KDE (Core) Apps and Suites
- Proposal email Two: The Sigma Release Days and independent releases
- Proposal email Tre: More formal release and announcement processes
- Proposal email For: More architectures/platforms for KDE’s CI

They will be sent to this mailing list in a minute and are quite short in text
and that on purpose. We can’t yet discuss these things in every detail but we
want to paint the direction in which we are planning to go.

Short disclaimer: With these ideas I’m standing on the shoulders of giants
like you. I don’t want to steal ideas or make it look like I stole them.
These are proposals based on several threads, IRC discussions, personal
discussions and other summaries.

Another thing, not included in these proposals, is the KDE Applications 4.14
release schedule [3] and if we want to make KDE Applications 4.14 an LTS
release till August 2015 (when the Plasma 4.11.x LTS release ends) or if there
should be another 4.15 release. But this discussion should be held on the
release-team mailing list [4].

So thanks for reading, and tell me your opinion and constructive feedback on
these proposals. Try to keep it short and precise, and I’ll try to keep a
record of the numerous opinions and ideas and will post summaries in one or
two weeks.
Details can be discussed in Randa and/or at Akademy. So let’s concentrate on
the bigger ideas.

Best regards and hugs

PS: Don’t send email to this thread with ideas for the proposals mentioned

[1] http://community.kde.org/Frameworks/Epics
[2] http://techbase.kde.org/Schedules/Plasma/2014.6_Release_Schedule
[3] http://techbase.kde.org/Schedules/KDE4/4.14_Release_Schedule
[4] https://mail.kde.org/mailman/listinfo/release-team

Categories: FLOSS Project Planets

Acquia: I’m Kris Vanderwater, Drupal Developer and Acquia’s Developer Evangelist.

Planet Drupal - Tue, 2014-04-22 11:01

I’ve been working at Acquia for a little over two weeks now. The experience has been one I would characterize as “whirlwind” in nature. If you ask new Acquians about their on-boarding experience, the most common comparison is the age old “drinking from a firehose” analogy. I honestly expected, as someone who already knew Drupal, that this might in some way lessen the stream of information to manageable levels. I was wrong. If anything, the firehose is a bit like sticking your toes into the shallow end of the pool, and knowing Drupal already was like “Oh, you know how to swim?”

Categories: FLOSS Project Planets

I am a GSoCer this year

Planet KDE - Tue, 2014-04-22 10:42

Hello planet! Okay, “GSoCer” isn’t even a word..! But I have been selected for Google Summer of Code 2014 with KDE. My project is integrating Plasma Media Center with Plasma Next and porting it to KF5 and Qt5. My mentors are Sinny Kumari and Shantanu Tushar Jha.

This project involves various tasks:

  • Porting the Plasma Media Center library, plugins, and backends to Qt5/KF5.
  • Porting the user interface elements to QtQuick2 and Plasma Next components, or rewriting them where needed.
  • Creating a Plasma shell package that wraps the user interface elements.
  • Writing more unit tests for the plasma-mediacenter library.
  • Porting away from deprecated API in KDELibs4Support.

I discussed this project with Shantanu Tushar at conf.kde.in 2014. Given that I had already worked on porting plasmoids to Qt5/KF5 in Plasma Next during Season of KDE, I found this project a perfect fit for me. Some more things about this project:

  • In my free time I have already ported the plasma-mediacenter library and its unit tests to KF5 and Qt5. The code lives in the frameworks-scratch branch of the plasma-mediacenter repo.
  • I have also ported the browsing backends and plugins to KF5 and Qt5.
  • This will allow me to focus on the user interface and shell implementation during the coding period.
  • During the community bonding period I will coordinate with the KDE Visual Design Group and the KDE Usability Group on the new design.

Overall, this is going to be a great experience for me, just like Season of KDE. I will get more and more involved with the KDE community during GSoC, and hopefully our users will benefit from it. That’s my wish.

Again, thank you to my mentors, the Plasma team and KDE for giving me the chance to do this during GSoC. I also thank Google for organizing such a nice program.

Categories: FLOSS Project Planets

PyCharm: Announcing PyCharm 3.1.3

Planet Python - Tue, 2014-04-22 10:40

The PyCharm 3.1.3 build 133.1347 has been uploaded and is now available from the download page. It will also be available shortly as a patch update from within the IDE.

This minor bug-fix update delivers a significant debugger fix to properly support Python 3.4 (PY-12317). Since it fixes core Python support, it applies to both the free and open source PyCharm Community Edition and the full-fledged PyCharm Professional Edition.

Should you have any problems or queries with this release, please file them in our Issue Tracker.

So what’s coming up next? We’re still working on PyCharm 3.4. The first PyCharm 3.4 Early Access Preview build is almost ready to be revealed, and we aim to release it this week.

Develop with pleasure!
-PyCharm team

Categories: FLOSS Project Planets

Europython: You need some good reasons for going to EuroPython 2014?!

Planet Python - Tue, 2014-04-22 09:54

Some good reasons for coming to Berlin for EuroPython 2014:

Last but not least: meet, speak and discuss with hundreds of other Python enthusiasts. Share your knowledge, ask questions, make connections, extend your network, or just enjoy a few days with other Pythonistas.

Get your ticket now!

Categories: FLOSS Project Planets

Open source lecture capture at The University of Manchester

OSS Watch team blog - Tue, 2014-04-22 09:51

We received an excellent contribution to our Open Source Options for Education list this week, in the shape of a real-world usage example of Opencast Matterhorn at the University of Manchester.  The previous examples of Matterhorn usage we’ve had on the list have been documentation of pilot projects, so it’s great to have such an in-depth look at a full scale deployment to refer to.

Cervino (Matterhorn) by Eider Palmou. CC-BY

The case study looks at the University’s movement from a pilot project, with 10 machines and the proprietary Podcast Producer 2 software, to a deployment across 120 rooms using Opencast Matterhorn.  During the roll-out, the University adopted an opt-out policy meaning all lectures are captured by default, collecting around 1000 hours of media per week.

The University has no policy to preferentially select open source or proprietary software. However, Matterhorn gave the University some specific advantages. The lack of licensing or compulsory support costs kept costs down, and combined with the system’s modularity allowed them to scale from an initial roll-out to an institution-wide deployment in a flexible way. The open nature also allowed customisations (such as connecting to the University timetable system) to be added and modified as requirements developed, without additional permissions being sought. These advantages combined to provide a cost-effective solution within a tight timescale.

If your institution uses an open source solution for an educational application or service, let us know about it and we’ll include it in the Open Source Options for Education list.

Categories: FLOSS Research

Frederick Giasson: Configuring and Using OSF Ontology (Screencast)

Planet Drupal - Tue, 2014-04-22 09:47

This screencast will give you a quick introduction to ontologies and explain their role in the Open Semantic Framework (OSF).

You will see how to manage ontologies in OSF using the OSF for Drupal web interface: how to import, create, update, delete and export ontologies, how to search within imported ontologies, and how to manage their permissions.

Finally, you will see how to manage the ontologies themselves: how to create, update and delete classes, properties and named individuals using the web user interface.

Categories: FLOSS Project Planets