FLOSS Project Planets

Last two weeks in Krita — and development builds!

Planet KDE - Thu, 2014-11-27 09:55

Lots and lots and lots (and lots) of new stuff! We’re getting really close to the 2.9 release freeze, which is currently set for December 10th. That means that the current builds are full of fresh code. In other words — they need testing. Now is the time to report bugs and issues!


Let’s start with the Kickstarter topics. Boudewijn has started work on loading and saving masks from PSD files, as well as the PSD layer style feature. The dialogs are mocked up, and part of the loading code has been written. But it’s a huge amount of work, and we’re simply not sure whether we’ll make it before the code freeze hits… Right now, Dmitry is working on fixing the rendering of vector objects; the next step after that is improving the color smudge brush.

All other items are done… New in this build are:

  • Non-destructive transformation masks. Only affine transforms (rotate, resize, skew, move and perspective) get real-time feedback. The Warp, Liquify and Cage tools do have non-destructive transforms, but they are a little expensive on the CPU, so Krita updates them only every three seconds.
  • Shaped gradients (and they are really fast, too!)
  • Easy mask creation: you can import any kind of image as a mask and save masks as images.

In addition to the kickstarter topics that were already in the previous builds:

  • Liquify transform
  • Cage transform
  • Perspective transform
  • Selection transformation
  • Improved warp transform anti-aliasing
  • Improved thin line quality

Additionally, Dmitry has added a way to split the alpha channel out and edit it separately: combined with the Isolate Layer feature, this allows the user to edit the alpha channel separately and save the resulting layer with color channels having real color data even though alpha is zero. Previously, a fully transparent pixel would be set to black.

Check http://docs.unity3d.com/Manual/HOWTO-alphamaps.html on why it’s useful, and here’s a video by Paul Geraskin showing off the feature:

And check out the transformation mask video as well:


Windows and OSX users have an extra treat this time: the builds are made from the multi-view branch. There are still bugs and usability niggles, but it’s usable now, for real work, too. Here’s a screenshot showing how the same image can be opened twice, once with autowrap enabled.

Krita’s G’Mic plugin is immensely powerful, but was hard to use without any preview… So Lukáš Tvrdý has added just that: you can preview the G’Mic effect in several sizes and on the canvas, too, depending on the speed with which the filter works.

Timothée Giet has implemented a long-standing wish for Krita: the ability to separate the opacity and flow parameters — and updated all default brush presets to reflect this improvement.

Scott Petrovic not only continued polishing Krita’s user interface by checking layout, alignment, use of slider widgets, adding tooltips and so on — he also updated the download, store and donation pages for krita.org. (And may we just suggest that the Muses DVD makes an excellent December present? Get yours now!)

Bug Fixes:
  • There was a regression in the saving of Paintop presets, or Brush Presets as you know them: Brush Presets created from a previous preset would not be registered separately. This has now been fixed.
  • Aipek Hyperpen 12000u tablet support has been added.
  • Damien de Lemeny fixed the speed sensor for brushes.
  • Jouni Pentikäinen fixed a regression in the brush editor where checking items was broken, and made it possible to choose whether the background color is set on the image or on the first layer.
  • Timothée Giet added an option to choose whether the eraser brush size is synchronized with the current brush preset.

Both Jouni and Damien have applied for and received their KDE committer accounts!


For Linux users, Krita Lime has been updated. This doesn’t include the multi-view feature yet. OpenSUSE users can use the new OBS repositories created by Leinir:

Windows users can choose between an installer and the zip file. You can unzip the zip file anywhere and start Krita by executing bin/krita.exe. We’re still working on acquiring a Surface Pro 3 so we can fix the tablet offset issue that happens when desktop scaling is enabled.

OSX users can open the dmg and copy krita.app wherever they want. Note that OSX is still not supported. There are OSX-specific bugs and some features are missing.

Categories: FLOSS Project Planets

Jonathan Dowland: PGP transition statement

Planet Debian - Thu, 2014-11-27 08:48

I'm transitioning from my old, 1024-bit DSA PGP key, FD35 0B0A C6DD 5D91 DB7A 83D1 168B 4E71 7032 F238, to my newer, 4096-bit RSA key, E037 CB2A 1A00 61B9 4336 3C8B 0907 4096 06AA AAAA.

If you have signed my old key, I'd be very grateful if you would consider signing my new key. (Thanks in advance!)

This is long overdue! I've had 06AAAAAA since 2009, but it took me a while to get enough signatures on it for me to consider a transition. I still have far more signatures on my older key, owing to attending more conferences when I was using it than since I switched.

This statement, available in plaintext at http://jmtd.net/log/pgp_transition/statement.txt, has been signed with both keys.

I've marked my old key as expiring in around 72 days time, which coincides with my change of job, and will be just short of ten years since I generated it.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQIcBAEBCgAGBQJUdysuAAoJEAkHQJYGqqqquQ0QALVGcn9Wbasg0oh8JeE8rN7h
k7FeZjmGk+ZwfXwo4Eq4pjxKp+TN+r4xBCFSHDRmMaAm9q7aWF/RqkdNiEtioQfJ
SmfzLsoEnSBmS9dr1LvAoxIlUzs/9Lt8EY2tB4NQne3QIu3VyRBjQDvqff6KJgkV
849+GsrJezkg/2VFZkZp1N9y0YwrmezV/1j0N6zR8Y20LlF4YuD3egUJYvbrQ1pA
krJwhiHskXRinqviYMg4CuTZZ3m2wc4fM6uiY5t9taqK90KFgmg73KvW6Rx2L+XP
BUzTlSYHnK+qI23Cng//DthFww2Wf9RofkfRgXk0zAe4+DcoMZzALKtZQA6GrT46
wzsmu459p5nUwwa6BzUvqBzyVtzbtHN5xaFwbjHCPlemsFlH/8LHackJLsBvr2Gp
q292huYu/HNo+Q6OIslFX6rc5AR3HKdbU2DxYMPAfI+CEtW1XwetK8ADSZc1v54C
+sc+Yb2kAleLX2xMIfS9JqvcKOP2Gbo7nTdRPnS2jZRWOv8JxVfFiSzHVODtlLfk
AmRvms5rQ2nPnB21yOd+DP2sACVKY0MS7/UwMh2hoQLR/bSu/jRh24n3WHDfQWtF
Jp4AD80ABFV2t/pSP1L70HlxwPmOzra8AVzCdXMOT/SdT387rSM9fvue4FY5goT+
/pResJSl9pAbBCtjOFJoiEYEARECAAYFAlR3Ky4ACgkQFotOcXAy8jiLawCgofsp
ggze/iSpfkyeL0vhXi6N/WMAoLQogFQY+VaQrVP3lGl1VVh4jw0W
=JBA4
-----END PGP SIGNATURE-----
Categories: FLOSS Project Planets

Cheppers blog: Acquia Certified Developer exam - passed!

Planet Drupal - Thu, 2014-11-27 07:21

We are proud to announce that Cheppers now has three Acquia Certified Developers!
This Monday Mau, Attila and Andor all passed the exam held by Acquia, and we are very proud of them.

Categories: FLOSS Project Planets

Kristian Polso: Crawling the top 15,000 Drupal websites

Planet Drupal - Thu, 2014-11-27 05:01
So I crawled the top 1,000,000 websites from Alexa, looking for all of the Drupal websites (and other popular CMS's). Here are the results.
Categories: FLOSS Project Planets

Joachim's blog: A git-based patch workflow for drupal.org (with interdiffs for free!)

Planet Drupal - Thu, 2014-11-27 03:39

There's been a lot of discussion about how we need github-like features on d.org. Will we get them? There are definitely many improvements in the pipeline to the way our issue queues work. Whether we actually need to replicate github is another debate (and my take on it is that I don't think we do).

In the meantime, I think that it's possible to have a good collaborative workflow with what we have right now on drupal.org, with just the issue queue and patches, and git local branches. Here's what I've gradually refined over the years. It's fast, it helps you keep track of things, and it makes the most of git's strengths.

A word on local branches

Git's killer feature, in my opinion, is local branches. Local branches allow you to keep work on different issues separate, and they allow you to experiment and backtrack. To get the most out of git, you should be making small, frequent commits.

Whenever I do a presentation on git, I ask for a show of hands of who's ever had to bounce on CMD-Z in their text editor because they broke something that was working five minutes ago. Commit often, and never have that problem again: my rule of thumb is to commit any time that your work has reached a state where if subsequent changes broke it, you'd be dismayed to lose it.

Starting work on an issue

My first step when I'm working on an issue is obviously:

  git pull

This gets the current branch (e.g. 7.x, 7.x-2.x) up to date. Then it's a good idea to reload your site and check it's all okay. If you've not worked on core or the contrib project in question in a while, then you might need to run update.php, in case new commits have added updates.

Now start a new local branch for the issue:

  git checkout -b 123456-foobar-is-broken

I like to prefix my branch name with the issue number, so I can always find the issue for a branch, and find my work in progress for an issue. A description after that is nice, and as git has bash autocompletion for branch names, it doesn't get in the way. Using the issue number also means that it's easy to see later on which branches I can delete to unclutter my local git checkout: if the issue has been fixed, the branch can be deleted!
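As a quick illustration of why the issue number in the branch name helps with later cleanup (a sketch in a throwaway repository; the issue number is made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email me@example.com
git config user.name "Me"
git commit -q --allow-empty -m "initial commit"
# Branch named after the (made-up) issue number, as described above
git checkout -q -b 123456-foobar-is-broken
git checkout -q -                       # back to the original branch
git branch --list '123456-*'            # find every branch for issue 123456
git branch -D 123456-foobar-is-broken   # issue fixed upstream: unclutter
```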

So now I can go ahead and start making commits. Because a local branch is private to me, I can feel free to commit code that's a total mess. So something like:

  // Commented-out earlier approach that didn't quite work right.
  $foo += $bar;
  // Badly-formatted code that will need to be cleaned up.
  if($badly-formatted_code) { $arg++; }

That last bit illustrates an important point: commit code before cleaning up. I've lost count of the number of times that I've got it working, cleaned up, and then broken it because I've accidentally removed an important line that was lost among the cruft. So as soon as code is working, I make a commit, usually with a message something like 'TOUCH NOTHING IT WORKS!'. Then, start cleaning up: remove the commented-out bits, the false starts, the stray code that doesn't do anything. (This is where you find out it actually does, and breaks everything: but that doesn't matter, because you can just revert to a previous commit, or even use git bisect.)

Keeping up to date

Core (or the module you're working on) doesn't stay still. By the time you're ready to make a patch, it's likely that there'll be new commits on the main development branch (with core it's almost certain). And prior to that, there may be commits that affect your work in some way: API changes, bug fixes that you no longer need to work around, and so on.

Once you've made sure there's no work currently uncommitted (either use git stash, or just commit it!), do:

git fetch
git rebase BRANCH

where BRANCH is the main development branch that is being committed to on drupal.org, such as 8.0.x, 7.x-2.x, and so on.

(This is arguably one case where a local branch is easier to work with than a github-style forked repository.)

There's lots to read about rebasing elsewhere on the web, and some will say that rebasing is a terrible thing. It's not, when used correctly. It can cause merge conflicts, it's true. But here's another place where small, regular commits help you: small commits mean small conflicts, that shouldn't be too hard to resolve.
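To make the mechanics concrete, here is a toy rebase in a throwaway repository (branch and commit names are invented; in real use the base branch would be the drupal.org development branch):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email me@example.com
git config user.name "Me"
base=$(git symbolic-ref --short HEAD)   # usually master or main
echo one > a.txt; git add a.txt; git commit -qm "upstream commit 1"
git checkout -q -b 123456-foobar-is-broken
echo two > b.txt; git add b.txt; git commit -qm "my work on the issue"
git checkout -q "$base"
echo three > c.txt; git add c.txt; git commit -qm "upstream commit 2"
git checkout -q 123456-foobar-is-broken
git rebase -q "$base"   # replay "my work" on top of the new upstream commit
git log --oneline       # small commits mean small conflicts, if any
```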

Making a patch

At some point, I'll have code I'm happy with (and I'll have made a bunch of commits whose log messages are 'clean-up' and 'formatting'), and I want to make a patch to post to the issue:

  git diff 7.x-1.x > 123456.PROJECT.foobar-is-broken.patch

Again, I use the issue number in the name of the patch. Tastes differ on this. I like the issue number to come first. This means it's easy to use autocomplete, and all patches are grouped together in my file manager and the sidebar of my text editor.

Reviewing and improving on a patch

Now suppose Alice comes along, reviews my patch, and wants to improve it. She should make her own local branch:

  git checkout -b 123456-foobar-is-broken

and download and apply my patch:

  patch -p1 < 123456.PROJECT.foobar-is-broken.patch

(Though I would hope she has a bash alias for 'patch -p1' like I do. The other thing to say about the above is that while wget is working at downloading the patch, there's usually enough time to double-click the name of the patch in its progress output and copy it to the clipboard so you don't have to type it at all.)

And finally commit it to her branch. I would suggest she uses a commit message that describes it thus:

  git commit -m "joachim's patch at comment #1"

(Though again, I would hope she uses a GUI for git, as it makes this sort of thing much easier.)

Alice can now make further commits in her local branch, and when she's happy with her work, make a patch the same way I did. She can also make an interdiff very easily, by doing a git diff against the commit that represents my patch.
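That interdiff step can be sketched like this (file names and commit messages are illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email alice@example.com
git config user.name "Alice"
echo "original" > code.txt; git add code.txt; git commit -qm "initial"
echo "joachim's fix" >> code.txt
git commit -qam "joachim's patch at comment #1"
echo "alice's improvement" >> code.txt
git commit -qam "further improvements"
# The interdiff is simply a diff against the commit holding the patch:
git diff HEAD~1 > interdiff.txt
cat interdiff.txt
```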

Incorporating other people's changes to ongoing work

All simple so far. But now suppose I want to fix something else (patches can often bounce around like this, as it's great to have someone else to spot your mistakes and to take turns with). My branch looks like it did at my patch. Alice's patch is against the main branch (for the purposes of this example, 7.x-1.x).

What I want is a new commit on the tip of my local branch that says 'Alice's changes from comment #2'. What I need is for git to believe it's on my local branch, but for the project files to look like the 7.x-1.x branch. With git, there's nearly always a way:

  git checkout 7.x-1.x .

Note the dot at the end. This is the filename parameter to the checkout command, which tells git that rather than switch branches, you want to checkout just the given file(s) while staying on your current branch. And that the filename is a dot means we're doing that for the entire project. The branch remains unchanged, but all the files from 7.x-1.x are checked out.
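A quick demonstration of this trick in a throwaway repository (here the repository's base branch stands in for 7.x-1.x):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email me@example.com
git config user.name "Me"
base=$(git symbolic-ref --short HEAD)   # stands in for 7.x-1.x
echo "main version" > f.txt; git add f.txt; git commit -qm "mainline"
git checkout -q -b 123456-foobar-is-broken
echo "my version" > f.txt; git commit -qam "my patch"
# Check out the *files* of the main branch while staying on this branch:
git checkout "$base" -- .
git symbolic-ref --short HEAD   # still 123456-foobar-is-broken
cat f.txt                       # but the file matches the main branch again
```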

I can now apply Alice's patch:

  patch -p1 < 123456.2.PROJECT.foobar-is-broken.patch

(Alice has put the comment ID after the issue ID in the patch filename.)

When I make a commit, the new commit goes on the tip of my local branch. The commit diff won't look like Alice's patch: it'll look like the difference between my patch and Alice's patch: effectively, an interdiff.

  git commit -m "Alice's patch at comment #2"

I can now do a diff as before, post a patch, and work on the issue advances to another iteration.

Here's an example of my local branch for an issue on Migrate I've been working on recently. You can see where I made a bunch of commits to clean up the documentation to get ready to make a patch. Following that is a commit for the patch the module maintainer posted in response to mine. And following that are a few further tweaks that I made on top of the maintainer's patch, which I then of course posted as another patch.

Improving on our tools

Where next? I'm pretty happy with this workflow as it stands, though I think there's plenty of scope for making it easier with some git or bash aliases. In particular, applying Alice's patch is a little tricky. (Though the stumbling block there is that you need to know the name of the main development branch. Maybe pass the script the comment URL, and let it ask d.org what the branch of that issue is?)
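One possible shape for such a bash helper (the function name, and the assumption that every patch applies cleanly with -p1, are my own invention):

```shell
# Hypothetical helper: fetch a patch from an issue comment and commit it
# onto the current local branch with a message crediting its author.
# Usage: dapply URL "author's patch at comment #N"
dapply() {
  local url="$1" msg="$2"
  wget -q "$url" || return 1
  patch -p1 < "$(basename "$url")" || return 1
  git add -A
  git commit -m "$msg"
}
```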

Beyond that, I wonder if any changes can be made to the way git works on d.org. A sandbox per issue would replace the passing around of patch files: you'd still have your local branch, and merge in and push instead of posting a patch. But would we have one single branch for the issue's development, which then runs the risk of commit clashes, or start a new branch each time someone wants to share something, which adds complexity to merging? And finally, sandboxes with public branches mean that rebasing against the main project's development can't be done (or at least, not without everyone knowing how to handle the consequences). The alternative would be merging in, which isn't perfect either.

The key thing, for me, is to preserve (and improve) the way that so often on d.org, issues are not worked on by just one person. They're a ball that we take turns pushing forward (snowball, Sisyphean rock, take your pick depending on the issue!). That's our real strength as a community, and whatever changes we make to our toolset have to be made with the goal of supporting that.

Categories: FLOSS Project Planets

Keith Packard: Black Friday 2014

Planet Debian - Thu, 2014-11-27 02:47
Altus Metrum's 2014 Black Friday Event


Altus Metrum announces two special offers for "Black Friday" 2014.

We are pleased to announce that both TeleMetrum and TeleMega will be back in stock and available for shipment before the end of November. To celebrate this, any purchase of a TeleMetrum, TeleMega, or EasyMega board will include, free of charge, one each of our 160, 400, and 850 mAh Polymer Lithium Ion batteries and a free micro USB cable!

To celebrate NAR's addition of our 1.9 gram recording altimeter, MicroPeak, to the list of devices approved for use in contests and records, and help everyone get ready for NARAM 2015's altitude events, purchase 4 MicroPeak boards and we'll throw in a MicroPeak USB adapter for free!

These deals will be available from 00:00 Friday, 28 November 2014 through 23:59 Monday, 1 December, 2014. Only direct sales through our web store at http://shop.gag.com are included; no other discounts apply.

Find more information on all Altus Metrum products at http://altusmetrum.org.

Thank you for your continued support of Altus Metrum in 2014. We continue to work on more cool new products, and look forward to meeting many of you on various flight lines in 2015!

Categories: FLOSS Project Planets

PreviousNext: Lightning talk - Drupal 8's Third Party Settings Interface

Planet Drupal - Wed, 2014-11-26 21:22

During this weeks developers' meeting our lightning talk was all about Drupal 8's ThirdPartySettingsInterface.

Here's the video introduction to this powerful new feature in Drupal.

Categories: FLOSS Project Planets

Gardening Recipes

Planet KDE - Wed, 2014-11-26 21:02

You thought this would have something to do with working outside and/or cooking something. You were wrong.

As Albert blogged previously the KDE gardening team's love project this time is KRecipes.
The mailing list has been moved.
The website content has been added to userbase.
The 2.0 release is on upload.kde.org, soon to be moved into place (a new place since download.kde.org hasn't done a krecipes release before).

Anyone who would like a relatively small way to quickly help KRecipes out: it could use some patches on reviewboard (the group has been added, and it sends to the new mailing list already) to port away from Qt3Support classes. Some other ideas off the top of my head:

1. Make it use kunitconversion to convert metric units to imperial units for those of us stuck in countries that use imperial measuring systems.
2. Freshen up the ui a bit so it looks less like a database viewer and more like a recipe viewer/editor.

Categories: FLOSS Project Planets

Reminder: Desktops DevRoom @ FOSDEM 2015

Planet KDE - Wed, 2014-11-26 19:14

We are less than 10 days away from the deadline for the Desktops DevRoom at FOSDEM 2015, the largest Free and Open Source event in Europe.

Do you think you can fill a room with 200+ people out of 6,000+ geeks? Prove it!

Check the Call for Talks for details on how to submit your talk proposal about anything related to the desktop:

  • Development
  • Deployment
  • Community
  • SCM
  • Software distribution / package managers
  • Why a particular default desktop on a prominent Linux distribution
  • etc


Categories: FLOSS Project Planets

Pau Garcia i Quiles: Reminder: Desktops DevRoom @ FOSDEM 2015

Planet Debian - Wed, 2014-11-26 19:14

We are less than 10 days away from the deadline for the Desktops DevRoom at FOSDEM 2015, the largest Free and Open Source event in Europe.

Do you think you can fill a room with 200+ people out of 6,000+ geeks? Prove it!

Check the Call for Talks for details on how to submit your talk proposal about anything related to the desktop:

  • Development
  • Deployment
  • Community
  • SCM
  • Software distribution / package managers
  • Why a particular default desktop on a prominent Linux distribution
  • etc


Categories: FLOSS Project Planets

Acquia: Part 2 – Cal Evans and Jeffrey A. "jam" McGuire talk open source

Planet Drupal - Wed, 2014-11-26 18:53

Voices of the ElePHPant / Acquia Podcast Ultimate Cage Match Part 2 - I had the chance to try to pull Cal Evans out of his shell at DrupalCon Amsterdam. After a few hours, he managed to open up and we talked about a range of topics we have in common. In this part of our conversation we talk about 'Getting off the Island', inter-project cooperation in PHP and Drupal's role in that; the reinvention and professionalization of PHP core development; decoupled, headless Drupal 8; PHP and the LAMP stack as tools of empowerment and the technologists' responsibility to make devices and applications that are safe, secure, and private by default.

Categories: FLOSS Project Planets

Adrien Grand: lz4-java 1.3.0 is out

Planet Apache - Wed, 2014-11-26 18:06

A new release of lz4-java is out and already available on Maven Central. As usual, documentation and benchmarks (compression, decompression and hashing) are published at http://jpountz.github.io/lz4-java/1.3.0/.

So what’s new? This long overdue release mainly includes 3 new features:


xxh64

xxhash added a 64-bit hash function as part of r35, which, just like xxh32, is a high-quality hash function and is very fast (especially on 64-bit platforms).

ByteBuffer support

Something which is a bit annoying in Java is that there are two ways to store sequences of bytes: byte[] arrays and direct ByteBuffers. New APIs that work on top of the ByteBuffer API have been added so that you can compress, decompress or hash them without having to first copy their content to a byte[].

Configurable compression levels for lz4 hc

In r113, lz4 hc gained support for configurable compression levels. This is now available to the Java API too.

A more complete list of the changes is available on Github.

Special thanks to Branimir Lambov for the ByteBuffer support and to Linnaea Von Lavia for having added support for xxh64 and configurable compression levels to lz4 hc!

Happy compressing and hashing!

Categories: FLOSS Project Planets

Nathan Lemoine: My Ideal Python Setup for Statistical Computing

Planet Python - Wed, 2014-11-26 15:14
I’m moving more and more towards Python only (if I’m not there already). So I’ve spent a good deal of time getting the ideal Python IDE setup going. One of the biggest reasons I was slow to move away from … Continue reading →
Categories: FLOSS Project Planets

Four Kitchens: Extracting data from Drupal entities the right way

Planet Drupal - Wed, 2014-11-26 13:54

If you’ve ever had to extract data from Drupal entities you know it can be a painful process. This post presents a solution for distilling Drupal entities into human-readable documents that can be easily consumed by other modules and services.

Projects Drupal
Categories: FLOSS Project Planets


Planet KDE - Wed, 2014-11-26 12:36

Recently I wrote about my so-called "lightweight project management policy". I am going to start slowly and present a small side-project: Colorpick.

Colorpick is a color picker and contrast checker. I originally wrote it to help me check and fix the background and foreground colors of the Oxygen palette to ensure text was readable. Since then I have been using it to steal colors from various places and as a magnifier to inspect tiny details.

The main window looks like this:

Admittedly, it's a bit ugly, especially the RGB gradients (KGradientSelector and the Oxygen style do not play well together). Nevertheless, it does the job, which is what side-projects are all about.

Here is an annotated image of the window:

  1. The current color: clicking it brings up the standard KDE color dialog. The main reason it's here is that it can be dragged: drag the color and drop it on any application which supports color.

  2. The color in hexadecimal.

  3. Luminance buttons: click them to adjust the luminance of the color.

  4. Color picker: brings up the magnifier to pick a color from the screen. One nice thing about this magnifier is that it can be controlled from the keyboard: roughly move the mouse to the area where you want to pick a color, then position the picker precisely using the arrow keys. When the position is OK, press Enter to pick the color. Pressing Escape or right-clicking closes the magnifier.

    Picking the color of the 1-pixel door knob from the home icon. The little inverted-color square in the center shows which pixel is being picked.

  5. Copy button: clicking this button brings a menu with the color expressed in different formats. Selecting one entry copies the color to the clipboard, ready to be pasted.

  6. RGB sliders: not much to say here. Drag the cursors or enter values, your choice.

  7. Contrast test text: shows some demo text using the selected background and foreground colors, together with the current contrast value. It lets you know if your contrast is good enough according to http://www.w3.org/TR/WCAG20/#visual-audio-contrast.

Interested? The project is on GitHub at https://github.com/agateau/colorpick. Get it with git clone https://github.com/agateau/colorpick then follow the instructions from the INSTALL.md file.

Categories: FLOSS Project Planets

Rich Bowen: ApacheCon Budapest 2014

Planet Apache - Wed, 2014-11-26 11:31

Last week, the Apache Software Foundation, with the help of the Linux Foundation event team, hosted ApacheCon Europe in lovely Budapest, Hungary at the gorgeous Corinthia hotel.

If my count is right, this was the 24th event to bear the name ‘ApacheCon’, and the 8th time we’ve done it in Europe. Also, we were celebrating the 15th anniversary of the Apache Software Foundation, which incorporated in June of 1999.

Every ApacheCon has its own set of memories, from Douglas Adams pacing the stage in London, to the ApacheCon Jam Sessions in Dublin, to the Segway tours in San Diego, to the funeral march in New Orleans. And Budapest was no different – a wonderful event with lots of great memories.

On Sunday night, I had dinner with the TAC’ers. The Apache Travel Assistance Committee is a program by which we get people to ApacheCon who could otherwise not afford to be there. This is critical to the mission of the ASF, because it builds the community in an inclusive way, rather than limiting it to people with the funds to travel. TAC recipients have to give back a little – they provide session chair services, introducing speakers and counting attendees. A large percentage of our former TAC recipients have become deeply involved in the ASF, more than paying off the investment we make in them.

Although I didn’t try the Tripe And Trotters on the buffet line, I did enjoy great conversation with old friends and new ones around the table.

Monday morning, I opened the conference with the State Of The Feather keynote – our annual report on what the ASF has done with sponsor dollars and volunteer time over the last year, and some thoughts about where we’re going in the next 15 years. The latter is, of course, very difficult in an organization like the ASF, where projects, not the Foundation leadership, make all of the technical decisions. However, David Nalley, the VP of Infrastructure, had some pretty specific ideas of what we have to do in terms of Infrastructure investment to ensure that we’re still able to support those projects, which are being added at about 1.5 a month, for the next 15 years and beyond.

After the State of the Feather, I had the enormous privilege to stay on the stage with Hugh Howey to discuss the parallels between self publishing and open source software development. I’ve got another blog post in the works specifically about that, so stay tuned, and I’ll add a link here when it’s ready. Any day that starts with me hanging out on stage with a favorite author in front of 300 of my closest friend is a good day.

Once the sessions started, everyone went their separate ways, and I gave several talks about the Apache httpd project. httpd has been my main focus at Apache for 15 years, and although it’s faded into the background behind more exciting projects like Spark, Hadoop, CloudStack, Solr, and so on, it’s still the workhorse that powers more than half of the websites you’ll ever see, so there’s always a decent audience that turns out to these talks, which is very gratifying.

One of my talks was more focused on the business of doing documentation and “customer support” in the open source world. My “RTFM? Write a better FM!” talk discusses the RTFM attitude that exists in so many open source software communities, and how destructive it is to the long-term health of the projects. I’ve got another blog post in the works specifically about that, too, and I’ll add a link here when it’s ready.

Tuesday and Wednesday were a whirlwind of sessions, meetings – both formal and informal – and meals with friends, colleagues, and newly-met conference attendees. As a board member, I’d sometimes get pulled into project community discussions to offer the board’s perspective on things. As conference chair, I had numerous discussions about the upcoming event – Austin, Texas, April 13-17 – and the next Europe event – stay tuned, announcement coming soon!

Session highlights during the week include:

  • Shane Curcuru’s talks on trademarks, copyrights, and protecting the Apache brand.
  • Jesus Barahona’s talk about the statistical analysis work he’s done for Cloudstack, and other projects, and how it can be used to support and encourage community growth.
  • Pierre Smits’ case study talk about OFBiz and beer, which I missed because I was speaking at the time, but which I heard was amazing.
  • Joe Brockmeier’s talk about Docker, which was apparently the best-attended talk of the entire event.

Although we didn’t record the talks this year (if you’re interested in sponsoring that for next time, get in touch – rbowen@apache.org), you can see the slides for most of these talks on the conference website.

On Monday night we had a birthday cake for the ASF, and I got all emotional about it. The ASF has been hugely influential in so many aspects of my life, from my amazing friends to my amazing job, and it’s such an honor to serve the Foundation in the capacity of conference chair. I look forward to the next 15 years and seeing where we go.

And then, so fast, it was Wednesday evening. David Nalley gave his keynote about the value of the Apache Software Foundation. While I was expecting a number – something like 3 trillion dollars or something – instead, he talked about the many ways that the ASF adds value to companies, to individuals, and to the world as a whole. A truly inspiring talk, and it made me incredibly proud to be associated with the ASF. Bror Salmelin then talked about the Open Innovation 2.0 project at the European Commission to close out the formal portion of our event.

The lightning talks were a big hit this time around, with a great mix of serious and lighthearted talks, all in five minutes or less, MC’ed by the inimitable Joe Brockmeier.

On the whole, I was very pleased with this conference. If there’s anything that disappointed me about the conference, it’s only the number of old friends who couldn’t make it. I hope that everyone who couldn’t make it to Budapest is able to come to Austin to celebrate the 25th ApacheCon, and the 20th anniversary of the first release of the Apache HTTP Server!


[Note: Some of these photos are mine, and are on Flickr. Some of them are from the Linux Foundation, and are also on Flickr.]

Categories: FLOSS Project Planets

Martijn Faassen: Morepath 0.9 released!

Planet Python - Wed, 2014-11-26 11:30

Yesterday I released Morepath 0.9 (CHANGES)!

What is Morepath? Morepath is a Python web framework. It tries to be especially good at implementing modern, RESTful backends. It is very good at creating hyperlinks. It is easy to use, but still lets you write flexible, maintainable and reusable code. Morepath is very extensively documented.

This release doesn't involve earth-shaking changes like the 0.7 and 0.8 releases did, but it still has an interesting change I'd like to discuss.

Fully qualified links

Morepath from the beginning generated path-based links that look like this:

/documents/foo

In Morepath 0.9 this changed. Now we generate the fully qualified URL, like this:

http://example.com/documents/foo

That's what most other web frameworks do.

The path-based links have advantages: they are shorter, you don't have to worry about the URL scheme, and they are less vulnerable to HTTP Host header attacks. But they do make life slightly harder for REST clients, which need custom code to add the base URL to path-based links. Since Morepath wants to be a good framework for writing RESTful applications, we decided to change the default behavior to full links.
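To make that extra client-side work concrete, here is a minimal standard-library sketch (not Morepath code) of what a REST client has to do with path-based links, assuming it knows the API's base URL:

```python
from urllib.parse import urljoin

def resolve(base_url, link):
    # A fully qualified link passes through unchanged; a path-based
    # link has to be joined with the API's base URL by the client.
    return urljoin(base_url, link)

print(resolve('http://example.com', '/documents/foo'))
# -> http://example.com/documents/foo
print(resolve('http://example.com', 'http://example.com/documents/foo'))
# -> http://example.com/documents/foo
```

With fully qualified links in responses, clients can follow them as-is and this step disappears entirely.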

Morepath wouldn't be Morepath if you couldn't do a few more interesting things.

Just path-based links, please

Want the old behavior back for an application? That's easy:

class MyApp(morepath.App):
    pass


@MyApp.link_prefix()
def myapp_link_prefix(request):
    return ''

Now your application generates path-based links again, like before.

(thanks Denis Krienbühl for contributing this directive to Morepath!)

Proxy support

Morepath by default doesn't obey the HTTP Forwarded header in link generation, which is a good thing, as it would allow various link hijacking attacks if it did. But if you're behind a trusted proxy that generates the Forwarded header you do want Morepath to take it into account. To do so, you install the more.forwarded extension and subclass your (root) application from it:

from more.forwarded import ForwardedApp


class MyApp(ForwardedApp):
    pass

We don't have support yet for the old-style X-Forwarded-Host and X-Forwarded-Proto headers that the Forwarded header replaces; we're open to contributions to more.forwarded!

Linking to external applications

Now we come to a very interesting capability of Morepath: the ability to model and link to external applications.

Let's consider a hypothetical external application. It's hosted on the ubiquitous http://example.com. It has documents listed on URLs like this:

http://example.com/documents/foo
We could of course simply create links to it by concatenating http://example.com/documents and the document id, foo. For such a simple external application that is probably the best way to go. So what I'm going to describe next is total overkill for such a simple example, but I have to use a simple example to make it comprehensible at all.
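A naive version of that concatenation approach might look like this (the helper name is just for illustration):

```python
def external_document_link(doc_id):
    # Plain string concatenation: good enough for a simple,
    # stable external application with a single URL pattern.
    return 'http://example.com/documents/' + doc_id

print(external_document_link('foo'))
# -> http://example.com/documents/foo
```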

Here's how we'd go about modeling the external site:

class ExternalDocumentApp(morepath.App):
    pass


class ExternalDocument(object):
    def __init__(self, id):
        self.id = id


@ExternalDocumentApp.path(model=ExternalDocument, path='/documents/{id}')
def get_external_document(id):
    return ExternalDocument(id)

We don't declare any views for ExternalDocument, as our code is not going to create representations for the external document, just links to it. We need to mount it into our actual application code so that we can use it:

@App.mount(path='external_documents', app=ExternalDocumentApp)
def mount_external_document_app():
    return ExternalDocumentApp()

Now we set up the link_prefix for ExternalDocumentApp to point to http://example.com:

@ExternalDocumentApp.link_prefix()
def external_link_prefix(request):
    return 'http://example.com'

As you can see, we've hardcoded http://example.com in it. Now if you're in some view code for your App, you can create a link to an ExternalDocument like this:

@App.json(model=SomeModel)
def some_model_default(self, request):
    return {
        'link': request.link(
            ExternalDocument('foo'),
            app=request.app.child('external_documents'))
    }

This will generate the correct link to the external document foo:

http://example.com/documents/foo

Simplification

You can make this simpler by using a defer_links directive for your App (introduced in Morepath 0.7):

@App.defer_links(model=ExternalDocument)
def defer_document(app, obj):
    return app.child('external_documents')

We've now told Morepath that any ExternalDocument objects need to have their link generated by the mounted external_documents app. This allows you to write link generation code that's a lot simpler:

@App.json(model=SomeModel)
def some_model_default(self, request):
    return {
        'link': request.link(ExternalDocument('foo'))
    }

In review

As I said previously, this is total overkill for an external application as simple as the hypothetical one I described. But this technique of modeling an external application can be very useful in specific circumstances:

  • This is declarative code. If you are dealing with many different kinds of links to an external application, it can be worthwhile to model it properly in your application, instead of spreading failure-prone link construction code all over the place.

  • If you have to deal with an external application that for some reason is expected to change its structure (or hostname) in the future. By explicitly modeling what you link to, you can easily adjust all the outgoing links in your application when that change happens.

  • Consider a Morepath application that has a sub-application, mounted into it in the same process. You now decide to run this sub-application in a separate process, with a separate hostname. To do this you break the code out into its own project so you can run it separately.

    In this case you already have declarative link generation to it. In the original project, you create a hollowed-out version of the sub-application that just has the path directives that describe the link structure. You then hardcode the new hostname using link_prefix.

    The code that links to it in the original application will now automatically update to point to the sub-application on the new host.

    This way you can break a larger application into multiple separate pieces pretty easily!


If you've read all the way to the end, I hope you've enjoyed that and aren't completely overwhelmed by these options! Just remember: these are advanced use cases. Morepath grows with your application. It is simple for simple things, but is there for you when you do have more complex requirements.

Categories: FLOSS Project Planets

Bryan Pendleton: The bug it took three dozen engineers to fix

Planet Apache - Wed, 2014-11-26 10:53

Don't miss this nicely presented story about a subtle but devastating performance bug in a database connection pool cache eviction policy choice which was finally solved by a task force of Facebook engineers: Solving The Mystery of Link Imbalance: a Metastable Failure State at Scale

Categories: FLOSS Project Planets

Gunnar Wolf: Guests in the classroom: @Rolman talks about persistent storage and filesystems

Planet Debian - Wed, 2014-11-26 10:49

On November 14, as a great way to say goodbye to a semester, a good friend came to my class again to present a topic to the group; a good way to sum up the contents of this talk is "everything you ever wondered about persistent storage".

As people who follow my blog know, I like inviting my friends to present selected topics in my Operating Systems class. Many subjects will stick better if presented by more than a single viewpoint, and different experiences will surely enrich the group's learning.

So, here is Rolando Cedillo — a full gigabyte of him, spanning two hours (including two hiccups where my camera hit a per-file limit...).

Rolando is currently a RedHat Engineer, and in his long career, he has worked from so many trenches, it would be a crime not to have him! Of course, one day we should do a low-level hardware session with him, as his passion (and deep knowledge) for 8-bit arcades is beyond any other person I have met.

So, here is the full video on my server. Alternatively, you can get it from The Internet Archive.

Categories: FLOSS Project Planets

Code Karate: Drupal 7 File Resumable Upload Module

Planet Drupal - Wed, 2014-11-26 10:04
Episode Number: 181

The Drupal 7 File Resumable Upload Module is a great way to allow your Drupal site to upload large files. This is especially helpful if your server limits the size of files you can upload to your Drupal site. The module simply replaces the standard Drupal file upload field with a better alternative that allows:

Tags: DrupalDrupal 7File ManagementMediaDrupal Planet
Categories: FLOSS Project Planets
Syndicate content