FLOSS Project Planets

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2015

Planet Debian - Mon, 2015-05-18 05:58

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 81.75 work hours were dispatched among 5 paid contributors (20.75 of those were unused hours from Ben and Holger that were re-dispatched to other contributors). Their reports are available:

Evolution of the situation

May has seen a small increase in sponsored hours (66.25 hours per month) and June is going to do even better, with at least one new gold sponsor. We will have no problem sustaining the increased workload this implies, since three Debian developers have joined the team of contributors paid by Freexian (Antoine Beaupré, Santiago Ruano Rincón, Scott Kitterman).

The Jessie release probably brought some attention to the Debian LTS project, since we announced that Jessie will benefit from 5 years of support. Let’s hope that the trend continues in the following months and that we reach our first milestone of funding the equivalent of a half-time position.

In terms of security updates waiting to be handled, the situation is mixed: the dla-needed.txt file lists 28 packages awaiting an update (12 fewer than last month), while the list of open vulnerabilities in Squeeze shows about 60 affected packages in total (4 more than last month). The extra hours helped us make good progress on the packages awaiting an update, but many new vulnerabilities are still waiting to be triaged.

Thanks to our sponsors

The new sponsors of the month are in bold.


Categories: FLOSS Project Planets

Nick Clifton: May 2015 GNU Toolchain Update

GNU Planet! - Mon, 2015-05-18 05:32
Hi Guys,

  There are several things to report this month:

    * GCC now supports targets configured to use the MUSL C library:   http://www.musl-libc.org/


    * The Compiler has a new warning option: -Wmisleading-indentation
  
      This generates warnings when the indentation of the code does not reflect the block structure.  For example:

       if (some_condition ())
          foo ();
          bar ();  /* Gotcha: this is not guarded by the "if".  */

      The warning is disabled by default.


    * The Compiler also has a new shift warning: -Wshift-negative-value
    
      This generates warnings when left shifting a negative value.  The warning is enabled by -Wextra in C99 and C++11 modes (and newer).  The warning can be suppressed by an appropriate cast.  For example:
    
       val |= ~0 << loaded;       // Generates warning
       val |= (unsigned) ~0 << loaded;    // Does not warn


    * GCC supports a new option: -fno-plt

      When compiling position independent code this tells the compiler not to use the PLT for external function calls.  Instead the address is loaded from the GOT and then branched to directly.  This leads to more efficient code by eliminating PLT stubs and exposing the GOT load to optimization.

      Not all architectures support this option, and some other optimization features, such as lazy binding, may disable it.


    * GCC's sanitizer has a new option: -fsanitize=bounds-strict

      This option enables strict instrumentation of array bounds.  Most out of bounds accesses are detected, including flexible array members and flexible array member-like arrays.


    * The AArch64 backend supports a new option to enable a workaround for the ARM Cortex-A53 erratum number 843419.  The workaround itself is implemented in the linker, but it can be enabled via the compiler option:

        -mfix-cortex-a53-843419
    
      Note that specifying -mcpu=cortex-a53 is not enough to enable this option, as not all versions of the A53 require the workaround.


    * The AArch64 backend also supports a new core type of "native".  When used as -mcpu=native or -mtune=native it tells the backend to base its core selection on the host system.  If the compiler cannot recognise the processor of the host system then the option does nothing.


    * The Linker now supports the Intel MCU architecture:  https://groups.google.com/forum/#!topic/ia32-abi/cn7TM6J_TIg


    * GDB 7.9.1 has been released!

      GDB 7.9.1 brings the following fixes and enhancements over GDB 7.9:

     + PR build/18033 (C++ style comment used in gdb/iq2000-tdep.c and gdb/compile/compile-*.c)
     + PR build/18298 ("compile" command cannot find compiler if tools configured with triplet instead of quadruplet)
     + PR tui/18311 (Random SEGV when displaying registers in TUI mode)
     + PR python/18299 (exception when registering a global pretty-printer in verbose mode)
     + PR python/18066 (argument "word" seems broken in Command.complete (text, word))
     + PR pascal/17815 (Fix pascal behavior for class fields with testcase)
     + PR python/18285 (ptype expr-with-xmethod causes SEGV)

Cheers
  Nick
Categories: FLOSS Project Planets

Interview with Evgeniy Krivoshekov

Planet KDE - Mon, 2015-05-18 04:00

Could you tell us something about yourself?

Hi! My name is Evgeniy Krivoshekov, 27 years old, I’m from the far east of Russia, Khabarovsk. I’m an engineer but have worked as sales manager, storekeeper and web programmer. Now I’m a 3d-modeller! I like to draw, read books, comics and manga, to watch fantastic movies and cartoons and to ride my bicycle.

Do you paint professionally, as a hobby artist, or both?

I’m not a pro-artist yet. Drawing is my hobby now but I really want to become a full-time professional artist. I take commissions for drawings occasionally, but not all the time.

What genre(s) do you work in?

Fantasy, still life.

Whose work inspires you most — who are your role models as an artist?

Wah! So many artists who inspire me!

I think that I love not the artists but their works. For example: Peter Han’s drawings in traditional technique; Ilya Kuvshinov’s work in Photoshop and with anime style; Dave Rapoza, an awesome artist who draws in traditional and digital technique in his own, very detailed style; Pascal Campion – his work is full of mood and motion and life! And all those artists who inspire me a little. I like many kinds of art: movies, cartoons, anime, manga and comics, music – all kinds of art inspire me.

How and when did you get to try digital painting for the first time?

Hmmmm… I’m not sure but I think that was in 2007 when my father bought our (my family’s) first computer for learning and studying. I was a student, my sister too, and we needed a computer. My first digital tablet was Genius, and the software was Adobe Photoshop CS2.

What makes you choose digital over traditional painting?

I don’t choose between digital and traditional drawing – I draw with digital and traditional techniques. I’ve been doing traditional drawing since childhood but digital drawing I think I’m just starting to learn.

How did you find out about Krita?

I think it was when I started using Linux about 3-4 years ago. Or when I found out about the artist David Revoy and read about Krita on his website.

What was your first impression?

Ow – it was really cool! Krita’s GUI is like Photoshop but the brushes are like the brushes in Sai, wonderful smudge brushes! It was a very fast program and it was made for Linux. I was so happy!

What do you love about Krita?

The surprisingly freely configurable interface. I used to draw in MyPaint or GIMP, but it was not as easy and comfortable as in Krita. The awesome smudge brushes, the dark theme, Russian support by programmer Dmitriy Kazakov. The wheel with brushes and the color wheel on right-click of the mouse – what a nice idea! The system of dockers.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Managing very high resolution files, the stability and especially ANIMATION! I want to do cartoons, that’s why I want an animation toolkit in Krita. It will be so cool to draw cartoons in Krita as in TV Paint. But Krita is so powerful and free.

What sets Krita apart from the other tools that you use?

I use Blender, MyPaint, GIMP and Krita but I rarely mix them. MyPaint and GIMP I rarely use, only when I really need them. Blender and Krita are my favourite software. I think that I will soon start to combine them for mix-art: 3d-art+hand-drawing.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I think Frog-rider – a sunny, detailed work with an interesting plot about the merchant on the frog. Funny and simple – everything I like.

What techniques and brushes did you use in it?

I used airbrush and circle standard brushes, basic wet brush, fill block and fill circle brushes, ink brush for sketching, my own texture brush and the move tool. That’s all I need for drawing. As regards techniques… sometimes I draw by value, sometimes from a sketch with lines, sometimes black and white with colors underneath (layer blending mode) or with colors without shading – it depends on my mood of the moment.

Where can people see more of your work?

I post my daily traditional and digital pieces on Instagram. Some photos, but many more drawings. More art at DeviantArt and Artstation.

Anything else you’d like to share?

I just want to say that anyone can draw, it’s all a matter of practice!

Categories: FLOSS Project Planets

Web Omelette: Adding new HTML tags in the <head> in Drupal 8

Planet Drupal - Mon, 2015-05-18 03:05

In a previous article I've shown you how you can add new HTML elements to the <head> of your Drupal 7 site. Recently, however, I was working on a Drupal 8 project and encountered the need to do this in D8. It took me a while to figure it out, so I thought I'd share the process with you.

As you know, in Drupal 7 we use drupal_add_html_head() from anywhere in the code to add a rendered element into the <head>. This is done by passing a render array and most of the time you'll use the type #tag. In Drupal 8, however, we no longer have this procedural function so it can be a bit tricky to find out how this is done.

Although it exists in Drupal 7 as well, the #attached key in render arrays really becomes important in D8. We can no longer add any scripts or stylesheets to any page without properly attaching them to render arrays. In my last article I've shown you how to add core scripts to pages in case they were missing (which can happen for anonymous users). In essence, it is all about libraries now, which get attached to render arrays. So that is most of what you'll hear about.

But libraries are not the only thing you can attach to render arrays. You can also add elements to the head of the page in a similar way you'd attach libraries. So if we wanted to add a description meta tag to all of the pages on our site, we could implement hook_page_attachments() like so:

/**
 * Implements hook_page_attachments().
 */
function module_name_page_attachments(array &$page) {
  $description = [
    '#tag' => 'meta',
    '#attributes' => [
      'name' => 'description',
      'content' => 'This is my website.',
    ],
  ];
  $page['#attached']['html_head'][] = [$description, 'description'];
}

In the example above we are just adding a dummy description meta tag to all the pages. You probably won't want to apply that to all the pages though and rather have the content of the description tag read the title of the current node. In this case you can implement hook_entity_view() like so:

/**
 * Implements hook_entity_view().
 */
function demo_entity_view(array &$build, \Drupal\Core\Entity\EntityInterface $entity, \Drupal\Core\Entity\Display\EntityViewDisplayInterface $display, $view_mode, $langcode) {
  if ($entity->getEntityTypeId() !== 'node') {
    return;
  }
  $description = [
    '#tag' => 'meta',
    '#attributes' => [
      'name' => 'description',
      'content' => \Drupal\Component\Utility\SafeMarkup::checkPlain($entity->title->value),
    ],
  ];
  $build['#attached']['html_head'][] = [$description, 'description'];
}

Now you are targeting node entities and using their titles as the content for the description meta tag. And that is pretty much it.

Hope this helps.

Categories: FLOSS Project Planets

Drupal core announcements: Drupal core security release window on Wednesday, May 20

Planet Drupal - Mon, 2015-05-18 01:02
Start: 2015-05-20 (All day) America/New_York
Event type: Online meeting (eg. IRC meeting)
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, May 20.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix/feature release on this date; the next window for a Drupal core bug fix/feature release is Wednesday, June 3.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: FLOSS Project Planets

Adding my blog to Planet KDE

Planet KDE - Mon, 2015-05-18 00:58

Hi folks! So this blog post is kind of a test post. I just git-pushed the RSS feed from my blog to Planet KDE, KDE’s blog aggregator. Although I can see it in the git log here, I just want to make sure that my posts appear in real time on Planet KDE. If this doesn’t work, I’ll have to file a bug on Bugzilla to get my blog added to Planet KDE.

Categories: FLOSS Project Planets

Bruce Snyder: Check Out My Latest X-Rays :: Bruce Snyder's Status

Planet Apache - Sun, 2015-05-17 23:54
On Friday, I paid a visit to my neurosurgeon and I have x-rays to show off! 
My neurosurgeon was happy to see me because it was the first time that he had seen me walking. He happily greeted me at the front desk, which doctors almost never do; he just happened to be there when I walked in. It made me feel pretty good that my surgeon was so happy to see me. After all, this guy sees lots and lots of people who have had surgery. He said he was happy to see me walking because when I last saw him in November I was still in the wheelchair. 
Below you can see the two sets of x-rays -- one from the back and one from the side. From both vantage points, you can easily see the hardware that was inserted. Even though I can feel the hardware in my back, it's still crazy for me to actually see it, especially when I see how deep the screws go into each vertebra. In the view from the back, you can also see the curve in my spine because the hardware is crooked. Oh well, I've been told that spinal surgery is more art than science -- sounds like writing code. Also, if you look closely you will see some little dots in between the L3 and L4 vertebrae. This is a plastic spacer; the dots are metal so it will show up in an x-ray. It's typical for the surgeon to insert a spacer between the vertebrae in place of the disc that had to be removed (the disc was so badly damaged that they had to scrape it out). The spacer keeps the vertebrae the proper distance apart as the bone grows and fills in the space. According to the surgeon, the bone growth between the two vertebrae looks really good.
Based on my recovery and the state of healing in my spine, the surgeon told me that he doesn't want to see me for a year! He said that he feels I'm ahead of the curve and that I should keep doing everything I'm doing. Yay! 
Categories: FLOSS Project Planets

Dirk Eddelbuettel: random 0.2.4

Planet Debian - Sun, 2015-05-17 23:02

A new release of our random package for truly (hardware-based) random numbers as provided by random.org is now on CRAN.

The R 3.2.0 release brought the change to an internal method="libcurl", which we use if available; otherwise the curl::curl() method added in release 0.2.3 is used. We are also a little more explicit about closing connections, and added really basic regression tests -- as it is hard to test draws from a hardware-based RNG.

Courtesy of CRANberries comes a diffstat report for this release. Current and previous releases are available here as well as on CRAN.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Daniel Bader: OS X notifications for your pytest runs

Planet Python - Sun, 2015-05-17 20:00

This article shows you how to use pytest-osxnotify, a plugin for pytest that adds native Mac OS X notifications to the pytest terminal runner.

pytest + OS X notifications = happy developers

pytest-osxnotify is a plugin for the pytest testing tool. It adds OS X notifications to your test runs so you know when a test run completes and whether it failed or succeeded without looking at your terminal window.

This is especially useful when you re-run your tests automatically every time a source file is modified.

A quick example

Installing pytest-osxnotify is easy. Let’s set up a simple example that shows you how to use pytest so that it watches your source files for modifications and re-runs the tests as necessary.

We start by installing pytest, pytest-xdist and pytest-osxnotify [1].

$ pip install pytest pytest-xdist pytest-osxnotify

Let’s also create a simple test file for us to run. Save the following as example_test.py in the current folder.

def test_example1():
    assert True

def test_example2():
    assert True

def test_example3():
    assert True

Now we start the pytest watcher that monitors our source file for modifications and re-runs the tests when necessary.

$ py.test -f example_test.py

That’s it. We can now move our terminal to the background and hack away in our favourite editor knowing that we’ll stay informed about the results of our test runs.

  1. You’ll typically want to install your dependencies into a Python virtualenv so that they don’t pollute your system install. Look here for a good tutorial on using virtualenv. 

Categories: FLOSS Project Planets

Justin Mason: Links for 2015-05-17

Planet Apache - Sun, 2015-05-17 19:58
  • ‘Can People Distinguish Pâté from Dog Food?’

    Ugh.

    Considering the similarity of its ingredients, canned dog food could be a suitable and inexpensive substitute for pâté or processed blended meat products such as Spam or liverwurst. However, the social stigma associated with the human consumption of pet food makes an unbiased comparison challenging. To prevent bias, Newman’s Own dog food was prepared with a food processor to have the texture and appearance of a liver mousse. In a double-blind test, subjects were presented with five unlabeled blended meat products, one of which was the prepared dog food. After ranking the samples on the basis of taste, subjects were challenged to identify which of the five was dog food. Although 72% of subjects ranked the dog food as the worst of the five samples in terms of taste (Newell and MacFarlane multiple comparison, P<0.05), subjects were not better than random at correctly identifying the dog food.

    (tags: pate food omgwtf science research dog-food meat economics taste flavour)

  • Redditor runs the secret Python code in Ex Machina

    and finds:

when you run with python2.7 you get the following: ISBN = 9780199226559. Which is “Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds”. And so now I have a lot more respect for the Director.

    (tags: python movies ex-machina cool books easter-eggs)

  • Metalwoman beer recipe

    via the Dublin Ladies Beer Society ;)

    (tags: metalman metalwoman recipes beer brewing hops dlbs)

Categories: FLOSS Project Planets

Drupal for Government: From spreadsheet to citizen government with Drupal - Volume 1 - Feeds

Planet Drupal - Sun, 2015-05-17 19:31

Thanks to the local Charlottesville GOP we have a FOIA'ed copy of Charlottesville city hall expenses. It's a small window into how our city staff spends money. Without putting any value judgements on the numbers themselves, let's look at how to go from a spreadsheet to pretty charts and maps!

Categories: FLOSS Project Planets

Basic code completion for Rust in KDE's Kate (and later KDevelop)

Planet KDE - Sun, 2015-05-17 18:45
KDE Project:

A few days ago the Rust community announced v1.0 of Rust, their new systems programming language. Having followed the project for some time and finally having used the language for a number of small projects this year, I've come to feel that using Rust is interesting, fun and productive. I'd highly encourage everyone to give it a look now that it's finally considered ready for prime time.

To aid the effort I've put some Sunday hacking time into a basic Rust code completion plugin for Kate today. It's built around Phil Dawes' very nifty Racer, freeing it up to concern itself only with exposing some configuration and getting data provided by Racer into Kate. Not difficult at all.

This is what it looks like in action:

[Screenshot: Completin']

Caveats

The plugin is very basic at the time of writing (which is minutes after getting things working; this Sunday is running out!). The mapping of completion match metadata to Kate's format (from fundamentals like the match type, to more complex features like smart grouping) can likely still be improved. Ditto for the auto-completion behavior (i.e. defining the circumstances in which the completion popup will kick in automatically, as opposed to waiting for manual invocation), and for plain performance.

The plugin also doesn't work in KDevelop 5 yet, although that one's on KDevelop -- it doesn't support the KTextEditor plugin type present in Frameworks 5 yet. I'm told the KDevelop team has this on its agenda.

Syntax highlighting

Both KDE and Rust implement the AwesomeCommunity trait. The Rust community in particular maintains an official Kate syntax highlighting file here. The repository for the Kate plugin includes this as a submodule and installs it by default; if you'd like it not to do that, there's a build system toggle to disable it. Update: With the move of the plugin source (see below), this is currently no longer the case, but the upstream repo now has install instructions!

A MIME type for Rust source code

While the plugin will run for all *.rs files, Kate/KDevelop plugins and syntax highlighting files preferably identify documents by their MIME type. It turns out there isn't a MIME type for Rust source code in shared-mime-info yet, so I've started the process of getting one in. (Update: Merged, along with this and this!)

Getting the plugin

The plugin is currently still hosted in a personal scratch repo of mine, on git.kde.org. You can browse the source, or skip straight to this clone URL:

git://anongit.kde.org/scratch/hein/kterustcompletion.git

This location might change if/when the plugin is promoted to proper project status; I'll update the post should that happen. I've included some install instructions that go over requirements, options and configuration. Note that you need a Rust source tree to get this working. Check them out!

Update: As of May 20th 2015, the plugin is now bundled with Kate in its repository (browse), to be included in the next Kate release. You can get it by building Kate from git as whole, or you can do make install in just the plugin's subdir of your build dir; at this time it works fine with Kate's last stable release. Given all this, I'll only quickly recap some additional info for getting it working here: Once you've enabled the Rust code completion item in the plugin page, a new page will appear in your Kate config dialog. Within are config knobs for the command to run Racer (which you'll therefore need an install of) and the path to a Rust source tree (which you can of course grab here), which Racer needs to do its job. Make sure both are set correctly, and then start editing!

Getting in touch

If you have any comments or requests and the blog won't do (say, you have a cool patch for me), you can find my mail address inside the AUTHORS file included in the repository. Signing up for KDE.org's proper bug tracking and patch submission pathways is still pending :-). Update: After the move to Kate's repository (see above), you can now file bugs and wishes against the plugin via the plugin-rustcompletion component of the kate product on our bugtracker (handy pre-filled form link). You can also submit patches via ReviewBoard now.

Categories: FLOSS Project Planets

Lunar: Reproducible builds: week 3 in Stretch cycle

Planet Debian - Sun, 2015-05-17 18:16

What happened about the reproducible builds effort for this week:

Toolchain fixes

Tomasz Buchert submitted a patch to fix the currently overzealous package-contains-timestamped-gzip warning.

Daniel Kahn Gillmor identified #588746 as a source of unreproducibility for packages using python-support.

Packages fixed

The following 57 packages became reproducible due to changes in their build dependencies: antlr-maven-plugin, aspectj-maven-plugin, build-helper-maven-plugin, clirr-maven-plugin, clojure-maven-plugin, cobertura-maven-plugin, coinor-ipopt, disruptor, doxia-maven-plugin, exec-maven-plugin, gcc-arm-none-eabi, greekocr4gamera, haskell-swish, jarjar-maven-plugin, javacc-maven-plugin, jetty8, latexml, libcgi-application-perl, libnet-ssleay-perl, libtest-yaml-valid-perl, libwiki-toolkit-perl, libwww-csrf-perl, mate-menu, maven-antrun-extended-plugin, maven-antrun-plugin, maven-archiver, maven-bundle-plugin, maven-clean-plugin, maven-compiler-plugin, maven-ear-plugin, maven-install-plugin, maven-invoker-plugin, maven-jar-plugin, maven-javadoc-plugin, maven-processor-plugin, maven-project-info-reports-plugin, maven-replacer-plugin, maven-resources-plugin, maven-shade-plugin, maven-site-plugin, maven-source-plugin, maven-stapler-plugin, modello-maven-plugin1.4, modello-maven-plugin, munge-maven-plugin, ocaml-bitstring, ocr4gamera, plexus-maven-plugin, properties-maven-plugin, ruby-magic, ruby-mocha, sisu-maven-plugin, syncache, vdk2, wvstreams, xml-maven-plugin, xmlbeans-maven-plugin.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Ben Hutchings also improved and merged several changes submitted by Lunar to linux.

Currently untested because in contrib:

  • Dmitry Smirnov uploaded fheroes2-pkg/0+svn20150122r3274-2-2.
reproducible.debian.net

“Thanks to the reproducible-build team for running a buildd from hell.” — gregor herrmann

Mattia Rizzolo modified the script added last week to reschedule a package from Alioth; a reason can now optionally be specified.

Holger Levsen split the package sets page so each set now has its own page. He also added new sets for Java packages, Haskell packages, Ruby packages, debian-installer packages, Go packages, and OCaml packages.

Reiner Herrmann added locales-all to the set of packages installed in the build environment, as it's needed to properly identify variations due to the current locale.

Holger Levsen improved the scheduling so new uploads get tested sooner. He also changed the .json output that is used by tracker.debian.org to list FTBFS issues again, but only issues unrelated to the toolchain or our test setup. Amongst many other small fixes and additions, the graph colors should now be more friendly to red-colorblind people.

The fix for pbuilder given in #677666 by Tim Landscheidt is now used. This fixed several FTBFS for OCaml packages.

Work on rebuilding with a different CPU has continued; a “kvm-on-kvm” build host has been set up for this purpose.

debbindiff development

Version 19 of debbindiff included a fix for a regression when handling info files.

Version 20 fixes a bug when diffing files with many differences toward a last line with no newlines. It also now uses the proper encoding when writing the text output to a pipe, and detects info files better.

Documentation update

Thanks to Santiago Vila, the unneeded -depth option used with find when fixing mtimes has been removed from the examples.

Package reviews

113 obsolete reviews have been removed this week while 77 have been added.

Categories: FLOSS Project Planets

Gaël Varoquaux: Software for reproducible science: let’s not have a misunderstanding

Planet Python - Sun, 2015-05-17 18:00

Note

tl;dr: Reproducibility is a noble cause and scientific software a promising vessel. But an excess of reproducibility can be at odds with the housekeeping required for good software engineering. Code that “just works” should not be taken for granted.

This post advocates for a progressive consolidation effort of scientific code, rather than putting too high a bar on code release.

Titus Brown recently shared an interesting war story in which a reviewer refuses to review a paper until he can run the code on his own files. Titus’s comment boils down to:

“Please destroy this software after publication”.

Note

Reproducible science: Does the emperor have clothes?

In other words, code for a publication is often not reusable. This point of view is very interesting from someone like Titus, who is a vocal proponent of reproducible science. His words triggered some surprises, which led Titus to wonder if some of the reproducible science crowd folks live in a bubble. I was happy to see the discussion unroll, as I think that there is a strong risk of creating a bubble around reproducible science. Such a bubble will backfire.

Replication is a must for science and society

Science advances by accumulating knowledge built upon observations. It’s easy to forget that these observations, and the corresponding paradigmatic conclusions, are not always as simple to establish as the fact that hot air rises: replicating the scientific process many times is what transforms evidence into truth.

One striking example of scientific replication is the on-going effort in psychology to replay the evidence behind well-accepted findings central to current lines of thought in the psychological sciences. It implies setting up the experiments according to the seminal publications, acquiring the data, and processing it to come to the same conclusions. Surprisingly, not everything that was taken for granted holds.

Note

Findings later discredited backed economic policy

Another example, with massive consequences on Joe Average’s everyday life, is the failed replication of Reinhart and Rogoff’s “Growth in a Time of Debt” publication. The original paper, published in 2010 in the American Economic Review, claimed empirical findings linking high public debt to failing GDP growth. In a context of economic crisis, it was used by policy makers as a justification for restricted public spending. However, while pursuing a mere homework assignment to replicate these findings, a student uncovered methodological flaws in the paper. Understanding the limitations of the original study took a while, and discredited the academic backing for the economic doctrine of austerity. Critically, the analysis of the publication was possible only because Reinhart and Rogoff released their spreadsheet, with data and analysis details.

Sharing code can make science reproducible

A great example of sharing code to make a publication reproducible is the recent paper on orthogonalization of regressors in fMRI models, by Mumford, Poline and Poldrack. The paper is a didactic refutation of non-justified data processing practices. The authors made their point much stronger by giving an IPython notebook to reproduce their figures. The recipe works perfectly here, because the ideas underlying the publication are simple and can be illustrated on synthetic data with relatively inexpensive computation. A short IPython notebook is all it takes to convince the reader.

Note

Sharing complex code… chances are it won’t run on new data.

At the other end of the spectrum, a complex analysis pipeline will not be as easy to share. For instance, a feat of strength such as Miyawaki et al’s visual image reconstruction from brain activity requires complex statistical signal processing to extract weak signatures. Miyawaki et al shared the data. They might share the code, but it would be a large chunk of code, probably fragile to changes in the environment (Matlab version, OS…). Chances are that it wouldn’t run on new data. This is the scenario that prompted Titus’s words:

“Please destroy this software after publication”.

I have good news: you can reproduce Miyawaki’s work with an example in nilearn, a library for machine learning on brain images. The example itself is concise, readable and it reliably produces figures close to that of the paper.

Note

Maintained libraries make feats of strength routinely reproducible.

This easy replication is only possible because the corresponding code leverages a set of libraries that encapsulate the main steps of the analysis, mainly scikit-learn and nilearn here. These libraries are tested, maintained and released. They enable us to go from a feat of strength to routine replication.

Reproducibility is not sustainable for everything

“Thinking is easy, acting is difficult” — Goethe

Note

Keeping a physics apparatus running for replication years later?

I started my scientific career doing physics, and fairly “heavy” physics: vacuum systems, lasers, free-falling airplanes. In such settings, the cost of maintaining an experiment is apparent to the layman. No-one is expected to keep an apparatus running for replication years later. The pinnacle of reproducible research is when the work becomes doable in a student lab. Such progress is often supported by improved technology, driven by wider applications of the findings.

However, not every experiment will give rise to a students lab. Replicating the others will not be easy. Even if the instruments are still around the lab, they will require setting up, adjusting and wiring. And chances are that connectors or cables will be missing.

Software is no different. Storing and sharing it is cheaper. But technology evolves very fast. Every setup is different. Code for a scientific paper has seldom been built for easy maintenance: lack of tests, profusion of exotic dependencies, inexistent documentation. Robustness, portability, isolation, would be desirable, but it is difficult and costly.

Software developers know that understanding the constraints to design a good program requires writing a prototype. Code for a scientific paper is very much a prototype: it’s a first version of an idea, that proves its feasibility. Common sense in software engineering says that prototypes are designed to be thrown away. Prototype code is fragile. It’s untested, probably buggy for certain usage. Releasing prototypes amounts to distributing semi-functioning code. This is the case for most code accompanying a publication, and it is to be expected given the very nature of research: exploration and prototyping [1].

No success without quality, …

Note

Highly-reliable is more useful than state-of-the-art.

My experience with scientific code has taught me that success requires quality. Having a good implementation of simple, well-known methods seems to matter more than doing something fancy. This is what the success of scikit-learn has taught us: we are really providing classic “old” machine learning methods, but with a good API, good docs, computational performance, and stable numerics controlled by stringent tests. There exist plenty of more sophisticated machine-learning methods, including some that I have developed specifically for my data. Yet, I find myself advising my co-workers to use the methods in scikit-learn, because I know that the implementation is reliable and that they will be able to use them [2].

This quality is indeed central to doing science with code. What good is a data analysis pipeline if it crashes when I fiddle with the data? How can I draw conclusions from simulations if I cannot change their parameters? As soon as I need trust in code supporting a scientific finding, I find myself tinkering with its input, and often breaking it. Good scientific code is code that can be reused, that can lead to large-scale experiments validating its underlying assumptions.

SQLite is so widely used that its developers have been woken up at night by users.

You might say that I am putting the bar too high; that slightly buggy code is more useful than no code. But I frown at the idea of releasing code for which I am unable to do proper quality assurance. I may have done too much of that in the past. And because I am a prolific coder, many people are using code that has been through my hands. My mailbox looks like a battlefield, and when I go to the coffee machine I find myself answering questions.

… and making difficult choices

Note

Craftsmanship is about trade-offs

Achieving quality requires making choices. Not only because time is limited, but also because the difficulty of maintaining and improving a codebase increases much faster than the number of features [3]. This phenomenon is actually frightening to watch: adding a feature to scikit-learn these days is much, much harder than it used to be in the early days. Interactions between features are a killer: when you modify something, something else unrelated breaks. For a given functionality, nothing makes the code more incomprehensible than cyclomatic complexity: the multiplicity of branching, if/then clauses, for loops. This complexity naturally appears when supporting different input types, or minor variants of the same method.

The consequence is that ensuring quality for many variants of a method is prohibitive. This limit is a real problem for reproducible science, as science builds upon comparing and opposing models. However, ignoring it simply leads to code that fails to do what it claims to do. What this tells us is that if we are really trying to achieve long-term reproducibility, we need to identify successful and important research and focus our efforts on it.

If you agree with my earlier point that the code of a publication is a prototype, this iterative process seems natural. Various ideas can be thought of as competing prototypes. Some will not lead to publication at all, while others will end up having a high impact. Knowing before-hand is impossible. Focusing too early on achieving high quality is counter productive. What matters is progressively consolidating the code.

Reproducible science, a rich trade-off space

Note

Verbatim replication or reuse?

Does Reinhart and Rogoff’s “Growth in a Time of Debt” paper face the same challenges as the manuscript under review by Titus? One is describing mechanisms while the other is introducing a method. The code of the former is probably much simpler than that of the latter. Different publications come with different goals and code that is more or less easy to share. For verbatim replication of the analysis of a paper, a simple IPython notebook without tests or API is enough. To go beyond requires applying the analysis to different problems or data: reuse. Reuse is very difficult and cannot be a requirement for all publications.

Conventional wisdom in academia is that science builds upon ideas and concepts rather than methods and code. Galileo is known for his contribution to our understanding of the cosmos. Yet, methods development underpins science. Galileo also built and refined his own telescopes, which was a huge technical achievement. He needed to develop them to back his cosmological theories. Today, Galileo’s measurements are easy to reproduce because telescopes are readily available as consumer products.


“Standing on the shoulders of giants” — Isaac Newton, on software libraries


[1] To make my point very clear, releasing buggy untested code is not a good thing. However, it is not possible to ask for all research papers to come with industrial-quality code. I am trying here to push for a collective, reasoned undertaking of consolidation.

[2] Theory tells us that there is no universal machine learning algorithm. Given a specific machine-learning application, it is always possible to devise a custom strategy that out-performs a generic one. However, do we need hundreds of classifiers to solve real world classification problems? Empirical results [Delgado 2014] show that most of the benefits can be achieved with a small number of strategies. Is it desirable and sustainable to distribute and keep alive the code of every machine learning paper?

[3] Empirical studies on the workload for programmers to achieve a given task showed that a 25 percent increase in problem complexity results in a 100 percent increase in programming complexity: An Experiment on Unit Increase in Problem Complexity, Woodfield 1979.

I need to thank my colleague Chris Filo Gorgolewski and my sister Nelle Varoquaux for their feedback on this note.

Categories: FLOSS Project Planets

Steve Loughran: Distributed System Testing: where now, where next?

Planet Apache - Sun, 2015-05-17 17:30
Confluent have announced they are looking for someone to work on an open source framework for distributed system testing.

I am really glad that they are sitting down to do this. Indeed, I've thought about sitting down to do it myself; the main reason I've been inactive there is "too much other stuff to do".

Distributed System Testing is the unspoken problem of Distributed Computing. In single-host applications, all you need to do is show that the application "works" on the target system, with its OS, environment (timezone, locale, ...), installed dependencies and application configuration.

In modern distributed computing you need to show that the distributed application works across a set of machines, in the presence of failures.

Equally importantly: when your tests fail, you need the ability to determine why they failed.

I think there is much scope to improve here, as well as on the fundamental problem: defining "works" in the context of distributed computing.

I should write a post on that in future. For now, my current stance is: we need stricter specification of desired observable behaviour and implementation details. While I have been doing some Formal Specification work within the Hadoop codebase, there's a lot more work to be done there —and I can't do it all myself.

Assuming that there is a good specification of behaviour, you can then go on to defining tests which observe the state of the system, within the configuration space of the system (now including multiple hosts and the network), during a time period in which failures occur. The observable state should continue to match the specification, and if not, you want to get the logs to determine why not. Note here that "observed state" can be pretty broad, and includes
  • Correct processing of requests
  • The ability to serialize an incoming stream of requests from multiple clients (or at least, to not demonstrate non-serialized behaviour)
  • Time to execute operations (performance)
  • Ability to support the desired request rate (scalability)
  • Persistence of state, where appropriate
  • Resilience to failures of: dependent services, network, hosts
  • Reporting of detected failure conditions to users and machines (operations needs)
  • Ideally: ability to continue in the presence of byzantine failures. Or at least detect them and recover.
  • Ability to interact with different versions of software (clients, servers, peers)
  • Maybe: ability to interact with different implementations of the same protocol.
I've written some slides on this topic, way back in 2006, Distributed Testing with SmartFrog. There's even a sub-VGA video to go with it from the 2006 Google Test Automation Conference.

My stance there was
  1. Tests themselves can be viewed as part of a larger distributed system
  2. They can be deployed with your automated deployment tools, bonded to the deployed system via the configuration management infrastructure
  3. You can use the existing unit test runners as a gateway to these tests, but reporting and logging need to be improved.
  4. Data analysis is a critical area to be worked on.
I didn't look at system failures; I don't think I worried enough about that, which shows we weren't deploying things at scale, and this was before cloud computing took failures mainstream. Nowadays nobody can avoid thinking about VM loss at the very least.

Given I did those slides nine years ago, have things improved? Not much, no:
  • Test runners are still all generating the Ant XML test reports, written up (along with the matching XSLT transforms) by Stephane Balliez in 2000/2001
  • Continuous Integration servers have got a lot better, but even Jenkins, wonderful as it is, presents results as if they were independent builds, rather than a matrix of (app, environment, time). We may get individual build happiness, but we don't get reports by test, showing that Hadoop/TestATSIntegrationFailures is working intermittently on all Debian systems but has been reliable elsewhere. The data is all there, but the reporting isn't.
  • Part of the problem is that they are still working with that XML format, one that, due to its use of XML attributes to summarise the run, buffers things in memory until the test case finishes, then writes out the results. stdout and stderr may get reported, but only for the test client, and even then, there's no awareness of the structure of log messages
  • Failure conditions aren't usually being explicitly generated. Sometimes they happen, but then it's complaints about the build or the target host being broken.
  • Email reports from the CI tooling are also pretty terse. You may get the "build broken, test XYZ with commits N1-N2" message, but again, you get one per build, rather than a summary of overall system health.
  • With a large dependent graph of applications (hello, Hadoop stack!), there's a lot of regression testing that needs to take place —and fault tracking when something downstream fails. 
  • Those big system tests generate many, many logs, but they are often really hard to debug. If you haven't spent time with 3+ windows trying to sync up log events, you've not been doing test runs.
  • In a VM world, those VMs are often gone by the time you get told there's a problem.
  • Then there's the extended life test runs, the ones where we have to run things for a few days with Kerberos tokens set to expire hourly, while a set of clients generate realistic loads and random servers get restarted.
Things have got harder: bigger systems, more failure modes, a whole stack of applications —yet testing hasn't kept up.

In Slider I did sit down to do something that would work within the constraints of the current test runner infrastructure yet still let us do functional tests against remote Hadoop clusters of variable size. Our functional test suite, funtests, uses Apache Bigtop's script launcher to start Slider via its shell/py scripts. This tests those scripts on all test platforms (though it turns out, not enough locales), and forces us to have a meaningful set of exit codes —enough to distinguish the desired failure conditions from unexpected ones. Those tests can deploy slider applications on secure/insecure clusters (I keep my VM configs on github, for the curious), deploy test containers for basic operations, upgrade tests, failure handling tests. For failure generation our IPC protocol includes messages to kill a container, and to have the AM kill itself with a chosen exit code.

For testing slider-deployed HBase and Accumulo we go one step further. Slider deploys the application, and we run the normal application functional test suites with Slider set up to generate failures.

How do we do that? With the Slider Integral Chaos Monkey. That's something which can run in the AM, and, at a configured interval, roll some virtual dice to see if the enabled failure events should be triggered: currently container and AM (we make sure the AM isn't chosen in the container kill monkey action, and have a startup delay to let the test runs settle in before starting to react).
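
The core loop is simple enough to sketch. Here is a hedged Python illustration of the idea (the real monkey lives inside the Slider AM and is written in Java; the wiring names and numbers below are hypothetical):

import random
import time

def chaos_monkey(actions, interval_s, startup_delay_s):
    """actions: (probability, callable) pairs, one per enabled monkey action."""
    time.sleep(startup_delay_s)        # let the test run settle in first
    while True:
        time.sleep(interval_s)         # the configured interval
        for probability, action in actions:
            if random.random() < probability:   # roll the virtual dice
                action()

# Hypothetical wiring:
# chaos_monkey([(0.1, kill_random_container), (0.01, am_exit)],
#              interval_s=60, startup_delay_s=300)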

Does it work? Yes. Which is good, because if things don't work, we've got the logs of all the machines in the cluster that ran slider to go through. Ideally, YARN-aggregated logs would suffice, but not if there's something up between YARN and the OS.

So: test runner I'm happy with. Remote deployment, failure injection, both structured and random. Launchable from my deskop and CI tooling; tests can be designed to scale. For testing rolling upgrades (Slider 0.80-incubating feature), we run the same client app while upgrading the system. Again: silence is golden.

Where I think much work needs to be done is what I've mentioned before: the reporting of problems and the tooling to determine why a test has failed.

We have the underlying infrastructure to stream logs out to things like HDFS or other services; there's nothing to stop us writing code to collect and aggregate those, with the recipient using the order of arrival to place an approximate time on events (not a perfect order, obviously, but better than log events with clocks that are wrong). We can collect those entire test run histories, along with as much environment information as we can grab and preserve. JUnit: system properties. My ideal: VM snapshots & virtual network configs.
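
As a hedged sketch of what such a recipient could look like (the port, framing and output format here are invented for illustration), a collector that stamps each incoming line with an arrival sequence number is only a few lines of Python:

import itertools
import socketserver
import threading

_seq = itertools.count()
_lock = threading.Lock()

class LogCollector(socketserver.StreamRequestHandler):
    """Stamps each received log line with an arrival sequence number."""
    def handle(self):
        for raw in self.rfile:
            with _lock:
                seq = next(_seq)   # arrival order: approximate, but consistent
            line = raw.decode("utf-8", errors="replace").rstrip("\n")
            print(seq, self.client_address[0], line)

if __name__ == "__main__":
    socketserver.ThreadingTCPServer(("", 5140), LogCollector).serve_forever()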

Then we'd go beyond XSLT reports of test runs and move to modern big data analysis tools. I'm going to propose Spark here. Why? So you can do local scripts, things in Jenkins & JUnit, and larger bulk operations. And for that get-your-hands-dirty test-debug festival, a notebook like Apache Zeppelin (incubating) can be the front end.

We should be using our analysis tools for the automated analysis and reporting of test runs, and the data science tooling for the debugging process.

Like I said, I'm not going to do all this. I will point to a lovely bit of code by Andrew Or @ databricks, spark-test-failures, which gets Jenkins's JSON-formatted test run history, determines flaky tests and posts the results on Google Docs. That's just a hint of what is possible —yet it shows the path forwards.
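
For a flavour of what that kind of analysis involves, here is a hedged Python sketch (not Andrew's actual code; the job URL is hypothetical, and only Jenkins's standard /api/json endpoints are assumed) that counts per-test failures across a job's build history:

import json
from collections import Counter
from urllib.request import urlopen

JOB = "https://jenkins.example.org/job/some-job"   # hypothetical job URL

def fetch(url):
    with urlopen(url) as response:
        return json.load(response)

builds = fetch(JOB + "/api/json?tree=builds[number]")["builds"]

failures = Counter()
runs = 0
for build in builds:
    try:
        report = fetch("%s/%d/testReport/api/json" % (JOB, build["number"]))
    except Exception:
        continue   # this build produced no test report
    runs += 1
    for suite in report.get("suites", []):
        for case in suite.get("cases", []):
            if case["status"] in ("FAILED", "REGRESSION"):
                failures[case["className"] + "." + case["name"]] += 1

# Tests that fail in some runs but not all are the flakiness suspects.
for name, count in failures.most_common():
    if count < runs:
        print("%s: failed %d of %d runs" % (name, count, runs))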

(Photo: "crayola", St Pauls: work commissioned by the residents)
Categories: FLOSS Project Planets

Another Drop in the Drupal Sea: DrupalCon LA Saturday Recap

Planet Drupal - Sun, 2015-05-17 17:18

I headed to the Saturday sprint after completing my workout, showering, eating breakfast and packing my bags. Eventually, there were probably at least 30 people at the sprint. I worked a bit more on a patch I submitted to the Flag module and eventually started testing the changes I pushed to OG Forum D7. Unfortunately, the changes appeared to be doing absolutely nothing. I didn't figure out what I was overlooking before I had to leave.


Categories: FLOSS Project Planets

Python Software Foundation: Read the Docs: growing with a little help from its friends at the PSF (and elsewhere)

Planet Python - Sun, 2015-05-17 16:53
Today's post, like the previous one, features a development project that the PSF has been delighted to fund once again this year. On April 28, 2015, the PSF Board unanimously approved the following resolution:

RESOLVED, that the Python Software Foundation grant $8,000 to Read the Docs, Inc. for developmental work.
What is RTD?

Looking for somewhere to host your open source project’s documentation in a way that will make it readily available, easy to find, fully searchable for your users, and exportable in PDF format, while at the same time offering you ease of use and the ability to add content as your project develops? Then, you’ll want to check out Read the Docs, the world’s largest documentation website for open source projects.

Read the Docs … hosts documentation, making it fully searchable and easy to find. You can import your docs using any major version control system, including Mercurial, Git, Subversion, and Bazaar. We support webhooks so your docs get built when you commit code. There’s also support for versioning so you can build docs from tags and branches of your code in your repository.

RTD’s History
RTD was created in 2010 by Eric Holscher, Charles Leifer, and Bobby Grace for the 2010 Django Dash. Eric tells the interesting story at Djangocon. A Django Dash is a coding contest that allows 48 hours for development and implementation of a project. Eric and his team considered what to do and decided that, since current documentation hosting was less than satisfactory, they could be of most help to the community by creating a web-based doc hosting solution. They agreed that Sphinx was the best document tool for Python, so they went with that.

According to Eric, 2011 was the year that saw RTD go from a hobby project into something projects depended on. At that point, they were hosting documentation for Celery, Fabric, Nose, py.test, Virtualenv, Pip, Django CMS, Django, Grapelli/Floppyforms/Sentry, and mod_wsgi. Currently, they are hosting what Eric describes as a decent part of the Python ecosystem, including SQL Academy, Pyramid, Requests, Minecraft Overviewer, and many others. They have over 50 contributors, 7500 users, and get over 15,000,000 pageviews a month.

The code for RTD is on GitHub and its documentation can be found on the site. Rackspace provides free hosting. A full list of features is available on the site.

Photo Credit: Aaron Hockley, October 2014  Creative Commons license 2.00

Use of PSF Grant

The PSF award was part of a fundraising drive that opened at PyCon 2015 and brought in $24,000 USD from 157 contributions since then (see the RTD Blog). Corporate sponsors included Twilio, Sentry, DreamHost, and Lincoln Loop; with service sponsorships from Elastic Search, MaxCDN, and Gandi. This funding will support RTD for 3 months of development work on the path toward sustainability as an open source project. More specifically, the funds will allow RTD to hire 2 part-time paid positions: Community Developer and Operations Developer (see the RTD blogpost for details and how to apply). Furthermore, RTD intends to document its use of PSF grant money; how development time is spent and how funds are allocated will be posted on RTD’s public Trello board.

If you’d like to help, you can contribute to RTD at Gratipay and you can follow them on Twitter.

I would love to hear from readers. Please send feedback, comments, or blog ideas to me at msushi@gnosis.cx.
Categories: FLOSS Project Planets

Chris Moffitt: Notebooks Now on Github and Other Updates

Planet Python - Sun, 2015-05-17 16:30
Introduction

In case you missed it, github recently announced that Jupyter notebooks will be natively rendered by github. This useful new feature will make it easier for followers of pbpython to view notebooks through github as well as download them to your local system and follow along.

I have moved over 4 notebooks to github and set up the associated files so that it should be pretty straightforward for anyone to check out the pbpython repo and work with the notebooks. This will also make it easier for others to follow along and help spot issues, making this collection of tips and tricks even more robust.

This post also contains a couple of helpful links I wanted to pass on and keep record of because I think they are really useful.

Notebooks

The following blog posts now have their notebooks in github:

Going forward, I plan to put new notebooks, code and data samples in the repo. The nice thing about this approach is that you can still use nbviewer if you’d like. I did find that there are some cases where the nbviewer rendering looks a little nicer.

Helpful Links

Many of you may have also noticed that pandas had a new release recently. I have not had time to look into the new features in more detail, but I did notice that the documentation now includes graphical representations of the various merge, join and concatenate options. I personally find these really helpful for quickly understanding how the various functions work. If you haven’t looked at them yet, I encourage you to bookmark them and study them the next time you have the need.
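
If you want to experiment alongside those diagrams, a tiny session with made-up frames covers the three operations (this is only a sketch; see the pandas docs for the full option set):

import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "c"], "lval": [1, 2, 3]})
right = pd.DataFrame({"key": ["b", "c", "d"], "rval": [4, 5, 6]})

# Inner join keeps only the keys present in both frames ("b" and "c").
print(pd.merge(left, right, on="key", how="inner"))

# Outer join keeps every key, filling the gaps with NaN.
print(pd.merge(left, right, on="key", how="outer"))

# Concatenation simply stacks the frames; ignore_index renumbers the rows.
print(pd.concat([left, right], ignore_index=True))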

I also found a really useful intro-to-pandas notebook by Dr. Chris Fonnesbeck. The introduction is well done and shows a couple of different useful data sets and manipulations. It is part of a larger collection hosted on github that is also worth checking out.

Categories: FLOSS Project Planets

Carl Trachte: Lenovo Thinkpad X201 Fan Replacement

Planet Python - Sun, 2015-05-17 11:34
This is not a Python-related post per se, but it may be useful to people getting started with UNIX-based, open source software, or even a Windows user who happens to be using a Thinkpad X201 laptop.

Background:

1) I use OpenBSD as my operating system because I am striving to learn UNIX and I find that distro the best for me for that purpose.

2) The venerable Thinkpad line of laptops (legacy IBM, now Lenovo) tends to be one of the best supported by the OpenBSD and other BSD development communities (a small but loyal dev and user base).

3) I buy my Thinkpads refurb'd because they're cheaper that way.

4) Laptop parts only last so long before they start failing, more so with refurbished ones.  It was easy to replace the hard drive; the fan is a bit more complicated in terms of disassembling the laptop.

5) I'm a bit mechanically challenged and tend to break things permanently when trying to fix them.  This post will hopefully help others overcome that same lack of confidence and fear.

There is actually a really good step-by-step, still-frame photo series on the web about how to take apart a Thinkpad X201 and replace the fan.  I used it extensively during this task:

http://www.myfixguide.com/manual/lenovo-thinkpad-x201-disassembly-clean-cooling-fan-remove-keyboard/

A walkthrough of my experience and a few notes:

Mr. Dexter's Star Wars joke about Phillips-head screws (actually bolts) notwithstanding, stripping those little guys is a real problem.  I was lucky this time.  In my model train adventures, I've been less so.

There are few things more annoying than a deep-seated little Phillips-head bolt or screw.  The thought of taking a power drill to a laptop to extract one of these makes me a bit nervous.  Fortunately, just before RadioShack went bankrupt a few months back, I found a nice set of long-shaft Phillips-head screwdrivers at one of their stores in Tucson.  Those tools have been indispensable.

This is the laptop after I got the fan hooked up.  There is a forum thread on the internet from a few years back where someone asks how to test the fan; people just kept trolling him and laughing at him.  Here is how it's done: basically, you hook everything up (in my case I only needed the power, screen, and keyboard) without actually putting the laptop back together (those zillion Phillips-head bolts!) and boot up.  It's hard to see, but the fan is happily whirring away over there on the left.

It's weird operating on a machine you're used to having in one piece - reminiscent of those scenes in STTNG where they take apart LCDR Data ("Data and Commander Riker are in engineering examining Data's head.")

Heating up:

You don't want to test the computer too long in this state (without the fan in its proper place and the machine put back together).  Thinkpads, and the X201 in particular, are notorious for running hot.  You can see from the screen that the machine heats up by about a degree Centigrade in the time it takes me to type the next sysctl query.
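For anyone wondering what that query looks like, OpenBSD exposes temperatures through the hw.sensors tree; the exact node name varies by machine, so treat cpu0.temp0 below as an educated guess:

sysctl hw.sensors.cpu0.temp0

Running plain sysctl hw.sensors lists every sensor the kernel knows about, which is an easy way to find the right one.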

sudo shutdown -hp now

I put it all back together and the only (well, not really - see below) thing different was a slight nick on the keyboard:

Hey!  Where did these "extra" screws (bolts) come from?!  Uh-oh . . .

. . . and the sound doesn't work either - looks like I was a bit too hasty in putting this thing back together.  We'll give it another try . . .

. . . it looks like some of those extra bolts hold that important piece of aluminum in place . . .

. . . and a few more over here . . .

. . . uh, OK, that's my problem with the sound :-(

That sound card connection is paper thin - I think that's why it has to be secured with a little snap-in thingy in the picture.  Hardware is pretty amazing sometimes.  To all my electrical engineering friends:  I bow to you.

This time I test the laptop again, but for sound.  Doing what computer techs do (taking apart laptops and fixing them)

I tell you folks
It's harder than it looks


(Sorry, had to).

After I got all the screws (bolts) put back in (save 4 - I have no idea where they go, and I'm leaving well enough alone), I still thought I had a problem with the sound.  It turns out I'd been through this before: on UNIX-based systems the X201 mute key works funkily.  This link explains it a bit for a Debian Linux system; whether OpenBSD is different under the hood or not, the behavior to the user is essentially the same:

http://www.stderr.nl/Blog/Hardware/Thinkpad/WeirdMuteButtonBehaviour.html
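If you hit the same thing on OpenBSD, the mixer state can be checked directly with mixerctl(1).  The control names vary from one audio driver to the next, so the ones below are a guess rather than a sure thing, but on many machines something like this shows and then clears the mute flag:

mixerctl outputs.master.mute
mixerctl outputs.master.mute=off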

And I was good to go!

Hope this helps someone like me.

Thanks for stopping by.

Categories: FLOSS Project Planets

Paul Johnson: Could VR tech make a child's dying wishes come true?

Planet Drupal - Sun, 2015-05-17 11:11

For over a year I have had the honour of being responsible for delivering 2 new web platforms for Great Ormond Street Hospital for Children NHS Foundation Trust (GOSH): the hospital and charity websites. During that time I've witnessed and learnt so much about the exemplary way they care for children and their families, from both a medical and a pastoral perspective. The good news is that, using the open source content management system Drupal, they are now in a position to have a web presence which adequately supports and reflects their internationally celebrated work.

One of the inevitable aspects of treating children with the most severe illnesses is that, sadly, not every child can be made better. It is a reality which has hit me hard the whole time I've worked for GOSH.

Whilst I was at DrupalCon Los Angeles I met Joe Caccavano, CMO at Phase2, who showed me a curious device: an array of 6 GoPro cameras strapped together into a single head. With it, something remarkable is possible. Watching footage taken during the conference with the GoPros through a VR headset (just an Android phone) allowed me to immerse myself in a virtual world - try it for yourself. For those of you who have tried this, perhaps you shared my reaction: pulse racing, hair on the back of my neck standing up. It literally felt like I was there, on the drone from which the footage had been shot.

At that moment I had an epiphany. I thought about sick children, and how film and TV personalities generously visit them or send video messages with well wishes. What if the GoPro camera array captured a child's idol speaking to the camera as if it were the child - using the child's name, speaking directly to them (well, to the camera)? Imagine how uplifting that would be to a kid who perhaps couldn't leave bed, or who couldn't have visitors due to infection risk. They could repeat the experience, too. How amazing would that be? Not only this, busy stars could do shoots from anywhere in the world.

The great news is that, thanks to Google, the technology to watch these films is now so cheap anyone can afford it - £4.99! All that remains is for someone to try the idea out. I will certainly be letting GOSH know of the concept; perhaps you know of a children's hospital or hospice who could do the same.

If this idea has inspired you, please share it on social media; with your help, maybe it will reach someone who could make it happen.

Joe Caccavano, CMO at Phase2, with his 6-camera GoPro array

VR tech is now in the realm of being affordable to many

Further information:
Great Ormond Street Hospital for Children NHS Foundation Trust
Great Ormond Street Hospital Charity
Google Cardboard's Cheap VR Can Work With iPhones Too
About the Drupal project
Categories: FLOSS Project Planets