# FLOSS Project Planets

Planet Drupal - Thu, 2014-07-24 10:38

A few days ago, I sat down with Quentin Hardy of The New York Times to talk Open Source. We spoke mostly about the Drupal ecosystem and how Acquia makes money. As someone who has spent almost his entire career in Open Source, I'm a firm believer that you can build a high-growth, high-margin business and help the community flourish. It's not an either-or proposition, and Acquia and Drupal are proof of that.

Rather than the utopian alternate reality Quentin outlines, I believe Open Source is both a better way to build software and a good foundation for an ecosystem of for-profit companies. Open Source software itself is very successful, and is capable of running some of the most complex enterprise systems. But failure to commercialize Open Source doesn't necessarily make it bad.

I mentioned to Quentin that I thought Open Source was Darwinian; a proprietary software company can't afford to experiment with creating 10 different implementations of an online photo album, only to pick the best one. In Open Source we can, and do. We often have competing implementations, and eventually the best implementation(s) will win. One could say that Open Source is a more "wasteful" way of software development. In a pure capitalist read of On the Origin of Species, there is only one winner, but business and Darwin's theory itself are far more complex. Beyond "only the strongest survive", Darwin tells a story of interconnectedness, or the way an ecosystem can dictate how an entire species chooses to adapt.

While it's true that the Open Source "business model" has produced few large businesses (Red Hat being one notable example), we're also evolving the different Open Source business models. In the case of Acquia, we're selling a number of "as-a-service" products for Drupal, which is vastly different than just selling support like the first generation of Open Source companies did.

As a private company, Acquia doesn't disclose financial information, but I can say that we've been very busy operating a high-growth business. Acquia is North America's fastest growing private company on the Deloitte Fast 500 list. Our Q1 2014 bookings increased 55 percent year-over-year, and the majority of that is recurring subscription revenue. We've experienced 21 consecutive quarters of revenue growth, with no signs of slowing down. Acquia's business model has been both disruptive and transformative in our industry. Other Open Source companies like Hortonworks, Cloudera and MongoDB seem to be building thriving businesses too.

Society is undergoing tremendous change right now -- the sharing and collaboration practices of the internet are extending to transportation (Uber), hotels (Airbnb), financing (Kickstarter, LendingClub) and music services (Spotify). The rise of the collaborative economy, of which the Open Source community is a part, should be a powerful message for the business community. It is the established, proprietary vendors whose business models are at risk, and not the other way around.

Hundreds of other companies, including several venture backed startups, have been born out of the Drupal community. Like Acquia, they have grown their businesses while supporting the ecosystem from which they came. That is more than a feel-good story, it's just good business.

Categories: FLOSS Project Planets

### No Gmail integration in 4.14 after all :(

Planet KDE - Thu, 2014-07-24 10:32

Hi folks,

I’m sorry to bring bad news, but after trying to fight some last-minute bugs in the new Gmail resource today, I realized that pushing the resource into KDE Applications 4.14 was too hurried, and so I decided not to ship it in KDE Applications 4.14. I know many of you are really excited about the Gmail integration, but there are far too many issues that cannot be solved this late in the 4.14 cycle. And since this will probably be the last 4.x release, shipping something that does not perform as expected and cannot be fixed properly would only be disappointing and discouraging to users. In my original post I explained that I was working on the Gmail integration to provide a user experience as close as possible to the native Gmail web interface, so that people are not tempted to switch away from KMail to Gmail. But with the current state of the resource, the effect would be exactly the opposite. And if the resource cannot fulfil its purpose, then there’s no point in offering it to users.

Instead, I will focus on implementing the new native Gmail API and merging the existing Google resources together to create a single groupware solution that will provide integration with all of Google’s PIM services – contacts, calendars, tasks and emails.

Categories: FLOSS Project Planets

### drunken monkey: Updating the Search API to D8 – Part 3: Creating your own service

Planet Drupal - Thu, 2014-07-24 08:28

Even though there has been somewhat of a delay since my last post in this series, it seems no-one else has really covered any of the advanced use cases of Drupal 8 in tutorials yet. So, here is the next installment in my series. I initially wanted to cover creating a new plugin type, but since that already requires creating a new service, I thought I'd cover that smaller part first and then move on to plugin types in the next post.
I realize that by now a lot more people have started on their Drupal 8 modules, but perhaps that will make this series all the more useful.

Services in Drupal 8

First, a short overview of what a service even is. Basically, it is a component (represented as a class) providing a certain, limited range of functionality. The database is a service, the entity manager (which is what you now use for loading entities) is a service; translation, configuration – everything is handled by services. Getting the current user is also a service now, ridding us of the highly unclean global variable.
In general, a lot of what was previously a file in includes/ containing some functions with a common prefix is now a service (or split into multiple services).

The upsides of this are that the implementation and logic are cleanly bundled and properly encapsulated, that all these components can easily be switched out by contrib or later core updates, and that these systems can be tested very well with unit tests. Even more, since services can be used with dependency injection, it also becomes much easier to test all the other classes that use any of these services (if they use dependency injection, and do it properly).
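To make that testability benefit concrete, here is a minimal sketch of the same idea. It is written in Python purely for illustration, and the class and FakeTranslator are invented for this example, not part of any framework: because the translator is injected through the constructor, a unit test can hand in a fake one and never needs to bootstrap anything.

```python
class ExampleClass:
    """Receives its translator via the constructor: dependency injection."""

    def __init__(self, translator):
        self.translator = translator

    def get_definition(self):
        # No global t() call; we only use the injected dependency.
        return {
            'label': self.translator.translate('example class'),
            'type': 'foo',
        }


class FakeTranslator:
    """A stand-in for a real translation service, good enough for a unit test."""

    def translate(self, string):
        return string.upper()  # deliberately fake, so a test can detect it


# The "unit test": no framework bootstrap needed, just inject the fake.
example = ExampleClass(FakeTranslator())
assert example.get_definition()['label'] == 'EXAMPLE CLASS'
```

The same swap works in reverse: production code passes the genuine translation service, a test passes the fake, and the class under test never needs to know the difference.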

(For reference, here is the official documentation on services.)

Dependency injection

This has been covered already in a lot of other blog posts, probably since it is both a rather central concept in Drupal 8, and a bit complicated when you first encounter it. However, before using it, I should still at least skim over the topic. Feel free to skip to the next heading if you feel you already know what dependency injection is and how it roughly works in Drupal 8.

Dependency injection is a programming technique where a class with external dependencies (e.g., a mechanism for translating) explicitly defines these dependencies (in some form) and makes the class which constructs it responsible for supplying those dependencies. That way, the class itself can be self-contained and doesn't need to know about where it can get those dependencies, or use any global functions or anything to achieve that.

Consider for example the following class:

```php
<?php

class ExampleClass {

  public function getDefinition() {
    return array(
      'label' => t('example class'),
      'type' => 'foo',
    );
  }

}
?>
```

For translating the definition label, this explicitly uses the global t() function. Now, what's bad about this, I hear you ask? It worked well enough in Drupal 7, right?
The problem is that it becomes almost impossible to properly unit-test that method without bootstrapping Drupal to the point where the t() function becomes available and functional. It's also more or less impossible to switch out Drupal's translation mechanism without hacking core, since there is no way to redirect the call to t().

But if translation is done by a class with a defined interface (in other words, a service), it's possible to do this much more cleanly:

```php
<?php

class ExampleClass {

  public function __construct(TranslationServiceInterface $translation) {
    $this->translation = $translation;
  }

  public function getDefinition() {
    return array(
      'label' => $this->translation->translate('example class'),
      'type' => 'foo',
    );
  }

}
?>
```

Then our example class just has to make it easily possible for code that wants to instantiate it to know how to pass its dependencies to it. In Drupal, there are two ways to do this, depending on what you are creating:

• Services, which themselves use dependency injection to get their dependencies (as you will see in a minute) have a definition in a YAML file that exactly states which services need to be passed to the service's constructor.
• Almost anything else (I think) uses a static create() method which just receives a container of all available services and is then responsible for passing the correct ones to the constructor.

In either case, the idea is that subclasses/replacements of ExampleClass can easily use other dependencies without any changes being necessary to code elsewhere instantiating the class.

Creating a custom service

So, when would you want to create your own service in a module? Generally, the .module file should now more or less only contain hook implementations; any general helper functions for the module should live in classes (so they can be easily grouped by functionality, and the code can be lazy-loaded when needed). Whether to then make such a class into a service depends on the following questions:

• Is there any possibility someone would want to swap out the implementation of the class?
• Do you want to unit-test the class?
• Relatedly, do you want dependency injection in the class?

I'm not completely sure myself about how to make these decisions, though. We're still thinking about what should and shouldn't be a service in the Search API; currently, apart from the ones for plugins, there is only one service there:

The "server tasks" system, which already existed in D7, basically just ensures that when any operations on a server (e.g., removing or adding an index, deleting items, …) fails for some reason (e.g., Solr is temporarily unreachable) it is regularly retried to always ensure a consistent server state. While in D7 the system consisted of just a few functions, in D8 it was decided to encapsulate the functionality in a dedicated service, the "Server task manager".

Defining an interface and a class for the service

The first thing you need, so the service can be properly swapped out later, is an interface specifying exactly what the service should be able to do. This completely depends on your use case for the service, nothing to keep in mind here (and also no special namespace or anything). In our case, for server tasks:

```php
<?php

interface ServerTaskManagerInterface {

  public function execute(ServerInterface $server = NULL);

  public function add(ServerInterface $server, $type, IndexInterface $index = NULL, $data = NULL);

  public function delete(array $ids = NULL, ServerInterface $server = NULL, $index = NULL);

}
?>
```

(Of course, proper PhpDocs are essential here, I just skipped them for brevity's sake.)

Then, just create a class implementing the interface. Again, namespace and everything else is completely up to you. In the Search API, we opted to put interface and class (they usually should be in the same namespace) into the namespace \Drupal\search_api\Task. See here for their complete code.
For this post, the only relevant part of the class code is the constructor (the rest just implements the interface's methods):

```php
<?php

public function __construct(Connection $database, EntityManagerInterface $entity_manager) {
  $this->database = $database;
  $this->entity_manager = $entity_manager;
}
?>
```

As you can see, we require the database connection and the entity manager as dependencies, and just included them in the constructor. We then save them to properties to be able to use them later in the other methods.

Now we just need to tell Drupal about our service and its dependencies.

The services.yml file

As mentioned earlier, services need a YAML definition to work, where they also specify their dependencies. For this, each module can have a MODULE.services.yml file listing services it wants to publish.

In our case, search_api.services.yml looks like this (with the plugin services removed):

```yaml
services:
  search_api.server_task_manager:
    class: Drupal\search_api\Task\ServerTaskManager
    arguments: ['@database', '@entity.manager']
```

As you see, it's pretty simple: we assign some ID for the service (search_api.server_task_manager – properly namespaced by having the module name as the first part), specify which class the service uses by default (which, like the other definition keys, can then be altered by other modules) and specify the arguments for its constructor (i.e., its dependencies). database and entity.manager in this example are just IDs of other services defined elsewhere (in Drupal core's core.services.yml, in this case).

There are more definition keys available here, and also more features that services support, but that's more or less the gist of it. Once you have its definition in the MODULE.services.yml file, you are ready to use your new service.
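For intuition about what the container does with such a definition, here is a toy sketch, written in Python with invented stand-in classes. This is not Drupal's actual container (which is far more featureful); it only shows the core mechanics: each '@'-prefixed argument is resolved to another service, every service is instantiated at most once, and the resolved dependencies are passed to the constructor.

```python
class Database:
    pass

class EntityManager:
    pass

class ServerTaskManager:
    def __init__(self, database, entity_manager):
        self.database = database
        self.entity_manager = entity_manager

# Service definitions, mirroring the shape of a services.yml file.
definitions = {
    'database': {'class': Database, 'arguments': []},
    'entity.manager': {'class': EntityManager, 'arguments': []},
    'search_api.server_task_manager': {
        'class': ServerTaskManager,
        'arguments': ['@database', '@entity.manager'],
    },
}

class Container:
    """Instantiates each service once, resolving '@id' arguments recursively."""

    def __init__(self, definitions):
        self.definitions = definitions
        self.instances = {}

    def get(self, service_id):
        if service_id not in self.instances:
            definition = self.definitions[service_id]
            # Strip the '@' and resolve each argument to a service instance.
            args = [self.get(ref[1:]) for ref in definition['arguments']]
            self.instances[service_id] = definition['class'](*args)
        return self.instances[service_id]

container = Container(definitions)
manager = container.get('search_api.server_task_manager')
assert isinstance(manager.database, Database)
# Services are singletons within the container:
assert container.get('database') is manager.database
```

The key point is that ServerTaskManager itself never touches the container; it just declares a constructor, and the definition tells the container what to feed it.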

Using a service

You already know one way of using a service: you can specify it as an argument for another service (or any other dependency injection-enabled component). But what if you want to use it in a hook, or any other place where dependency injection is not available (like entities, annoyingly)?

You simply do this:

```php
<?php
/** @var \Drupal\search_api\Task\ServerTaskManagerInterface $server_task_manager */
$server_task_manager = \Drupal::service('search_api.server_task_manager');
$server_task_manager->execute();
?>
```

That's it: now all our code needing server tasks functionality benefits from dependency injection and all the other Drupal 8 service goodness.

Categories: FLOSS Project Planets

### Craig Small: PHP uniqid() not always a unique ID

Planet Debian - Thu, 2014-07-24 08:17

For quite some time, modern versions of JFFNMS have had a problem. In large installations, hosts would randomly appear as down, with the reachability interface going red. All other interface types worked, just this one.

Reachability interfaces are odd, because they call fping or fping6 to do the work. The reason is that to run a ping program you need root access to a socket, and getting that is far too difficult and scary in PHP, which is what JFFNMS is written in. To capture the output of fping, the program is executed and its output captured to a temporary file.

For my tiny setup this worked fine, and for a lot of small setups it was also fine. For larger setups, it was not fine at all. Random failed interfaces and, most bizarrely of all, even a file disappearing. The program checked that a file existed and then ran stat in a loop to see if data was there. The file-existence check worked, but the stat said the file was not found.

At first I thought it was some odd load-related problem, perhaps the filesystem not being happy and having a file there but not really there. That was, until someone said “Are these numbers supposed to be the same?”

The numbers he was referring to were the filename IDs of the temporary files. They were most DEFINITELY not supposed to be the same. They were supposed to be unique. Why were they always unique for me and not for large setups?

The problem is with the uniqid() function. It is basically a hex representation of the time. Large setups often have large numbers of child processes for polling devices. As the number of poller children increases, so does the chance that two child processes start the reachability poll in the same microsecond and therefore get the same uniqid. That's why the problem happened, but not all the time.

The stat error was another symptom of this bug. What would happen was:

• Child 1 starts the poll, temp filename abc123
• Child 2 starts the poll in the same microsecond, temp filename is also abc123
• The wait pollers for children 1 and 2 start, see that the temp file exists, and go into a loop of stat-and-wait until there is a result
• Child 1 finishes, grabs the details, deletes the temporary file
• Child 2 loops, tries to run stat but finds no file

Who finishes first is entirely dependent on how quickly the fping returns, and that is dependent on how quickly the remote host responds to pings, so it's kind of random.

The fix was a minor patch to use tempnam() instead of uniqid(), adding the interface ID to the mix for good measure (no two children will poll the same interface; the parent's scheduler makes sure of that). The initial responses are that it is looking good.

Categories: FLOSS Project Planets

### Andrew Dalke: Lausanne Cheminformatics workshop and contest

Planet Python - Thu, 2014-07-24 08:00

Dietrich Rordorf from MDPI sent an announcement to the CHMINF mailing list about the upcoming 9th Workshop in Chemical Information. It will be on 12 September 2014 in Lausanne, Switzerland. It seems like it will be a nice meeting, so I thought to forward information about it here. They also have a software contest, with a 2,000 CHF prize, which I think will interest some of my readers.

The workshop has been around for 10 years, so I was a bit surprised that I hadn't heard of it before. Typically between 30 and 50 people attend, which I think is a nice size.
The preliminary program is structured around 20-minute presentations, including:

• Peter Ertl - Database of bioactive ring systems with calculated properties and its use in bioisosteric design and scaffold hopping
• Michaël Zasso - Slice based electronic laboratory notebook
• Dragos Horvath - Chemigenomics - is it more than inductive transfer?
• Guillaume Godin - GC/MS identification in a Browser
• Modest Korff - Sub-pharmacophore models as seeds in drug discovery
• Jean-Louis Reymond - Interactive tools for visualisation and virtual screening of large compound databases
• Michael J E Sternberg - INDDEx - logic-based ligand screening for scaffold hopping
• Thomas Sander - 2D scaling with rubber bands and descriptors

If you know the authors, you might recognize that one is from Strasbourg, another from London, and the rest from Switzerland. I can understand. From where I live in Sweden it will cost over US$300 in order to get there, and Lausanne doesn't have its own commercial airport, so I would need to fly into Geneva or Bern, while my local air hub doesn't fly there directly.

But I live in a corner of Europe, and my constraints aren't yours.

Source code contest

I had an email conversation with Luc Patiny about an interesting aspect of the workshop. They are running a contest to identify the best open source cheminformatics tool of the year, with a prize of 2000 CHF. That's 1650 EUR, 2200 USD, 1300 GBP, or 15000 SEK, which is plenty enough for someone in Europe or even the US to be able to travel there! They have a time slot set aside for the winner of the contest to present the work. The main requirement is that contest participants are selected from submissions (1-2 pages long) to the open access journal Challenges. (And no, there are no journal page fees for this contest, so it doesn't seem like a sneaky revenue generating trick.)

The other requirement is that the submission be "open source". I put that in quotes because much of my conversation with Luc was to understand what they mean. They want people to be able to download the (unobfuscated) software source code for no cost, and be able to read and review it to gain insight.

I think this is very much in line with classical peer-review thought, even though it can include software that is neither open source nor free software. For example, software submissions for this contest could be "for research purposes only" or "not for use in nuclear reactors", or "redistributions of modified versions are not permitted."

Instead, I think their definition is more in line with what Microsoft terms "shared source".

In my case, my latest version of chemfp is 'commercial open source', meaning that those who pay me money for it get a copy of it under the MIT open source license. It's both "free software" and "open source", but it's not eligible for this workshop because it costs money to download it.

But I live in a corner of open source, and my constraints aren't yours. ;) If you have a free software project, open source software project, or shared source software project, then you might be interested in submitting it to this workshop and journal. If you win, think of it as an all-expenses paid trip to Switzerland. If you don't win, think of it as a free publication.

Categories: FLOSS Project Planets

### Martin Pitt: vim config for Markdown+LaTeX pandoc editing

Planet Debian - Thu, 2014-07-24 05:38

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look soo much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX ------------------------------------------- function s:MDSettings() inoremap <buffer> <Leader>n \note[item]{}<Esc>i noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR> noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR> noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR> " adjust syntax highlighting for LaTeX parts " inline formulas: syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$" " environments: syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement " commands: syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement endfunction autocmd BufRead,BufNewFile *.md setfiletype markdown autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Categories: FLOSS Project Planets

### Europython: EuroPython Society Sessions at EuroPython 2014

Planet Python - Thu, 2014-07-24 05:02

We are having three EuroPython Society (EPS) sessions today at EuroPython 2014. They are all held in room B09.

All EuroPython attendees are invited to join in to these sessions and to become EuroPython Society members.

Membership is free and we’d like to get as many EuroPython attendees signed up as members as possible, because the EuroPython conference series is all about its attendees.

Enjoy,

EuroPython Society

Categories: FLOSS Project Planets


### S. Lott: Building Probabilistic Graphical Models with Python

Planet Python - Thu, 2014-07-24 05:00
A deep dive into probability and scipy: https://www.packtpub.com/building-probabilistic-graphical-models-with-python/book

I have to admit up front that this book is out of my league.

The Python is sensible to me. The subject matter -- graph models, learning and inference -- is above my pay grade.

Let me summarize before diving into details.

Asking someone else if a book is useful is really not going to reveal much. Their background is not my background. That they found it helpful/confusing/incomplete/boring isn't really going to indicate anything about how I'll find it.

Asking someone else for a vague, unmeasurable judgement like "useful" or "appropriate" or "helpful" is silly. Someone else's opinions won't apply to you.

Asking if a book is technically correct is more measurable. However. Any competent publisher has a thorough pipeline of editing. It involves at least three steps: Acceptance, Technical Review, and a Final Review. At least three. A good publisher will have multiple technical reviewers. All of this is detailed in the front matter of the book.

Asking someone else if the book was technically correct is like asking if it was reviewed: a silly question. The details of the review process are part of the book. Just check the front matter online before you buy.

It doesn't make sense to ask judgement questions. It doesn't make sense to ask questions answered in the front matter. What can you ask that might be helpful?

I think you might be able to ask completeness questions. "What's omitted from the tutorial?" "What advanced math is assumed?" These are things that can be featured in online reviews.

Irrational Questions

A colleague had some questions about the book named above, some of which were irrational. I'll try to tackle the rational questions, since they emphasize my point about ways not to ask questions about books.

2.  Is the Python code good at solidifying the mathematical concepts?

This is a definite maybe situation. The concept of "solidifying" as expressed here bothers me a lot.

Solid mathematics -- to me -- means solid mathematics. Outside any code considerations. I failed a math course in college because I tried to convert everything to algorithms and did not get the math part. A kindly professor explained that "F" very, very clearly. A life lesson. The math exists outside any implementation.

I don't think code can ever "solidify" the mathematics. It goes the other way: the code must properly implement the mathematical concepts. The book depends on scipy, and scipy is a really good implementation of a great deal of advanced math. The implementation of the math sits squarely on the rock-solid foundation of scipy. For me, that's a ringing endorsement of the approach.

If the book reinvented the algorithms available in scipy, that would be reason for concern. The book doesn't reinvent that wheel: it uses scipy to solve problems.

4. Can the code be used to build prototypes?

Um. What? What does the word prototype mean in that question? If we use the usual sense of software prototype, the answer is a trivial "Yes." The examples are prototypes in that sense. That can't be what the question means.

In this context the word might mean "model". Or it might mean "prototype of a model". If we reexamine the question with those other senses of prototype, we might have an answer that's not trivially "yes." Might.

When they ask about prototype, could they mean "model?" The code in the book is a series of models of different kinds of learning. The models are complete, consistent, and work. That can't be what they're asking.

Could they mean "prototype of a model?" It's possible that we're talking about using the book to build a prototype of a model. For example, we might have a large and complex problem with several more degrees of freedom than the text book examples. In this case, perhaps we might want to simplify the complex problem to make it more like one of the text book problems. Then we could use Python to solve that simplified problem as a prototype for building a final model which is appropriate for the larger problem.

In this sense of prototype, the answer remains "What?"  Clearly, the book solves a number of simplified problems and provides code samples that can be expanded and modified to solve larger and more complex problems.

To get past the trivial "yes" for this question, we can try to examine this in a negative sense. What kind of thing is the book unsuitable for? It's unsuitable as a final implementation of anything but the six problems it tackles. It can't be that "prototype" means "final implementation." The book is unsuitable as a tutorial on Python. It's not possible this is what "prototype" means.

Almost any semantics we assign to "prototype" lead to an answer of "yes". The book is suitable for helping someone build a lot of things.

Summary

Those two were the rational questions. The irrational questions made even less sense.

Including the other irrational questions, it appears that the real question might have been this.

Q: "Can I learn Python from this book?"

A: No.

Q: "Can I learn advanced probabilistic modeling with this book?"

A: Above my pay grade. I'm not sure I could learn probabilistic modeling from this book. Maybe I could. But I don't think that I have the depth required.

Q: Can I learn both Python and advanced probabilistic modeling with this book?"

A: Still No.

Gaps In The Book

Here's what I could say about the book.

You won't learn much Python from this book. It assumes Python; it doesn't tutor Python. Indeed, it assumes some working scipy knowledge and a scipy installation. It doesn't include a quick-start tutorial on scipy or any of that other hand-holding.

This is not even a quibble with the presentation. It's just an observation: the examples are all written in Python 2. Small changes are required for Python 3. Scipy will work with Python 3. http://www.scipy.org/scipylib/faq.html#do-numpy-and-scipy-support-python-3-x. Reworking the examples seems to involve only small changes to replace print statements. In that respect, the presentation is excellent.

Categories: FLOSS Project Planets

### PyCon PL Conference: We are starting Call for Workshop Proposals

Planet Python - Thu, 2014-07-24 04:44
We have an additional Call for Proposals, but this time only for workshops/tutorials. The deadline for the CfP is 15 August.
Categories: FLOSS Project Planets

### Nick Kew: Cut off again

Planet Apache - Thu, 2014-07-24 04:20

For some time now, my ‘net connection has been up and down like the proverbial whore’s drawers.  But for a succession of feeble reasons, I didn’t get around to doing anything about it until today.

Well, that’s not entirely true.  First time it happened I thought it could be a repeat of a recent nationwide cockup, and configured DNS to bypass Virgin.  But subsequent outages showed that it wasn’t DNS, it was overall connectivity that was disappearing, sometimes for hours at a time.  So although I did something, it wasn’t actually relevant to the problem.

I think last night was typical.  Connectivity vanished at about 10pm, returning at 12:26[1] for a tantalising 4 minutes before disappearing for another hour.  Bedtime obscures the record of what may have happened overnight, but in the morning it vanished again at 9:21[1].  It showed no sign of coming back anytime soon, so I finally got around to trying to contact Virgin and ask WTF is going on.

Easier said than done.  For some reason I don’t understand, my connection sharing app (joikuspot on symbian) was unable to acquire a connection either last night or this morning.  So I had just a hopelessly slow 3g connection and a 3-inch screen to try and wade through Virgin’s notoriously crap-filled website and make contact.  And since my home ‘phone uses VOIP, I had only the mobile on which to try and call them.  In other words, everything I do is challenging and very slow, and any ‘phone call going through endless menus and adverts has the added annoyance of mobile costs.

Anyway, I made it to Virgin’s status page, which told me my broadband was just fine – though there might be problems with cable telly.  Then I made it through various help/support options to run a test on my line.  Now it tells me the test was unable to run, and gives me a ‘phone number (hurrah)!

So I ‘phone them.  There’s no option to speak to a human, so I just have to go through lots of menus interspersed with adverts.  These include supplying my details and repeating the same test I’d just run online, which is at least mercifully quicker to fail on voice than on 3g.  After that it told me it was putting me through to an operator.  It didn’t: instead there was another caricature[2] of an advert for the telly and some more menus, before it again told me it was putting me through to an operator.  And finally a denouement so splendidly appropriate to the whole experience I transcribed it verbatim:

Sorry, this number is not in service.

All that call in vain.  No chance of getting through to a human.

OK, back to the 3-inch screen and the crap-filled webpages.  Find another ‘phone number, try it.  Soon converge with a horribly familiar sequence of menus and hang up.

The phone is getting uncomfortably hot to hold (due only in part to it being the hottest day of the year).  I’ve been struggling alone for long enough: time to try and enlist some moral support.  None of the neighbours are around, so I call John, who I expect probably has a decent-sized screen in front of him.  Enlist his help in finding the address of the Virgin shop in central Plymouth, with a view to getting on the bike and demanding to speak to someone who deals with broadband problems.  He also finds – with a lot of difficulty despite a full-size PC screen – another couple of ‘phone numbers.

I try the number for the shop, and after hearing opening hours and adverts, and declining to get directions for it, find myself back in the same menus I’ve learned go nowhere.

By now it’s past noon, and I see next door’s front door is open.  Knowing some of my neighbours use Virgin, I decide to ask.  Karen is just back from work, and confirms her internet is dead too.  So it’s not just me!  She also tells me the TV and phone – also supplied by Virgin in the same bundle – are working fine (so much for that status page)!!!  Using the Virgin ‘phone, a call to 150 is free to her, and takes her through the same rigmarole as my first call.  Only this time, it ends with her being put through to a human.  Hallelujah!

Turns out the human is, to take a charitable view, suffering from the time difference between the UK and India, and has probably had a good night out or a rough night.  That ‘phone call must’ve broken all records for the number of times Karen, and later I, repeated our respective addresses to the same person.  But we got some information: yes there is a fault in the area, and they anticipate a fix on July 29th.  Aaargh!!!  YOUR WIFE IS A BIG HIPPO!!!!

This is the point where I ask Karen if I can have a word with them, to try and ask what they can do for me in the meantime.  A connection over old-fashioned copper?  A 4g dongle?  No use, and asking to speak to her supervisor doesn’t help.  Well, actually he refers me on to Customer Services when I ask about alternatives, but after several more minutes on hold I regret that.  Where can I send the bill for my time, and for finding an alternative?

At least now I know the Virgin shop in town would be a waste of time.  How soon can I get a connection from someone else?  Fibre broadband is now available here, so there should be alternatives.

Try plusnet.  I was their customer for over ten years, with fewer problems than other ISPs I’ve used.  And there I can get to speak to a human when necessary!  Their website is unusable from the ‘phone, but I have their number.  Dammit, they tell me there’s a 15 minute wait, and the muzak is utterly horrendous.  Guess that’s what happens when a medium-sized ISP gets borged by BT :(

Hmmm …

What about a 4G dongle?  Would Currys or PCWorld sell me one?  Do we have 4G coverage?  I just about manage to access EE’s coverage map, which tells me yes I should.  OK, worth a try.  So braving the early afternoon heat, I trundle over to Currys, who can indeed sell me one, and a subscription to EE.  Great!

Actually not a dongle.  It’s a gadget that gives me another wifi signal, but whose connection to the outside world is 4G.  But it’s an emergency, and beggars can’t be choosers.  Indeed, in principle it’s a rather good solution: my problem with it is just the wifi-less macbook.

Is 4g as good as its enthusiasts claim?  Maybe I can make it my regular connection and ditch Virgin?  Guess I’ll find out over the coming week, and thereafter if I continue to use it.  Interesting times.

[UPDATE] Composing this on the wifi-less macbook, I’m now disconnected again, so this post won’t appear today.  If I have no connection tomorrow I’ll cut&paste it to another machine and publish from there.  Grrrr …

[1] These times are approximate, taken from when an IRC client – configured to connect automatically – notes connection and loss of connection.  The computer, and with it the IRC client, sleep when I’m not at a computer with IRC (which includes when I’m at the ultrabook, where screen space is too limited to run IRC unless I have a specific reason).

[2] I suspect I’m being over-polite in describing it as a caricature, as that would imply some kind of self-awareness.  Virgin’s current owners “Liberty Global” seem more likely to be the kind of corporation that gives the ‘merkins a bad name for being utterly oblivious to irony.

Categories: FLOSS Project Planets

### Pedro Rocha: OOP in Drupal 7: Cool module in practice

Planet Drupal - Thu, 2014-07-24 04:08
Hey, do you want to know how to:

- reduce the amount of code needed to create modules
- make the code more readable
- make it easier to find bugs
- make it easier to change business logic
- make it easier to improve code

Sounds good, right? If you only want to know about the code, jump to here, or read about the historical background of the module ;)
Categories: FLOSS Project Planets

### FSF Blogs: Friday Free Software Directory IRC meetup: July 25

GNU Planet! - Thu, 2014-07-24 00:52

Join the FSF and friends on Friday, July 25, from 2pm to 5pm EDT (18:00 to 21:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic categories and descriptions to detailed info about version control, IRC channels, documentation, and licensing, all of which has been carefully checked by FSF staff and trained volunteers.

While the Free Software Directory has been, and continues to be, a great resource to the world over the past decade, it has the potential to be a resource of even greater value. But it needs your help!

If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today!

Categories: FLOSS Project Planets

### NEWMEDIA: Avoiding the "API Integration Blues" on a Drupal Project

Planet Drupal - Wed, 2014-07-23 23:31
As Drupal continues to mature as a platform and gain adoption in the enterprise space, integration with one or more 3rd party systems is becoming common for medium to large scale projects. Unfortunately, it can be easy to underestimate the time and effort required to make these integrations work seamlessly. Here are lessons we've learned...

Mailchimp, Recurly, Mollom, Stripe, and on and on—it's easy to get spoiled by Drupal's extensive library of contributed modules that allow for quick, easy, and robust integration with 3rd party systems. Furthermore, if a particular integration does not yet exist, Drupal is extensible enough that it can be built, given the usual caveats (i.e. an appropriate amount of time, resources, and effort). However, these caveats should not be taken lightly. Our own experiences have unearthed many of the same pain points again and again, and they almost always result in waste. By applying this hard-won wisdom to subsequent projects involving integrations, we've been much more successful at identifying and addressing these issues head on. We hope that by sharing what we've learned, you can avoid some of the more common traps.

API and Integration Gotchas

Vaporware

Shocking as it may seem, there are situations where a client will assume an API exists when there isn't one to be found. Example: a client may be paying for an expensive enterprise software license that can connect to other programs within the same ecosystem, yet there may be no endpoint that Drupal can access. The key here is to ensure you have documentation up front, along with a working example of a read and/or write operation written in PHP or made through a web services call. Doing this as early as possible within the project will help prevent a nasty surprise when it's too late to change course or stop the project altogether.

An alternative to the scenario above is when an endpoint can be made available for an additional one-time or recurring fee. This can be quite an expensive surprise. It can also result in a difficult conversation with the client, particularly if it wasn't factored into the budget and now each side must determine who eats the cost. The key to preventing this is to verify (up front) if the API endpoint is included with the client's current license(s) or if it will be extra.

Limited Feature Sets

One can never assume that an entire feature set is available. Example: an enterprise resource planning (ERP) software solution may provide a significant amount of data and reporting to its end users, but it may only expose particular records (e.g. users, products, and inventory) to an API. The result: a Drupal site's scope document might include functionality that simply cannot be provided. To avoid this issue, you'll want to get your hands on any and all documentation as soon as possible. You'll also want to create an inventory of every feature that requires a read/write operation so that you can verify the documentation covers each and every item.
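The inventory check described above can be sketched in a few lines (the feature and operation names below are invented for illustration). Keeping the required operations and the documented ones as sets makes the gap analysis trivial:

```python
# Hypothetical inventory: every read/write operation the scope document
# needs, versus the operations the vendor's API documentation covers.
required = {
    "read_users", "write_users",
    "read_products", "read_inventory",
    "write_orders",
}
documented = {
    "read_users", "write_users",
    "read_products", "read_inventory",
}

# Anything required but undocumented is a scope conversation to have
# now, not at launch.
missing = sorted(required - documented)
```

Even a throwaway script like this, run against the real scope document, surfaces the "write_orders has no endpoint" conversation before the budget is committed.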

Documentation

Transcending the "Drupal learning cliff" was, and continues to be, a difficult journey for many members of the community, despite the abundance of ebooks, videos, and articles on the subject. Consider how much more difficult building Drupal sites would be if these resources didn't exist. Now imagine trying to integrate with a system you've never heard of, using a language you're unfamiliar with, and with no user guide to point you in the right direction.

Sounds scary, doesn't it?

Integrating with a 3rd party application without documentation is akin to flying blind. Sure you might eventually get to the appropriate destination, but you will likely spend a significant amount of time using trial and error. Worse yet, you may simply miss certain pieces of functionality altogether.

The key here, as always, is to get documentation as soon as you can. Also, pay attention to certain red flags, such as the client not having the documentation readily available or requiring time for one of their team members to write it up. This is particularly important if the integration is a one-off that is specific to the customer versus an integration with a widely known platform (e.g. Salesforce or PayPal).

One of Drupal's strengths is the ability for other modules to hook into common events. For example, a module could extend the functionality of a user saving his or her password to email a notification that the password was changed. When integrating with another system, it's equally important to understand what events may be triggered as a result of reading or writing a record. Otherwise, you may be in for a surprise when the external system starts firing off emails or trying to charge credit card payments.
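The pattern at work here (registered handlers firing as a side effect of a save) can be sketched outside of Drupal in a few lines. The event and handler names below are invented, and Drupal's actual hook system differs in detail; the point is only that the caller of a save never sees what else fires:

```python
# A minimal event-hook sketch, not Drupal's actual API: handlers
# registered for an event run every time a record is saved, which is
# exactly how a write can unexpectedly trigger emails in a real system.
handlers = {}

def on(event, handler):
    """Register a handler to run when `event` fires."""
    handlers.setdefault(event, []).append(handler)

def save_user(user, log):
    """Save a user record, then fire every registered side effect."""
    log.append("saved %s" % user)
    for handler in handlers.get("user_save", []):
        handler(user, log)

# Some other module hooks in; the code calling save_user never sees this.
on("user_save", lambda user, log: log.append("emailed %s" % user))

log = []
save_user("alice", log)
```

The same structure exists on the remote side of an integration, which is why documentation of *their* triggered events matters as much as documentation of the endpoints themselves.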

Documentation is invaluable for preventing these types of gaffes. However, in our experience it has also been important to have access to a support resource that can provide warnings up front.

Support

What happens when the documentation is wrong or the software doesn't work? If support for the API is slow or non-existent, the project may grind to a halt until the blocker is removed. For enterprise-level solutions, there is usually some level of support available via phone, forums, or support tickets. However, there can sometimes be a sizable fee for this service, and your particular questions might not be in scope for what the service provides. In those instances, it can be helpful to contract with a 3rd party vendor or contractor that has performed a similar integration in the past. This can be costly up front while saving a tremendous amount of time over the course of the project.

Domain Knowledge

As consultants, one of our primary objectives is to merge our expertise with the customer's domain knowledge in order to best achieve their goals. Therefore, it's important that we understand why the integration should work the way it does, instead of just how to read and write data back and forth. A great example of this involves integrating Drupal Commerce with Quickbooks through the Web Connector application. It's imperative to understand how the client's accounting department configures the Quickbooks application and how it manages the financial records. Otherwise, a developer may make an assumption that results in inefficient or (worse) incorrect functionality.

Similar to having a resource available for support on the API itself, it's invaluable to have access to team members on the client side that use the software on a daily basis so that nothing is missed.

Stability

Medium to large sized companies are becoming increasingly reliant on their websites to sustain and grow their businesses. Therefore, uptime is critical. And if the site depends on the uptime of a 3rd party integration to function properly, it may be useful to consider some form of redundancy or fallback solution. It is also important to make sure that support tickets can be filed with a maximum response time specified in any service-level agreement (SLA) with the client.
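One cheap form of that fallback, sketched here with invented stand-ins for the real data sources, is simply trying sources in order and serving a last-known-good copy when the live API is down:

```python
def fetch_with_fallback(sources):
    """Try each (name, fetch) pair in order; return the first result,
    raising only if every source fails."""
    errors = []
    for name, fetch in sources:
        try:
            return name, fetch()
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError("all sources failed: %r" % errors)

def live_api():
    raise IOError("3rd party API timed out")  # simulate an outage

def cached_copy():
    return {"inventory": 42}  # stale but usable local cache

source, data = fetch_with_fallback([("api", live_api), ("cache", cached_copy)])
```

Whether stale data is acceptable is a business decision, not a technical one, which is why the fallback behavior belongs in the SLA conversation with the client.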

Communication and Coordination

The rule of thumb here is simple: more moving parts in a project means more communication time spent keeping everyone in sync. Additionally, it's usually wise to develop against an API endpoint populated specifically with test data so that development cannot impact the client's production data. At some point, the test data will need to be cleared out and production data imported. This transition could be as simple as swapping a URL, or it could involve a significant amount of QA time testing and retesting full imports before making the final switch.
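To keep that URL swap from being error-prone, the endpoint choice can be driven by a single setting, defaulting to the sandbox so development can never touch production data by accident. The URLs and the `API_ENV` variable name below are invented for illustration:

```python
import os

# Hypothetical endpoints; the real URLs would come from the vendor.
ENDPOINTS = {
    "test": "https://sandbox.example.com/api/v1",
    "production": "https://api.example.com/v1",
}

def api_base_url(environment=None):
    """Resolve the API base URL from an explicit argument or the
    API_ENV environment variable, defaulting to the test endpoint
    so a misconfigured machine fails safe."""
    env = environment or os.environ.get("API_ENV", "test")
    if env not in ENDPOINTS:
        raise ValueError("unknown environment: %r" % env)
    return ENDPOINTS[env]
```

Rejecting unknown environment names outright is deliberate: a typo should stop the deploy rather than silently fall through to either endpoint.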

The best way to address these issues is simply to build more communication time into the budget than you would for a normal Drupal project.

SDKs

One gotcha that can be particularly difficult to work around is an API that requires you to use its specific software development kit (SDK) instead of a native PHP library. This may require the server to run a different OS (Windows instead of Linux) and web server (IIS instead of Apache). If you're not used to developing on these platforms, development time can slow down significantly. For example, a back-end developer may not be able to use the IDE they are accustomed to (with all of their optimized configurations and memorized shortcuts). This requirement may be unavoidable in some circumstances, so the best way to deal with it is to add a simple percentage buffer to the budgeted hours.

VMs

When possible, it is ideal for developers to work on their own machines locally, with a fully replicated instance of the API they are interacting with. Example: Quickbooks connecting through its Web Connector application to read and write records from a Drupal Commerce site. To test this connection, it is extremely helpful to have a local virtual machine (VM) with Windows and Quickbooks, which a developer can then use to trigger the process. If a project involves multiple developers, each can have their own copy to use as a sandbox.

Setting up a local VM definitely adds an upfront cost. However, for larger projects this investment can generally be recouped many times over through increased development speed and the ability to start from a consistent target.

By now, we hope we've made the case that it's important to do your due diligence when taking on a project involving integrations. And while this entire list of potential pain points may seem like overkill, we've personally experienced the effects of every one of them at some point in our company's history. Ultimately, both you and the client want to avoid the uncomfortable conversation about a project's timeline slipping and its budget being exceeded. Therefore, it's critical to address these issues thoroughly and as early in the project as possible. If uncertainty is especially high, it's usually beneficial to include a line item in the project's statement of work to evaluate this piece separately. Finally, if you're able to negotiate the terms of the contract, the budget for the integration shouldn't be set until an evaluation (even a partial one) has been completed.

Thoughts? Story to share? We'd love to get your feedback on how to improve upon this article.

Categories: FLOSS Project Planets

### Matthew Palmer: First Step with Clojure: Terror

Planet Debian - Wed, 2014-07-23 20:30
```
$ sudo apt-get install -y leiningen
[...]
$ lein new scratch
[...]
$ cd scratch
$ lein repl
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.pom from repository central at http://repo1.maven.org/maven2
Transferring 5K from central
Downloading: org/sonatype/oss/oss-parent/5/oss-parent-5.pom from repository central at http://repo1.maven.org/maven2
Transferring 4K from central
Downloading: org/clojure/clojure/1.3.0/clojure-1.3.0.jar from repository central at http://repo1.maven.org/maven2
Transferring 3311K from central
[...]
```

Wait… what? lein downloads some random JARs from a website over HTTP [1], with, as far as I can tell, no verification that what I’m asking for is what I’m getting (has nobody ever heard of Man-in-the-Middle attacks in Maven land?). It downloads a .sha1 file to (presumably) do integrity checking, but that’s no safety net – if I can serve you a dodgy .jar, I can serve you an equally-dodgy .sha1 file, too (also, SHA-256 is where all the cool kids are at these days). Finally, jarsigner tells me that there’s no signature on the .jar itself, either.
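The point about co-hosted checksums is easy to demonstrate. In this sketch, hashlib's SHA-256 stands in for whatever digest the repository actually uses and the jar bytes are placeholders. A digest only defends against tampering if the expected value arrives over a channel the attacker cannot also control; fetching foo.jar.sha1 from the same HTTP host as foo.jar only catches accidental corruption:

```python
import hashlib

def verify(data, expected_digest):
    """Return True if data hashes (SHA-256) to the expected digest."""
    return hashlib.sha256(data).hexdigest() == expected_digest

jar = b"...jar bytes fetched over plain HTTP..."  # placeholder payload

# If the digest is pinned out-of-band (e.g. recorded in the build
# config ahead of time), tampering with the jar in transit is caught...
pinned = hashlib.sha256(jar).hexdigest()
ok = verify(jar, pinned)

# ...but if the attacker controls the connection, they serve a matching
# digest alongside their dodgy jar, and verification happily passes.
evil_jar = b"...attacker-controlled bytes..."
evil_digest = hashlib.sha256(evil_jar).hexdigest()
still_ok = verify(evil_jar, evil_digest)
```

This is why pinned digests (or signatures chaining to an offline trust root, as argued below) are categorically different from a checksum file sitting next to the artifact.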

It gets better, though. The repo1.maven.org site is served by the fastly.net [2] pseudo-CDN [3], which adds another set of points in the chain which can be subverted to hijack and spoof traffic. More routers, more DNS zones, and more servers.

I’ve seen Debian take a kicking more than once because packages aren’t individually signed, or because packages aren’t served over HTTPS. But at least Debian’s packages can be verified by chaining to a signature made by a well-known, widely-distributed key, signed by two Debian Developers with very well-connected keys.

This repository, on the other hand… oy gevalt. There are OpenPGP (GPG) signatures available for each package (tack .asc onto the end of the .jar URL), but no attempt was made to download the signatures for the .jar I downloaded. Even if the signature was downloaded and checked, there’s no way for me (or anyone) to trust the signature – the signature was made by a key that’s signed by one other key, which itself has no signatures. If I were an attacker, it wouldn’t be hard for me to replace that key chain with one of my own devising.

Even ignoring everyone living behind a government- or company-run intercepting proxy, and everyone using public wifi, it’s pretty well common knowledge by now (thanks to Edward Snowden) that playing silly-buggers with Internet traffic isn’t hard to do, and there’s no shortage of evidence that it is, in fact, done on a routine basis by all manner of people. Serving up executable code to a large number of people, in that threat environment, with no way for them to have any reasonable assurance that code is trustworthy, is very disappointing.

Please, for the good of the Internet, improve your act, Maven. Putting HTTPS on your distribution would be a bare minimum. There are attacks on SSL, sure, but they’re a lot harder to pull off than sitting on public wifi hijacking TCP connections. Far better would be to start mandating signatures, requiring signature checks to pass, and having all signatures chain to a well-known, widely-trusted, and properly secured trust root. Signing all keys that are allowed to upload to maven.org with a “maven.org distribution root” key (itself kept in hardware and only used offline), and then verifying that all signatures chain to that key, wouldn’t be insanely difficult, and would greatly improve the security of the software supply chain. Sure, it wouldn’t be perfect, but don’t make the perfect the enemy of the good. Cost-effective improvements are possible here.

Yes, security is hard. But you don’t get to ignore it just because of that, when you’re creating an attractive nuisance for anyone who wants to own up a whole passel of machines by slipping some dodgy code into a widely-used package.

1. To add insult to injury, it appears to ignore my http_proxy environment variable, and the repo1.maven.org server returns plain-text error responses with Content-Type: text/xml. But at this point, that’s just icing on the shit cake.

2. At one point in the past, my then-employer (a hosting provider) blocked Fastly’s caching servers from their network because they took down a customer site with a massive number of requests to a single resource, and the incoming request traffic was indistinguishable from a botnet-sourced DDoS attack. The requests were coming from IP space registered to a number of different ISPs, with no distinguishing rDNS (184-106-82-243.static.cloud-ips.com doesn’t help me to distinguish between “I’m a professionally-run distributed proxy” and “I’m a pwned box here to hammer your site into the ground”).

3. Pretty much all of the new breed of so-called CDNs aren’t actually pro-actively distributing content, they’re just proxies. That isn’t a bad thing, per se, but I rather dislike the far-too-common practice of installing varnish (and perhaps mod_pagespeed, if they’re providing “advanced” capabilities) on a couple of AWS instances, and hanging out your shingle as a CDN. I prefer a bit of truth in my advertising.

Categories: FLOSS Project Planets

### Justin Mason: Links for 2014-07-23

Planet Apache - Wed, 2014-07-23 19:58
• An art professor from Syracuse University in the US, Van Aken grew up on a family farm before pursuing a career as an artist, and has combined his knowledge of the two to develop his incredible Tree of 40 Fruit.  In 2008, Van Aken learned that an orchard at the New York State Agricultural Experiment Station was about to be shut down due to a lack of funding. This single orchard grew a great number of heirloom, antique, and native varieties of stone fruit, and some of these were 150 to 200 years old. To lose this orchard would render many of these rare and old varieties of fruit extinct, so to preserve them, Van Aken bought the orchard, and spent the following years figuring out how to graft parts of the trees onto a single fruit tree. [...] Van Aken’s Tree of 40 Fruit looks like a normal tree for most of the year, but in spring it reveals a stunning patchwork of pink, white, red and purple blossoms, which turn into an array of plums, peaches, apricots, nectarines, cherries and almonds during the summer months, all of which are rare and unique varieties.

Categories: FLOSS Project Planets

### Russ Allbery: WebAuth 4.6.1

Planet Debian - Wed, 2014-07-23 18:59

This is a bug-fix release of the WebAuth site-wide web authentication system. As is typical, I accumulated a variety of minor bug fixes and improvements that I wanted to get into a release before starting larger work (in this case, adding JSON support for the user information service protocol).

The most severe bug fix is something that only folks at Stanford would notice: support for AuthType StanfordAuth was broken in the 4.6.0 release. This is for legacy compatibility with WebAuth 2.5. It has been fixed in this release.

Among the more minor bug fixes and improvements:

- build issues when remctl support is disabled have been fixed
- expiring password warnings are shown in WebLogin after any POST-based authentication
- the confirmation page is forced if authorization identity switching is available
- the username field is verified before multifactor authentication to avoid subsequent warnings
- newlines and tabs are allowed in the XML sent from the WebKDC for user messages
- empty RT and ST parameters are correctly diagnosed
- there are some documentation improvements

The main new feature in this release is support for using FAST armor during password authentication in mod_webkdc. A new WebKdcFastArmorCache directive can be set to point at a Kerberos ticket cache to use for FAST armor. If set, FAST is required, so the KDC must support it as well. This provides better wire security for the initial password authentication to protect against brute-force dictionary attacks against the password by a passive eavesdropper.

This release also adds a couple of new factor types, mp (mobile push) and v (voice), that Stanford will use as part of its Duo Security integration.

Note that, for the FAST armor feature, there is also an SONAME bump in the shared library in this release. Normally, I wouldn't bump the SONAME in a minor release, but in this case the feature was fairly minor and most people will not notice the change, so it didn't feel like it warranted a major release. I'm still of two minds about that, but oh well, it's done and built now. (At least I noticed that the SONAME bump was required prior to the release.)

You can get the latest release from the official WebAuth distribution site or from my WebAuth distribution pages.

Categories: FLOSS Project Planets

### Metal Toad: Drupal Solr Search with Domain Access filtering

Planet Drupal - Wed, 2014-07-23 17:34

Metal Toad has had the privilege of working with DC Comics over the past two years. What makes this partnership even more exciting is that the main dccomics.com site also includes sites for Vertigo Comics and Mad Magazine. Most recently, Metal Toad was given the task of building the new search feature for all three sites. However, while it's an awesome privilege to work with such a well-known brand as DC, it does not come without a complex set of issues for the three sites when working with Apache Solr search and Drupal.

Categories: FLOSS Project Planets

### It’s Aliiiiive!

Planet KDE - Wed, 2014-07-23 17:22

In February, I wrote a blog post entitled “Leveraging the Power of Choice”, in which I described an idea I had discussed with Àlex Fiestas about making it easy for users to choose between different Plasmoids for the same task (e.g. different application launchers, task managers, clocks, …). At the time I wrote the blog post, Marco Martin already had ideas about how to implement the feature, though he said that he wouldn’t have time to implement it before the Plasma 5.0 release. Shortly after Plasma 5.0 was released, Marco started implementation as promised. We decided it would make sense to start a thread in the VDG forum to collect ideas for the UI’s design. Together with several other forum users (most notably rumangerst and andreas_k) we fleshed out the design, which currently looks like this:

Plasmoid alternatives switching UI, latest draft

Fast forward to today, when Marco announced on the Plasma mailing list that “now the alternatives config ui has landed”. It always feels great to see one’s ideas come to life thanks to collaboration with developers and designers. Even now, though, input is still welcome in the forum thread!

Everyone who wants to switch between Plasmoid alternatives easily, look forward to Plasma 5.1!

Filed under: KDE
Categories: FLOSS Project Planets