OSS Watch team blog

open source software advisory service

If all bugs are shallow, why was Heartbleed only just fixed?

Thu, 2014-04-10 08:25

This week the Internet’s been ablaze with news of another security flaw in a widely used open source project, this time a bug in OpenSSL dubbed “Heartbleed”.

This is the third high-profile security issue in as many months. In each case the code was not only open but being used by thousands of people including some of the world’s largest technology companies, and had been in place for a significant length of time.

In his 1997 essay The Cathedral and the Bazaar, Eric Raymond stated that “Given enough eyeballs, all bugs are shallow.”  If this rule (dubbed Linus’s Law by Raymond, for Linus Torvalds) still holds true, then how can these flaws exist for such a long time before being fixed?

Let’s start by looking at what we mean by a “bug”.  Generally speaking, the term “bug” refers to any defect in the code of a program whereby it doesn’t function as required.  That definition certainly applies in all these cases, but for a bug to be reported, it has to affect people in a noticeable way.  The particular variety of bug we’re talking about here, a security flaw in an encryption library, doesn’t noticeably affect ordinary use, even at the scale of deployment we see here.  It’s only when people try to use the software in a way other than intended, or specifically audit the code looking for such issues, that these bugs become apparent.
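To make that concrete, here is a minimal C sketch of the class of bug behind Heartbleed (the names and structure are invented for illustration; this is not OpenSSL’s actual code):

    #include <stdlib.h>
    #include <string.h>

    struct heartbeat {
        size_t claimed_len;           /* length field supplied by the peer */
        size_t actual_len;            /* bytes actually received */
        const unsigned char *payload;
    };

    /* Echo the payload back to the sender, as a heartbeat requires. */
    unsigned char *build_response(const struct heartbeat *req)
    {
        unsigned char *resp = malloc(req->claimed_len);
        if (resp == NULL)
            return NULL;
        /* BUG: trusts claimed_len. A malicious peer that claims more
         * bytes than it actually sent makes this copy run past the real
         * payload and leak adjacent heap memory. The fix is a bounds
         * check: reject the request if claimed_len > actual_len. */
        memcpy(resp, req->payload, req->claimed_len);
        return resp;
    }

An honest peer always sets the claimed length to match what it actually sent, so in normal use the bug changes nothing observable; only a deliberately malformed request, or a code audit, reveals it.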

When Raymond talks about bugs being shallow, he’s really talking about how many people looking at a given problem will find the cause and solution more quickly than one person looking at that problem.  In the essay, Raymond quotes Torvalds saying “I’ll go on record as saying that finding [the bug] is the bigger challenge.”

So the problem we’ve been seeing here isn’t that the bugs took a long time to diagnose and fix; rather, it’s that their lack of impact on the intended use of the software meant they took a long time to be noticed.  Linus’s Law still holds true, but it’s not a panacea for security. The recent events affirm that neither open nor closed code is inherently more secure.

For more about security in open source software, check out our briefing on the subject.

Categories: FLOSS Research

Software Freedom and Frontline Services

Wed, 2014-04-09 05:34

There are many reasons why organisations choose open source software. Sometimes Total Cost of Ownership (TCO) is a key factor. Sometimes the FOSS offering is the market leader in its niche (such as Hadoop for Big Data, or WordPress for blogs). Sometimes it’s the particular features and qualities of the software.

However, one of the other key characteristics of FOSS is freedom, and this is emerging as an important consideration when using software to deliver key user-facing services.

At the Open Source Open Standards conference in London on April 3rd, James Stewart from the UK Government Digital Service (GDS) stressed the importance of the permissionless character of FOSS. This enables GDS to modify government frontline services whenever there is need, or whenever an improvement can be identified. This means that services can innovate and improve continually without requiring cycles of negotiation with suppliers.

Businesses are also identifying the importance of “delivering constant value and incremental improvement” that can be delivered through participation in FOSS projects, and through permissionless modification.

While it may seem at odds with typical procurement and deployment practices, where software is used to deliver key services to customers, organisations can choose to devote resources to continual innovation and improvement (using agile processes and continuous testing) rather than the more traditional model of sparse, planned service upgrades. This can make the difference in crowded markets, and in the public sector it allows services to respond to public demand. With FOSS, continual service improvement processes can be implemented in an agile manner.

Free and Open Source Software is an enabler of permissionless innovation, so when evaluating software for frontline services, bear this in mind.

Image by Thomas Hawk used under CC-BY-NC. Please note that the NC clause may be incompatible with reuse of the text of this blog post, which is CC-BY-SA. For avoidance of doubt, syndicate this article using another image.

Categories: FLOSS Research

What can we learn from security failures?

Thu, 2014-03-20 10:45

After posting on the Apple goto fail bug, it is regrettable to have to talk about another serious, major bug in open source software so soon. This time it is more serious still, in that it has existed for over ten years, and is relied upon by many other pieces of commonly deployed open source software. The bug is strikingly similar to Apple’s, in that it happens as a result of code which is intended to signal an error but which, through a subtle programming fault, in fact fails to do so. This bug was found as a result of an audit commissioned by commercial Linux provider Red Hat, and was discovered and publicised by its own author. What can we learn from these two failures in security-critical open source code? For a start, it might lead us to question the so-called ‘Linus’s Law’, first recorded by Eric Raymond:

Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix will be obvious to someone.

This is sometimes referred to as the ‘many eyes’ principle, and is cited by some open source proponents as a reason why open source should be more secure than closed source. This conclusion is, however, controversial, and this particular bug shows one reason why. In discussing the reasons why this bug slipped through ten years worth of review, the reviewer/author says the following:

As this code was on a critical part of the library it was touched and thus read, very rarely.

A naive view – and certainly one I’ve subscribed to in the past – is that critical code must surely get reviewed more frequently than non-critical code. In practice, though, it can be the subject of a lot of assumptions, for example that it must be sound, given its importance, or that it should not be tinkered with idly and so is not worth reviewing.

So must we abandon the idea that source code availability leads to better security? As I said in the previous post, I think not. We just have to accept that source code availability in itself has no effect. It facilitates code review and improvement, if there’s a will to undertake that work. It makes it easy to share exactly what a bug was once it was found, and in turn it makes it easier for maintainers of other code bases to examine their own source for similar issues. Finally it allows anyone who finds a problem to fix it for themselves, and to share that fix. What we must not do is assume that because it is open source someone has already reviewed it, and – if this incident teaches anything at all – we must not assume that old, critical code is necessarily free of dumb errors.

Categories: FLOSS Research

5 lessons for OER from Open Source and Free Software

Wed, 2014-03-19 07:05

While the OER community owes some of its genesis to the open source and free software movements, there are some aspects of how and why these movements work that I think are missing or need greater emphasis.

1. It’s not what you share, it’s how you create it

One of the distinctive elements of the open source software movement is the open development project. These are projects where software is developed cooperatively (not necessarily collaboratively) in public, often by people contributing from multiple organisations. All the processes that lead to the creation and release of software – design, development, testing, planning – happen using publicly visible tools. Projects also actively try to grow their contributor base.

When a project has open and transparent governance, it’s much easier to encourage people to voluntarily provide effort free of charge that far exceeds what you could afford to pay for within a closed in-house project. (Of course, you have to give up a lot of control, but really, what was that worth?)

While there are some cooperative projects in the OER space, for example some of the open textbook projects, for the most part the act of creating the resources tends to be private; either the resources are created and released by individuals working alone, or developed by media teams privately within universities.

Also, in the open source world it’s very common for multiple companies to put effort into the same software projects as a way of reducing their development costs and improving the quality and sustainability of the software. I can’t think offhand of any examples of education organisations collaborating on designing materials on a larger scale – for example, cooperating to build a complete course.

Generally, the kind of open source activity OER most often resembles is the “code dump” where an organisation sticks an open license on something it has essentially abandoned. Instead, OER needs to be about open cooperation and open process right from the moment an idea for a resource occurs.

Admittedly, the most popular forms of OER today tend to be things like individual photos, PowerPoint slides, and podcasts. That may partly be because there is not an open content creation culture that makes bigger pieces easier to produce.

2. Always provide “source code”

Many OERs are distributed without any sort of “source code”. In this respect, license aside, they don’t resemble open source software so much as “freeware” distributed as executables you can’t easily pick apart and modify.

Distributing the original components of a resource makes it much easier to modify and improve. For example, where the resource is in a composite format such as a PDF, eBook or slideshow, provide all the embedded images separately too, in their original resolution, or in their original editable forms for illustrations. For documents, provide the original layout files from the DTP software used to produce them (but see also point 5).

Even where an OER is a single photo, it doesn’t hurt to distribute the original raw image as well as the final optimised version. Likewise for a podcast or video the original lossless recordings can be made available, as individual clips suitable for re-editing.

Without “source code”, resources are hard to modify and improve upon.

3. Have an infrastructure to support the processes, not just the outputs

So far, OER infrastructure has mostly been about building repositories of finished artefacts but not the infrastructure for collaboratively creating artefacts in the open (wikis being an obvious exception).

I think a good starting point would be to promote GitHub as the go-to tool for managing the OER production process. (I’m not the only one to suggest this; Audrey Watters also blogged this idea.)

It’s such an easy way to create projects that are open from the outset, and it has a built-in mechanism for creating derivative works and contributing back improvements. It may not be the most obvious thing to use from the point of view of educators, but I think it would make it much clearer how to create OERs as an open process.

There have also been initiatives to build a sort of “GitHub for education”, such as CourseFork, that may fill the gap.

4. Have some clear principles that define what it is, and what it isn’t

There has been a lot written about OER (perhaps too much!). However, what there isn’t is a clear set of criteria that something must meet to be considered OER.

For Free Software we have the Four Freedoms as defined by FSF:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

If a piece of software doesn’t support all of these freedoms, it cannot be called Free Software. And there is a whole army of people out there who will make your life miserable if you try to pass software off as such when it doesn’t.

Likewise, to be “open source” means to support the complete Open Source Definition published by OSI. Again, if you try to pass off a project as being open source when it doesn’t support all of the points of the definition, there are a lot of people who will be happy to point out the error of your ways. And quite possibly sue you if you misuse one of the licenses.

If it isn’t open source according to the OSI definition, or free software according to the FSF definition, it isn’t some sort of “open software”. End of. There is no grey area.

It’s also worth pointing out that while there is a lot of overlap between Free Software and Open Source at a functional level, how the criteria are expressed is also fundamentally important to their respective cultures and viewpoints.

The same distinctive viewpoints or cultures that underlie Free Software vs. Open Source are also present within what might be called the “OER movement”, and there has been some discussion of the differences between what might broadly be called “open”, “free”, and “gratis” OERs which could be a starting point.

However, while there are a lot of definitions of OER floating around, no such recognised definitions and labels have emerged – no banners to rally to for those espousing these distinctions.

Now it may seem odd to suggest that splitting into factions would be a way forward for a movement, but the tension between the Free Software and Open Source camps has, I think, been a net positive (of course those in each camp might disagree!). By aligning yourself with one or the other group you are making it clear what you stand for. You’ll probably also spend more of your time criticising the other group, and less time on infighting within your own!

Until some clear lines are drawn about what it really stands for, OER will continue to be whatever you want to make of it according to any of the dozens of competing definitions, leaving it vulnerable to openwashing.

5. Don’t make OERs that require proprietary software

OK, so most teachers and students still use Microsoft Office, and many designers use Adobe. However, it’s not that hard to develop resources that can be opened and edited using free or open source software.

The key to this is to develop resources using open standards that allow interoperability with a wider range of tools.

This could become more of an issue if (or rather when) MOOC platforms start to “embrace and extend” common formats to tie authors to their platform features. Again, there are open standards (such as IMS LTI and the Experience API) that mitigate this risk. This is of course where CETIS comes in!

Is that it?

As I mentioned at the beginning of this post, OER is to some extent inspired by Open Source and Free Software, so it already incorporates many of the important lessons learned, such as building on (and to some extent simplifying and improving) the concept of free and open licenses. However, it’s about more than just licensing!

There may be other useful lessons to be learned and parallels drawn – add your own in the comments.

Originally posted on Scott’s personal blog

Categories: FLOSS Research

Open Source Open Standards 2014

Tue, 2014-03-18 06:28

The Open Source, Open Standards conference is in London on the 3rd of April 2014, and this year OSS Watch has joined the list of supporters for the event.

I attended the conference in 2013, which was well attended from across the public sector and from open source companies and organisations. You can read my post on that event here.

Given how closely aligned open standards and open source software are in the view of policy makers (though we do like to point out that it’s not quite that simple), it’s perhaps odd that so few events explicitly cover both bases.

The event is also interesting in that there are as many talks – more, perhaps – from the customer point of view as from suppliers. This means it is much more about sharing experiences than the sales-oriented events commonly targeted at the public sector, and this is why we decided to be associated with the event this year.

Public sector organisations need to share experiences with each other, not just engage with suppliers, if they are to take advantage of the opportunities of open source software and open standards, and events like this are one place to do just that. (Of course, this applies equally to educational institutions such as universities and colleges – and the organisers are keen this year to open up the scope to include them.)

If you attend the event, feel free to say hello – one of us at least will be there on the day.

Categories: FLOSS Research

Should Markdown become a standard?

Thu, 2014-03-06 06:37

We’re big fans of Markdown at OSS Watch; the lightweight format is how we create all the content for the OSS Watch website (thanks to the very nice Jekyll publishing engine).

Markdown is more or less a set of conventions for semi-structured plain text that can be fairly easily converted into HTML (or other formats). The idea is that it’s easy to read and write using text editors. This makes it great for wikis, and for source code documentation. It’s the format used for most of the README files on GitHub, for example.

    # This is a header

    Some text with bits in **bold** or in *italics*.

    > I'm a blockquote!

    ## Sub-header

    * Bullet one
    * Bullet two
    * Bullet three

Markdown has flourished despite the fact it isn’t really an open standard as such. There are numerous “flavours” of Markdown, such as GitHub-Flavoured Markdown, and different processing engines have their own extensions, particularly in areas where there is no common agreement on how to represent things such as tables and definition lists.
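Tables are a good example of this divergence. The original Markdown.pl has no table syntax at all and passes the text below through as an ordinary paragraph, pipes and all, while GitHub-Flavoured Markdown and several other processors render it as a table (a minimal sketch of the pipe convention):

    | Flavour                   | Table syntax   |
    |---------------------------|----------------|
    | Markdown.pl               | None           |
    | GitHub-Flavoured Markdown | Pipe-delimited |

The same source therefore produces structurally different output depending on which processor you feed it to, which is exactly the sort of inconsistency a standard would settle.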

Despite this lack of consistency, Markdown works very well without ever becoming a de jure standard. The closest it’s got to standardisation is a community site on GitHub following a call to action by Jeff Atwood.

What has brought this into focus for me is the discussion around open formats for the UK government. Putting the (huge) issue of ODF versus OOXML to one side, I would actually prefer it if more government content was in an easy to read plain text format rather than in any flavour of whopping great office-style documents. In fact, wouldn’t it be excellent if they were to use Markdown?

Which is where the problem lies – it’s difficult for government to mandate or even recognise “standards” that have no clear provenance or governance model arising out of some sort of recognisable standards organisation. This isn’t a problem when it’s just a case of “using whatever works” as an individual developer (which is how sort-of standards like Markdown and RSS take off), but it seems to be a major barrier when trying to formulate policy.

So sadly, unless there is a new concerted effort to make some sort of standardised Markdown, I don’t think my dream of reading government documents in Markdown using TextEdit or on GitHub is likely to become a reality.

Categories: FLOSS Research

A new Voice in the crowd

Thu, 2014-02-27 12:48

Six months ago, four journalists quit their respective jobs at the leading UK Linux magazine, Linux Format.  Today, a new magazine hit the shelves of the country’s newsagents: Linux Voice.

Linux Voice on the shelves of a popular high-street newsagent (yes, that one)

With the same team behind it, Linux Voice has the same feel and a similar structure to old issues of Linux Format. However, Linux Voice aims to be different from other Linux publications in three key ways: it’s independent, so only answerable to readers; nine months after publication, all issues will be licensed CC-BY-SA; and 50% of the profits at the end of each financial year will be donated to free software projects, as chosen by the readers.

Linux Voice’s Copyright notice, including an automatic re-licensing clause

By presenting itself with these key principles, Linux Voice embodies in a publication the spirit of the community it serves, which provides a compelling USP for free software fans.  On top of that, Linux Voice was able to get started thanks to a very successful crowdfunding campaign on IndieGoGo, giving the community a real sense of ownership.

Aside from the business model, the first issue contains some great content.  There’s a 2-page section on games for Linux, which would have been hard to fill two years ago, but is now sure to grow.  There’s a round-up of encryption tools looking at security, usability and performance, to help average users keep their data safe.  There’s a bundle of features and tutorials, including homebrew monitoring with a RaspberryPi and PGP email encryption. Plus, of course, letters from users, news, and the usual regulars you’d expect from any magazine.

I’m particularly impressed by what appears to be a series of articles about the work of some of the female pioneers of computing. Issue 1 contains a tutorial looking at the work of Ada Lovelace, and Issue 2 promises to bring us the work of Grace Hopper.  It’s great to see a publication shining the spotlight on some of the early hackers, and it’s fascinating to see how it was done before the days of IDEs, text editors, or even in some cases electricity!

For your £6 (less if you subscribe) you get 114 pages jammed with great content, plus a DVD with various Linux distros and other software to play with. Well worth it in my opinion, and I look forward to Issue 2!

Categories: FLOSS Research

Open Source Phones at MWC

Thu, 2014-02-27 06:00

Mobile World Congress is running this week in Barcelona.  While it’s predictable that we’ve seen lots of Android phones, including the big unveiling of the Galaxy S5 from Samsung, I’ve found it interesting to see the coverage of the other devices powered by open source technologies.

Mozilla announced their plans for a smartphone that could retail for as little as $25. It’s based on a new system-on-chip platform that integrates a 1GHz processor, 1GB of RAM and 2GB of flash memory, and will of course be running the open source Firefox OS.  It’s very much an entry level smartphone, but the $25 price point gives real weight to Mozilla’s ambition to target the “next billion” web users in developing countries.

Ubuntu Touch is finally seeing the light of day on two phones, one from Chinese manufacturer Meizu and one from Spanish manufacturer Bq.  Both phones are currently sold running Android, but will ship with Ubuntu later this year.  The phones’ internals are specified with high-end performance in mind, with the Meizu sporting an 8-core processor and 2GB of RAM, clearly chosen to deliver Ubuntu’s fabled “convergence story”.

Rumours have abounded this year that Nokia was planning to release an Android smartphone, and at MWC they confirmed the rumours were true – sort of.  “Nokia X” will be a fork of Android with its own app store (as well as third-party ones) and a custom interface that borrows elements from Nokia’s Asha platform and Windows Phone.  Questions were raised over whether Microsoft’s takeover of Nokia’s smartphone business would prevent an Android-based Nokia being possible.  However, Microsoft’s vice-president for operating systems Joe Belfiore said “Whatever they do, we’re very supportive of them,” while Nokia’s Stephen Elop maintains that the Windows-based Lumia range is still their primary smartphone product.

A slightly more left-field offering comes in the shape of Samsung’s Gear 2 “smartwatch” running Tizen, the apparently-not-dead-after-all successor to Maemo, Meego, LiMo, and all those other Linux-based mobile operating systems that never quite made it.  The device is designed to link up to the Samsung Galaxy range of Android phones, but with the dropping of “Galaxy” from the Gear’s branding, perhaps we’ll be seeing a new brand of Tizen powered smartphones from Samsung in the future.

Categories: FLOSS Research

GotoFail, Open Source and Edward Snowden

Wed, 2014-02-26 07:31

On Friday Apple released a patch for a flaw in one of their core security libraries. The library is used in both Apple’s mobile operating system iOS and their desktop operating system OSX. As of today, the desktop version has yet to be patched. This flaw, and its aftermath, are interesting for a number of reasons.

Firstly, it’s very serious. The bug means that insecure network connections are falsely identified as secure by the operating system. This means that the flaw has an impact across numerous programs; anything that relies on the operating system to negotiate a secure connection could potentially be affected. This makes a whole range of services like web and mail vulnerable to so-called ‘man-in-the-middle’ attacks where a disreputable network host intercepts your network traffic, and potentially thereby gains access to your personal information.

Secondly, the flaw was dumb. The code in question includes an unnecessarily duplicated ‘goto’, shown here:
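The excerpt below is abridged from Apple’s published sslKeyExchange.c (the elisions and the comments are mine):

    static OSStatus
    SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa,
                                     SSLBuffer signedParams,
                                     uint8_t *signature, UInt16 signatureLen)
    {
        OSStatus err;
        /* ... hashing of the ServerKeyExchange parameters ... */

        if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
            goto fail;
        if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
            goto fail;
            goto fail;    /* the rogue goto: always executed, with err == 0 */
        if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
            goto fail;

        err = sslRawVerify(ctx, /* ... */);    /* never reached */

    fail:
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;    /* err is still 0, so verification "succeeds" */
    }

Because err still holds zero from the last successful call when the unconditional goto fires, the signature is never actually checked, yet the function reports success.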

It looks like a cut-and-paste error, as the rogue ‘goto’ is indented as though it is conditional when – unlike the one above it – it is not. There are many reasons a bug like this ought not to get through quality assurance. It results in unreachable code, which the compiler would normally warn about. It would have been obvious if the code had been run through a tool that checks coding style, another common best practice precaution. Apple have received a huge amount of criticism for both the severity and the ‘simplicity’ of this bug.

Thirdly, and this is where we take a turn into the world of free and open source software, the code in question is part of Apple’s open source release programme. That is why I can reproduce the source code up there, and why critics of Apple have been able to see exactly how dumb this bug is. So one effect of Apple making the code open source has been that – arguably – it has increased the anger and ridicule to which they have been exposed. Without the source being available, we would have a far less clear idea of how dumb a mistake this was. Alright, one might argue, open source release makes your mistakes clear, but it also lets anyone fix them. That is a good trade-off, you might say. Unfortunately, in this case, it is not that simple. Despite being open source, the security framework in question is not provided by Apple in a state which makes it easy to modify and rebuild. Third-party hackers have found it easier to fix the OSX bug by patching the faulty binary – normally a much more difficult route – rather than using Apple’s open source code to compile a fixed binary.

It is often argued that one key benefit of open source is that it permits code review by anyone. In this case, though, despite being a key security implementation and being available to review for over a year, this bug was seemingly not identified via source review. For me, this once again underlines that – while universal code review is a notional benefit of open source release – in practice it is the universal ability to fix bugs once they’re found that is the strongest argument for source availability strengthening security. In this case Apple facilitated the former goal but made the latter problematic, and thereby in my opinion seriously reduced the security benefit open source might have brought.

Finally, it is interesting to note that a large number of commentators have asked whether this bug might have been deliberate. In the atmosphere of caution over security brought about by Edward Snowden’s revelations, these questions naturally arise. Did Apple deliberately break their own security at the request of the authorities? Obviously we cannot know. However it is interesting to note the relation between that possibility and the idea that open source is a weapon against deliberate implantation of flaws in software.

Bruce Schneier, the security analyst brought in by The Guardian to comment on Snowden’s original documents, noted in his commentary that the use of free and open source software was a means of combating national security agencies and their nasty habit of implanting and exploiting software flaws. After all if you can study the source you can see the backdoors, right? Leaving aside the issue of compromised compiler binaries, which might poison your binaries even when the source is ‘clean’, the GotoFail incident raises another question about the efficacy of open source as a weapon against government snooping. Whether deliberate or not, this flaw has been available for review for over a year.

The internet is throbbing with the schadenfreude of programmers and others attacking Apple over their dumbness. Yet isn’t another lesson of this debacle that we cannot rely on open source release on its own to act as a guarantee that our security-critical code is neither compromised nor just plain bad?

Categories: FLOSS Research

Google Summer of Code 2014 is nearly here – are you ready?

Mon, 2014-02-24 10:35

2014 marks the 10th anniversary of the Google Summer of Code, a competition that brings students and open source communities together through summer coding projects.

Each year the competition gathers ideas for contributions from a wide range of projects and foundations, and invites proposals from students for tackling them. As well as the experience of working on major projects with expert mentors, students are also paid a stipend of $5,500 for their effort.

With the date for accepting proposals only a few weeks away (10th-21st March), it’s time to get a move on!

Personally I think GSoC is an excellent opportunity for students to develop real-world development and project participation skills and make connections that will be useful after they graduate, and I’m always surprised how few students from the UK apply each year.

If you are a lecturer in a university, it’s a good time to raise awareness of the competition with your students. You can download a flyer here, for example.

Time too tight? Well, after the Google Summer of Code, there is also the VALS Semester of Code in the Spring. 

Categories: FLOSS Research

OSS Watch publishes National Software Survey 2013

Thu, 2014-02-13 10:31

OSS Watch, supported by Jisc, has conducted the National Software Survey roughly every two years since 2003. The survey studies the status of open and closed source software in both Further Education (FE) and Higher Education (HE) institutions in the UK. OSS Watch is a non-advocacy information service covering free and open source software. We do not necessarily advocate the adoption of free and open source software. We do however advocate the consideration of all viable software solutions – free or constrained, open or closed – as the best means of achieving value for money during procurement.

Throughout this report the term “open source” is used for brevity’s sake to indicate both free software and open source software.

Looking back over 10 years of surveys, we can see how open source has grown in terms of its impact on ICT in the HE and FE sectors. For example, when we first ran our survey in 2003, the term “open source” was to be found in only 30% of ICT policies – and in some of those it was because open source software was prohibited! In our 2013 survey we now find open source considered as an option in the majority of institutions.

Open source software has also grown as an option for procurement; while only a small number of institutions use mostly open source software, all institutions now report they use a mix of open source and closed source.

However, the picture is not all positive for open source advocates, and we’ve noticed the differences between HE and FE becoming more pronounced.

You can read the full report online, or download the PDF from the OSS Watch website.

Categories: FLOSS Research

Is license compatibility worth worrying about?

Wed, 2014-02-05 07:08

At FOSDEM last weekend I saw an excellent talk by Richard Fontana entitled Taking License Compatibility Semi-Seriously. The talk took a look at the origins of compatibility issues between free and open source software licences, how efforts have been made to either address them directly or dodge around them, and asked whether it’s worth worrying about them in the first place.  This post will summarise the talk and delve into some of the points I found most interesting.

The idea of FOSS license compatibility isn’t one that was created alongside the FOSS movements, but rather one that came about when projects started to combine code released under different licences, particularly copyleft and non-copyleft licenses.  As such, there’s no real definition of what license compatibility means, and so people tend to defer to received doctrine (such as the FSF’s list of GPL compatible licenses), or leave it up to lawyers to sort out.

Early versions of KDE and Qt created the first significant license compatibility issue in the FOSS world.  Qt’s original proprietary license, and later the QPL under which it was relicensed, were considered incompatible with the GPLv2 under which the KDE project (or at least parts of it) was licensed.  Qt is now dual-licensed under the LGPL or a commercial proprietary license, which fixes this incompatibility, but the FSF also suggest a remedy whereby a specific exception is added to the QPL allowing differently-licensed software to be treated under the terms of the GPL.

Another common incompatibility issue with FOSS licenses has arisen where projects have wanted to combine GPLv2 code with ASLv2 code. The FSF consider the patent termination and indemnification provisions in ASLv2 to make it incompatible with GPLv2; however, they believed these provisions to be a good thing, so they ensured that GPLv3 was compatible with it.  Indeed, the GPLv3 went on to codify what it meant for another license to be compatible with it.

While this means at first glance that only code explicitly licensed as GPLv3 and ASLv2 can be used together, while GPLv2 and ASLv2 cannot, this isn’t necessarily the case.  The FSF encouraged projects to license their code “GPLv2 or later”, in the hope that when future versions of the license were released, projects would be encouraged to transition to the new license and in doing so benefit from features such as ASLv2 compatibility.  However, this method of licensing can be interpreted as “GPLv2 with the option to treat it as GPLv3 instead”, meaning that for the purposes of compatibility it can be treated as GPLv3, while remaining “GPLv2 or later”.

This has the opposite effect to the FSF’s intention: it encourages projects to remain “GPLv2 or later” for the added flexibility it provides, while avoiding forcing licensees to be bound by parts of GPLv3 that either party may not like.

While the above trick won’t work for code licensed “GPLv2 only”, a similar thing is possible for code licensed “LGPLv2 only”.  As the LGPLv2 is intended for library code, it contains a clause allowing you to re-license the code under GPLv2 or any later version, in case you want to include it in non-library software.  This means that you could, for the purposes of compatibility, treat the code as GPLv3.  The Artistic License 2.0 and the EUPL contain similar re-licensing clauses.

What all of this shows us is that while it’s a complex issue, it’s a somewhat artificial one, and there’s all sorts of tricks one can use to circumvent it.  In practice, these compatibility “rules” are rarely followed, and rarely enforced.

In response to this, Richard Fontana suggests that we borrow the idea of “duck typing” from programming to make our lives easier.  If a FOSS project wants to combine some code under the GPL with code under a more permissive, possibly incompatible license, as long as they’re willing to follow the convention of distributing the source as though it was all GPL, the community still gets the benefit without the additional headache of worrying over which bits are allowed to be combined with which.

Categories: FLOSS Research

Commitment Gradients in OSS projects

Thu, 2014-01-16 06:25

In a recent training session, I discussed commitment gradients – how much extra effort is involved in moving between each stage of involvement within a project.  After the session I was asked for some examples of commitment gradients and how it’s possible to make them shallower, so it’s easier for people to progress their involvement in a project.

This graph represents a desirable commitment gradient.  The move from knowing about the project to using and discussing it is fairly trivial.  Reporting bugs requires some extra knowledge, e.g. using the bug tracker, but isn’t a significantly harder step.  Contributing patches is slightly harder as it requires knowledge of the programming language and awareness of things such as coding styles.  Finally, moving into a leadership role requires significant additional effort, as leaders need to have an awareness of all aspects of the project, including understanding of the governance model, as well as having gained the confidence of other community members.

Using the software

This graph represents a project where the software is so hard to install that you need to have intimate knowledge of the project to even get it working.  For example, if configuration settings are hard-coded, setting the software up involves knowledge of the language, changing the code, then compiling it yourself before you even get started.  By this point, you know the software so well that there’s nearly no extra effort required for the following stages, but most people won’t bother, and your user base will suffer.

To make the commitment gradient lower at this stage, a project should make it easy to acquire, install and configure its outputs.  For example, having packaged versions in the software repositories or app stores for the target platforms makes installation easier.  Where this isn’t appropriate, an automated installer requiring little technical knowledge (such as that used by WordPress) can be used as an option for beginners, with a more configurable “expert’s mode” available for more experienced users.  For configuration, being able to change settings through the software’s interface rather than in code or configuration files is helpful.

Discussing the project

This graph represents a project where the software is easy to use, but the community has an elitist attitude and is hostile to newcomers.  Responses to questions asked assume deep technical understanding of the software, and people who don’t have such understanding are expected to find out for themselves before they engage with those who do.

The solution to this is to promote a culture of moderation and mentorship which ensures that discussions are conducted in a tolerant way that allows newcomers to learn.

Another issue at this stage may be the technology used – for example, if all user support takes place on Usenet newsgroups, many people won’t know how to access them, or the conventions they are expected to follow.  Using channels that new users will be more familiar with, such as web forums or social media, can help lower the commitment gradient here.

Reporting bugs

The step from discussing the project to reporting a bug can be steep where the project uses a complex bug tracker, where there is an involved process to get access to the tracker, and where gathering the information required to submit a useful report involves intimate knowledge of the software.

The Ubuntu project lowers the gradient at this stage through use of the ubuntu-bug utility.  Any user can run the command ubuntu-bug <software name>, and have a template bug report generated with all the required information about their environment, and any relevant logs or crash reports.  All they then need to do is write a description of the problem.  Again, a culture of moderation and mentorship is useful here to help guide people into writing useful reports.

Submitting Patches

Submitting patches inevitably involves a step up in terms of effort, as the contributor needs sufficient knowledge of the programming language, the source code, a development environment and so on.  However, the commitment gradient can be made too steep if contributors are expected to follow a complex or poorly documented coding style, if they are expected to do a lot of manual testing before submission, and if the actual submission process is esoteric.

The main way to lower the gradient here is documentation, and automation where possible.  Coding styles should be well-defined and documented.  Tests should be automated using a unit testing tool.  The submission process should be well documented; using a well-known workflow such as GitHub’s pull requests can help here.

The Moodle community has a tool called “code checker” which is packaged as a Moodle plugin, and allows developers to analyse their code to ensure it meets the project’s coding style.  This allows them to quickly identify and fix any issues before submission, and allows reviewers to quickly direct them to instructions on how to fix any discrepancies.

Taking Charge

Again, a large step up at this stage is inevitable, and in some respects desirable, as a project probably doesn’t want to be led by someone who hasn’t shown sufficient commitment.  There may also be legal requirements for the people in charge to adhere to.

However, excessive or unclear requirements for how a person might get voting rights within a project may make this step too large, so these need to be fair and well-documented.  Also, a leader will need to have a good understanding of the project’s governance model and its decision-making process, so these need to be well-documented too.

If a project is large enough, it may be possible to allow different levels of commitment at this stage, so not everyone who has a say on technical issues is also required to, for example, make budget decisions.

Categories: FLOSS Research

Open Source Options for Education updated

Tue, 2014-01-07 12:11

We’ve just updated our Open Source Options for Education list, providing a list of alternatives to common proprietary software used in schools, colleges and universities.  Most of the software we list is provided by the academic and open source communities via our publicly editable version.  Some new software we’ve added in this update includes:

SageMath

SageMath is a package made from over 100 open source components including R and Python with the goal of creating “a viable free open source alternative to Magma, Maple, Mathematica and Matlab.”  Supported by the University of Washington, the project is currently trialling SageMath Cloud, a hosted service allowing instant access to the suite of SageMath tools with no setup required.

R and R Commander

R is the go-to language for open source statistical analysis, and R Commander provides a graphical interface to make running R commands easier. Steven Muegge got in touch to let us know that he uses the two projects for teaching graduate research methods at Carleton University. Thanks, Steven!

Gibbon

Gibbon is a management system combining features of a VLE (such as resource sharing, activities and markbooks) and an MIS (such as attendance, timetables, and student information).  The system was developed by the International College of Hong Kong.  Thanks to Ross Parker for letting us know about Gibbon.

OwnCloud Documents

The recent release of OwnCloud 6 includes a new tool called OwnCloud Documents, allowing real-time collaboration on text documents. Collaborators can be other users on the OwnCloud system, or anonymous users with the link from the author.  With support for LDAP and Active Directory, could this represent a viable alternative to Google Docs for privacy-conscious institutions?

Categories: FLOSS Research

Koha trademark case settled

Tue, 2013-12-24 04:55

Earlier in the year, I wrote a case study on Koha, the open source library management system released under the GPL, detailing the history of the project and how the sale of assets had created confusion and disagreements between the Horowhenua Library Trust (HLT), who originally commissioned the system, and PTFS, who now hold the copyright for most of the project’s original assets and publish their own fork under the name LibLime Koha.

At the time of writing, the major issue at hand was PTFS’s trademark application for the mark KOHA in New Zealand, which HLT and Catalyst IT, who provide commercial support for Koha, were opposing.  This month, the case was settled, with the commissioner ruling against PTFS and rejecting the application.

HLT and Catalyst opposed the application on 6 grounds:

  1. The mark was likely to deceive or cause confusion.
  2. The application for the mark was contrary to New Zealand law (specifically, The Fair Trading Act 1986), on the basis of ground 1.
  3. Use of the mark would amount to passing off, also in breach of New Zealand law.
  4. The mark was identical to an existing trade mark in use in New Zealand.
  5. PTFS wasn’t the rightful owner of the mark, HLT was.
  6. The application was made in bad faith, on the basis that HLT owns the mark.

Interestingly, grounds 3, 4, and 5 were rejected by the commissioner, largely on the grounds that HLT’s use of the name Koha didn’t constitute a trade mark.  When HLT originally open sourced Koha, the evidence presented showed that it intended Koha to be given away for free so other libraries could benefit from it.  The commissioner didn’t consider this to constitute trading, and therefore Koha, while identical to the mark being registered, didn’t constitute a trade mark.

As ground 5 didn’t show HLT to be the rightful owner, ground 6 was also rejected: PTFS couldn’t be seen to be acting in bad faith by trying to register a mark that clearly belonged to someone else.

However, HLT and Catalyst’s success in this case hinges on the fact that when the trademark application was made in 2010, HLT’s Koha software had existed for 10 years and was well known in New Zealand’s library sector.  Since the commissioner considered the mark being registered to be identical to the name Koha, and HLT’s software to be the same class of product as PTFS’s, it was found that the two could be confused by a substantial number of people, allowing ground 1 to succeed.

Furthermore, the cited sections of the Fair Trading Act had a similar but stricter requirement that there not be a real risk that such a confusion or deception might happen.  The commissioner believed that due to Koha’s prominence in the industry there was a real risk in this case, allowing ground 2 to succeed.

The application for the trade mark has now been defeated, with HLT and Catalyst being awarded nearly 7,500 NZD in legal costs between them.  What effect this will have on the use of the Koha name in New Zealand isn’t clear – since HLT have been shown not to own the mark themselves, they are unlikely to be able to stop PTFS from using the name in New Zealand should they choose to.  However, the Koha community in New Zealand can now rest easy knowing that they won’t be stopped from continuing to use the name as they always have.

I hope that other open source software projects use the case of Koha as a lesson to ensure that their branding and IP are well managed, so that cases like this can be avoided.

You can read the Commissioner’s full ruling here.

Categories: FLOSS Research