One of the hot topics of commentary on open source development at the moment is the licensing situation on GitHub. When code is committed to GitHub, the copyright owner (usually the author or their employer) retains all rights to the code, and anyone wishing to re-use the code (by downloading it, or by “forking” and modifying it) is bound by the terms of the license the code is published under. The point of discussion in this case is that many (indeed, the majority) of repositories on GitHub contain no license file at all.
There are two troubling points to the commentary on this phenomenon. The first is that some discussions suggest that publishing with no license is “highly permissive”, implicitly allowing anyone to take the code and do with it as they wish.
In fact, it’s usually the case that having no license on your code is equivalent to having an “All Rights Reserved” notice, preventing any re-use of your code at all. Whether the copyright holder intends to enforce these rights may not be clear, but the uncertainty alone is enough to put off any company that might want to engage with such a project under an open development model.
The second troubling point is that commentators are time and again dressing this up as a wilful movement. James Governor coined the term “Post Open Source Software”, while Matt Asay claims “Open Source Is Old School, Says The GitHub Generation”. These commentaries seem to imply that there’s some sort of “No License Manifesto” being championed (in a similar fashion to the Agile Manifesto, perhaps).
The only movement I’ve seen which would be akin to this is the Unlicense, which encourages authors to wilfully lay aside any claims to their rights, effectively a Public Domain dedication which Glyn Moody has suggested is the way forward for open source.
However, what we’ve seen on GitHub shows no such conscious setting aside of rights; rather, it shows a lack of education. Publishing articles touting release without a license as how all the cool new kids are working encourages behaviour which could prove damaging to the development of a project’s community, and the wider community in turn.
Fortunately there are voices of reason in these discussions. Stephen Walli of the Outercurve Foundation points out that governance == community. If a project seeks to “fuck the license and governance” as James Governor suggests, then they risk doing the same to their community by alienating contributors (particularly those that are part of a larger organisation, rather than individual developers), as these contributors have no predictable structure to work within.
If the project lead might turn around and say “I don’t feel like accepting your contributions, and by the way, if you keep using my code I’ll sue you”, you’ve got very little incentive to work with them.
By neglecting your community in this way, your project is at risk of being limited to a few individual contributors who know and trust one another implicitly. I can’t believe that developers seeking to allow permissive use of their code would be happy with this as an outcome.
GitHub haven’t yet made any suggestion that they feel this is a problem they should work to solve. It’s our responsibility as a community to ensure that we educate newcomers to become responsible open source citizens, rather than encouraging them to follow established bad practices.
At OSS Watch we periodically review all the resources on our main website to make sure they’re accurate and up to date. Last week it was time to revise our case study on Apache Wookie, which is a project I’ve been involved with for some time.
OSS Watch became involved with Wookie while I was working in an EU project based at the University of Bolton. The project as a whole had done lots of interesting stuff, but as with many large projects the whole was somewhat less than the sum of its parts; the central joined-up platform wasn’t really going to take off after the project finished. However, in the process we had built quite a promising system for adding functionality to the core portal shell using the W3C Widgets specification.
Towards the end of the project I went to an OSS Watch event, and spoke with Ross Gardler about what we were doing. Ross explained the Apache Incubator model to me, and from there on I was hooked.
Fast forward to 2013, and Apache Wookie is out of the incubator and a top-level Apache project, and is now on its seventh official release (the last one was in April). It’s not a huge project – the team is still small, though it’s far more diverse than when we started out.
The tempo of development has also slowed in recent years. However, in part that’s due to the maturing of the software to a point where code churn for its own sake has a negative impact on the projects that depend on it. Most recent updates have been fixing bugs affecting deployment in various unusual configurations, driven largely by reports from users. So this isn’t necessarily a bad thing!
Something that has also had a very positive impact on the project is having a very active downstream project – Apache Rave. This has driven a lot of improvements to Wookie to improve integration and deployment.
Two major EU projects have been working with Wookie and Rave over the past two years, and are coming towards their end – one this year, and the other in 2014.
Unlike previous projects they have focussed on working with existing software projects rather than going it alone, and have contributed code, user studies and content. This has been a great experience, and hopefully future projects can learn from this approach.
Wookie stands as an example of how OSS Watch can help take work from within the HE sector and turn it into a sustainable open source software project; and as a beneficiary of this approach I’m keen to offer the same help I received to others.
Do you think your University-based project has the potential to go further? If so, get in touch!
(Photo by Silus Grok, used under CC-BY-SA license)
As part of OSS Watch’s regular review of our website’s content, I’ve taken a look through the publicly editable version of our Open Source Options for Education list and added some new contributions to our website.
The response from the educational community has been overwhelming in helping us find both alternatives to common proprietary software and real-world examples of these alternatives being used. I’d like to extend my thanks to everyone who’s contributed.
I’m particularly pleased this time to include a new category for Management Information System (MIS) software. These tools often represent a significant investment for an institution, and requirements for compatibility with these systems, which perform a key administrative role, can strongly influence the procurement of related software such as VLEs.
The new OSI Board met in Washington DC last week. We held an effective face-to-face meeting where we discussed the progress of our plans to transform OSI into a member-based organisation. We held officer elections, once again electing Martin Michlmayr as Secretary, filling the vacancy for CFO left by Alolita Sharma by electing Karl Fogel and replacing him as Assistant Treasurer by electing Mike Milinkovich. I was re-elected as President and thank the Board for that vote of confidence in this time of change.
Among much other business, we considered how to fill the Board vacancy we have reserved for an OSI Individual Member and how to structure the Board going forward. We decided we will hold an election in June where any Individual Member may stand for election and vote for their preferred candidate. Details will be sent to members soon; now is a great time to join OSI if you want to be involved in the election.
We also decided to change the bylaws so that the seats on the Board are now allocated for election either by Affiliates or Individual Members when they fall vacant. Seats elected by Affiliates will have three-year terms; seats elected by Individual Members will have one-year terms. We will retain a term limit of six consecutive years. The transformation to a fully elected Board will be completed by 2016.
Concerning support of OSI by companies, we heard that there are many businesses that want to support OSI financially, but that none of those approached so far has felt they want to be involved in OSI's governance. We therefore decided we will no longer aim to have any Directors elected by corporate members, and will convert our outreach to businesses into sponsorship instead. We'll post details of how your company can sponsor OSI soon - of course, donations are welcome at any time and we are grateful to the companies which have already decided to support us.
Probably the largest development we discussed was staffing. The role of the Board to this point has been to serve as a pool of volunteers to directly deliver OSI's mission. As such, we have been limited in our activity by the free time of the volunteer Board. The future we envisage is one where our members can bring their vision for OSI, devise new resources and activities and have the Board deploy and sustain what they create. To deliver this and other member services, we have decided to recruit a General Manager for OSI. We have formed a staffing committee to formalise the job description we've agreed and we will publish the vacancy soon.
We also held two successful OSI events in DC; a report on those will follow.
OSI Board of Directors 2013-14
Back row: Bruno Souza, Karl Fogel, Luis Villa, Mike Milinkovich, Tony Wasserman, Martin Michlmayr
Front row: Jim Jagielski, Simon Phipps, Deb Bryant
Not shown: Harshad Gune
There is one vacancy at present, to be filled in June by election.
Almost any software project involves working with dependencies – from single-purpose libraries to complete frameworks. When you’re working on a project it’s tempting to bring in libraries, focus on meeting the user need, and figure out the niceties later. However, a little thought early on can go a long way.
This is because every dependency can bring its own licensing obligations that affect how you are able to distribute your own software. In some cases, in order to release the software under a particular license you may end up having to rewrite substantial amounts of software to remove reliance on a library or framework that is distributed under an incompatible license.
So there is a tradeoff between being agile and productive in the short term, against the risk of needing to do a costly refactoring triggered by a compatibility check before – or even worse, after – a release.
For larger projects, and organisations with multiple projects, this starts to stray into the territory of open source policies and compliance processes, but for this post let’s just focus on the basics for small projects.

1. Make it routine
A good strategy is to build good dependency management practices into your general software development practices – similar to the concept of building in quality or building in security.
In other words, given that the cost of fixing things later can be significant, it’s worth investing in the practices and tools that can ensure potential issues are spotted and fixed earlier.
At its simplest, this can just mean developing a greater awareness as an individual developer of where your code comes from, knowing that what you reuse can limit your choices for how you license and distribute your own code.
So in practical terms, this means being careful about copying and pasting code from the web, and making sure you know the licenses of any dependencies, preferably before working with them, but certainly before building any reliance on them into your code.
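For a Python project, for instance, the standard library is enough to take a quick inventory of the licences declared by your installed dependencies. This is a minimal sketch rather than a compliance tool: licence metadata is self-reported by packages, often missing or inconsistently phrased, and the grouping helper below makes no attempt to normalise it.

```python
# Sketch: inventory the licences declared by installed Python packages.
# Licence strings here are whatever each package self-reports, so treat
# the output as a starting point for review, not an authoritative audit.
from collections import defaultdict
from importlib.metadata import distributions


def group_by_licence(pairs):
    """Group (package, licence) pairs by their declared licence string."""
    groups = defaultdict(list)
    for name, licence in pairs:
        groups[licence or "UNKNOWN"].append(name)
    return dict(groups)


def installed_licences():
    """Yield (name, licence) for every distribution in this environment."""
    for dist in distributions():
        meta = dist.metadata
        yield meta.get("Name", "?"), meta.get("License")


if __name__ == "__main__":
    for licence, packages in sorted(group_by_licence(installed_licences()).items()):
        print(f"{licence}: {', '.join(sorted(packages))}")
```

Running this periodically (or in CI) gives you an early warning when a new dependency arrives under an unexpected licence, rather than discovering it during a release audit.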
It may also make sense to handle any required attribution notices for inclusion in a NOTICE and README as you go along, rather than relying on a release audit to pick them up.

2. Let tools take some of the strain
There are also tools that can help make things easier. For example, if you use Maven for Java projects, there is a License Validator plugin that can help flag up problems as part of your compile and build process.
Alternatively, Ninka is an Open Source tool for scanning files for licenses and copyrights. While it can’t follow import declarations or dynamically linked libraries, it can be useful to periodically check builds. A similar project is Apache RAT (Release Audit Tool) which was originally created for use within the Apache Software Foundation for reviewing releases made in the Apache Incubator.
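To make the idea concrete, here is a deliberately naive sketch of the kind of check these scanners automate: looking for well-known licence phrases or SPDX identifiers in the header of each source file. The pattern list is illustrative only and far from exhaustive; real tools such as Ninka use much more sophisticated matching.

```python
# Toy licence-header scanner: a rough approximation of what tools like
# Ninka or Apache RAT automate. Only a handful of illustrative patterns
# are included; a real scanner recognises hundreds of licence variants.
import os
import re

LICENCE_PATTERNS = {
    "Apache-2.0": re.compile(r"Apache License,? Version 2\.0|SPDX-License-Identifier:\s*Apache-2\.0"),
    "MIT": re.compile(r"MIT License|SPDX-License-Identifier:\s*MIT"),
    "GPL-3.0": re.compile(r"GNU General Public License.*version 3|SPDX-License-Identifier:\s*GPL-3\.0"),
}


def detect_licence(path, max_lines=20):
    """Return the first licence whose pattern matches the file header, or None."""
    with open(path, encoding="utf-8", errors="replace") as f:
        header = "".join(line for _, line in zip(range(max_lines), f))
    for licence, pattern in LICENCE_PATTERNS.items():
        if pattern.search(header):
            return licence
    return None


def scan_tree(root):
    """Map each recognised source file under root to its detected licence."""
    results = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".py", ".java", ".c", ".js")):
                path = os.path.join(dirpath, name)
                results[path] = detect_licence(path)
    return results
```

Files that come back as `None` are exactly the ones a release audit would flag: either they need a header added, or their provenance needs investigating.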
It’s also worth pointing out that, while tools can be a part of the solution – and can be invaluable for large projects – ultimately it’s still your responsibility to make sure you meet the obligations of the software you are reusing.

3. Remember to check more than just the licences!
If a dependency has a compatible licence, that’s great. But what if the project that distributes it doesn’t bother checking its own dependencies?
This is where it’s good to have an idea about the governance and processes of projects you depend on.
There aren’t just licensing risks associated with dependencies – if you rely heavily on a library that has only one or two developers then you also run the risk that it may become a “zombie” project with implications for the rest of your code, for example, if security patches are no longer being applied.
Commercial tools in this space are typically backed by a knowledge base that can flag up other issues with dependencies, such as governance or sustainability problems. However, just checking the project’s page on Ohloh is often good enough for smaller projects to confirm that a library is still “live”.
If you need to know more about the sustainability of a particular project, OSS Watch can carry out an Openness Review to check its viability using a range of factors – get in touch with us if you want to know more.

4. Keep track of past decisions and share knowledge with colleagues
Some organisations use component registries to keep track of which components they approve for use in their software projects. This can save developers the time spent researching the same libraries repeatedly, and makes most sense when you have many projects needing the same kinds of components, in which case focussing on reusing the same set of libraries pays off.
Another reason for using a registry is where you need to perform more detailed evaluations, for example for security, and so checking a dependency is more involved than just figuring out which license it uses, and that the project isn’t dead.
Some examples of commercial registries are Sonatype Component Lifecycle Management and Black Duck Code Center. Again, for a smaller project or an organisation with a relatively small set of projects this can be overkill, and just having a shared document somewhere where you can keep note of which libraries you’ve used can be effective.
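As one lightweight take on the shared-document idea, the sketch below keeps a dependency registry as a plain CSV file using only Python’s standard library. The field names are illustrative; adapt them to whatever your team actually tracks.

```python
# Sketch: a minimal dependency registry as a CSV file - one row per
# library, recording the version, licence, and date of the last review.
# The set of fields is illustrative, not prescriptive.
import csv

FIELDS = ["library", "version", "licence", "last_reviewed", "notes"]


def save_registry(path, entries):
    """Write a list of dependency dicts to a CSV registry file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(entries)


def load_registry(path):
    """Read the registry back as a list of dicts (all values as strings)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Because it’s just CSV, the same file opens directly in a spreadsheet for colleagues who prefer that, while still being easy to check into version control alongside the code.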
For example, you could share a spreadsheet with colleagues containing some basic information on each library: what version you’re using, what license it’s under, and the date and results of any investigations you’ve done into sustainability, security or risk assessment.

Is it worth it?
Reusing code is good practice and should save you time and expense – so it’s annoying if the administration associated with it starts affecting your productivity.
You can make a judgement call about what level of risk you feel is acceptable; for example, on an internal-only research project the risk of having to undergo a major refactoring should the project be successful may be one worth taking.
However, for a production system, or a component that is itself intended for reuse, you may just have to accept that you have to be a bit more diligent in how you reuse code.
Photo by DieselDemon used under CC-BY-2.0.
The Guardian Careers site published an article yesterday discussing which skills you should have on your CV to ensure your application is “at the top of the pile” when applying for IT jobs.
Among the usual traits such as being able to program (they suggest Java, but with a willingness to learn new languages), one of the recommendations is “Open up to open source”.
In a succinct paragraph the article manages to introduce the idea of open source, as well as explaining both its benefits to the public (in terms of having access to zero-cost versions of software) and why IT companies and departments would be looking for it.
Engaging with an open source community provides you with the opportunity to gain practical experience in working on projects with a distributed team from diverse backgrounds. Any skills relevant to the IT industry would be desirable to an open source project – not just programming but also skills like project management and technical writing.
The public nature of open source projects also means that your work will be open for potential employers to examine. Code you’ve written for a previous job may be locked up in a company’s version control system, but by contributing open source code you give a potential employer the opportunity to see evidence of your competence in the field.
Of course, beyond the benefits of the general IT skills you can acquire, specific experience in open source engagement can be of value to IT companies who are increasingly taking advantage of open source software. To get the full value from open source implemented in an organisation, that organisation should be prepared to engage with the community process, allowing them to get bugs fixed, contribute to the project, and possibly influence the project’s direction in their favour. To make this possible, they’ll need people with experience of community engagement.
On April 19th the United States Patent and Trademark Office finally rejected an application for the trademark ‘Open Source Hardware’. The grounds for the rejection were that the term was ‘merely descriptive’. Trademarks are intended to identify a specific source of goods or services, protecting that source from confusion in the minds of consumers with other sources. Naturally then, if you try to obtain a trademark which is just a description of a type of product or service, it is proper that you should be refused; it would not be distinctive and it would distort the market by allowing one source to control the generic term. If I market a car for a hamster, I should not be able to get a trademark for the name ‘hamster car’, as that would improperly restrain competitors from bringing their own hamster cars to market.

So should we be pleased that the application was rejected? After all there is no trademark ‘open source software’ (although the Open Source Initiative do hold one for their own name and logo which acts as a kind of accreditation mark for their approved licences and projects that use them). In this case it’s a little confusing, because the applicants do not seem to have been actually looking to use the mark to describe what is usually understood by the phrase ‘open source hardware’ at all. In fact they were looking to protect their offering comprising:
Computer services, namely, providing an interactive web site featuring technology that allows users to consolidate and manage social networks, accounts, and connections to existing and emerging application programming interfaces (APIs)
Reading the decision it seems that the services relate to providing and managing services for children on a variety of devices, and that the trademark is supposed to imply the ‘general freedom’ of open source software but applied to one’s hardware devices in a surprising new way:
In support of registration, applicant maintains in Section 1 of its brief that the mark is not merely descriptive because OPEN SOURCE was used initially with the Open Source Software Movement; that applicant’s use of “open source” would associate that term with the provision of software and that “this causes a jarring effect that is overcome by the user’s imagination to the play on words.”… Additionally, applicant argues that joining HARDWARE next to OPEN SOURCE causes consumers to think of “physical artifacts of technology designed and offered in the same manner as free and open source software,” citing to the wikipedia.com definition of “open source hardware.”
So, I would argue, this is really not an application to use the term ‘open source hardware’ on what is normally understood to be open source hardware, so it’s not merely descriptive. This is more like the Irish company that holds the trademark ‘open source’ for use on dairy products. Indeed, the decision does have a strong dissenting opinion which argues that the trademark ought to be allowed as non-descriptive but then properly obstructed by complaints from the actual ‘open source hardware’ community before its final grant.
What this shows, I think, is a couple of things. Firstly, that bodies like the USPTO have trouble understanding phrases like ‘open source’ where they relate to technology. Secondly, that terms that the community relies on to describe their interests and enthusiasms are not necessarily immune from proprietary seizure. While the decision here seems to contain an error that worked to deny the trademark, it’s possible to imagine a similar error that would allow a troublesome trademark to be granted.
In connection with trademarks and FOSS I was interested to see the establishment of modeltrademarkguidelines.org, a wiki-based site which
…proposes language one might use for trademark guidelines for FLOSS software projects.
It already contains a very useful page listing pre-existent FOSS project trademark policies. I would encourage readers to read the draft version of the guidelines and comment.
One of the ways we're turning OSI into a member organisation is to gradually replace the Board with member-selected directors. This process started last year when OSI's Affiliate members -- non-profit organizations themselves -- selected candidates for the Board. This year, two directors have left the Board: Fabio Kon, whose education initiatives for OSI have been held in high regard, and Alolita Sharma, who has been an OSI director for many years in multiple roles but most recently served as OSI's Treasurer. The Board thanks them both warmly for their service to OSI and to open source.
The vacancies they left were allocated to our two member categories. The Affiliates selected Bruno Souza of Brazil as their candidate and the Board duly appointed Bruno as a director of OSI yesterday - welcome! The Affiliates have now selected three of OSI's eleven directors. The second vacancy will be filled via an election by the Individual members of OSI later this quarter -- details to follow.
The Board -- including Bruno Souza -- will meet in Washington DC next week to select its officers for 2013-14 and to plan the next steps in OSI's transformation. If you would like to meet them, please come to OSI's DC Metro Open Source Community Summit on May 10.
Software Freedom Conservancy have announced a fundraising campaign for an Open Source non-profit accounting system. The campaign seeks to raise $75,000 to fund a full-time developer for one year to first reevaluate existing solutions for their viability as a non-profit accounting system, and then improve and augment the best available system to create a new solution that will help non-profits around the world manage their finances better.
To keep their books and produce annual government filings, most non-profit organizations (NPOs) rely on proprietary software, paying exorbitant licensing fees. This is fundamentally at cross purposes with their underlying missions of charity, equality, democracy, and sharing. Conservancy, as a non-profit charity dedicated to the advancement and improvement of Open Source and Free Software, seeks to address this problem.
This project has the potential to save the non-profit sector millions in licensing fees every year. Even NPOs that continue to use proprietary accounting software will benefit from the competition it creates in the market. But, more powerfully, this project's realization will increase the agility and collaborative potential for the non-profit sector — a boon to funders, boards, and employees — bringing software freedom related and general NPO communities into closer collaboration and understanding.
The OSI Board endorses this campaign and encourages contributions to support it.
Overall the key message to take away from the event was just how central to public sector IT strategy these two themes have become, and also how policy is being rapidly turned into practice, everywhere from the NHS to local government.
Tariq Rashid, the Open Source policy lead for the UK Government, spoke of the need for IT to be focussed on user needs, and to deliver sustained value, by moving from “special” software procured for the public sector, to services delivered using commodified IT.
Even where services are unique to the public sector, Rashid and other speakers at the event made the case that most elements of such services can be delivered by building on commodified IT. For example, the open source CMS Drupal is used for delivering increasing numbers of public sector IT services, and the Government Digital Service builds its services from open source components.
The two strategies of Open Source and Open Standards are necessary as they create the ‘competitive tension’ needed to drive down cost and improve sustainability.
Mark Bohannon of Red Hat gave an overview of the global landscape of Open Source in government, in the US and UK, and identified the UK policies as being particularly forward looking. Mark positioned Cloud and Big Data as two key areas where Open Source and Open Standards were critical, calling out OpenStack and Hadoop as particular cases, and also provided some great case studies on open source from the military and from space exploration.
Mark made the point that Open Source and Open Standards underpin a more fundamental change in IT, away from big IT projects towards IT that is agile, modular and responsive to user needs.
Ian Levy of CESG dispelled some myths around security and Open Source (“If anyone in UK government says CESG has banned open source send their name to me and I’ll have them killed”) and made the case for a common sense approach to security, whether the software or service is open source or closed source.
Mark Taylor from Sirius has long been an advocate for open source in the public sector, and it was good to be at a point where the message has been heeded! He began with a nice Schopenhauer quote:
All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.
In the talk he provided lots of practical advice for public sector organisations on putting Open Source into practice, which included calling on those writing tenders to focus on user needs instead of naming technology solutions. Mark also gave a workshop later in the day where he continued this theme, expanding on how public sector organisations and companies had made transitions to open source. It’s not easy to summarise here in a post, but I found the information very practical and useful; for example, when transitioning IT, start with the systems furthest away from users, such as backend services and infrastructure, to avoid sparking the usual neophobia that comes with changing technologies for users.
Inderjit Singh gave an overview of the NHS standards-based approach to IT, with some nice background on which approaches had been tried and where the current strategy is going. The current approach has been to use a programme of change projects involving SMEs that have engaged 40 new suppliers, and which is accelerating the take up of the standards.
Singh asserted that standards are fundamental for enabling an open architecture, and that open source and open standards go hand in hand in delivering value for users.
After some workshop sessions, we had Alasdair Mangham from the London borough of Camden giving us a look into how they’ve been building services using open source software in collaboration with SMEs. This involved a major shift in contracting – rather than write a huge set of requirements in a tender document, they disaggregated the project and bought in specialist capabilities (in usability, service design, SOA etc) as needed in smaller chunks of time using an agile process.
Graham Mellin gave an overview of the Met Office’s new space weather system built using open standards and open source software; for their own specialist systems they decided to go down the route of making it Open Source, rather than sharing it with a private partner, as a result of an exploitation planning process.
I met with a lot of people at the event, from suppliers, local government, NHS and national government departments, and it was good to get a sense of how the public sector is moving – whatever the pace in individual areas – towards this vision of more affordable, sustainable and user focussed IT, and better utilising the capabilities of UK SMEs and startups.
As we pointed out recently in our post in the Guardian, Higher Education in particular is in a strong position in this area as a result of past investments in Open Source and Open Standards, and we now need to think about how we take that forwards.
As Mark Taylor pointed out in his talk, the public sector accounts for over half of IT spend in the UK – and we can choose to either unite and use that market power to shape the future, or be divided up and conquered.
As another in the series of public meetings hosted around its next face-to-face board meeting, OSI will also host the non-profit DC Metro Open Source Community Summit at the Mayflower Renaissance in Washington, D.C. The May 10th, 2013 program will include short sessions by some of our OSI board members and an “unconference” format for maximum attendee participation, collaboration, and learning.
Open source community and user group leadership, open source project leads, committers and developers, non-profit foundations, open data engineers and others with an interest in learning more about growing and sustaining open source are invited to attend and participate. Registration is free to government employees and $20 to others.
Program details and registration information are available at the event web site at http://opensourcecommunitysummit.org .
Event Sponsors helping underwrite the non-profit event include Google, Eclipse Foundation, Red Hat, GitHub, Georgia Tech Research Institute (GTRI), and MIL-OSS. Labor for producing the summit has been donated by The Open Bastion, along with the efforts of local volunteers and OSI board members to organize the Summit’s program.
Last week BBC’s Horizon put out a special episode looking at the next generation of technological advances. Two of the stories they reported caught my eye as they suggest that the future of innovation lies in an open way of working.
The first story looked at the work of Professor Bob Langer at MIT. Professor Langer has received the Draper Prize and National Medal of Science for his work in biomedical engineering. Langer’s approach to research is to bring experts from a range of fields together to create an interdisciplinary team.
Previously, medical devices were designed by doctors based on existing materials. Langer sought instead to design new materials that could operate inside the body and be safely absorbed once their job was done. To make this possible he assembled a team of engineers, chemists, neurosurgeons, pharmacologists and experts from a number of other disciplines.
The approach of applying one expert’s knowledge to the problem posed in another’s primary field has many parallels with open innovation, and led to advances never thought possible by those working in single fields.
The second story reported on the Protei project which we heard about recently at Open Source Junction. Protei was founded by Cesar Harada, and seeks to produce sailing drones which can be used to clean up oil spills.
Harada released his initial designs online and set out forming a community of scientists and engineers to collaborate on the project. Supported by a Kickstarter campaign, over $33,000 was raised, allowing him to hire a workshop and invite his community to work together on the open hardware project.
The programme then focused on the contrast between the model of inventors patenting an invention, which Harada characterised as “good for the manufacturer but not very good for the people”, and the “new culture of openness” shaping what we invent.
One comment that piqued my interest came from Gia Milinovich, who spoke of a “tension between the open source movement and business”, and a “battle between these two worlds”. While this paints an exciting picture for a science documentary, I think the language used here was slightly disingenuous.
While we hear of stories where one company attacks another that backs an open source project, these differ little from companies litigating against each other over issues with no relation to open source. It’s fortunately very rare that we see a “battle” between a business and an open source community, and the examples of this are greatly outnumbered by the examples where the two work together in harmony, indeed furthering one another’s goals.
Designer Wayne Hemingway then described how he “loved the idea” of an environment with no patents and no copyright, which, while certainly a valid goal, doesn’t represent well the way open source works. The most common open source licences all at least require that the original author be credited for their work, which in a copyright-free world wouldn’t be enforceable.
These criticisms aside, it’s great to see open source and open hardware getting airtime from a mainstream broadcaster like this.
While compiling OSS Watch’s list of Open Source Options for Education, I discovered Koha, an open source Integrated Library System (ILS). I discovered, with some confusion, that there seemed to be several ILS systems called Koha. Investigation into the reason for this uncovered a story which provides valuable lessons for open source project ownership, including branding, trademarks, and conflict resolution.
In a previous post I discussed two different models for open source services; the “secret source” model, which is based on providing a differentiated offering on top of an open source stack, and a copyleft model using licenses that address the “ASP loophole” such as AGPL.
Another way of looking at these two models is in terms of the level and characteristics of differentiation that they afford.

Shallow versus Deep
If a service offering – and this applies whether it’s a SaaS solution, infrastructure virtualization or anything in between – uses a copyleft license such as AGPL, then this tends to encourage shallow differentiation. By this I mean that the service offered to users by different providers is differentiated in a way that does not involve significant changes to the core codebase. For example, service providers may differentiate on price points, service packages, location, and reputation, while offering the same solution.
There can also be differentiation at the technology level including custom configurations, styling and so on, or added features; however under an AGPL-style license these are also typically distributed under AGPL, so if service providers do want to extend and enhance the codebase, this is contributed back to the community. If a provider really did want to provide deep differentiation, it would effectively have to create a fork.
If a service offering instead builds on top of an open source stack using a permissive license such as the Apache license, then it becomes possible for providers to offer deep differentiation in the services they provide; they are at liberty to make significant changes to the software without contributing these back to the community developing the core codebase. This is because, under the terms of most open source licenses, providing an online service using software is not considered “distribution” of the software.

What does this all mean?
For service providers this presents something of a quandary. On the one hand, a common software base represents a significant cost saving as development effort is pooled, reducing waste. On the other, there is a clear business case for greater differentiation to compete as the market becomes more crowded.
How this is resolved is something of a philosophical question.
It may be that, acting out of self-interest, service providers will over time balance out the issues of differentiation and pooled development regardless of any kind of licensing constraint; the cost savings and reduced risk offered by pooling development effort for the core codebase will be clear and significant, and providers will apply deep differentiation only where there is very clear benefit in doing so, while contributing back to the core codebase for everything else.
Alternatively, service providers may rush to differentiate deeply from the outset, leaving the core codebase starved of resources while each provider focusses on their own enhancements. In this scenario, copyleft licensing would be needed to sustain a critical mass of contributions to the core.

Which is it to be?
Given that OpenStack and Apache CloudStack, two of the main cloud infrastructure-as-a-service projects, are both licensed under the Apache license, we can observe over the coming year or two which seems to be the likely scenario. Under the first model, we should see the developer community and contributions for these projects continue to grow, irrespective of how deeply providers differentiate services based on the software.
Under the second scenario, we should see something rather different, in that the viability of the project should suffer even as the number of providers building services on them grows.
As of now, both projects seem to be growing in terms of contributors; here’s OpenStack (source: Ohloh):
… and here is CloudStack (source: Ohloh):
(Both projects have slightly lower numbers of commits, though that can simply reflect greater maturity of codebases rather than reduced viability, which is why I’ve focussed on the number of contributors.)
If the concerns over “deep differentiation” turn out to be justified, then community engagement in these two projects should suffer as effort is diverted into differentiated offerings built on them, rather than channelled into contributions back to the core projects.

Is deep differentiation really an issue for cloud?
Deep and shallow differentiation is a concept borrowed from marketing, and is sometimes used to refer to how easy it is for a competitor to copy a service offering. One example of this is the Domino’s Pizza “hot in 30 minutes or it’s free” service promise – it would be difficult for a competitor to copy this offering without actually changing the nature of its operation to match; it can’t just copy the tagline without risking giving away free pizza and going out of business.
In cloud services, it’s arguable how much differentiation will be in terms of software functionality and capabilities, and how much in the operational and marketing aspects of the services: things like pricing, reliability, support, speed, ease of use, ease of payment and so on.
If the key to success in cloud lies amongst the latter, then it really doesn’t matter that most providers use basically the same software, and providers will want to take advantage of having a common, high quality software platform with pooled development costs.
A further problem with deep differentiation in the software stack is that it could impact portability and interoperability – having extra features is great, but cloud customers also value the ability to choose providers and to switch when they need to. Providers converging on a few popular open source cloud offerings is another kind of standardisation, complementing interoperability standards such as OVF, and one that gives customers confidence that they aren’t being locked in; as well as being able to move to another provider, they also get the option to in-source the solution if they so wish.

Are there better reasons for copyleft?
It remains to be seen whether there really is a problem with the open cloud, and whether copyleft is an answer. Personally I’m not convinced there is.
However, that doesn’t mean copyleft on services isn’t important; on the contrary I think that licenses such as AGPL offer organisations a useful option when looking to make their services open.
Recent examples such as EdX highlight that AGPL is a viable alternative for licensing software that runs services, and that perhaps with greater awareness among service providers we may see more usage of it in future. For example, for the public sector it may offer an appropriate approach for making government shared services open source.
(Sea and cloud photo by Michiel Jelijs licensed under CC-BY 2.0)
The Open Source Initiative (OSI) will host a small open source license clinic as part of its non-profit educational mission, in collaboration with federal agency participants and the Washington D.C. technology community, at the US Library of Congress in Washington, D.C.
Who Should Attend? The clinic is designed as a cross-industry, cross-community workshop for legal, contract, acquisition and program professionals who wish to deepen their understanding of open source software licenses and raise their proficiency, to better serve their organizations’ objectives as well as identify problems which may be unique to government. Discussion of licenses and issues in straightforward terms makes the clinic of value to anyone involved in the life-cycle of a technology decision, acquisition, or strategy for internal software development.
Your Moderator: The morning will be moderated by OSI board director and license committee chair Mr. Luis Villa. Mr. Villa is currently Deputy General Counsel at the Wikimedia Foundation. Previously he was an attorney at Greenberg Traurig and Mozilla, where he worked on the revision of the Mozilla Public License (MPL).
Open Source Licenses 201 - A tour of standard open source software licenses and their most common use.
Invited Expert Presenters & Panelists

- Ms. Vicki Allums, General Counsel, Defense Information Systems Agency, Department of Defense
- Mr. Jim Jagielski, OSI board director and President, Apache Software Foundation
- Mr. Mike Milinkovich, OSI board director and Executive Director, Eclipse Foundation
- Mr. Luis Villa, OSI board director and Deputy General Counsel, Wikimedia Foundation
- Mr. David Wheeler, Analyst, Institute for Defense Analyses
Round Table: A panel of experts representing open source community, industry and government will discuss key licensing issues. Audience participation encouraged, questions will be taken from the floor. Some of the topics discussed will include:
- What are the common barriers - real or perceived - in government adoption of open source with regard to the licenses under which the software is distributed? What are the successful approaches to overcoming these? Where are the reference models in this regard?
- What are the challenges for industry or open source community in working with federal agencies? Who has been successful in overcoming these?
- How are government agencies distributing their own code under open source licenses? How do they include external stakeholders in the process?
- What is the rationale behind license non-proliferation? Does government need a special license? What are the case studies or history in this area?
Registration and Venue Details at: Open Source License Clinic Registration
Please Note: You must register to attend the clinic; the clinic will be limited to the maximum capacity of the facility.
The clinic is designed to educate and provide a forum for discussion, and should not be construed as legal advice. Attendees should consult with their own legal counsel before making decisions regarding software licenses.
Since it has been roughly one year since the newest board members were chosen, I'd like to put on my new hat as chair of the License Committee and recap some license committee highlights from this past year.
Some things that got done:
- Revised the opensource.org/licenses landing page to make it more useful to visitors who are not familiar with open source.
- Reorganized and revised the FAQ, which now has categories and a few improved questions.
- Handled a stream of license submissions, including AROS, MOSL, "No Nonsense", and CeCILL. The new EUPL is in the pipeline as well.
- Adopted a beta Code of Conduct for license-discuss/license-review. This was drafted with the intent that it will eventually be a CoC for all of OSI, but it is still being formally beta-tested in the license committee community.
Some important projects have also been started or discussed, but are still incomplete:
- Opened discussion of the status of licenses without patent grants in the modern licensing world.
- Improved the presentation of existing licenses. This started with removing comments that were confusing to visitors. There are still open suggestions to present plain text licenses, and update the template BSD/MIT licenses, as well as some other ideas - help still wanted!
- We're looking into objective analysis of license popularity, and a "license chooser" project. Both are huge projects. They do have some momentum, but even if they are at all workable, they are still a long way out. Please contact Luis Villa or the list if you have thoughts or resources for either of these projects.
I hope that this sounds like a pretty good year; it isn't perfect, but it felt like a good start to me, giving us some things we can build on in future years.
If you think this kind of thing sounds useful for the broader open source community, you can help :)
- Join license-discuss, or, if you're more sensitive to mail traffic, but still want to help with the committee's most important work, join license-review, which focuses on approving/rejecting proposed new licenses.
- Become a member! Easier than joining license-discuss ;) and provides both fiscal and moral support to the organization.
Universities are ahead of the curve in adopting open source, says Scott Wilson – we should now lead the public sector in exploring its full potential.
Earlier this month EdX, the nonprofit organisation set up by MIT and Harvard to provide a MOOC platform, released part of its code under an open source licence – the Affero GPL.
MOOCs – “Massive Open Online Courses” – have been hitting the headlines frequently in 2013, with high profile proponents and some big name backers. (For a good overview of the subject, I’d recommend reading MOOCs and Open Education: Implications for Higher Education, a white paper published by CETIS.)
The meaning of “Open” in MOOCs has been variously argued; however the prevailing model is one of open access to higher education, but not necessarily provided using an open platform.
The earliest MOOCs, and those operated by individuals or collectives rather than companies, have tended to operate using combinations of free of charge – though not necessarily open – services such as those offered by Google, WordPress, and Twitter, coordinated using open source course management platforms such as Moodle.
However, most of the high-profile commercially-oriented MOOCs have operated a bespoke online service based on either proprietary software, or solutions built using open source software but not necessarily available for others to replicate due to the so-called privacy loophole; that is, the modifications made to the software to deliver the service are not themselves required to be distributed to users.
The EdX announcement is interesting for two reasons – firstly that they are the first high profile MOOC provider to release open source, and secondly that they are doing so under the AGPL, one of a small number of open source licenses that specifically address the “privacy loophole”. This means that if you create your own MOOC service using EdX’s XBlock software, you must make the source code for the service – including your modifications – available to download under the AGPL. This is a form of “service provider copyleft” that ensures that EdX will have access to any improvements on their platform used by third parties.
This can be seen as a very cautious move – using the AGPL will ensure no other services can improve on the codebase without EdX getting access. However it’s also quite a bold one, as it makes a clear distinction between how EdX sees “open” in contrast with Coursera, Udacity and others. (It remains to be seen what direction the UK’s FutureLearn initiative will take.) It will also be interesting to see if other components of EdX will be released, and if so whether they will also use AGPL.
For more information on how online services use open source licenses, see How Can Services Be Open Source?
The EdX XBlock code is available on GitHub.
At Open Source Junction 4 we invited attendees to present their hardware projects. Some were open source hardware, while some used consumer hardware components in conjunction with open source software to provide an innovative solution to a problem.

ColorHug
Richard Hughes is the creator of ColorHug, an open source colorimeter. These devices measure the colour coming from your screen and create a colour profile allowing you to ensure that colours look the same across all devices. This means that a photo taken and viewed on your DSLR camera will look the same when being touched up on your laptop, and when being shown to friends on your TV.
One of Richard’s initial concerns was infringing the patents of competing devices. To avoid this, he kept his device as simple as possible – the less technically novel the device, the less chance someone else had patented a method it used. The simplicity of the device and its components could be compensated for with more complex software.
Once he had a working prototype, Richard set up a website to announce the device and start taking orders. He hoped that he would sell a dozen or so. Before long he had several hundred orders.
Financing production at this scale proved difficult. Richard attempted to get a bank loan, but because his device was open source, the bank felt it was too much of a risk. He decided to fund the first 50 units himself, using the profit to fund the next batch of 100, and so on.
As production scaled up, Richard found new ways of creating efficiencies.
Holes in the case were initially made using a Dremel multitool and a template. However, before long the hole in the template became enlarged and had to be replaced. This process was replaced with a punch tool, which was faster and more durable.
The ColorHug circuit boards were initially printed in China and hand-soldered by Richard, who has past experience with surface-mount soldering. However, the cheap boards had a high defect rate, and resulted in a lot of wastage.
As the production process evolved, Richard moved the circuit board manufacture to the UK, and paid for them to be tested at production. This created a higher unit cost but dramatically reduced wastage, creating savings overall.
After soldering 50 units himself, Richard sought out an alternative. He looked for companies to provide a pick-and-place process for the surface mount components. Initially looking to Eastern Europe, he found companies were only willing to deal with orders far in excess of what he needed. Again, the answer was found closer to home, with a UK factory willing to satisfy smaller orders in exchange for an initial set-up fee.
If you’re interested in the ColorHug, you can buy one (or download the designs and firmware and build your own) from http://hughski.com

PanStamp
Paolo Di Prodi of Robomotic introduced us to the panStamp, a small Arduino-compatible board that communicates wirelessly over an RF protocol called Simple Wireless Abstract Protocol (SWAP). PanStamps can be connected to various sensors and consume very little power, allowing them to operate on a single AA battery.
A network of panStamps can be used to measure all aspects of an environment such as temperature, air quality, noise levels, light levels, and report readings back to a base system.
A panStamp network is managed using Lagarto, a Python-based device management interface for SWAP. From here, sensor readings can be read, recorded, published, and used to trigger events. A reading from one panStamp can even be used to activate another panStamp in the network.
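The read-one-node, trigger-another pattern described above can be sketched in a few lines; note that the `Node` class, its methods, and the endpoint names below are purely illustrative stand-ins, not the actual Lagarto or SWAP API.

```python
# Illustrative sketch of a threshold trigger in a sensor network: when one
# node's reading crosses a limit, another node is switched on.
# The Node class and its methods are hypothetical stand-ins for the real
# Lagarto/SWAP interfaces, which are not shown here.
class Node:
    """A stand-in for a panStamp endpoint exposed by the management layer."""
    def __init__(self, name):
        self.name = name
        self.value = None    # last sensor reading, if any
        self.active = False  # actuator state

    def read(self):
        return self.value

    def activate(self):
        self.active = True

def check_trigger(sensor, actuator, threshold):
    """Activate the actuator when the sensor reading exceeds the threshold."""
    reading = sensor.read()
    if reading is not None and reading > threshold:
        actuator.activate()
    return actuator.active

temp = Node("greenhouse-temp")
fan = Node("greenhouse-fan")
temp.value = 31.5
print(check_trigger(temp, fan, threshold=30.0))  # True: the fan is switched on
```

In a real deployment the sensor reading would arrive over SWAP and the activation would be a command sent back out to the network, but the control logic follows the same shape.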
The panStamp site features technical details and source code for the system. Paolo admitted that he was initially sceptical about the open source model, but concluded that if your device is copied, it’s because you’re doing something right.

Remote Care Package
Kevin Safford is a technical writer for IBM by day. When his mother was diagnosed with Alzheimer’s, she was keen to maintain her independence. With Kevin living some distance from her, he wanted to provide her with a system that allowed her to live independently where possible, while providing him with peace of mind.
Kevin designed an unobtrusive computer system for his mum which could be administered remotely. The hardware he chose was a DreamPlug connected to a USB touch-screen. The software, from the operating system to the applications, is entirely open source.
When Kevin wants to speak to his mother, he uses VNC to log into her computer remotely. From here he can initiate a video call to himself, allowing it to ring until she is ready, when he answers at his end. Being able to see one another gives both parties reassurance.
Between calls, Kevin uses the computer to provide his mother with reminders and stimuli. Using cron jobs and structured text files, he can display a daily list of events to ensure his mother knows what she’s doing, and what she needs to remember. He can also schedule the system to play her music or show her photos, which provide a source of entertainment and stimulus.
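The cron-plus-text-file approach can be sketched as a short script; the file path, the tab-separated `YYYY-MM-DD` format and the cron schedule below are illustrative assumptions, not Kevin's actual setup.

```python
#!/usr/bin/env python3
# Hypothetical sketch of a cron-driven reminder display: a plain-text events
# file holds one "YYYY-MM-DD<TAB>reminder" entry per line (an assumed format),
# and a cron entry such as
#   0 8 * * * /usr/bin/python3 /home/mum/show_reminders.py
# runs the script each morning to display today's entries.
from datetime import date

def todays_reminders(lines, today=None):
    """Return the reminder text for entries dated today."""
    today = (today or date.today()).isoformat()
    reminders = []
    for line in lines:
        line = line.strip()
        if not line or "\t" not in line:
            continue  # skip blank or malformed lines
        day, text = line.split("\t", 1)
        if day == today:
            reminders.append(text)
    return reminders

# In the cron job the lines would come from the events file, e.g.:
#   with open("/home/mum/events.txt") as f:
#       for reminder in todays_reminders(f):
#           print(reminder)
sample = ["2013-06-01\tLunch with Kevin at 1pm", "2013-06-02\tWater the plants"]
print(todays_reminders(sample, today=date(2013, 6, 1)))  # ['Lunch with Kevin at 1pm']
```

The same structured file can drive the music and photo scheduling, with cron invoking a different script for each kind of stimulus.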
Details of Kevin’s system, including the scripts he uses, can be found on his Google Code site.

Stellar Computer System
Richard Melville works for Cellularity Ltd, who produce the Stellar Computer System, a system of modular desktop computers which can operate independently, or be clustered together to provide a distributed and robust platform.
The Stellar system is built entirely from consumer components. The form factor is small enough to be mounted on the back of a computer monitor. The components are passively cooled and the storage is solid state, creating a low-power, silent device.
The power requirements are low enough for the machine to run from a 12V battery. This can be connected to a mains charger to provide a long life UPS-style set-up, or even to a small wind turbine.
The device is designed to be modular – if the operating system fails, the device it is stored on can be swapped out for another, leaving the user data (stored separately) intact. The case is tool-less, allowing most components to be replaced by hand.
Cellularity Ltd. is currently working with Basho to implement the Riak distributed database on a cluster of Stellar machines. This setup would allow user data to be distributed across the cluster, meaning a machine could fail completely and be replaced with no loss of data.
You can find out more about the Stellar Computing System on their WordPress site.

Cellular Automata
Adam Cooper has been spending his spare time building hardware cellular automata. If you’ve ever seen Conway’s Game of Life and similar simulations you’ll be familiar with the concepts – an array of cells activate, stay active and deactivate in response to those around them, based on a pre-defined rule set.
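The rule set behind simulations like this can be expressed very compactly; here is a minimal sketch of one update step of Conway's Game of Life, where the set-of-coordinates grid representation is my own illustrative choice, not Adam's hardware design.

```python
# Minimal sketch of one update step of Conway's Game of Life: each cell
# activates, stays active or deactivates based on its eight neighbours.
# The grid is represented as a set of (x, y) coordinates of live cells,
# an illustrative choice for this sketch.
from collections import Counter

def step(live):
    """Take a set of (x, y) live cells and return the next generation."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live in the next generation if it has exactly three live
    # neighbours, or if it is already live and has exactly two.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical line of three.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                   # the vertical form of the blinker
print(step(step(blinker)) == blinker)  # True: back where it started
```

Adam's boards implement the same idea physically: the LEDs signal the active state, and the light sensors play the role of the neighbour count.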
Adam’s project takes this idea into the real world. His idea was to design a low-cost device (about £6 for a board and the required components) which would allow Year 9-age school pupils to build physical automata. With a year group building a set of devices, a large array of cells could be created.
The project is currently at the prototyping stage, with the first 5 boards built and functioning. The boards feature LEDs which allow them to signal an active state, and light sensors to detect the state of surrounding cells.
Adam’s plan is to get sponsorship for the boards to be built at his daughter’s school, in the hope that the project will help promote interest in STEM subjects.
Once again OSS Watch would like to thank everyone who presented at Open Source Junction 4.
OSS Watch Briefing Paper: Open Standards and Open Source
Open source software and open standards are two of the key interventions in technology policy, whether that policy is made by governments, public sector organisations, or companies.
Open standards can ensure interoperability and assist portability, allowing the switching of solutions and avoiding vendor lock-in. Standards can also help to create new markets, and can also encourage innovation within markets by imposing useful constraints.
Open source software offers benefits of greater flexibility and the potential for reduced development costs and better software quality through collaboration and reuse.
Together, open source and open standards provide the basis for solutions that offer interoperability, cost reduction, and flexibility; no wonder they are seen as such a powerful tool for technology policy!
However, what’s often less clear is how the two interact in practice. There is, for example, a fairly widely-held view that open source software is somehow inherently more likely to support open standards. However, in practice this is not necessarily the case, and there are a number of barriers that can actually make it less likely for open source projects to implement standards than their closed-source counterparts.
For example, implementation of a standard requires access to documentation; in many cases this involves payment for access, or paid membership of a consortium – something that open source projects may have difficulty with unless a benefactor or sponsor does this on their behalf. Also, if a project wishes to publicly claim that it implements a standard, this may involve a formal conformance process requiring paying fees for testing and accreditation.
So for policy makers and CIOs, the selection of standards, and the standards setting organisations they originate from, can have a significant impact on the availability of open source solutions to meet their requirements.
Mandating standards that involve patent licensing fees, mandatory expensive conformance testing and assurance, and restricted access to documentation will exclude many potential solutions and providers. This will have the impact of increasing costs, and potentially eliminating the benefits of standardisation altogether if organisations have little practical prospect of switching suppliers.
Conversely, if standards are selected that provide a low barrier to entry for open source then this can be good not just for individual solution procurement, but for interoperability as a whole.

Unlike closed-source solutions, with open source it is possible to inspect the implementation of standards and to conduct independent interoperability and conformance testing rather than rely principally on vendor claims. The presence of open source implementations can also influence uptake of a standard; either by making open source libraries available for use within other products, or by providing a good target for interoperability testing for other entrants.
Open source and open standards are key components in technology policy; but it’s important to know how they can work together – and potentially work against each other.
A new OSS Watch briefing paper provides an overview of the main issues facing implementation of standards for open source projects and developers; for more information see Open Standards and Open Source.