OSS Watch team blog
open source software advisory service
Across the 2 days, Scott Wilson and I presented sessions on the varieties of communities and why we form them, communication within online communities, governance of free and open source software projects, leadership, and conflict resolution.
While we were only able to have a small group from the vast community of TYPO3 at the workshop, those who did attend represented a range of teams from the community, including developers from both the TYPO3 CMS and TYPO3 Neos projects, as well as members of the marketing team and the community manager, Ben van ‘t Ende.
One of the great things to see was how open and honest the attendees were about the issues we discussed and the challenges they faced. A few points were of particular interest to me.
When we discussed the reasons we form communities, there was no clear agreement on what the shared interest of the TYPO3 community was. Defining this will be key to driving towards the community’s common goals in the future.
The community uses a myriad of communication channels, and the purpose of each isn’t always clear-cut. There’s also been a general lack of a moderation culture, which has led to a few poisonous people getting out of hand. Instilling a sense of shared values and leading by example will be needed over time to help ensure that discussions remain constructive.
There is a visible lack of diversity in the community, both shallow-level (most contributors are white, male and located in Germany) and deep-level (there are lots of highly skilled developers, but fewer who are learning or from other disciplines). These issues could affect the long-term sustainability of the community if the barriers for new, less-skilled contributors are too high. Engagement with users and less technical members of the community will also be key to shaping the community’s goals.
These problems aren’t the kind that will be fixed by a two-day workshop or a change of policy; it’s going to take commitment and leadership from those who believe in the community to move things forward. One thing that we definitely saw from this workshop is that those people are present and highly active in the TYPO3 community. We look forward to the possibility of working together again.
Our presentation slides from the workshop can be found on SlideShare.
A special thanks to Christian Händel for making sure we made it to Altleiningen and back!
The concept of a software patch is well understood by users of version control systems such as CVS and Subversion, but with the recent rise in popularity of distributed version control systems (DVCSs) such as Git, a new method for submitting changes to a project has evolved: the pull request.
While not a feature of DVCS software, pull requests are a common workflow used by projects that manage their code with a DVCS, so knowing what they are and how to do them is important if you’re planning to contribute to such a project. You can find out more in our new briefing note, What Is A Pull Request?
Recently OSS Watch were invited to produce a video for an online course discussing how to get involved in open source software projects. In the video, we discuss why you’d want to contribute to open source projects, how to choose the right open source project for you, the communication channels used by open source communities, how to make your first contributions and what you can contribute other than code, as well as sharing a little of our own experiences contributing to open source projects.
To view the video in HD, click the YouTube icon to watch it there.
Last month we officially kicked off a new EU project aimed at getting students involved in free and open source software projects.
Inspired by Google Summer of Code, the idea is to enable students to obtain academic recognition for their contributions to open software communities.
The project itself is called VALS. It will run the first pilot Semester of Code programme in 2014, using the same Melange platform that is used to run GSoC.
For this to work in practice we need to have an effective model for co-mentoring, with communication between the student, their academic supervisor, and mentors from the software community.
We also need to put in place assessment and curriculum changes at universities to support experiential learning by students in open development. For example, universities need to recognise and assess not just what students produce in terms of contributed artefacts (such as code and documentation) but also the processes of engagement and communication with the community, and developing their experiences in using issue tracking and source control systems.
Some of the common problems we’ve seen with other mentoring programmes have been a lack of commitment from students in actually completing their contributions, and breakdowns in communication between students and mentors; these are some of the things we need to address in how the programme is designed. For example, we need to make it clear how issues can be resolved, and also ensure that all parties involved have appropriate expectations.
I think it’s also critical for open source communities that we look at how engagement by students can be sustained beyond a single contribution, becoming an ongoing part of their professional development at university and beyond.
The project team includes OSS Watch, OpenDirective, RayCom, MindShock, and the universities of Salamanca, Udine, Bolton and Cyprus, though the programme itself is going to be open.
As the project has only just started we don’t have a website yet, but if your community or university is interested in participating in the pilot programme next year, let us know at email@example.com.
Over the weekend I started playing Doom 3 BFG Edition, a re-release of the mid-00s first-person shooter. The reason I’m talking about this here is that, as we’ve discussed before, id Software, who make Doom 3, have a policy of open sourcing the code for their games.
Predictably, this hasn’t posed a problem for the open source community, and a bit of googling turned up a GitHub fork of id’s code with support for Linux. The code relies on the SDL and OpenAL libraries to handle input and audio respectively, but once those dependencies were installed, it compiled for me on Ubuntu 13.10 and 12.04 LTS with no problems.
Alongside the resulting binary, actually playing the game requires the commercial data files, which aren’t distributed freely. Since the game is distributed using the Steam DRM system, you need to install a copy of the Windows Steam client, install the game, then copy the files into place.
It’s possible to install the game files using WINE, but I was using a laptop which happened to be dual-booting Windows so I installed the game as normal on Windows, then switched to Linux and created a symbolic link to the data files on the Windows disk partition.
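As a rough sketch, the symlink step might look like this on a typical dual-boot setup. Both paths below are assumptions for illustration only: your Windows mount point, Steam library location and preferred game directory will differ.

```shell
# Sketch only: both paths are assumptions -- point them at wherever your
# Windows partition is mounted and wherever your Linux build expects data.
WIN_DATA="/mnt/windows/Program Files (x86)/Steam/steamapps/common/DOOM 3 BFG Edition/base"
LINUX_GAME_DIR="$HOME/games/doom3bfg"

mkdir -p "$LINUX_GAME_DIR"
# Create a symbolic link so the Linux binary can find the commercial data
# files on the Windows partition (-sfn replaces any existing link safely)
ln -sfn "$WIN_DATA" "$LINUX_GAME_DIR/base"
```

With the link in place, the compiled binary reads the commercial data files through the link as if they lived alongside it.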
There are a few caveats to note about the open-source version of this game. Firstly, trying to run the game with AMD graphics caused it to crash with OpenGL errors. Reading bug reports suggests this may be a driver compatibility problem (some people have got it working), but a system with NVidia graphics worked flawlessly. The game also uses a couple of non-free components which can’t be included in the GPL code: the Bink video codec and the “Carmack’s Reverse” shadow stencilling technology licensed from Creative. This means that the odd in-game video is missing, although this doesn’t really detract from the game-play as the audio still plays.
The ease with which I was able to find a way to play this unsupported Windows game natively on Linux is a real testament to the open source community’s ability and willingness to solve problems and share the solutions. I was also really impressed by how well the game ran under these circumstances, showing how bright a future Linux has as a gaming platform.
BitTorrent, creators of the highly popular distributed peer-to-peer file sharing protocol, recently released BitTorrent Sync, a solution for syncing folders between machines based on the BitTorrent protocol. BTSync provides a fully distributed and encrypted alternative to services like Dropbox where all your data is synced through a third-party server.
BTSync has been released for Windows, Mac, Linux and other platforms, although the user experience on Linux isn’t quite as polished as its counterparts – the only interface provided is via a local webserver accessed through your browser, while Windows and Mac get a nice desktop GUI with a system tray indicator. I found this a pain, as I’d sometimes finish making changes to a synced file and want to shut my computer down quickly, but had to open my browser first to check whether the file had finished syncing.
While BTSync isn’t Open Source, the developers are very open to feedback from users and developers. I quickly realised that I’d be able to use data from the web interface to create a desktop indicator for Linux, so in the open source tradition of scratching my own itch, I wrote a Python script that gave me an indicator to show whether a file was syncing. When it was workable, I stuck it on GitHub with an open source licence and made a post on the BitTorrent Labs forum.
I then noticed another post on the forum by a developer called Leo Moll – he was packaging BitTorrent Sync for Ubuntu and Debian distributions, and as I’d written my script with Ubuntu in mind, I asked if he’d like to include it in his packages. He agreed, and before long my indicator could be installed alongside a well integrated BitTorrent Sync client.
Here’s when things really took off. With it being so easy to get hold of my indicator, people started using it and reporting bugs on the GitHub page. Almost as quickly, they started submitting patches. I got a new set of better animated icons for the indicator, various bugfixes for cases I’d not come across, new feature requests, and even someone packaging the indicator for Arch Linux.
Alongside this Leo and I were contacted by another developer who was packaging BitTorrent Sync for Debian and Ubuntu. We had a discussion and worked out where best to focus our efforts to avoid duplicating each other’s work and creating conflicting packages. Leo and I are now discussing merging our codebases to streamline our work and allow for better integration.
In the space of a month, what started as a little hack to make my life a bit easier has become a vibrant project with an engaged community of developers and users. The real key, I think, has been to make it as simple as possible for users to run the software, and to show that I’m listening and responsive to feedback.
Commit messages are an important part of how software is developed, debugged and maintained, and when done badly can become an unnecessary barrier to collaboration in open source projects.
Bad commit messages make it harder to figure out where problems have been introduced, especially for newcomers to a project.
The worst-case scenario for anyone trying to make sense of changes to a project is a commit message that offers basically no information for a major change affecting multiple locations in the code.
“Worst commit message ever” via Jeff Dallien
To get a good sense of how commit messages are useful, take a project and look at its history in the revision system. You’ll see something like this:
- Revision 1525597: Add ap_errorlog_provider to make ErrorLog logging modular. Move syslog support from core to new mod_syslog.
- Revision 1514267: tweak syntax strings for ServerTokens
- Revision 1514255: follow-up to r813376: finish reverting r808965 (ServerTokens set foo)
- Revision 1506474: server/core.c (set_document_root): Improve error message for inaccessible docroot.
- Revision 1498880: Replace pre_htaccess hook with more flexible open_htaccess hook
Or, if you’re unlucky, you might see something like this:
- Revision 1525597: fixed it
- Revision 1514267: more changes
- Revision 1514255: bug fixes
- Revision 1506474: more improvements
- Revision 1498880: lots of changes
If you now imagine you’re looking to find out where, say, the ServerTokens syntax changed, you can see the value of providing good commit messages.
So, how can you write better commit messages? Below are some top tips.
Be brief
Commit messages should be brief and easy to scan. Often the reader of commit messages is viewing them in a log or revision history, so make sure the most important words and phrases stand out.
There is no hard rule about this. Some developers prefer an approach of having a very short one-line message but with optional subsequent paragraphs of context and description, whereas others prefer to only provide one line of any length, and link to detailed explanations elsewhere, such as in the issue tracker.
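As an illustration of the first style, git accepts the subject and body as separate -m arguments (or you can write both in your editor). The repository, file and message below are made up purely for this sketch:

```shell
# Throwaway repository purely to illustrate the "short subject + body" style
git init -q demo-repo
git -C demo-repo config user.name "Demo"
git -C demo-repo config user.email "demo@example.com"

echo "hello" > demo-repo/README
git -C demo-repo add README

# The first -m gives the short subject line; the second -m becomes the
# body paragraph with extra context
git -C demo-repo commit -q -m "Add README with a brief project overview" \
    -m "Gives newcomers a starting point. Fuller docs live on the wiki."

# Log listings show just the subject, which is why it needs to stand alone
git -C demo-repo log -1 --format=%s
```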
However, you should use your common sense as to how much information should be in the commit message. If you find you’re writing lots of explanatory text, maybe you need to put more comments in the code itself where the changes are made, or add more detail to an issue in the tracker.
Make messages easier to find when searching
As well as scanning the revision history, developers also search logs using grep or similar tools, in which case it’s important to use the best terms for discovery. If you use component or module names, make sure you spell them correctly and use them consistently: if the component is called “DownloadManager”, don’t write “Download Manager”.
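Assuming a git-managed project, searching for a consistently spelled component name might look like this; the repository and the “DownloadManager” commits are hypothetical examples:

```shell
# Throwaway repository with two commits, to show searching by component name
git init -q search-demo
git -C search-demo config user.name "Demo"
git -C search-demo config user.email "demo@example.com"

echo a > search-demo/a
git -C search-demo add a
git -C search-demo commit -q -m "DownloadManager: retry failed downloads"

echo b > search-demo/b
git -C search-demo add b
git -C search-demo commit -q -m "Installer: fix typo in prompt"

# git's built-in commit-message search finds only the matching commit
git -C search-demo log --oneline --grep="DownloadManager"

# or pipe the whole log through grep for case-insensitive matching
git -C search-demo log --oneline | grep -i "downloadmanager"
```

Consistent spelling is what makes both searches reliable: a commit labelled “Download Manager” would be missed by the first search entirely.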
Commit messages can also turn up in search engines, either in project-specific searches or in regular web search engines, so it’s important to be clear and consistent in your use of language.
Provide sufficient context
While brevity is desirable, commit messages need sufficient context to be useful.
For a one-line fix, you can always view the diff to see what changed, but if a commit affects multiple files or multiple lines of code, it needs more explanation so that other developers and users can re-establish the context of the commit.
Peter Hutterer suggests a commit message needs to answer three questions:
- Why is it necessary? It may fix a bug, it may add a feature, it may improve performance, reliability, stability, or just be a change for the sake of correctness.
- How does it address the issue? For short obvious patches this part can be omitted, but it should be a high level description of what the approach was.
- What effects does the patch have? (In addition to the obvious ones, this may include benchmarks, side effects, etc.)
You don’t need to go into a lot of depth, but you need to capture enough of what is going on that someone reading the revision history can get a sense of what your commit did without having to look at all the diffs.
Added unicode support for imported files to prevent encoding errors in article.title
This doesn’t necessarily need to be in the message itself – for example, if there has been a discussion on the mailing list, or there is plenty of information in a related issue on the project tracker, then you can include a reference or link to this in the commit message.
Added unicode support for imported files to prevent encoding errors in article.title (see bug #1345)
Some issue trackers can also link commits to an issue automatically based on the commit message, in which case you need to make sure you’re using the correct format for it to pick this up.
Provide credit and recognition where it is due
While you may be committing the changes, you may not in fact be the author – if you’re applying someone else’s changes, you need to acknowledge the fact and give the author recognition. Even if it’s not a complete submitted patch, but just an “if you change x to y, that would fix the bug” suggestion, it’s worth putting in an acknowledgement.
Added unicode support for imported files to prevent encoding errors in article.title (see bug #1345). Thanks to Jane Doe for the patch
This has both a social function (placing credit where it is due) and also provides an audit trail.
(Some projects prefer a more formal “Submitted by: <username>” but I like to just say “thanks to <username>”.)
Avoid repetition
Troy Hunt provides another rule of thumb for commit messages: subsequent commit messages from the same author should never be identical.
This is partly because it makes it more difficult to distinguish changes in the version history, and partly because each change should, logically, be different to the last.
Try not to swear or insult anyone
Fixed stupid $$&!! mistake caused by £$%$%@ Steve
OK, it is difficult sometimes, but let’s keep things professional. Save your venting for the IRC channel.
More seriously, commit messages form part of the overall tone of communications for a project; snarky, rude and unhelpful commit messages don’t put your community in a good light, particularly for newcomers.
Check the logs to see how you’re doing
Every now and again it’s worth checking the log or revision history for your project and reviewing the last page or so of commit messages. Would someone relatively new to the project get a good idea of what was happening? Can you improve the usefulness of the messages that you and your community members are writing?
Follow project guidelines
Your project might have a preferred format for commit messages, so make sure you find out before making a commit.
For example, Moodle’s commit message guidelines call for a subject line consisting of the issue number followed by the component name, with the rest of the subject kept within 72 characters.
Any more?
I’d love to hear any more suggestions for better commit messages (or your worst examples of bad practice!)
For a random commit message, give WhatTheCommit a whirl
Photo credit: Wilson Afonso
I’ve just come back from a weekend in Liverpool helping organise and run OggCamp, the biggest Free Culture community event in the UK. While I can’t say for sure that it had the highest attendance of its 5-year run (numbers have been pretty much stable for the past 3 years), there’s several ways in which I think it was the best.
OggCamp is an “unconference”, meaning most of the schedule is decided on the day, and the sessions are delivered by the attendees. We also have a scheduled track covering topics including:
- Privacy & Mass surveillance
- Creative Commons music
- Physical and digital computer security
- Open source operating systems for cars
- Opening up book publishing
OggCamp is a free event organised and run by volunteers, and as such the scheduled speakers give their talks for free as well. Having speakers in our community willing to deliver professional-quality talks on such a range of topics is one of the things that made this event great.
That’s not to mention the unconference talks which included:
- Managing open hardware projects on Github
- Ubuntu for phones vs Firefox OS
- Minecraft hacks
- A game of Werewolf
- Quad- and tri-copters
- Lots, lots more
When you’ve organised a 2-day event with no schedule, there’s always the prospect of people turning up and no talks being offered, so it was fantastic to have such an amazing community who come every year and make the event what it is.
The final thing that made this OggCamp amazing was the support from our sponsors and community. This year, we introduced a pay-what-you-want system where, if people wanted, they could donate to the event when they signed up for their free ticket. This proved to be massively successful, and led to our community being our biggest single cash sponsor. On top of that we had a huge amount of support from companies such as Bytemark and Canonical, which meant that we could provide a really fun experience for attendees, including free play arcade machines, some excellent raffle prizes, and a posh venue for the evening social.
We’ve already had some excellent ideas from the community for next year. See you in 2014…
OSS Watch’s legal officer Rowan Wilson was fortunate enough to see Joel Spolsky of StackExchange speak at Open World Forum about the Cultural Anthropology of StackOverflow. I wasn’t able to attend, but there’s a longer version of the talk available on YouTube.
Joel presents some interesting points about how the design of a piece of software affects the way its users behave – this is crucial in this context as the software we’re talking about is a communication tool, so its design affects how a community communicates.
He describes the importance of first impressions:
The first impression on StackOverflow is that, if you’re a programmer, you get that these are all programmer questions… If you’re not a programmer you don’t understand a single thing and you leave.
This seems like a hostile and exclusive approach to community management – usually when we talk about building open development communities we talk about being welcoming to ensure we’re not putting off potentially valuable contributions. However, the goal of StackOverflow and similar StackExchange websites is to get expert answers to difficult questions – people who don’t understand the subject will only create noise, so putting them off from engaging early increases the site’s usefulness.
The talk is an hour long so I’ll leave you to watch the whole thing rather than picking it apart here, but it’s a really good overview of a very successful online support community, and discusses some ideas which might go against the conventional wisdom of community management.
You need sound coding skills to create good software, but the success of an open source project can also depend on something much less glamorous: your choice of software licence.
Last week I spoke to Paul Rubens of CIO.com about the issues that need to be considered when deciding which licence to use when releasing your code, including why a licence is necessary, the varieties of Free and Open Source Software licences, and how you provide licences for the non-software parts of your project.
You can read the full article at CIO.com.
Last week I took the opportunity to visit Oxford’s Hackspace and see a talk by Javier Serrano of CERN. Serrano has been working together with Moorcrofts, an Oxford-based legal firm, on the latest version of CERN’s Open Hardware Licence (OHL).
CERN’s systems have unique requirements in terms of scale, synchronisation and geographic distribution. As a result, a lot of their hardware is produced to bespoke specifications.
Serrano spoke about the models available when considering closed/open and commercial/non-commercial licensing. Due to the long lifespan and iterative nature of CERN’s systems, a commercial proprietary solution would create vendor lock-in which isn’t acceptable for their requirements. An open solution without commercialisation wouldn’t be sustainable. He concluded that an open, commercialised solution provides “the best of both worlds” in terms of sustainable support and sharing of knowledge, which is one of CERN’s core goals.
The licence itself takes inspiration from the GNU GPL for software, with modifications to make it more applicable to hardware. The licence is designed to cover the documentation for the hardware (such as CAD files and bills of materials), allowing the documents to be distributed, modified, and used to manufacture products, provided that the documentation is made accessible to those receiving the products.
Serrano described the licence as “weak-copyleft”. It is designed to ensure that modifications to the design, whether used in whole or in part, are shared back to the community. However, it does not attempt to stipulate that other products integrated or linked with OHL products must also have their designs shared.
Similarly, the licence contains a patent grant to any patents owned by the designer, but it doesn’t attempt to make this reciprocal – the licensee isn’t required to license their own patents back to the licensor.
A final notable feature of the licence is the stipulation that alongside any trademarks and copyright notices, any references to the location of the documentation must not be removed from the designs. This means, for example, that a URL to access the documentation could be included in the top copper layer of a PCB – this would ensure that anyone receiving the board would have access to the designs.
Serrano finished by introducing White Rabbit – a network time protocol which improves on the Precise Time Protocol standard to synchronise networked nodes with tolerance of under a nanosecond. The documentation for the hardware implementing White Rabbit is released under the CERN OHL.
A big thanks to Oxhack for hosting the event, and Moorcrofts for sponsoring it.
Creative Commons is a great way to license documentation, websites, articles, artwork, and other media assets associated with a software project, but source code has some special characteristics that make it better suited to the licences recommended by the Open Source Initiative and the Free Software Foundation.
To find out why, take a look at our recently updated briefing note on Creative Commons licensing and Open Content, where we’ve added a section on this question.
tl;dr: Open Source is inherently no more or less secure than closed source software.
For a more thorough answer to this question, we’ve just updated our briefing note, “Is Open Source Software Insecure? An Introduction To The Issues” where we look at some of the ways in which software is considered secure, and look at some of the common claims both for and against the security of Free & Open Source Software.
On the whole there are no significant differences in security between closed and open source software as a category. The key differences are between individual products, and the governance processes around security – something which applies to both closed and open source software.
Claims that Open Source is inherently insecure – or, conversely, that it is inherently more secure – are unfounded and should be challenged, particularly in the process of selecting and procuring software. Accepting such a generalisation may actually be increasing security risks for the organisation, by excluding the most fit-for-purpose solutions from consideration.
Photo by nolifebeforecoffee of a stencil by Banksy.
Last week I presented at ALT-C in Nottingham on the topic of open source in education and the public sector.
This was partly to invite people to participate in Open Source Options for Education, and partly to open up discussions around software procurement policies and processes in the sector.
The discussion tended to confirm our survey findings that the practice of procurement including open source options varies a lot within institutions, resulting in different biases (both for and against FOSS). So the degree to which FOSS is considered for procurement in education can be quite different depending on who you’re dealing with. To get a more balanced approach to closed and open source software would therefore require engagement with all of the different groups engaged in software procurement to develop their understanding and practices.
Here’s the slides:
A couple of months ago OSS Watch took the difficult decision to postpone Open Source Junction 5 – the latest in our series of events seeking to foster cross-sector innovation and partnerships.
We’re very pleased to announce that the event, which this time focuses on Open Source in the Public Sector, has now been rescheduled and is happening on the 7th and 8th of November. We’ve got a great schedule of speakers taking shape, and the event is free to public sector employees. Places are limited, however, so make sure you book your place through Eventbrite.
We’re still filling the last few slots in the schedule, so if you’d like to share your experience of working with open source in the public sector, please send us an email at firstname.lastname@example.org
In November, Mark and I will be in Mannheim, Germany as part of the TYPO3 Marketing Sprint Week, where we’ll be facilitating an OSS Watch workshop focussed on communications in open source communities.
Effective communication in all its aspects is crucial for a healthy open source community, and we’re excited to be able to pull all of these aspects together into a two-day workshop.
You can find out more about the TYPO3 Marketing Sprint Week on their website.
If you’re interested in organising a similar activity for your project or organisation, get in touch with us.
I’ll be at ALT-C next Tuesday to talk about Open Source Options for Education as part of the “OERs and OSSs” session in the morning. I’ll also be around for the rest of the day so feel free to collar me for a chat about anything OSS-related!
Here’s the session details to whet your appetite:
Levelling the playing field for open source in education and public sector
Open Source Software (OSS) offers many potential benefits for procuring organisations, including reduced costs and greater flexibility.
The UK Cabinet Office has taken an active role in levelling the playing field for Open Source Software (OSS) in the procurement of IT systems in the public sector.
This has included a set of open standards principles that favour Royalty-Free standards (UK Cabinet Office 2012a), and a procurement toolkit that includes open source options for commonly procured types of system (UK Cabinet Office 2012b).
These interventions are necessary, as many organisations in the public sector have procurement policies and processes that – whether intentionally or otherwise – exclude open source alternatives from selection, even where it would save organisations money or provide them with the systems that best fit their needs.
This also applies within the education sectors, and OSS Watch, based at the University of Oxford, has worked with the Cabinet Office on extending this guidance to the education sector, publishing Open Source Options for Education (Johnson et al., 2012). This lists open source alternatives for many IT solutions used in education, including subject-specific applications.
In this session we will introduce Open Source Options and the Cabinet Office guidance, and explain how it can be used to open up procurement in education institutions.
We will also invite delegates to contribute their own suggestions for open source alternatives they have used in their own work to include in the options.
UK Cabinet Office. (2012a). Open Standards: Open Opportunities – flexibility and efficiency in government IT, https://www.gov.uk/government/consultations/open-standards-open-opportunities-flexibility-and-efficiency-in-government-it
UK Cabinet Office. (2012b). Open Source Options. https://www.gov.uk/government/publications/open-source-procurement-toolkit
Johnson, M., Wilson, S., Wilson, R. (2012). Open Source Options for Education. http://www.oss-watch.ac.uk/resources/ossoptionseducation
This October sees the return of OggCamp, the UK’s biggest community-run Free Culture event encompassing free and open source software, hardware hacking, creative commons media, and much more.
The event features a scheduled track of speakers (currently being confirmed) alongside 3 barcamp/unconference tracks where the attendees give talks about their projects, ideas, campaigns, or whatever takes their fancy. The Ubuntu Podcast and Linux Outlaws will also be doing a joint live show, plus there’ll be parties and prizes to be won in the infamous annual raffle-cast.
The event’s now in its 5th year, and about 400 attendees are expected. Tickets are free (or pay-what-you-want if you’re so inclined) and have been going fast, so to find out more, watch the documentary and book your place, head to http://oggcamp.org!
The Ubuntu Edge crowdfunding campaign finished today; despite having more money pledged than any previous crowdfunding campaign, it fell $20 million short of its massive $32 million target.
The good
The campaign created a huge amount of mainstream press for Canonical and for Ubuntu, at a level never seen before. Jane Silber, Canonical’s CEO, was interviewed on CNBC news. Forbes ran several articles about the campaign. Several major UK newspapers ran a story about the Edge, including The Sun tabloid, which seized on the fact that Canonical is a British company. Having received this coverage, the chance of seeing Ubuntu in the mainstream media next time the project hits a big milestone has increased.
Aside from general publicity, the campaign has also generated interest in the Ubuntu Touch platform within the phone industry. The price of the Edge was reportedly lowered to $695 thanks to support from manufacturers who wanted to see the phone become a reality. This suggests to me that we may see something like the Edge released through a more traditional mobile contract model in the not-too-distant future.

The bad
I’ve seen several factors that contributed to the campaign’s failure, and I’m sure there are others I couldn’t identify, but here are a few that stood out to me:
- The Ubuntu Edge could be more than a phone, but it isn’t, yet. What made the Edge a unique proposition is what Canonical calls the “Convergence story” – using one device as your phone, desktop, laptop or media centre, depending on the connected peripherals. The problem is that at launch these peripherals (and even the required software) wouldn’t have existed, meaning people had to make their purchasing decision based on a device that was just a phone, and…
- People aren’t used to paying for phones. The Ubuntu Edge was a premium device with a premium price tag. However, as illustrated by the comparison on the campaign’s page, the price wasn’t unrealistic when compared to competing devices. The trouble is that most people don’t pay that money in one go; they effectively hire-purchase the phone through a contract with their network. Someone who can afford £20 per month can’t necessarily afford £400 up front for a phone they’ll get 6 months later.
- Pledging money is scary when you have to pay straight away. The campaign was run on the IndieGoGo platform, which uses PayPal for payments, and requires payment immediately. Other platforms simply require a pledge, and only take the money when the project reaches its goal, or the campaign deadline is reached. Potential supporters commented that this put them off as they weren’t confident in PayPal returning their money promptly.
- Canonical’s lack of crowdfunding experience was very apparent. In Mark Shuttleworth’s final update on the project, he was very open that they’d learned a lot from the campaign. The campaign opened with a limited number of devices at a big discount ($600 down from $830), which sparked interest and led to some record-breaking rates of pledging. However, while I’d hoped there was a coherent plan to maintain the level of demand once the cheaper tier sold out and then throughout the campaign, no such plan seemed to exist. Instead we saw several changes in the pricing structure, creating confusion and a lack of confidence among potential supporters (if you’ve already paid $830, what happens now the Edge is only $695?).
- Mark Shuttleworth is very rich indeed. I wish this weren’t a reason, but it’s certainly been an elephant in the room throughout the campaign. When a man who can afford to take a holiday in space and run an unprofitable company for fun asks for $32 million so he can release a new product, there are bound to be those who think he should put the money up himself (try that one on Dragon’s Den and see how far you get). To me, this is an unfair position to take (why spend millions of dollars making a phone no-one wants to buy?) and smacks of a sense of entitlement from members of the community, but it was certainly one that I saw expressed.
When the campaign started, I predicted that failure could mean a huge amount of embarrassment for both Canonical and the Ubuntu community. However, looking back I can see that both parties were always prepared for failure – Canonical were clear that they knew it was a hugely ambitious goal, and the community were never sure if it was reachable. This made failure much easier to swallow for all concerned, and much easier to present to the wider world.
There are those who say that Canonical never meant for the Edge to be made, and that the whole campaign was a publicity stunt. I can’t say whether that was the case, but if it was, they’ve certainly made a good job of it.
This won’t be the last we see of the Ubuntu phone. There’s currently an App Showdown competition with a huge number of entries that’s seeing apps created for and ported to the Ubuntu Touch platform. Canonical’s re-engineering of Ubuntu to allow full device convergence in the next Long-Term Support release shows no signs of slowing. I can’t say what shape the first Ubuntu phones will take, but I am sure that they’re not far off.