OSS Watch team blog
open source software advisory service
Markdown is more or less a set of conventions for semi-structured plain text that can be fairly easily converted into HTML (or other formats). The idea is that it’s easy to read and write using text editors. This makes it great for wikis, and for source code documentation. It’s the format used for most of the README files on GitHub, for example.

```markdown
# This is a header

Some text with bits in **bold** or in *italics*.

> I'm a blockquote!

## Sub-header

* Bullet one
* Bullet two
* Bullet three
```
Markdown has flourished despite the fact it isn’t really an open standard as such. There are numerous “flavours” of Markdown, such as GitHub-Flavoured Markdown, and different processing engines have their own extensions, particularly in areas where there is no common agreement on how to represent things such as tables and definition lists.
Despite this lack of consistency, Markdown works very well without ever becoming a de jure standard. The closest it’s got to standardisation is a community site on GitHub following a call to action by Jeff Atwood.
What has brought this into focus for me is the discussion around open formats for the UK government. Putting the (huge) issue of ODF versus OOXML to one side, I would actually prefer it if more government content was in an easy to read plain text format rather than in any flavour of whopping great office-style documents. In fact, wouldn’t it be excellent if they were to use Markdown?
Which is where the problem lies – it’s difficult for government to mandate or even recognise “standards” that have no clear provenance or governance model arising out of some sort of recognisable standards organisation. This isn’t a problem when it’s just a case of “using whatever works” as an individual developer (which is how sort-of standards like Markdown and RSS take off), but seems to be a major barrier when trying to formulate policy.
So sadly, unless there is a new concerted effort to create some sort of standardised Markdown, I don’t think my dream of reading government documents in Markdown using TextEdit or on GitHub is likely to become a reality.
With the same team behind it, Linux Voice has the same feel and a similar structure to old issues of Linux Format. However, Linux Voice aims to be different to other Linux publications in three key ways:
- It’s independent, so only answerable to readers.
- Nine months after publication, all issues will be licensed CC-BY-SA.
- 50% of the profits at the end of each financial year will be donated to free software projects, as chosen by the readers.
By presenting itself with these key principles, Linux Voice embodies in a publication the spirit of the community it serves, which provides a compelling USP for free software fans. On top of that, Linux Voice was able to get started thanks to a very successful crowdfunding campaign on IndieGoGo, giving the community a real sense of ownership.
Aside from the business model, the first issue contains some great content. There’s a 2-page section on games for Linux, which would have been hard to fill two years ago, but is now sure to grow. There’s a round-up of encryption tools looking at security, usability and performance, to help average users keep their data safe. There’s a bundle of features and tutorials, including home-brew monitoring with a Raspberry Pi and PGP email encryption. Plus, of course, letters from users, news, and the usual regulars you’d expect from any magazine.
I’m particularly impressed by what appears to be a series of articles about the work of some of the female pioneers of computing. Issue 1 contains a tutorial looking at the work of Ada Lovelace, and Issue 2 promises to bring us the work of Grace Hopper. It’s great to see a publication shining the spotlight on some of the early hackers, and it’s fascinating to see how it was done before the days of IDEs, text editors, or even in some cases electricity!
For your £6 (less if you subscribe) you get 114 pages jammed with great content, plus a DVD with various Linux distros and other software to play with. Well worth it in my opinion, and I look forward to Issue 2!
Mobile World Congress is running this week in Barcelona. While it’s predictable that we’ve seen lots of Android phones, including the big unveiling of the Galaxy S5 from Samsung, I’ve found it interesting to see the coverage of the other devices powered by open source technologies.
Mozilla announced their plans for a smartphone that could retail for as little as $25. It’s based on a new system-on-chip platform that integrates a 1GHz processor, 1GB of RAM and 2GB of flash memory, and will of course be running the open source Firefox OS. It’s very much an entry level smartphone, but the $25 price point gives real weight to Mozilla’s ambition to target the “next billion” web users in developing countries.
Ubuntu Touch is finally seeing the light of day on two phones, one from Chinese manufacturer Meizu and one from Spanish manufacturer Bq. Both phones are currently sold running Android, but will ship with Ubuntu later this year. The phones’ internals are designed with high-end performance in mind, with the Meizu sporting an 8-core processor and 2GB of RAM, clearly chosen to deliver Ubuntu’s fabled “convergence story”.
Rumours have abounded this year that Nokia has been planning to release an Android smartphone, and they confirmed the rumours were true at MWC, sort of. “Nokia X” will be a fork of Android with its own app store (as well as third-party ones) and a custom interface that borrows elements from Nokia’s Asha platform and Windows Phone. Questions were raised at the rumour mill over whether Microsoft’s takeover of Nokia’s smartphone business would prevent an Android-based Nokia being possible. However, Microsoft’s vice-president for operating systems Joe Belfiore said “Whatever they do, we’re very supportive of them,” while Nokia’s Stephen Elop maintains that the Windows-based Lumia range is still their primary smartphone product.
A slightly more left-field offering comes in the shape of Samsung’s Gear 2 “smartwatch” running Tizen, the apparently-not-dead-after-all successor to Maemo, Meego, LiMo, and all those other Linux-based mobile operating systems that never quite made it. The device is designed to link up to the Samsung Galaxy range of Android phones, but with the dropping of “Galaxy” from the Gear’s branding, perhaps we’ll be seeing a new brand of Tizen-powered smartphones from Samsung in the future.
On Friday Apple released a patch for a flaw in one of their core security libraries. The library is used both in Apple’s mobile operating system iOS, and their desktop operating system OSX. As of today, the desktop version has yet to be patched. This flaw, and its aftermath, are interesting for a number of reasons.
Firstly, it’s very serious. The bug means that insecure network connections are falsely identified as secure by the operating system. This means that the flaw has an impact across numerous programs; anything that relies on the operating system to negotiate a secure connection could potentially be affected. This makes a whole range of services like web and mail vulnerable to so-called ‘man-in-the-middle’ attacks where a disreputable network host intercepts your network traffic, and potentially thereby gains access to your personal information.
Secondly, the flaw was dumb. The code in question includes an unnecessarily duplicated ‘goto’, highlighted here:
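This is an abridged excerpt of the affected function from Apple’s published sslKeyExchange.c, with the surrounding code trimmed for brevity; the duplicated line is marked:

```c
static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa,
                                 SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus err;
    /* ... hashing of the handshake parameters trimmed ... */

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;  /* the rogue duplicate: always taken, with err still 0 */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    /* ... the actual signature verification trimmed: never reached ... */

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;  /* returns 0 ("no error") even though verification was skipped */
}
```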
It looks like a cut-and-paste error, as the rogue ‘goto’ is indented as though it is conditional when – unlike the one above it – it is not. There are many reasons a bug like this ought not to get through quality assurance. It results in unreachable code, which the compiler would normally warn about. It would have been obvious if the code had been run through a tool that checks coding style, another common best practice precaution. Apple have received a huge amount of criticism for both the severity and the ‘simplicity’ of this bug.
Thirdly, and this is where we take a turn into the world of free and open source software, the code in question is part of Apple’s open source release programme. That is why I can reproduce the source code up there, and why critics of Apple have been able to see exactly how dumb this bug is. So one effect of Apple making the code open source has been that – arguably – it has increased the anger and ridicule to which they have been exposed. Without the source being available, we would have a far less clear idea of how dumb a mistake this was. Alright, one might argue, open source release makes your mistakes clear, but it also lets anyone fix them. That is a good trade-off, you might say. Unfortunately, in this case, it is not that simple. Despite being open source, the security framework in question is not provided by Apple in a state which makes it easy to modify and rebuild. Third-party hackers have found it easier to fix the OSX bug by patching the faulty binary – normally a much more difficult route – rather than using Apple’s open source code to compile a fixed binary.
It is often argued that one key benefit of open source is that it permits code review by anyone. In this case, though, despite being a key security implementation and being available to review for over a year, this bug was seemingly not identified via source review. For me, this once again underlines that – while universal code review is a notional benefit of open source release – in practice it is the universal ability to fix bugs once they’re found that is the strongest argument for source availability strengthening security. In this case Apple facilitated the former goal but made the latter problematic, and thereby in my opinion seriously reduced the security benefit open source might have brought.
Finally, it is interesting to note that a large number of commentators have asked whether this bug might have been deliberate. In the atmosphere of caution over security brought about by Edward Snowden’s revelations, these questions naturally arise. Did Apple deliberately break their own security at the request of the authorities? Obviously we cannot know. However it is interesting to note the relation between that possibility and the idea that open source is a weapon against deliberate implantation of flaws in software.
Bruce Schneier, the security analyst brought in by The Guardian to comment on Snowden’s original documents, noted in his commentary that the use of free and open source software was a means of combating national security agencies and their nasty habit of implanting and exploiting software flaws. After all if you can study the source you can see the backdoors, right? Leaving aside the issue of compromised compiler binaries, which might poison your binaries even when the source is ‘clean’, the GotoFail incident raises another question about the efficacy of open source as a weapon against government snooping. Whether deliberate or not, this flaw has been available for review for over a year.
The internet is throbbing with the schadenfreude of programmers and others attacking Apple over their dumbness. Yet isn’t another lesson of this debacle that we cannot rely on open source release on its own to act as a guarantee that our security-critical code is neither compromised nor just plain bad?
2014 marks the 10th anniversary of the Google Summer of Code, a competition that brings students and open source communities together through summer coding projects.
Each year the competition gathers ideas for contributions from a wide range of projects and foundations, and invites proposals from students for tackling them. As well as the experience of working on major projects with expert mentors, students are also paid a stipend of $5,500 for their effort.
With the date for accepting proposals only a few weeks away (10th-21st March), it’s time to get a move on!
Personally I think GSoC is an excellent opportunity for students to develop real-world development and project participation skills and make connections that will be useful after they graduate, and I’m always surprised how few students from the UK apply each year.
If you are a lecturer in a university, it’s a good time to raise awareness of the competition with your students. You can download a flyer here, for example.
Time too tight? Well, after the Google Summer of Code, there is also the VALS Semester of Code in the Spring.
OSS Watch, supported by Jisc, has conducted the National Software Survey roughly every two years since 2003. The survey studies the status of open and closed source software in both Further Education (FE) and Higher Education (HE) institutions in the UK. OSS Watch is a non-advocacy information service covering free and open source software. We do not necessarily advocate the adoption of free and open source software. We do however advocate the consideration of all viable software solutions – free or constrained, open or closed – as the best means of achieving value for money during procurement.
Throughout this report the term “open source” is used for brevity’s sake to indicate both free software and open source software.
Looking back over 10 years of surveys, we can see how open source has grown in terms of its impact on ICT in the HE and FE sectors. For example, when we first ran our survey in 2003, the term “open source” was to be found in only 30% of ICT policies – and in some of those it was because open source software was prohibited! In our 2013 survey we now find open source considered as an option in the majority of institutions.
Open source software has also grown as an option for procurement; while only a small number of institutions use mostly open source software, all institutions now report they use a mix of open source and closed source.
However, the picture is not all positive for open source advocates, and we’ve noticed the differences between HE and FE becoming more pronounced.
At FOSDEM last weekend I saw an excellent talk by Richard Fontana entitled Taking License Compatibility Semi-Seriously. The talk took a look at the origins of compatibility issues between free and open source software licences, how efforts have been made to either address them directly or dodge around them, and ask whether it’s worth worrying about them in the first place. This post will summarise the talk and delve into some of the points I found most interesting.
The idea of FOSS license compatibility isn’t one that was created alongside the FOSS movements, but rather one that came about when projects started to combine code released under different licences, particularly copyleft and non-copyleft licenses. As such, there’s no real definition of what license compatibility means, and so people tend to defer to received doctrine (such as the FSF’s list of GPL compatible licenses), or leave it up to lawyers to sort out.
Early versions of KDE and Qt created the most significant license compatibility issue in the FOSS world. Qt’s original proprietary license, and later the QPL under which it was relicensed, were considered incompatible with the GPLv2 under which the KDE project (or at least, parts of it) was licensed. Qt is now dual-licensed under the LGPL or a commercial proprietary license, which fixes this incompatibility, but the FSF also suggest a remedy whereby a specific exception is added to the QPL allowing differently-licensed software to be treated under the terms of the GPL.
Another common incompatibility issue with FOSS licenses has arisen where projects have wanted to combine GPLv2 code with code under the Apache License v2 (ASLv2). The FSF consider the patent termination and indemnification provisions in ASLv2 to make it incompatible with GPLv2; however, they believed these provisions to be a good thing, so ensured that GPLv3 was compatible with it. Indeed, the GPLv3 went on to codify what it meant for another license to be compatible with it.
While this means at first glance that only code explicitly licensed as GPLv3 and ASLv2 can be used together while GPLv2 and ASLv2 cannot, this isn’t necessarily the case. The FSF encouraged projects to license their code “GPLv2 or later”, in the hope that when future versions of the license were released, they would be encouraged to transition to the new license and in doing so benefit from features such as ASLv2 compatibility. However, this method of licensing can be interpreted as “GPLv2 with the option to treat it as GPLv3 instead”, meaning that for the purposes of compatibility it can be treated as GPLv3, while remaining “GPLv2 or later”.
This has the opposite effect of the FSF’s intention by encouraging projects to remain “GPLv2 or later” for the added flexibility it provides while avoiding forcing licensees to be bound by parts of GPLv3 that either party may not like.
While the above trick won’t work for code licensed “GPLv2 only”, a similar thing is possible for code licensed “LGPLv2 only”. As the LGPLv2 is intended for library code, it contains a clause allowing you to re-license the code under GPLv2 or any later version, in case you wanted to include it in non-library software. This means that you could, for the purposes of compatibility, treat the code as GPLv3. The Artistic License 2.0 and EUPL contain similar re-licensing clauses.
What all of this shows us is that while it’s a complex issue, it’s a somewhat artificial one, and there’s all sorts of tricks one can use to circumvent it. In practice, these compatibility “rules” are rarely followed, and rarely enforced.
In response to this, Richard Fontana suggests that we borrow the idea of “duck typing” from programming to make our lives easier. If a FOSS project wants to combine some code under the GPL with code under a more permissive, possibly incompatible license, as long as they’re willing to follow the convention of distributing the source as though it was all GPL, the community still gets the benefit without the additional headache of worrying over which bits are allowed to be combined with which.
In a recent training session, I discussed commitment gradients – how much extra effort is involved to move between each stage of involvement within a project. After the session I was asked for some examples of commitment gradients and how it’s possible to make them shallower, so it’s easier for people to progress their involvement in a project.
This graph represents a desirable commitment gradient. The move from knowing about the project to using and discussing it is fairly trivial. Reporting bugs requires some extra knowledge, e.g. using the bug tracker, but isn’t a significantly harder step. Contributing patches is slightly harder as it requires knowledge of the programming language and awareness of things such as coding styles. Finally, moving into a leadership role requires significant additional effort, as leaders need to have an awareness of all aspects of the project, including understanding of the governance model, as well as having gained the confidence of other community members.

Using the software
This graph represents a project where the software is so hard to install that you need to have intimate knowledge of the project to even get it working. For example, if configuration settings are hard-coded, setting the software up involves knowledge of the language, changing the code, then compiling it yourself before you even get started. By this point, you know the software so well that there’s nearly no extra effort required for the following stages, but most people won’t bother, and your user base will suffer.
To make the commitment gradient lower at this stage, a project should make it easy to acquire, install and configure its outputs. For example, having packaged versions in the software repositories or app stores for the target platforms makes installation easier. Where this isn’t appropriate, an automated installer requiring little technical knowledge (such as that used by WordPress) can be used as an option for beginners, with a more configurable “expert’s mode” available for more experienced users. For configuration, being able to change settings through the software’s interface rather than in code or configuration files is helpful.

Discussing the project
This graph represents a project where the software is easy to use, but the community has an elitist attitude and is hostile to newcomers. Responses to questions asked assume deep technical understanding of the software, and people who don’t have such understanding are expected to find out for themselves before they engage with those who do.
The solution to this is to promote a culture of moderation and mentorship which ensures that discussions are conducted in a tolerant way that allows newcomers to learn.
Another issue at this stage may be the technology used – for example, if all user support takes place on Usenet newsgroups, many people won’t know how to access them, or the conventions they are expected to follow. Using channels that new users will be more familiar with, such as web forums or social media, can help lower the commitment gradient here.

Reporting bugs
The step from discussing the project to reporting a bug can be high where the project uses a complex bug tracker, where there is an involved process to get access to the tracker, and where gathering the information required to submit a useful report involves intimate knowledge of the software.
The Ubuntu project lowers the gradient at this stage through use of the ubuntu-bug utility. Any user can run the command ubuntu-bug <software name>, and have a template bug report generated with all the required information about their environment, and any relevant logs or crash reports. All they then need to do is write a description of the problem. Again, a culture of moderation and mentorship is useful here to help guide people into writing useful reports.

Submitting Patches
Submitting patches inevitably involves a step up in terms of effort, as the contributor needs sufficient knowledge of the programming language, the source code, a development environment and so on. However, the commitment gradient can be made too steep if contributors are expected to follow a complex or poorly documented coding style, if they are expected to do a lot of manual testing before submission, and if the actual submission process is esoteric.
The main way to lower the gradient here is documentation, and automation where possible. Coding styles should be well-defined and documented. Tests should be automated using a unit testing tool. The submission process should be well documented; using a well-known workflow such as GitHub’s pull requests can help here.
The Moodle community has a tool called “code checker” which is packaged as a Moodle plugin, and allows developers to analyse their code to ensure it meets the project’s coding style. This allows them to quickly identify and fix any issues before submission, and allows reviewers to quickly direct them to instructions on how to fix any discrepancies.

Taking Charge
Again, a large step up at this stage is inevitable, and in some respects desirable, as a project probably doesn’t want to be led by someone who hasn’t shown sufficient commitment. There may also be legal requirements for the people in charge to adhere to.
However, excessive or unclear requirements for how a person might get voting rights within a project may make this step too large, so these need to be fair and well-documented. Also, a leader will need to have a good understanding of the project’s governance model and its decision-making process, so these need to be well-documented too.
If a project is large enough, it may be possible to allow different levels of commitment at this stage, so not everyone who has a say on technical issues is also required to, for example, make budget decisions.
We’ve just updated our Open Source Options for Education list, providing a list of alternatives to common proprietary software used in schools, colleges and universities. Most of the software we list is provided by the academic and open source communities via our publicly editable version. Some new software we’ve added in this update includes:

SageMath
SageMath is a package made from over 100 open source components including R and Python with the goal of creating “a viable free open source alternative to Magma, Maple, Mathematica and Matlab.” Supported by the University of Washington, the project is currently trialling SageMath Cloud, a hosted service allowing instant access to the suite of SageMath tools with no setup required.

R and R Commander
R is the go-to language for open source statistical analysis, and R Commander provides a graphical interface to make running R commands easier. Steven Muegge got in touch to let us know that he uses the two projects for teaching graduate research methods at Carleton University. Thanks, Steven!

Gibbon
Gibbon is a management system combining features of a VLE (such as resource sharing, activities and markbooks) and MIS systems (such as attendance, timetables, and student information). The system was developed by the International College of Hong Kong. Thanks to Ross Parker for letting us know about Gibbon.

OwnCloud Documents
The recent release of OwnCloud 6 includes a new tool called OwnCloud Documents allowing real-time collaboration on text documents. Collaborators can be other users on the OwnCloud system, or anonymous users with the link from the author. With support for LDAP and Active Directory, could this represent a viable alternative to Google Docs for privacy-conscious institutions?
Earlier in the year, I wrote a case study on Koha, the open source library management system released under the GPL, detailing the history of the project and how the sale of assets had created confusion and disagreements between the Horowhenua Library Trust (HLT), who originally commissioned the system, and PTFS, who now holds the copyright for most of the project’s original assets and publishes their own fork under the name LibLime Koha.
At the time of writing, the major issue at hand was PTFS’s trademark application for the mark KOHA in New Zealand, which HLT and Catalyst IT, who provide commercial support for Koha, were opposing. This month, the case was settled, with the commissioner ruling against PTFS and rejecting the application.
HLT and Catalyst opposed the application on 6 grounds:
- The mark was likely to deceive or cause confusion.
- The application for the mark was contrary to New Zealand law (specifically, The Fair Trading Act 1986), on the basis of ground 1.
- Use of the mark would amount to passing off, also in breach of New Zealand law.
- The mark was identical to an existing trade mark in use in New Zealand.
- PTFS wasn’t the rightful owner of the mark, HLT was.
- The application was made in bad faith, on the basis that HLT owns the mark.
Interestingly, grounds 3, 4, and 5 were rejected by the commissioner, largely on the grounds that HLT’s use of the name Koha didn’t constitute a trade mark. When HLT originally open sourced Koha, the evidence presented showed that it intended Koha to be given away for free so other libraries could benefit from it. The commissioner didn’t consider this to constitute trading, and therefore Koha, while identical to the mark being registered, didn’t constitute a trade mark.
As ground 5 didn’t show HLT to be the rightful owner, ground 6 was also rejected: PTFS couldn’t be seen to be acting in bad faith by trying to register a mark that clearly belonged to someone else.
However, HLT and Catalyst’s success in this case hinges on the fact that when the trademark application was made in 2010, HLT’s Koha software had existed for 10 years and was well known in New Zealand’s library sector. Since the commissioner considered the mark being registered to be identical to the name Koha, and HLT’s software to be the same class of product as PTFS’s, it was found that the two could be confused by a substantial number of people, allowing ground 1 to succeed.
Furthermore, the cited sections of the Fair Trading Act had a similar but stricter requirement that there not be a real risk that such a confusion or deception might happen. The commissioner believed that due to Koha’s prominence in the industry there was a real risk in this case, allowing ground 2 to succeed.
The application for the trade mark has now been defeated, with HLT and Catalyst being awarded nearly 7,500 NZD in legal costs between them. What effect this will have on the use of the Koha name in New Zealand isn’t clear – since HLT have been shown not to own the mark themselves, they are unlikely to be able to stop PTFS from using the name in New Zealand should they choose to. However, the Koha community in New Zealand can now rest easy knowing that they won’t be stopped from continuing to use the name as they always have.
I hope that other open source software projects use the case of Koha as a lesson to ensure that their branding and IP are well-managed, so that cases like this can be avoided.
As a result of the OSS Watch National Software Survey, we now have 10 years of survey data on open source in universities and colleges in the UK, so we can look at some long term trends. Today I’ve been looking at institutional IT policies.
Back in 2003, most IT policies in colleges and universities in the UK didn’t mention open source at all, while today that position is reversed.
We’ve also seen the demise of policies that prohibit open source, while at the same time policies that state a preference for open source also seem to be on the way out.
So, are universities and colleges moving towards a “level playing field” approach to open source and setting “equal consideration” policies? Perhaps; though IT policies are only a part of that equation.
We also have survey data from 2008-2013 for what types of software are being considered for procurement and deployment in practice.
So, equal consideration of open source software is on the increase, but there is still a long way to go; and if the rate of change over the past five years is anything to go by, we’ll never get there!
Perhaps what we’re seeing is a lag between changes in policy filtering through into changes to processes and practices – or perhaps it’s not filtering through at all.
For more information on open source policies and procurement processes, read our briefing note Decision factors for open source software procurement.
The full results of the 2013 OSS Watch National Software Survey will be published in January.
Across the 2 days, Scott Wilson and I presented sessions on the varieties of communities and why we form them, communication within online communities, governance of free and open source software projects, leadership, and conflict resolution.
While we were only able to have a small group from the vast community of TYPO3 at the workshop, those who did attend represented a range of teams from the community, including developers from both the TYPO3 CMS and TYPO3 Neos projects, as well as members of the marketing team and the community manager, Ben van ‘t Ende.
One of the great things to see was how open and honest the attendees were about the issues we discussed and the challenges they faced. A few points were of particular interest to me.
When we discussed the reasons we form communities, there was no clear agreement on what the shared interest of the TYPO3 community was. Defining this will be key to driving towards the community’s common goals in the future.
The community uses a myriad of communication channels, and the purpose of each isn’t always clear-cut. There’s also been a general lack of moderation culture, which has led to a few poisonous people getting out of hand. Instilling a sense of shared values and leading by example is needed over time to help ensure that discussions remain constructive.
There is a visible lack of diversity in the community, both shallow-level (most contributors are white, male and located in Germany) and deep-level (there are lots of highly skilled developers, but fewer who are learning or from other disciplines). These issues could affect the long-term sustainability of the community if the barriers for new, less-skilled contributors are too high. Engagement with users and less technical members of the community will also be key to shaping the community’s goals.
These problems aren’t the kind that will be fixed by a two-day workshop or a change of policy; it’s going to take commitment and leadership from those who believe in the community to move things forward. One thing that we definitely saw from this workshop is that those people are present and highly active in the TYPO3 community. We look forward to the possibility of working together again.
Our presentation slides from the workshop can be found on SlideShare.
A special thanks to Christian Händel for making sure we made it to Altlenigen and back!
The concept of a software patch is well understood by users of version control systems such as CVS and Subversion, but with the recent rise in popularity of distributed version control systems (DVCSs) such as Git, a new method for submitting changes to a project has evolved: the pull request.
While not a feature of DVCS software, pull requests are a common workflow used by projects that manage their code with a DVCS, so knowing what they are and how to do them is important if you’re planning to contribute to such a project. You can find out more in our new briefing note, What Is A Pull Request?
Recently OSS Watch were invited to produce a video for an online course discussing how to get involved in open source software projects. In the video, we discuss why you’d want to contribute to open source projects, how to choose the right open source project for you, the communication channels used by open source communities, how to make your first contributions and what you can contribute other than code, as well as sharing a little of our own experiences contributing to open source projects.
To view the video in HD, click the YouTube icon to view it there.
Last month we officially kicked off a new EU project aimed at getting students involved in free and open source software projects.
Inspired by Google Summer of Code, the idea is to enable students to obtain academic recognition for their contributions to open software communities.
The project itself is called VALS; it will be running the first pilot Semester of Code programme in 2014, using the same Melange platform used to run GSoC.
For this to work in practice we need to have an effective model for co-mentoring, with communication between the student, their academic supervisor, and mentors from the software community.
We also need to put in place assessment and curriculum changes at universities to support experiential learning by students in open development. For example, universities need to recognise and assess not just what students produce in terms of contributed artefacts (such as code and documentation), but also the processes of engagement and communication with the community, and the experience students develop in using issue tracking and source control systems.
Some of the common problems we’ve seen with other mentoring programmes have been a lack of commitment from students in actually completing their contribution, and breakdowns of communication between students and mentors; these are some of the things we need to consider in how the programme is designed. For example, we need to make it clear how issues can be resolved, and also ensure that all parties involved have appropriate expectations.
I think for open source communities it’s also critical that we look at how engagement by students can be sustained beyond a single contribution to become an ongoing part of their professional development, at university and beyond.
The project team includes OSS Watch, OpenDirective, RayCom, MindShock, and the universities of Salamanca, Udine, Bolton and Cyprus, though the programme itself is going to be open.
As the project has only just started we don’t have a website yet, but if your community or university is interested in participating in the pilot programme next year, let us know at email@example.com.
Over the weekend I started playing Doom 3 BFG Edition, a re-release of the mid-00s first person shooter. The reason I’m talking about this here is that, as we’ve discussed before, id Software, who make Doom 3, have a policy of open sourcing the code for their games.
Predictably, this hasn’t posed a problem for the open source community, and a bit of googling turned up a GitHub fork of id’s code with support for Linux. The code relies on the SDL and OpenAL libraries to handle input and audio respectively, but once those dependencies were installed, it compiled for me on Ubuntu 13.10 and 12.04 LTS with no problems.
Alongside the resulting binary, actually playing the game requires the commercial data files, which aren’t distributed freely. Since the game’s distributed using the Steam DRM system, you need to install a copy of the Windows Steam client, install the game, then copy the files into place.
It’s possible to install the game files using WINE, but I was using a laptop which happened to be dual-booting Windows so I installed the game as normal on Windows, then switched to Linux and created a symbolic link to the data files on the Windows disk partition.
There’s a few caveats to note about the open-source version of this game. Firstly, trying to run the game with AMD graphics caused the game to crash with OpenGL errors. Reading bug reports shows that this may be a problem with driver compatibility (some people have gotten it working), but using a system with NVidia graphics worked flawlessly. The game also uses a couple of non-free components which can’t be included in the GPL code: the Bink video codec and the “Carmack’s Reverse” shadow stencilling technology licensed from Creative. This means that the odd in-game video is missing, although this doesn’t really detract from the game-play as the audio still plays.
The ease with which I was able to find a solution to play this unsupported Windows game natively on Linux is a real testament to the open source community’s ability and willingness to solve and share solutions to problems. I was also really impressed by how well the game ran under these circumstances, showing how bright a future Linux has as a gaming platform.