Feeds

Using Travis CI to test Docker builds

LinuxPlanet - Mon, 2016-01-11 10:00

In last month's article we discussed "Dockerizing" this blog. What I left out of that article was how I also used Docker Hub's automatic builds functionality to automatically create a new image every time changes are made to the GitHub repository which contains the source for this blog.

The automatic builds are useful because I can simply make changes to the code or articles within the repository and once pushed, those changes trigger Docker Hub to build an image using the Dockerfile we created in the previous article. As an extra benefit the Docker image will also be available via Docker Hub, which means any system with Docker installed can deploy the latest version by simply executing docker run -d madflojo/blog.

The only gotcha is: what happens if those changes break things? What if a change prevents the build from occurring, or worse, prevents the static site generator from correctly generating pages? What I need is a way to know whether changes are going to cause issues before they are merged to the master branch of the repository and deployed to production.

To do this, we can utilize Continuous Integration principles and tools.

What is Continuous Integration

Continuous Integration, or CI, is something that has existed in the software development world for a while, but it has recently gained a larger following in the operations world. The idea of CI came about to address the problem of multiple developers creating integration problems within the same code base: basically, two developers working on the same code create conflicts and don't discover those conflicts until much later.

The basic rule is that the later you find issues within code, the more expensive (in time and money) it is to fix them. The idea behind solving this is for developers to commit their code into source control often, even multiple times a day. With code commits being pushed frequently, there is less opportunity for code integration problems, and when they do happen they are often a lot easier to fix.

However, committing code multiple times a day doesn't by itself solve integration issues. There also needs to be a way to ensure the code being committed is quality code and actually works. This brings us to another concept of CI: every time code is committed, the code is built and tested automatically.

In the case of this blog, the build would consist of building a Docker image, and testing would consist of various tests I've written to ensure the code that powers this blog is working appropriately. To perform these automated builds and test executions we need a tool that can detect when changes happen and perform the necessary steps; we need a tool like Travis CI.

Travis CI

Travis CI is a Continuous Integration tool that integrates with GitHub and performs automated build and test actions. It is also free for public GitHub repositories, like this blog for instance.

In this article I am going to walk through configuring Travis CI to automatically build and test the Docker image being generated for this blog, which will give you (the reader) the basics of how to use Travis CI to test your own Docker builds.

Automating a Docker build with Travis CI

This post is going to assume that we have already signed up for Travis CI and connected it to our public repository. This process is fairly straightforward, as it is part of Travis CI's on-boarding flow. If you find yourself needing a good walkthrough, Travis CI does have a getting started guide.

Since we will be testing our builds and do not wish to impact the main master branch, the first thing we are going to do is create a new git branch to work with.

$ git checkout -b building-docker-with-travis

As we make changes to this branch we can push the contents to GitHub under the same branch name and validate the status of Travis CI builds without those changes going into the master branch.

Configuring Travis CI

Within our new branch we will create a .travis.yml file. This file essentially contains configuration and instructions for Travis CI. Within this file we will be able to tell Travis CI what languages and services we need for the build environment as well as the instructions for performing the build.

Defining the build environment

Before starting any build steps we first need to define what the build environment should look like. For example, since the hamerkop application and associated testing scripts are written in Python, we will need Python installed within this build environment.

While we could install Python with a few apt-get commands, since Python is the only language we need within this environment it's better to define it as the base language using the language: python parameter within the .travis.yml file.

language: python
python:
  - 2.7
  - 3.5

The above configuration informs Travis CI to set the build environment to a Python environment; specifically for Python versions 2.7 and 3.5 to be installed and supported.

The syntax used above is in YAML format, which is a fairly popular configuration format. In the above we are essentially defining the language parameter as python and setting the python parameter to a list of versions 2.7 and 3.5. If we wanted to add additional versions it is as simple as appending that version to this list, as in the example below.

language: python
python:
  - 2.7
  - 3.2
  - 3.5

In the above we simply added version 3.2 by adding it to the list.

Required services

As we will be building a Docker image we will also need Docker installed and the Docker service running within our build environment. We can accomplish this by using the services parameter to tell Travis CI to install Docker and start the service.

services:
  - docker

Like the python parameter, the services parameter is a list of services to be started within our environment. As such, we can also include additional services by appending to the list. If we needed Docker and Redis, for example, we could simply append a line after specifying the Docker service.

services:
  - docker
  - redis-server

In this example we do not require any service other than Docker, however it is useful to know that Travis CI has quite a few services available.

Performing the build

Now that we have defined the build environment we want, we can execute the build steps. Since we wish to validate a Docker build we essentially need to perform two steps, building a Docker container image and starting a container based on that image.

We can perform these steps by simply specifying the same docker commands we used in the previous article.

install:
  - docker build -t blog .
  - docker run -d -p 127.0.0.1:80:80 --name blog blog

In the above we can see that the two docker commands are specified under the install parameter. This parameter is actually a defined build step for Travis CI.

Travis CI has multiple predefined steps used during builds which can be called out via the .travis.yml file. In the above we are defining that these two docker commands are the steps necessary to install this application.

Testing the build

Travis CI is not just a simple build tool; it is a Continuous Integration tool, which means its primary function is testing. That means we need to add a test to our build; for now we can simply verify that the Docker container is running, which can be done with a simple docker ps command.

script:
  - docker ps | grep -q blog

In the above we defined our basic test using the script parameter. This is yet another build step, which is used to call test cases. The script step is required; if omitted, the build will fail.

Pushing to GitHub

With the steps above defined we now have a minimal build that we can send to Travis CI; to accomplish this, we simply push our changes to GitHub.

$ git add .travis.yml
$ git commit -m "Adding docker build steps to Travis"
[building-docker-with-travis 2ad7a43] Adding docker build steps to Travis
 1 file changed, 10 insertions(+), 32 deletions(-)
 rewrite .travis.yml (72%)
$ git push origin building-docker-with-travis

During the sign up process for Travis CI, you are asked to link your repositories with Travis CI. This allows it to monitor the repository for any changes. When changes occur, Travis CI will automatically pull down those changes and execute the steps defined within the .travis.yml file, which in this case means executing our Docker build and verifying it worked.

As we just pushed new changes to our repository, Travis CI should have detected those changes. We can go to Travis CI to verify whether those changes resulted in a successful build or not.

Travis CI will show a build log for every build; at the end of the log for this specific build we can see that the build was successful.

Removing intermediate container c991de57cced
Successfully built 45e8fb68a440

$ docker run -d -p 127.0.0.1:80:80 --name blog blog
45fe9081a7af138da991bb9e52852feec414b8e33ba2007968853da9803b1d96

$ docker ps | grep -q blog

The command "docker ps | grep -q blog" exited with 0.

Done. Your build exited with 0.

One important thing to know about Travis CI is that most build steps require commands to execute successfully in order for the build to be marked as successful.

The script and install steps are two examples of this: if any of our commands fail and do not return a 0 exit code, then the whole build will be marked as failed.

If this happens during the install step, the build will be stopped at the exact step that failed. With the script step however, the build will not be stopped. The idea behind this is that if an install step fails, the build will absolutely not work. However, if a single test case fails only a portion is broken. By showing all testing results users will be able to identify what is broken vs. what is working as expected.

Adding additional tests

While we now have Travis CI able to verify the Docker build is successful, there are still other ways we could inadvertently break this blog. For example, we could make a change that prevents the static site generator from properly generating pages; this would break the site within the container but not necessarily the container itself. To prevent a scenario like this, we can introduce some additional testing.

Within our repository there is a directory called tests, which contains three more directories: unit, integration and functional. These directories contain various automated tests for this environment. The first two types of tests, unit and integration, are designed specifically to test the code within the hamerkop.py application. While useful, these tests are not going to help test the Docker container. The last directory, functional, however, contains automated tests that can be used to test the running Docker container.

$ ls -la tests/functional/
total 24
drwxr-xr-x 1 vagrant vagrant  272 Jan  1 03:22 .
drwxr-xr-x 1 vagrant vagrant  170 Dec 31 22:11 ..
-rw-r--r-- 1 vagrant vagrant 2236 Jan  1 03:02 test_broken_links.py
-rw-r--r-- 1 vagrant vagrant 2155 Jan  1 03:22 test_content.py
-rw-r--r-- 1 vagrant vagrant 1072 Jan  1 03:13 test_rss.py

These tests are designed to connect to the running Docker container and validate the static site's content.

For example, test_broken_links.py will crawl the website being served by the Docker container and check the HTTP status code returned when requesting each page. If the return code is anything but 200 OK, the test will fail. The test_content.py test will also crawl the site and validate that the content returned matches a certain pattern. If it does not, the test will again fail.
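
To give a rough idea of what such a functional test might look like, here is a minimal sketch using Python's unittest and the requests library. The class name mirrors the test output shown later in this article, but the paths checked and the overall structure are assumptions for illustration, not the actual contents of test_broken_links.py.

# Illustrative sketch only; the real test_broken_links.py may differ.
import unittest
import requests

class CrawlSite(unittest.TestCase):
    base_url = "http://127.0.0.1:80"

    def runTest(self):
        """Request a few pages from the running container and fail on any non-200 status."""
        for path in ["/", "/index.html"]:
            response = requests.get(self.base_url + path)
            self.assertEqual(response.status_code, 200,
                             "{0} returned {1}".format(path, response.status_code))

if __name__ == "__main__":
    unittest.main()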

What is useful about these tests is that, even though the static site is running within a Docker container, we are still able to test the site's functionality. If we were to add these tests to the Travis CI configuration, they would also be executed for every code change, providing even more confidence about each change being made.

Installing test requirements in before_script

To run these tests via Travis CI we will simply need to add them to the script section as we did with the docker ps command. However, before they can be executed these tests require several Python libraries to be installed. To install these libraries we can add the installation steps into the before_script build step.

before_script:
  - pip install -r requirements.txt
  - pip install mock
  - pip install requests
  - pip install feedparser

The before_script build step is performed before the script step but after the install step, making before_script the perfect location for steps that are required by script commands but are not part of the overall installation. Since the before_script step is not executing test cases, it, like the install step, requires all commands to succeed before moving on to the script build step. If a command within the before_script build step fails, the build will be stopped.

Running additional tests

With the required Python libraries installed we can add the test execution to the script build step.

script:
  - docker ps | grep -q blog
  - python tests.py

These tests can be launched by executing tests.py, which will run all three automated test suites: unit, integration and functional.
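
As a rough sketch of how such a single entry point can work (an assumption about its structure, not the actual contents of tests.py), the three test directories can be combined into one unittest suite:

# Hypothetical sketch of a combined test runner; the real tests.py may differ.
import unittest

if __name__ == "__main__":
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    # Pull in the unit, integration and functional tests described above.
    for directory in ("tests/unit", "tests/integration", "tests/functional"):
        suite.addTests(loader.discover(directory))
    unittest.TextTestRunner(verbosity=2).run(suite)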

Testing the build again

With the tests added we can once again push our changes to GitHub.

$ git add .travis.yml
$ git commit -m "Adding tests.py execution"
[building-docker-with-travis 99c4587] Adding tests.py execution
 1 file changed, 14 insertions(+)
$ git push origin building-docker-with-travis

After pushing our updates to the repository we can sit back and wait for Travis to build and test our application.

######################################################################
Test Runner: Functional tests
######################################################################
runTest (test_rss.VerifyRSS)
Execute recursive request ... ok
runTest (test_broken_links.CrawlSite)
Execute recursive request ... ok
runTest (test_content.CrawlSite)
Execute recursive request ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.768s

OK

Once the build completes we will see the above message in the build log, showing that Travis CI has in fact executed our tests.

Summary

With our builds successfully processing let's take a final look at our .travis.yml file.

language: python
python:
  - 2.7
services:
  - docker
install:
  - docker build -t blog .
  - docker run -d -p 127.0.0.1:80:80 --name blog blog
before_script:
  - pip install -r requirements.txt
  - pip install mock
  - pip install requests
  - pip install feedparser
script:
  - docker ps | grep -q blog
  - python tests.py

In the above we can see that our Travis CI configuration consists of three build steps: install, before_script and script. The install step is used to build and start our Docker container, the before_script step is simply used to install the libraries required by the test scripts, and the script step is used to execute those test scripts.

Overall, this setup is pretty simple and something we could test manually outside of Travis CI. The benefit of having Travis CI though is that all of these steps are performed for every change, no matter how minor it is.

Also since we are using GitHub, this means Travis CI will append build status notifications on every pull request as well, like this one for example. With these types of notifications I can merge pull requests into the master branch with the confidence that they will not break production.

Building a Continuous Integration and Deployment pipeline

In last month's article we explored using Docker to package and distribute the application running this blog. In this article, we have discussed leveraging Travis CI to automatically build that Docker image as well as performing functional tests against it.

In next month's article, we are going to take this setup one step further by automatically deploying these changes to multiple servers using SaltStack. By the end of the next article we will have a full Continuous Integration and Deployment workflow defined which will allow changes to be tested and deployed to production without human interaction.


Posted by Benjamin Cane
Categories: FLOSS Project Planets

Best Laid Plans

LinuxPlanet - Thu, 2016-01-07 19:38

Hello all, we meet again. In my last post I said I’d be writing about technology here in the future rather than my ongoing health saga. As the title of this post suggests though, “the best laid plans of mice and men often go awry”. Here’s some info on the quote. I was recently informed by The Christie that they are postponing my operation with less than a week to go. I was due to be admitted on Jan 13th and now that’s been put back to Feb 10th, with the actual operation to take place on Feb 11th.

It’s only a slip of 4 weeks I know and it’s not the end of the world but it is frustrating when I just want to get this done so I can recover, get back to work and hopefully get on with my life. Every extra week adds up and it can start to feel like time is dragging on but I’ll get there.

So what’s the reason for the delay? An emergency case they need to deal with first apparently. With such a rare and specialised surgical procedure I suppose there was always a danger of delay. In some ways I should be glad that my case isn’t deemed as critically urgent and they feel I can wait 4 more weeks. There must be other patients in a much worse condition. Every cloud has a silver lining and all that.

Only 500 of these operations have been done at The Christie in the 10 years since it was first pioneered, so that illustrates how rare it is. Right now I can’t say I feel fantastic but I’m not in pain and I am managing to do some things to keep busy. I guess it’s a case of hurry up and wait. So I have to be a patient patient.

The Google Pixel C

However, in other (nicer) news I just got a Google Pixel C last week and I’m actually writing this on it right now. It’s the new flagship 10.2 inch Android tablet from Google and the first to be 100% designed by them, right down to the hardware. The Pixel team developed it alongside their fancy but rather expensive Chromebooks. It has Android 6.0.1 installed at the moment and is effectively a Nexus device in all but name. That means it will be first to receive new versions of Android and Android N is due in a few months. I expect it will be largely tailored to this device and I expect good things.

I needed a new tablet but I also wanted to get something that could replace most of the functions I’d normally do with the laptop. In the interests of fairness I looked at the Microsoft Surface, Apple iPad Pro and a variety of convertible laptops, including the ASUS Transformer series. I decided this was by far the best option right now. It’s something of a personal experiment to see whether a good tablet like this (with a Bluetooth mouse and keyboard) can really cut it as a laptop replacement. I am also helped in this project by the work I’ve done on my server beefing up hardware and configuring KVM so I can use a remote desktop. I’ll write up some proper thoughts on all this to share with you very soon. At least I have a little more time to do that now before I head off to the hospital.

Take care out there, Happy New Year and I’ll speak to you again soon,

Dan

Categories: FLOSS Project Planets

Happy New Year & Browser and OS stats for 2015

LinuxPlanet - Wed, 2016-01-06 12:46

I’d like to wish everyone a happy new year on behalf of the entire LQ team. 2015 has been another great year for LQ and we have quite a few exciting developments in store for 2016, including a major code update that is now *way* overdue. As has become tradition, here are the browser and OS statistics for the main LQ site for all of 2015 (2014 stats for comparison).

Browsers

Chrome             47.37%
Firefox            37.81%
Internet Explorer   6.86%
Safari              4.90%
Opera               1.11%
Edge                0.42%

For the first time in many years, browser stats have not changed in any meaningful way from the previous year. Chrome is very slightly up, and Firefox and IE are very slightly down (although Edge does make its initial appearance in the chart).

Operating Systems

Windows    52.42%
Linux      31.45%
Macintosh  10.75%
Android     3.01%
iOS         1.53%

Similar to the browser, OS shares have remained quite stable over the last year as well. 2015 seems to have been a year of stability in both markets, at least for the technical audience that comprises LinuxQuestions.org. Note that Chrome OS has the highest percentage of any OS not to make the chart.
I’d also like to take this time to thank each and every LQ member. You are what make the site great; without you, we simply wouldn’t exist. I’d like to once again thank the LQ mod team, whose continued dedication ensures that things run as smoothly as they do. Don’t forget to vote in the LinuxQuestions.org Members Choice Awards, which recently opened.

–jeremy


Categories: FLOSS Project Planets

Checking for data validity in libreoffice spreadsheets

LinuxPlanet - Wed, 2016-01-06 11:39
When entering data into a spreadsheet, we might at times want to ensure that the data entered lies within a specified range or is equal to a certain number or value. To ensure this we can use the data validity option in the LibreOffice spreadsheet.

To enable data validity, select the range of cells to which the validity needs to be applied, then select Data -> Validity from the menu.



This will pop up a dialog as shown below.



In the Criteria tab the "Allow" option lets us choose what type of values are valid. In the "Data" option we can choose how the value should be compared, i.e. whether it should be greater than or less than a number, etc., and the text field allows us to enter the maximum value to be allowed in the cells.

Let us say we want to allow "Whole numbers" which are "less than" 100; the settings will then be as shown below.



Now whenever we enter a value equal to or greater than 100 in the selected range of cells we will get an error as shown below.



We can also display a help message next to cells that have data validity enabled by selecting the Input Help tab and entering the message we wish to display, as shown below.






Categories: FLOSS Project Planets

Sun, Oracle, Android, Google and JDK Copyleft FUD

LinuxPlanet - Wed, 2016-01-06 00:00

I have probably spent more time dealing with the implications and real-world scenarios of copyleft in the embedded device space than anyone. I'm one of a very few people charged with the task of enforcing the GPL for Linux, and it's been well-known for a decade that GPL violations on Linux occur most often in embedded devices such as mobile hand-held computers (aka “phones”) and other such devices.

This experience has left me wondering if I should laugh or cry at the news coverage and pundit FUD that has quickly come forth from Google's decision to move from the Apache-licensed Java implementation to the JDK available from Oracle.

As some smart commenters like Bob Lee have said, there is already at least one essential part of Android, namely Linux itself, licensed as pure GPL. I find it both amusing and maddening that respondents use widespread GPL violation by chip manufacturers as some sort of justification for why Linux is acceptable, but Oracle's JDK is not. Eventually, (slowly but surely) GPL enforcement will adjudicate the widespread problem of poor Linux license compliance — one way or the other. But, that issue is beside the point when we talk of the licenses of code running in userspace. The real issue with that is two-fold.

First, if you think the ecosystem shall collapse because “pure GPL has moved up the Android stack”, and “it will soon virally infect everyone” with copyleft (as you anti-copyleft folks love to say), your fears are just unfounded. Those of us who worked in the early days of reimplementing Java in copyleft communities thought carefully about just this situation. At the time, remember, Sun's Java was completely proprietary, and our goal was to wean developers off Sun's implementation to use a Free Software one. We knew, just as the early GNU developers knew with libc, that a fully copylefted implementation would gain few adopters. So, the earliest copyleft versions of Java were under an extremely weak copyleft called the “GPL plus the Classpath exception”. Personally, I was involved as a volunteer in the early days of the Classpath community; I helped name the project and design the Classpath exception. (At the time, I proposed we call it the “Least GPL” since the Classpath exception carves so many holes in strong copyleft that it's less of a copyleft than even the Lesser GPL and probably the Mozilla Public License, too!)

But, what does the Classpath exception from GNU's implementation have to do with Oracle's JDK? Well, Sun, before Oracle's acquisition, sought to collaborate with the Classpath community. Those of us who helped start Classpath were excited to see the original proprietary vendor seek to release their own formerly proprietary code and want to merge some of it with the community that had originally formed to replace their code with a liberated alternative.

Sun thus released much of the JDK under “GPL with Classpath exception”. The reasons were clearly explained (URL linked is an archived version of what once appeared on Sun's website) on their collaboration website for all to see. You see the outcome of that in many files in the now-infamous commit from last week. I strongly suspect Google's lawyers vetted what was merged to make sure that the Android Java SDK fully gets the appropriate advantages of the Classpath exception.

So, how is incorporating Oracle's GPL-plus-Classpath-exception'd JDK different from having an Apache-licensed Java userspace? It's not that much different! Android redistributors already have strong copyleft obligations in kernel space, and, remember that Webkit is LGPL'd; there's also already weak copyleft compliance obligations floating around Android, too. So, if a redistributor is already meeting those, it's not much more work to meet the even weaker requirements now added to the incorporated JDK code. I urge you to ask anyone who says that this change will have any serious impact on licensing obligations and analysis for Android redistributors to please prove their claim with an actual example of a piece of code added in that commit under pure GPL that will combine in some way with Android userspace applications. I admit I haven't dug through the commit to prove the negative, but I'd be surprised if some Google engineers didn't do that work before the commit happened.

You may now ask yourself if there is anything of note here at all. There's certainly less here than most are saying about it. In fact, a Java industry analyst (with more than a decade of experience in the area) told me that he believed the decision was primarily technical. Authors of userspace applications on Android (apparently) seek a newer Java language implementation and given that there was a reasonably licensed Free Software one available, Google made a technical switch to the superior codebase, as it gives API users technically what they want while also reducing maintenance burden. This seems very reasonable. While it's less shocking than what the pundits say, technical reasons probably were the primary impetus.

So, for Android redistributors, are there any actual licensing risks to this change? The answer there is undoubtedly yes, but the situation is quite nuanced, and again, the problem is not as bad as the anti-copyleft crowd says. The Classpath exception grants very wide permissions. Nevertheless, some basic copyleft obligations can remain, albeit in a very weak-copyleft manner. It is possible to violate that weak copyleft, particularly if you don't understand the licensing of all third-party materials combined with the JDK. Still, since you have to comply with Linux's license to redistribute Android, complying with the Classpath exception'd stuff will require only a simple afterthought.

Meanwhile, Sun's (now Oracle's) JDK, is likely nearly 100% copyright-held by Oracle. I've written before about the dangers of the consolidation of a copylefted codebase with a single for-profit, commercial entity. I've even pointed out that Oracle specifically is very dangerous in its methods of using copyleft as an aggression.

Copyleft is a tool, not a moral principle. Tools can be used incorrectly with deleterious effect. As an analogy, I'm constantly bending paper clips to press those little buttons on electronic devices, and afterwards, the tool doesn't do what it's intended for (hold papers together); it's bent out of shape and only good for the new, dubious purpose, better served by a different tool. (But, the paper clip was already right there on my desk, you see…)

Similarly, while organizations like Conservancy use copyleft in a principled way to fight for software freedom, others use it in a manipulative, drafter-unintended way to extract revenue with no intention of standing up for users' rights. We already know Oracle likes to use GPL this way, and I really doubt that Oracle will sign a pledge to follow Conservancy's and FSF's principles of GPL enforcement. Thus, we should expect Oracle to aggressively enforce against downstream Android manufacturers who fail to comply with “GPL plus Classpath exception”. Of course, Conservancy's GPL Compliance Project for Linux developers may also enforce, if the violation extends to Linux as well. But, Conservancy will follow those principles and prioritize compliance and community goodwill. Oracle won't. But, saying that means that Oracle has “its hooks” in Android makes no sense. They have as many hooks as any of the other thousands of copyright holders of copylefted material in Android. If anything, this is just another indication that we need more of those copyright holders to agree with the principles, and we should shun codebases where only one for-profit company holds copyright.

Thus, my conclusion about this situation is quite different than the pundits and link-bait news articles. I speculate that Google weighed a technical decision against its own copyleft compliance processes, and determined that Google would succeed in its compliance efforts on Android, and thus won't face compliance problems, and can therefore easily benefit technically from the better code. However, for those many downstream redistributors of Android who fail at license compliance already, the ironic outcome is that you may finally find out how friendly and reasonable Conservancy's Linux GPL enforcement truly is, once you compare it with GPL enforcement from a company like Oracle, who holds avarice, not software freedom, as its primary moral principle.

Finally, the bigger problem in Android with respect to software freedom is that the GPL is widely violated on Linux in Android devices. If this change causes Android redistributors to reevaluate their willful ignorance of GPL's requirements, then some good may come of it all, despite Oracle's expected nastiness.

Categories: FLOSS Project Planets

A Requiem for Ian Murdock

LinuxPlanet - Wed, 2015-12-30 19:00

[ This post was crossposted on Conservancy's website. ]

I first met Ian Murdock gathered around a table at some bar, somewhere, after some conference in the late 1990s. Progeny Linux Systems' founding was soon to be announced, and Ian had invited a group from the Debian BoF along to hear about “something interesting”; the post-BoF meetup was actually a briefing on his plans for Progeny.

Many of the details (such as which conference and where on the planet it was), I've forgotten, but I've never forgotten Ian gathering us around, bending my ear to hear in the loud bar, and getting one of my first insider scoops on something big that was about to happen in Free Software. Ian was truly famous in my world; I felt like I'd won the jackpot of meeting a rock star.

More recently, I gave a keynote at DebConf this year and talked about how long I've used Debian and how much it has meant to me. I've since then talked with many people about how the Debian community is rapidly becoming a unicorn among Free Software projects — one of the last true community-driven, non-commercial projects.

A culture like that needs a huge group to rise to fruition, and there are no specific actions that can ensure creation of a multi-generational project like Debian. But, there are lots of ways to make the wrong decisions early. As near as I can tell, Ian artfully avoided the project-ending mistakes; he made the early decisions right.

Ian cared about Free Software and wanted to make something useful for the community. He teamed up with (for a time in Debian's earliest history) the FSF to help Debian in its non-profit connections and roots. And, when the time came, he did what all great leaders do: he stepped aside and let a democratic structure form. He paved the way for the creation of Debian's strong Constitutional and democratic governance. Debian has had many great leaders in its long history, but Ian was (effectively) the first DPL, and he chose not to be a BDFL.

The Free Software community remains relatively young. Thus, loss of our community members jar us in the manner that uniquely unsettles the young. In other words, anyone we lose now, as we've lost Ian this week, has died too young. It's a cliché to say, but I say anyway that we should remind ourselves to engage with those around us every day, and to welcome new people gladly. When Ian invited me around that table, I was truly nobody: he'd never met me before — indeed no one in the Free Software community knew who I was then. Yet, the mere fact that I stayed late at a conference to attend the Debian BoF was enough for him — enough for him to even invite me to hear the secret plans of his new company. Ian's trust — his welcoming nature — remains for me unforgettable. I hope to watch that nature flourish in our community for the remainder of all our lives.

Categories: FLOSS Project Planets

I’ve Got A Date

LinuxPlanet - Fri, 2015-12-25 17:31

A Date At Last

Hello all, I have some exciting news. It’s been a long time since I’ve had cause to use this sentence but… I’ve got a date! Sadly in this context I’m only referring to a date for my upcoming surgery. I’ll be going under the knife at The Christie in Manchester on January 14th 2016. Not far away.

If you’ve read my last 2 or 3 posts you’ll know that I’ve had some serious health problems in recent months. After perplexing a good number of medical professionals I was finally diagnosed with a rare condition known as Pseudomyxoma Peritonei, or PMP for short. Sadly not PIMP which would have sounded much cooler. The treatment involves cutting out all the affected areas and cleaning it up with a heated chemotherapy liquid. It’ll be a pretty long surgical procedure and take months to recover from but the prognosis is good. I will have to be scanned yearly to ensure no return of tumours but with a 75% chance of no re-occurrence in 10 years it’s well worth it I’d say. I won’t go on at length I just wanted to share the date for those people who’ve been asking.

I’m looking forward to Christmas and New Year, I can’t wait to get this surgery out of the way and begin down the road to recovery. Get back to work and all the other things I used to do. I went to see Star Wars last night so at least I was able to do that before my op. I’ve also done some techy things lately I’d like to write about, I’ll share those with you soon. I don’t want to spend all my time on medical talk.

I wish you all a Merry Christmas and a Happy New Year! I'll report in again soon.

Dan

Categories: FLOSS Project Planets

Bicho 0.9 is coming soon!

LibreSoft Planet - Thu, 2011-06-09 10:06

During the last months we've been working to improve Bicho, one of our data mining tools. Bicho gets information from remote bug/issue tracking systems and stores it in a relational database.

Bicho

 

The next release, Bicho 0.9, will also include incremental support, which is something we've missed for FLOSSMetrics and for standalone studies with a huge amount of bugs. We also expect that more backends will be created easily with the improved backend model created by Santi Dueñas. So far we support JIRA, Bugzilla and SourceForge. For the first two we parse HTML + XML; for SourceForge all we have is HTML, so we are more dependent on the layout (to minimize that problem we use BeautifulSoup). We plan to include at least backends for FusionForge and Mantis (which is partially written) during this year.

Bicho is currently being used in the ALERT project (still in its first months), where all the information offered by the bug/issue reports will be related to the information available in the source code repositories (using CVSAnaly) through semantic analysis. That relationship will allow us to help developers through recommendations and other more pro-active use cases. One of my favourites is recommending a developer to fix a bug through the analysis of the stacktraces posted in the bug. In libre software projects all the information is available on the internet; the main problem (not a trivial one) is that it is spread across very different resources. Using Bicho against the bts/its we can get the part of the code (function name, class and file) that probably contains the error, and the version of the application. That information can be related to the data obtained from the source code repository with CVSAnaly; in this case we would need to find out which developer edits that part of the code most often. This and other use cases are being defined in the ALERT project.
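
As a rough illustration of that stacktrace use case (this is only a sketch of the idea, not code from Bicho or the ALERT project), one could extract the file and function names mentioned in a Python-style traceback pasted into a bug report, to later match them against the commit history:

# Illustrative sketch: pull (file, function) pairs out of a Python-style traceback
# found in a bug report, so they can be matched against repository history later.
import re

FRAME_RE = re.compile(r'File "(?P<file>[^"]+)", line \d+, in (?P<function>\S+)')

def frames_from_stacktrace(text):
    """Return the (file, function) pairs mentioned in a traceback."""
    return FRAME_RE.findall(text)

example = '''
Traceback (most recent call last):
  File "app/main.py", line 42, in generate
    render_page(entry)
  File "app/render.py", line 10, in render_page
    raise ValueError("bad template")
'''
print(frames_from_stacktrace(example))
# [('app/main.py', 'generate'), ('app/render.py', 'render_page')]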

If you want to stay tuned to Bicho, have a look at the project page at http://projects.libresoft.es/projects/bicho/wiki or the mailing list libresoft-tools-devel _at__ lists.morfeo-project.org

 

Categories: FLOSS Research

ARviewer, PhoneGap and Android

LibreSoft Planet - Thu, 2011-06-09 05:44
ARviewer is a FLOSS mobile augmented reality browser and editor that you can easily integrate into your own Android applications. This version has been developed using the PhoneGap framework. The browser part of ARviewer draws the label associated with a real-world object using both its A-GPS position and its altitude as parameters. The system works both outdoors and indoors, in the latter case with location provided by QR codes. ARviewer labels can be shown through a traditional list-based view or through an AR view, a magic-lens mobile augmented reality UI. The next steps are:
  • Testing this source code on the iOS platform to check the real portability that PhoneGap provides us.
  • Adding the “tagging mode” with PhoneGap to allow tagging new nodes/objects from the mobile.
The next two images are very similar, right? We have only found one critical problem, with the refresh of nodes in the WebView when using PhoneGap. We will study and analyze this behavior.

ARviewer PhoneGap

 

ARviewer Android (native)

More info: http://www.libregeosocial.org/node/24
Source Code: http://git.libresoft.es/ARviewer-phoneGap/
Android Market: http://market.android.com/details?id=com.libresoft.arviewer.phonegap
Categories: FLOSS Research

Finding code clones between two libre software projects

LibreSoft Planet - Thu, 2011-05-12 09:05

Last month I worked on a report aimed at finding code clones between two libre software projects. The method we used was basically the one detailed in the paper Code siblings: Technical and Legal Implications by German, D., Di Penta, M., Gueheneuc, Y. and Antoniol, G.

It is an interesting case and I'm pretty sure this kind of report will become more and more interesting for entities that publish code under a libre software license. Imagine you are part of a big libre software project with your copyright, and even money, invested in it; it would be very useful to know whether another project is using your code while respecting your copyright and the rights you gave to users with the license. With the aim of identifying these scenarios, our study did the following:

  • extraction of clones with CCFinderX
  • detection of license with Ninka
  • detection of the copyright with shell scripts

The CCFinderX tool used in the first phase gives you information about common parts of the code: it detects a common set of tokens (50 by default) between two files; this parameter should be changed depending on what is being looked for. In the following example the second and third columns contain information about the file and the common code. The syntax is (id of the file).(source file tokens), so the example shows that the file with id 1974 shares common code with the files with ids 11, 13 and 14.

...
clone_pairs {
19108 11.85-139 1974.70-124
19108 13.156-210 1974.70-124
19108 14.260-314 1974.70-124
12065 17.1239-1306 2033.118-185
12065 17.1239-1306 2033.185-252
12065 17.1239-1306 2033.252-319
12065 17.1239-1306 2141.319-386
...
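
As an aside, here is a small sketch (our own illustration, not part of CCFinderX) of how those clone_pairs lines can be grouped so that each file of one project is mapped to the files it shares code with in the other project:

# Illustrative sketch: group CCFinderX clone_pairs lines by the file id on the
# right-hand side, mapping each file to the set of files it shares code with.
from collections import defaultdict

def parse_clone_pairs(lines):
    related = defaultdict(set)
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip wrapper lines such as "clone_pairs {" and "}"
        left_id = parts[1].split(".")[0]
        right_id = parts[2].split(".")[0]
        related[right_id].add(left_id)
    return related

sample = ["19108 11.85-139 1974.70-124",
          "19108 13.156-210 1974.70-124",
          "12065 17.1239-1306 2033.118-185"]
print(parse_clone_pairs(sample))
# e.g. {'1974': {'11', '13'}, '2033': {'17'}}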

In our report we only wanted to estimate the percentage of code from the "original" project used in the derivative work, but there are some variables that need to be taken into account. First, code clones can appear among files of the same project (by the way, this is a clear sign that refactoring is needed). Second, different parts of a file can have clones in different files (a 1:n relationship) in both projects. The ideal solution would be to study, file by file, the relationships with other files and to remove the repeated ones.

Once the relationships among files are established, it is the turn of license and copyright detection. In this phase the method just compares the output of the two detectors, and finally you get a matrix where it is possible to see whether the copyright holders were respected and the license was correctly used.

Daniel German's team found interesting things in their study of the FreeBSD and Linux kernels. They found GPL code in FreeBSD in the xfs file system. The trick to distributing this code under a BSD license is to distribute it disabled (it is not compiled into FreeBSD) and leave the user the choice of compiling it or not. If a developer compiles the kernel with xfs support, the resulting kernel must be distributed under the terms of the GPL licence.

Categories: FLOSS Research

OpenBSD 4.9 incorporates the /etc/rc.d system

LibreSoft Planet - Wed, 2011-05-04 17:23
A bit of history

As any Unix system administrator knows, init is the first process to run after the kernel loads, and it starts the system's standard daemons ("services"). In the original Bell Labs Unix, the init process started userland services through a single shell script called /etc/rc. Years later, the Berkeley Distribution added another script, /etc/rc.local, to start additional services.

This worked for years, until Unix fragmented and, with the appearance of third-party packaged software, AT&T's System V version of Unix introduced a new directory scheme under /etc/rc.d/ containing service start/stop scripts, ordered by startup sequence, with a key letter in front of the file name (S- to start a service and K- to stop it). For example: S19mysql starts [S] the mysql service. These scripts (located in /etc/init.d) were organized into runlevels (described in /etc/inittab), associating the scripts via symbolic links in each runlevel directory (/etc/rc0.d, rc1.d, rc2.d, etc.). The runlevels in each directory represent shutdown, reboot, single-user or multi-user startup, and so on. This scheme, known as "System V" (or "SysV"), is, for example, the one adopted by Linux distributions (with some differences among them regarding the location of subdirectories and scripts). It had the advantage of avoiding the danger that a syntax error introduced by a package could abort the execution of the single script and thus leave the system in an inconsistent state. In exchange, it introduced a certain degree of complexity in managing and maintaining init scripts, directories, symbolic links, etc.

Other Unix-like systems, such as the BSDs, kept the traditional, simple Unix scheme, with just one or two rc files and no runlevels[*], although they gradually incorporated some other aspects of the SysV scheme for initializing system services. For example, NetBSD included a System V-style init system similar to Linux's, with individual scripts to control services, but without runlevels. FreeBSD, in turn, integrated NetBSD's rc.d system in 2002 and currently has dozens of init daemons that work analogously to SysV:

$ /etc/rc.d/sshd restart

 

OpenBSD incorporates /etc/rc.d

 

OpenBSD, however, had not adopted the subsystem of individual scripts for controlling services until now, which sometimes caused a certain panic, as if something essential were missing, among those coming into contact with this system for the first time from the Linux world (or other Unices) (although in the end it is not such a big deal; it is a matter of habit). The current release, OpenBSD 4.8, published in November 2010, still uses only two init scripts (/etc/rc and /etc/rc.local). In OpenBSD 4.9, to be published on May 1st, this functionality has been implemented for the first time through the /etc/rc.d directory.

As is usual in OpenBSD, nothing is implemented until it is clear that something is gained and that there is a simple, reliable way for the end user to use it. The mechanism is analogous to that of other Unix-like systems, but simpler and with some subtle and important differences worth knowing. Let's take a look.


Description of OpenBSD's new /etc/rc.d subsystem

In /etc/rc.conf (which holds the configuration variables for the rc script) we will find a new variable called rc_scripts:

# rc.d(8) daemons scripts
# started in the specified order and stopped in reverse order
rc_scripts=

We put into that variable (or better, as is always recommended, into /etc/rc.conf.local, an optional file that overrides the variables of /etc/rc.conf) the daemons we want to start at boot, in startup order:

rc_scripts="dbus_daemon mysql apache2 freshclam clamd cupsd"

The service startup scripts reside, as usual, in the /etc/rc.d directory. A key difference, though, is that even if the scripts are placed there, nothing will start automatically unless it is listed in the rc_scripts variable, following OpenBSD's principle of avoiding assumed automatic defaults. Each script responds to the following actions:

  • start    Starts the service if it is not already running.
  • stop     Stops the service.
  • reload   Tells the daemon to reload its configuration.
  • restart  Stops the daemon (stop), then starts it again (start).
  • check    Returns 0 if the daemon is running, or 1 otherwise.

Currently, this system is used only for daemons installed from packages, not for the OpenBSD base system. For example, to manage the states of the "foobar" service, previously installed from ports or packages, it is enough to run:

/etc/rc.d/foobar reload
/etc/rc.d/foobar restart
/etc/rc.d/foobar check
/etc/rc.d/foobar stop

The last command ("stop") is also invoked on a reboot or shutdown from /etc/rc.shutdown, in the reverse order of the rc_scripts variable, before the "stop/reboot" command is executed for the whole system. There is no need to worry about the execution order or about the meaning of an S17 at the beginning of the script names.

Another advantage of this implementation is how extraordinarily simple these scripts are to write, compared with other implementations that need scripts of tens or even hundreds of lines. In its simplest form:

daemon="/usr/local/sbin/foobard"

. /etc/rc.d/rc.subr

rc_cmd $1

A slightly more complex example:

#!/bin/sh
#
# $OpenBSD: specialtopics.html,v 1.15 2011/03/21 21:37:38 ajacoutot Exp $

daemon="${TRUEPREFIX}/sbin/munin-node"

. /etc/rc.d/rc.subr

pexp="perl: ${daemon}"

rc_pre() {
	install -d -o _munin /var/run/munin
}

rc_cmd $1

As can be seen, a typical script only needs to define the daemon, include /etc/rc.d/rc.subr and optionally define a regular expression, different from the default one, to pass to pkill(1) so that it can find the desired process (the default expression is "${daemon} ${daemon_flags}").

The new script must be placed in ${PKGDIR} with the .rc extension, for example foobard.rc. TRUEPREFIX will be substituted automatically at install time.

This simplicity and cleanliness is possible thanks to the rc.subr(8) subsystem, a script containing the internal routines and the more complex logic for controlling daemons. Even so, it is very readable and contains fewer than 100 lines. There is also a template for package and ports developers, distributed as "/usr/ports/infrastructure/templates/rc.template".

And that's all. Any port or package that needs to install a daemon can now benefit from the rc.d(8) scripts. The new system may not cover every case, but it covers the needs of ports developers for a standard, simple way to start services. As of March 2011 there are already more than 90 of the most widely used ports implementing it. Of course, the old system keeps working for unconverted packages, but there is no doubt that the OpenBSD developers (special mention to Antoine Jacoutot (ajacoutot@) and Robert Nagy (robert@)) have once again achieved a good balance between simplicity and functionality. As always, for further details, do not skip reading the corresponding manual pages: rc.subr(8), rc.d(8), rc.conf(8) and rc.conf.local(8), and the web documentation.


References


(*) Que BSD no implemente "/etc/inittab" o "telinit" no significa que no tenga niveles de ejecución (runlevels), simplemente es capaz de cambiar sus estados de inicio mediante otros procedimientos, sin necesidad de "/etc/inittab".

 
Categories: FLOSS Research

Brief study of the Android community

LibreSoft Planet - Mon, 2011-04-18 12:19

Libre software is changing the way companies build applications: while the traditional software development model pays no attention to external contributions, libre software products developed by companies benefit from them. These external contributions are encouraged by creating communities around the project and help the company create a superior product at a lower cost than is possible for traditional competitors. In exchange, the company offers the product free to use under a libre software license.

Android is one of these products; it was created by Google a couple of years ago and follows a single-vendor strategy. As Dirk Riehle noted some time ago, it is a kind of economic paradox that a company can earn money by making its product available for free as open source. But companies are not NGOs; they don't give away money without expecting something in return, so where is the trick?

As a libre software project Android did not start from scratch; it uses software that would be unavailable to non-libre projects. Besides that, it has a community of external stakeholders who improve and test the latest published version, help create new features and fix errors. It is true that Android is not a project driven by a community but by a single vendor, and Google runs it in a very restricted way. For instance, external developers have to sign a Grant of Copyright License and they do not even have a roadmap; Google publishes the code after every release, so there are long intervals of time during which external developers do not have access to the latest code. Even with these barriers, a significant part of the code is provided by external people, either contributed directly to the project or reused from common dependencies (Git provides ways to reuse changes made to remote repositories).


The figures above reflect the monthly number of commits, split in two: in green, commits from the mail domains google.com or android.com (the study assumes these people are Google employees); in grey, the rest of the commits, done from other mail domains belonging to different companies or to volunteers.

According to the first figure (on the left), which shows the proportion of commits, during the first very active months (March and April 2009) the number of commits from external contributors was similar to the number done by Google staff. The number of external commits is also large in October 2009, when the total amount of commits reached its maximum. Since April 2009 the monthly activity of external contributors seems to be between 10% and 15%.

The second figure provides an interesting view of the total activity per month, with two very interesting facts. First, the highest peak of development was reached during late 2009 (more than 8K commits per month for two months). The second is the activity during the last months; as mentioned before, the Google staff work in private repositories, so until they publish the next version of Android we won't see another peak of development (take into account that commits in Git will modify the history when the code is published, thus the last months in the timeline will be overwritten during the next release).


More than 10% of the commits used by Google in Android were committed using mail domains different to google.com or android.com. At this point the question is: who did it?

(Since October 2008)

# Commits  Domain
69297      google.com
22786      android.com
 8815      (NULL)
 1000      gmail.com
  762      nokia.com
  576      motorola.com
  485      myriadgroup.com
  470      sekiwake.mtv.corp.google.com
  422      holtmann.org
  335      src.gnome.org
  298      openbossa.org
  243      sonyericsson.com
  152      intel.com



Having a look at the domain names, it is very surprising that Nokia is one of the most active contributors. This is a real paradox: the company that states that Android is its main competitor is helping it! One of the effects of using libre software licenses for your work is that even your competition can use your code; currently there are Nokia commits in the following repositories:

  • git://android.git.kernel.org/platform/external/dbus
  • git://android.git.kernel.org/platform/external/bluetooth/bluez


This study is an ongoing process that should become a scientific paper; if you have feedback please let us know.



CVSAnalY was used to get data from 171 Git repositories (the Linux kernel was not included). Our tool allows us to store the metadata of all the repositories in one SQL database, which helped a lot. The study assumes that people working for Google use a @google.com or @android.com domain.
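
As an illustration of that classification step (a sketch under the stated assumption about domains, not the actual query used in the study), commit author addresses can be split into Google staff and external contributors like this:

# Illustrative sketch: classify commit author e-mail addresses as Google staff
# (google.com / android.com) or external contributors, per the assumption above.
from collections import Counter

GOOGLE_DOMAINS = {"google.com", "android.com"}

def classify(author_emails):
    counts = Counter()
    for email in author_emails:
        domain = email.rsplit("@", 1)[-1].lower()
        counts["google" if domain in GOOGLE_DOMAINS else "external"] += 1
    return counts

print(classify(["dev@google.com", "dev@android.com", "hacker@nokia.com"]))
# Counter({'google': 2, 'external': 1})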

 


Categories: FLOSS Research

AR interface in Android using phoneGap

LibreSoft Planet - Tue, 2011-03-29 06:51

For the last 6 months we have been evaluating the possibility of implementing a new AR interface (based on our project ARviewer) using PhoneGap. PhoneGap is a mobile framework based on HTML5/JS that allows executing the same HTML5 source code on different mobile platforms (iPhone, Android, BlackBerry). It seems a good way to create portable source code. For the last 3 years I have worked on this project with Raúl Román, a crack coder!!

Currently it is not possible, using PhoneGap, to obtain the camera stream in the WebView widget, so this part of the source code must be developed natively for each platform. We found another problem: we could not make the WebView transparent so that the camera would show in the background with objects painted on top in HTML. In this case we asked David A. Lareo (Bcultura) and Julio Rabadán (Somms.net) about it, and they gave us some very interesting clues about this problem.

The solution is implemented in the source code you can see below. Our layout (R.layout.main) must be the main view; for this we call 'setContentView' and later add DroidGap's main view using 'addView' and 'getParent'. Once we have our view mixed with PhoneGap's main view, we set its background color to transparent.

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    super.init();
    super.loadUrl("file:///android_asset/www/index.html");

    setContentView(R.layout.main);
    RelativeLayout view = (RelativeLayout) findViewById(R.id.main_container);

    // appView is the WebView object
    View html = (View) appView.getParent();
    html.setBackgroundColor(Color.TRANSPARENT);

    view.addView(html, new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT));
    appView.setBackgroundColor(Color.TRANSPARENT);
}

We have just started this project, so I will post the full source code on this blog.
Categories: FLOSS Research

Amarok Code Swarm

Paul Adams: Green Eggs and Ham - Fri, 2009-08-21 06:52

In my previous entry it was commented that it would be nice to see a code swarm for Amarok's history in SVN. Well... go on then.

Code Swarm is a tool which gives a time-based visualisation of activity in SVN. Whilst code swarms are often very pretty and fun to look at for 15 minutes, they are not very informative. Much of what appears is meaningless (e.g. the entry point of the particles) and some of it is ambiguous (e.g. the movement of particles).

Anyhow, I was surprised that nobody had already made one of these for Amarok. So, here it is:

Amarok Code Swarm from Paul Adams on Vimeo.

Categories: FLOSS Research

Amarok's SVN History - Community Network

Paul Adams: Green Eggs and Ham - Thu, 2009-08-20 09:08

I did not include a "who has worked with whom" community network graph in my previous post on the history of Amarok in SVN. This was largely because that blog post was written quite late and I didn't want to wait ages for the community network graph to be generated.

Well, now I have created it.


Click here for the complete, 8.1MB, 5111x3797 version

So, just to remind you... SVN accounts are linked if they have both worked on the same artifact at some point. The more artifacts they share, the closer together the SVN accounts are placed. The result of this is that the "core" community should appear closer to the middle of the graph.

Categories: FLOSS Research

Amarok's SVN History

Paul Adams: Green Eggs and Ham - Tue, 2009-08-18 17:25

So, as you might have recently seen, Amarok has now moved out of SVN. This was SVN r1002747 on 2009-07-26. Amarok first appeared in /trunk/kdeextragear-1/amarok on 2003-09-07 (r249141) thanks to an import from markey. It was then migrated to the simplified extragear structure (/trunk/extragear/multimedia/amarok) at r409209 on 2005-05-04.

So, to mark this event I have created a green blob chart and a plot of daily commits and committers for the entire time Amarok was in SVN.

Simply right-click and download the green blobs to see them in their full-scale glory. I'm sorry the plot isn't very readable; this is caused by a recent day with about 300 commits in Amarok, way above the average. I assume this is scripty gone mad again.

Categories: FLOSS Research

Archiving KDE Community Data

Paul Adams: Green Eggs and Ham - Sun, 2009-08-16 08:23

So me and my number crunching have been quiet for a couple of months now. Since handing in my thesis I have been busier than ever. One of the things keeping me busy has been to make good on a promise I made a while back...

I have, for some time, promised to create a historical archive of how KDE "looked" in the past. To achieve this I have created SVN logs for each calendar month of KDE's history and run various scripts of mine against them. Here are a few examples for August 1998....

Community Network

A community network graph is a representation of who was working with whom in the given month. You may remember that I have shown these off once or twice before. The nodes are SVN accounts and they share an edge when they have shared an artefact in SVN. The more artefacts that the pair have shared, the closer they are placed together. The result is that the community's more central contributors should appear towards the middle of the plot.
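
As a rough illustration of this construction (not the actual tooling used to draw these graphs), the Python sketch below builds such a network with networkx from (account, artefact) pairs, assuming those pairs have already been parsed out of "svn log -v"; a force-directed layout then pulls strongly linked accounts together, which is roughly why core contributors end up near the middle.

# Illustrative sketch: build a "who worked with whom" graph from SVN data.
# The input format, (account, artefact) pairs, is an assumption for the example.
import itertools
from collections import defaultdict

import networkx as nx

def community_network(commits):
    """commits: iterable of (svn_account, artefact_path) pairs for one month."""
    touched = defaultdict(set)          # artefact -> accounts that touched it
    for account, artefact in commits:
        touched[artefact].add(account)

    graph = nx.Graph()
    for accounts in touched.values():
        # Every pair of accounts sharing this artefact gets an edge; the
        # weight counts how many artefacts the pair has shared in total.
        for a, b in itertools.combinations(sorted(accounts), 2):
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
            else:
                graph.add_edge(a, b, weight=1)
    return graph

# A force-directed (spring) layout places heavily linked accounts closer
# together, so well-connected contributors drift towards the centre.
g = community_network([("alice", "src/foo.cpp"), ("bob", "src/foo.cpp"),
                       ("bob", "doc/bar.txt"), ("carol", "doc/bar.txt")])
positions = nx.spring_layout(g, weight="weight")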


The Green Blobs

Yes, they're back! For the uninitiated, the green blobs are a representation of who committed in a given week. Read the chart from top to bottom and left to right. The date of the first commit and the % of weeks used are also given.
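
For the curious, here is a minimal Python sketch of how such a weekly grid can be derived from commit data; the input format, the account names and the plain-text rendering are assumptions for the example rather than the scripts actually used.

# Minimal sketch of a "green blobs" style grid: one column per committer,
# one row per ISO week, a mark wherever that account committed that week.
from collections import defaultdict
from datetime import date

def weekly_activity(commits):
    """commits: iterable of (svn_account, commit_date) pairs from the SVN log."""
    weeks = defaultdict(set)            # (iso_year, iso_week) -> active accounts
    first_commit = {}                   # account -> date of first commit
    for account, day in sorted(commits, key=lambda c: c[1]):
        weeks[day.isocalendar()[:2]].add(account)
        first_commit.setdefault(account, day)

    accounts = sorted(first_commit)
    for week in sorted(weeks):          # top to bottom: oldest week first
        row = " ".join("#" if a in weeks[week] else "." for a in accounts)
        print("%d-W%02d  %s" % (week[0], week[1], row))
    for a in accounts:                  # first commit date and share of active weeks
        pct = 100.0 * sum(1 for w in weeks if a in weeks[w]) / len(weeks)
        print("%s: first commit %s, active in %.0f%% of weeks" % (a, first_commit[a], pct))

# Hypothetical example data.
weekly_activity([("markey", date(2003, 9, 7)), ("apaku", date(2003, 9, 12))])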

Commits and Committers

I have also gathered the basic numbers of commits and committers per day and created plots, like this...

So, I now have a few things to do:
  • Firstly, I need to find a place where I can store all these images and the source data where they are easily accessible. They will go online somewhere.
  • Secondly, I need to keep taking logs and keeping this archive up-to-date.
  • I also need to create a sensible means for generating SVG versions of the Green Blobs. This was an issue raised back at Akademy in Glasgow and still hasn't been addressed. I'm generally moving all of my visualisations to SVG these days.

In time I will add visualisations for things like email activity as well. If you have any ideas of aspects of the community you want visualised just let me know and I'll see what I can do. In particular, if you want me to run these jobs for your "bit" of KDE (e.g. Amarok, KOffice), just give me a shout and I'll see if I can make time. Better still, why not lend me a hand? Once I have hosting for the visualisations I will be putting all my scripts up with them. Finally.

Whilst the historical data has been visualised for interest, I hope that the new charts, as they are produced, will be helpful for all sorts of activities: from community management and coordination to marketing. Oh... and research, of course.

Categories: FLOSS Research

OSS, Akademy and ICSM 2009

Paul Adams: Green Eggs and Ham - Mon, 2009-06-01 16:15

I've just arrived in Sweden for the 5th International Conference on Open Source Systems - OSS2009. This year the conference is being held in Skövde, Sweden. This year's keynote speakers will be Stormy Peters and Brian Behlendorf. I'm particularly keen to meet with Stormy who I haven't seen since GUADEC in Birmingham; it would be good to talk before GCDS.

I like OSS. It is a friendly crowd who turn up and the conference always has a good mix of "the usual suspects" and new faces. One of those new faces for this year is Celeste Lyn Paul of KDE Usability fame. Her paper, "A survey of usability practices in Free/Libre/Open Source Software" is presented on Friday. My paper, "Reassessing Brooks' Law for the Free Software Community", will get its outing on Thursday.

In my paper I present a new approach to assessing the role of Brooks' Law and its relevance to Free Software development. This is really a "work in progress" paper. At least it was when I wrote it....

... and having subsequently finished this work, I have recently received confirmation that my paper on this topic has been accepted as a full paper at the International Conference on Software Maintenance. This gives me a great opportunity to start adding to my "I'm going to..." banners.

I'm putting together tentative plans to hold a workshop at Akademy on software quality issues. The idea is for this to be a joint workshop for both KDE and GNOME and a showcase for some of the more important results from the SQO-OSS, FLOSSMETRICS and QUALOSS EC-funded research projects. If you are interested in this, please let me know. Unless there is enough up-front support, it will be hard to arrange.

Edit: Co-Author Fail

One of my co-authors has correctly pointed out that I have used "I" where I should have written "us" and failed to give credit to my co-authors. I apologise unreservedly to Andrea Capiluppi and Cornelia Boldyreff. I am not worthy.

Categories: FLOSS Research