FLOSS Project Planets

Using Travis CI to test Docker builds

LinuxPlanet - Mon, 2016-01-11 10:00

In last month's article we discussed "Dockerizing" this blog. What I left out of that article was how I also used Docker Hub's automatic builds functionality to automatically create a new image every time changes are made to the GitHub repository which contains the source for this blog.

The automatic builds are useful because I can simply make changes to the code or articles within the repository and once pushed, those changes trigger Docker Hub to build an image using the Dockerfile we created in the previous article. As an extra benefit the Docker image will also be available via Docker Hub, which means any system with Docker installed can deploy the latest version by simply executing docker run -d madflojo/blog.

The only gotcha is: what happens if those changes break things? What if a change prevents the build from completing, or worse, prevents the static site generator from correctly generating pages? What I need is a way to know whether changes will cause issues before they are merged into the master branch of the repository and deployed to production.

To do this, we can utilize Continuous Integration principles and tools.

What is Continuous Integration

Continuous Integration, or CI, is something that has existed in the software development world for a while, but it has gained more of a following in the operations world recently. The idea of CI came about to address the problem of multiple developers creating integration problems within the same code base: basically, two developers working on the same code create conflicts and don't find those conflicts until much later.

The basic rule is that the later you find issues within code, the more expensive (in time and money) it is to fix those issues. The way to solve this is for developers to commit their code into source control often, even multiple times a day. Pushing code commits frequently reduces the opportunity for integration problems, and when they do happen they are often a lot easier to fix.

However, committing code multiple times a day doesn't by itself solve integration issues. There also needs to be a way to ensure that the code being committed is quality code and actually works. This brings us to another concept of CI: every time code is committed, the code is built and tested automatically.

In the case of this blog, the build consists of building a Docker image, and testing consists of various tests I've written to ensure the code that powers this blog is working appropriately. To perform these automated builds and test executions we need a tool that can detect when changes happen and perform the necessary steps; we need a tool like Travis CI.

Travis CI

Travis CI is a Continuous Integration tool that integrates with GitHub and performs automated build and test actions. It is also free for public GitHub repositories, like this blog for instance.

In this article I am going to walk through configuring Travis CI to automatically build and test the Docker image being generated for this blog, which will give you (the reader) the basics of how to use Travis CI to test your own Docker builds.

Automating a Docker build with Travis CI

This post is going to assume that we have already signed up for Travis CI and connected it to our public repository. This process is fairly straightforward, as it is part of Travis CI's onboarding flow. If you find yourself needing a good walkthrough, Travis CI does have a getting started guide.

Since we will be testing our builds and do not wish to impact the master branch, the first thing we are going to do is create a new git branch to work with.

$ git checkout -b building-docker-with-travis

As we make changes to this branch we can push the contents to GitHub under the same branch name and validate the status of Travis CI builds without those changes going into the master branch.

Configuring Travis CI

Within our new branch we will create a .travis.yml file. This file contains the configuration and instructions for Travis CI; in it we can tell Travis CI what languages and services we need for the build environment, as well as the instructions for performing the build.

Defining the build environment

Before starting any build steps we first need to define what the build environment should look like. For example, since the hamerkop application and associated testing scripts are written in Python, we will need Python installed within this build environment.

While we could install Python with a few apt-get commands, since Python is the only language we need within this environment it's better to define it as the base language using the language: python parameter within the .travis.yml file.

language: python
python:
  - 2.7
  - 3.5

The above configuration tells Travis CI to set up a Python build environment, specifically with Python versions 2.7 and 3.5 installed and supported.

The syntax used above is YAML, which is a fairly popular configuration format. In the above we are essentially defining the language parameter as python and setting the python parameter to a list of versions, 2.7 and 3.5. If we wanted to add additional versions it is as simple as appending that version to the list, as in the example below.

language: python
python:
  - 2.7
  - 3.2
  - 3.5

In the above we simply added version 3.2 by adding it to the list.

Required services

As we will be building a Docker image we will also need Docker installed and the Docker service running within our build environment. We can accomplish this by using the services parameter to tell Travis CI to install Docker and start the service.

services:
  - docker

Like the python parameter, the services parameter is a list of services to be started within our environment, which means we can also include additional services by appending to the list. If we needed Docker and Redis, for example, we could simply append a line after the Docker service.

services:
  - docker
  - redis-server

In this example we do not require any service other than Docker; however, it is useful to know that Travis CI has quite a few services available.

Performing the build

Now that we have defined the build environment we want, we can specify the build steps. Since we wish to validate a Docker build we essentially need to perform two steps: building a Docker image and starting a container based on that image.

We can perform these steps by simply specifying the same docker commands we used in the previous article.

install:
  - docker build -t blog .
  - docker run -d -p 127.0.0.1:80:80 --name blog blog

In the above we can see that the two docker commands are specified under the install parameter. This parameter is actually a defined build step for Travis CI.

Travis CI has multiple predefined steps used during builds which can be called out via the .travis.yml file. In the above we are defining that these two docker commands are the steps necessary to install this application.

Testing the build

Travis CI is not just a simple build tool; it is a Continuous Integration tool, which means its primary function is testing. That means we need to add a test to our build; for now we can simply verify that the Docker container is running, which can be done with a simple docker ps command.

script:
  - docker ps | grep -q blog

In the above we defined our basic test using the script parameter. This is yet another build step, one which is used to call test cases. The script step is required; if it is omitted the build will fail.

Pushing to GitHub

With the steps above defined we now have a minimal build that we can send to Travis CI; to accomplish this, we simply push our changes to GitHub.

$ git add .travis.yml
$ git commit -m "Adding docker build steps to Travis"
[building-docker-with-travis 2ad7a43] Adding docker build steps to Travis
 1 file changed, 10 insertions(+), 32 deletions(-)
 rewrite .travis.yml (72%)
$ git push origin building-docker-with-travis

During the sign-up process for Travis CI, you are asked to link your repositories with Travis CI. This allows it to monitor the repository for any changes. When changes occur, Travis CI will automatically pull down those changes and execute the steps defined within the .travis.yml file, which in this case means executing our Docker build and verifying it worked.

As we just pushed new changes to our repository, Travis CI should have detected those changes. We can go to Travis CI to verify whether those changes resulted in a successful build or not.

Travis CI will show a build log for every build; at the end of the log for this specific build we can see that the build was successful.

Removing intermediate container c991de57cced
Successfully built 45e8fb68a440
$ docker run -d -p 127.0.0.1:80:80 --name blog blog
45fe9081a7af138da991bb9e52852feec414b8e33ba2007968853da9803b1d96
$ docker ps | grep -q blog
The command "docker ps | grep -q blog" exited with 0.
Done. Your build exited with 0.

One important thing to know about Travis CI is that most build steps require commands to execute successfully in order for the build to be marked as successful.

The script and install steps are two examples of this: if any of our commands fail and do not return a 0 exit code, the whole build will be marked as failed.

If this happens during the install step, the build will be stopped at the exact step that failed. With the script step, however, the build will not be stopped. The idea behind this is that if an install step fails, the build will absolutely not work. However, if a single test case fails, only a portion is broken. By showing all test results, users are able to identify what is broken versus what is working as expected.

Adding additional tests

While we now have Travis CI verifying that the Docker build is successful, there are still other ways we could inadvertently break this blog. For example, we could make a change that prevents the static site generator from properly generating pages; this would break the site within the container but not necessarily the container itself. To prevent a scenario like this, we can introduce some additional testing.

Within our repository there is a directory called tests; this directory contains three more directories: unit, integration and functional. These directories contain various automated tests for this environment. The first two types of tests, unit and integration, are designed specifically to test the code within the hamerkop.py application. While useful, these tests are not going to help test the Docker container. The last directory, functional, however, contains automated tests that can be used to test the running Docker container.

$ ls -la tests/functional/
total 24
drwxr-xr-x 1 vagrant vagrant  272 Jan  1 03:22 .
drwxr-xr-x 1 vagrant vagrant  170 Dec 31 22:11 ..
-rw-r--r-- 1 vagrant vagrant 2236 Jan  1 03:02 test_broken_links.py
-rw-r--r-- 1 vagrant vagrant 2155 Jan  1 03:22 test_content.py
-rw-r--r-- 1 vagrant vagrant 1072 Jan  1 03:13 test_rss.py

These tests are designed to connect to the running Docker container and validate the static site's content.

For example, test_broken_links.py will crawl the website being served by the Docker container and check the HTTP status code returned when requesting each page. If the return code is anything but 200 OK, the test will fail. The test_content.py test will also crawl the site and validate that the content returned matches a certain pattern. If it does not, these tests will again fail.
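
To give a rough idea of what such a test looks like, below is a minimal sketch of a link-checking functional test in the style of test_broken_links.py. The base URL, the list of pages and the use of the requests library are assumptions made for illustration; the actual tests in the repository may be structured differently.

import unittest

import requests

# Assumed address of the running Docker container; the real tests may
# derive this from configuration rather than hard-coding it.
BASE_URL = "http://127.0.0.1:80"

class CrawlSite(unittest.TestCase):
    def runTest(self):
        """Request a handful of pages and fail on any non-200 status code."""
        # Hypothetical page paths used purely for illustration.
        for path in ("/", "/archive.html", "/rss.xml"):
            response = requests.get(BASE_URL + path)
            self.assertEqual(
                response.status_code, 200,
                "{0} returned HTTP {1}".format(path, response.status_code))

if __name__ == "__main__":
    unittest.main()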

What is useful about these tests is that even though the static site is running within a Docker container, we are still able to test the site's functionality. If we were to add these tests to the Travis CI configuration, they would also be executed for every code change, providing even more confidence about each change being made.

Installing test requirements in before_script

To run these tests via Travis CI we will simply need to add them to the script section as we did with the docker ps command. However, before they can be executed these tests require several Python libraries to be installed. To install these libraries we can add the installation steps into the before_script build step.

before_script:
  - pip install -r requirements.txt
  - pip install mock
  - pip install requests
  - pip install feedparser

The before_script build step is performed before the script step but after the install step, making before_script the perfect location for steps that are required by the script commands but are not part of the overall installation. Since the before_script step is not executing test cases, it behaves like the install step and requires all commands to succeed before moving on to the script build step. If a command within the before_script build step fails, the build will be stopped.

Running additional tests

With the required Python libraries installed we can add the test execution to the script build step.

script:
  - docker ps | grep -q blog
  - python tests.py

These tests can be launched by executing tests.py, which will run all three automated test suites: unit, integration and functional.
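
As a rough illustration, a top-level runner like tests.py could be as simple as the sketch below, which discovers and runs the test modules in each directory. This is an assumption about how such a runner might be put together, not the repository's actual implementation.

import sys
import unittest

def run_suite(directory):
    """Discover and run every test module under the given directory."""
    suite = unittest.defaultTestLoader.discover(directory)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    # Run each suite in turn; building the full list ensures all three
    # suites run even if an earlier one fails.
    results = [run_suite(d) for d in
               ("tests/unit", "tests/integration", "tests/functional")]
    # Exit non-zero on any failure so Travis CI marks the build as failed.
    sys.exit(0 if all(results) else 1)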

Testing the build again

With the tests added we can once again push our changes to GitHub.

$ git add .travis.yml
$ git commit -m "Adding tests.py execution"
[building-docker-with-travis 99c4587] Adding tests.py execution
 1 file changed, 14 insertions(+)
$ git push origin building-docker-with-travis

After pushing our updates to the repository we can sit back and wait for Travis to build and test our application.

######################################################################
Test Runner: Functional tests
######################################################################
runTest (test_rss.VerifyRSS)
Execute recursive request ... ok
runTest (test_broken_links.CrawlSite)
Execute recursive request ... ok
runTest (test_content.CrawlSite)
Execute recursive request ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.768s

OK

Once the build completes we will see the above message in the build log, showing that Travis CI has in fact executed our tests.

Summary

With our builds successfully processing let's take a final look at our .travis.yml file.

language: python
python:
  - 2.7
services:
  - docker
install:
  - docker build -t blog .
  - docker run -d -p 127.0.0.1:80:80 --name blog blog
before_script:
  - pip install -r requirements.txt
  - pip install mock
  - pip install requests
  - pip install feedparser
script:
  - docker ps | grep -q blog
  - python tests.py

In the above we can see that our Travis CI configuration consists of three build steps: install, before_script and script. The install step is used to build and start our Docker container, the before_script step is simply used to install the libraries required by our test scripts, and the script step is used to execute those test scripts.

Overall, this setup is pretty simple and something we could test manually outside of Travis CI. The benefit of having Travis CI though is that all of these steps are performed for every change, no matter how minor they are.

Also, since we are using GitHub, Travis CI will append build status notifications to every pull request as well, like this one for example. With these types of notifications I can merge pull requests into the master branch with the confidence that they will not break production.

Building a Continuous Integration and Deployment pipeline

In last month's article we explored using Docker to package and distribute the application running this blog. In this article, we have discussed leveraging Travis CI to automatically build that Docker image as well as performing functional tests against it.

In next month's article, we are going to take this setup one step further by automatically deploying these changes to multiple servers using SaltStack. By the end of the next article we will have a full Continuous Integration and Deployment workflow defined which will allow changes to be tested and deployed to production without human interaction.


Posted by Benjamin Cane
Categories: FLOSS Project Planets

Best Laid Plans

LinuxPlanet - Thu, 2016-01-07 19:38

Hello all, we meet again. In my last post I said I’d be writing about technology here in the future rather than my ongoing health saga. As the title of this post suggests though, “the best laid plans of mice and men often go awry”. Here’s some info on the quote. I was recently informed by The Christie that they are postponing my operation with less than a week to go. I was due to be admitted on Jan 13th and now that’s been put back to Feb 10th, with the actual operation to take place on Feb 11th.

It’s only a slip of 4 weeks I know and it’s not the end of the world but it is frustrating when I just want to get this done so I can recover, get back to work and hopefully get on with my life. Every extra week adds up and it can start to feel like time is dragging on but I’ll get there.

So what’s the reason for the delay? An emergency case they need to deal with first apparently. With such a rare and specialised surgical procedure I suppose there was always a danger of delay. In some ways I should be glad that my case isn’t deemed as critically urgent and they feel I can wait 4 more weeks. There must be other patients in a much worse condition. Every cloud has a silver lining and all that.

Only 500 of these operations have been done at The Christie in the 10 years since it was first pioneered, so that illustrates how rare it is. Right now I can’t say I feel fantastic but I’m not in pain and I am managing to do some things to keep busy. I guess it’s a case of hurry up and wait. So I have to be a patient patient.

The Google Pixel C

However, in other (nicer) news I just got a Google Pixel C last week and I’m actually writing this on it right now. It’s the new flagship 10.2 inch Android tablet from Google and the first to be 100% designed by them, right down to the hardware. The Pixel team developed it alongside their fancy but rather expensive Chromebooks. It has Android 6.0.1 installed at the moment and is effectively a Nexus device in all but name. That means it will be first to receive new versions of Android, and Android N is due in a few months. I expect it will be largely tailored to this device and I expect good things.

I needed a new tablet but I also wanted something that could replace most of the functions I’d normally do with the laptop. In the interests of fairness I looked at the Microsoft Surface, Apple iPad Pro and a variety of convertible laptops, including the ASUS Transformer series. I decided this was by far the best option right now. It’s something of a personal experiment to see whether a good tablet like this (with a Bluetooth mouse and keyboard) can really cut it as a laptop replacement. I am also helped in this project by the work I’ve done on my server beefing up hardware and configuring KVM so I can use a remote desktop. I’ll write up some proper thoughts on all this to share with you very soon. At least I have a little more time to do that now before I head off to the hospital.

Take care out there, Happy New Year and I’ll speak to you again soon,

Dan

Categories: FLOSS Project Planets

Happy New Year & Browser and OS stats for 2015

LinuxPlanet - Wed, 2016-01-06 12:46

I’d like to wish everyone a happy new year on behalf of the entire LQ team. 2015 has been another great year for LQ and we have quite a few exciting developments in store for 2016, including a major code update that is now *way* overdue. As has become tradition, here are the browser and OS statistics for the main LQ site for all of 2015 (2014 stats for comparison).

Browsers
Chrome             47.37%
Firefox            37.81%
Internet Explorer   6.86%
Safari              4.90%
Opera               1.11%
Edge                0.42%

For the first time in many years, browser stats have not changed in any meaningful way from the previous year. Chrome is very slightly up, and Firefox and IE are very slightly down (although Edge does make its initial appearance in the chart).

Operating Systems
Windows    52.42%
Linux      31.45%
Macintosh  10.75%
Android     3.01%
iOS         1.53%

Similar to the browsers, OS shares have remained quite stable over the last year as well. 2015 seems to have been a year of stability in both markets, at least for the technical audience that comprises LinuxQuestions.org. Note that Chrome OS has the highest percentage of any OS not to make the chart.

I’d also like to take this time to thank each and every LQ member. You are what make the site great; without you, we simply wouldn’t exist. I’d like to once again thank the LQ mod team, whose continued dedication ensures that things run as smoothly as they do. Don’t forget to vote in the LinuxQuestions.org Members Choice Awards, which recently opened.

–jeremy


Categories: FLOSS Project Planets

Checking for data validity in LibreOffice spreadsheets

LinuxPlanet - Wed, 2016-01-06 11:39
When entering data into a spreadsheet, we might at times want to ensure that the data entered lies within a specified range or is equal to a certain number or value. To ensure this we can use the data validity option in the LibreOffice spreadsheet.

To enable data validity, select the range of cells to which the validity needs to be applied, then select Data -> Validity.



This will pop a menu as shown below.



In the Criteria tab, the "Allow" option lets us choose what type of values are valid. In the "Data" option we can choose how the value should be compared, i.e. whether it should be greater than or less than a number, and the text field allows us to enter the maximum number to be allowed in the cells.

Let us say we want to allow "Whole numbers" which are "less than" 100; the settings will then be as shown below.



Now whenever we enter a value equal to or greater than 100 in the selected range of cells, we will get an error as shown below.



We can add a message that is displayed next to cells with data validity enabled by selecting the input tab and entering the message we wish to display, as shown below.






Categories: FLOSS Project Planets

Sun, Oracle, Android, Google and JDK Copyleft FUD

LinuxPlanet - Wed, 2016-01-06 00:00

I have probably spent more time dealing with the implications and real-world scenarios of copyleft in the embedded device space than anyone. I'm one of a very few people charged with the task of enforcing the GPL for Linux, and it's been well-known for a decade that GPL violations on Linux occur most often in embedded devices such as mobile hand-held computers (aka “phones”) and other such devices.

This experience has left me wondering if I should laugh or cry at the news coverage and pundit FUD that has quickly come forth from Google's decision to move from the Apache-licensed Java implementation to the JDK available from Oracle.

As some smart commenters like Bob Lee have said, there is already at least one essential part of Android, namely Linux itself, licensed as pure GPL. I find it both amusing and maddening that respondents use widespread GPL violation by chip manufacturers as some sort of justification for why Linux is acceptable, but Oracle's JDK is not. Eventually, (slowly but surely) GPL enforcement will adjudicate the widespread problem of poor Linux license compliance — one way or the other. But, that issue is beside the point when we talk of the licenses of code running in userspace. The real issue with that is two-fold.

First, if you think the ecosystem shall collapse because “pure GPL has moved up the Android stack” and “it will soon virally infect everyone” with copyleft (as you anti-copyleft folks love to say), your fears are just unfounded. Those of us who worked in the early days of reimplementing Java in copyleft communities thought carefully about just this situation. At the time, remember, Sun's Java was completely proprietary, and our goal was to wean developers off Sun's implementation to use a Free Software one. We knew, just as the early GNU developers knew with libc, that a fully copylefted implementation would gain few adopters. So, the earliest copyleft versions of Java were under an extremely weak copyleft called the “GPL plus the Classpath exception”. Personally, I was involved as a volunteer in the early days of the Classpath community; I helped name the project and design the Classpath exception. (At the time, I proposed we call it the “Least GPL” since the Classpath exception carves so many holes in strong copyleft that it's less of a copyleft than even the Lesser GPL and probably the Mozilla Public License, too!)

But, what does the Classpath exception from GNU's implementation have to do with Oracle's JDK? Well, Sun, before Oracle's acquisition, sought to collaborate with the Classpath community. Those of us who helped start Classpath were excited to see the original proprietary vendor seek to release their own formerly proprietary code and want to merge some of it with the community that had originally formed to replace their code with a liberated alternative.

Sun thus released much of the JDK under “GPL with Classpath exception”. The reasons were clearly explained (URL linked is an archived version of what once appeared on Sun's website) on their collaboration website for all to see. You see the outcome of that in many files in the now-infamous commit from last week. I strongly suspect Google's lawyers vetted what was merged to make sure that the Android Java SDK fully gets the appropriate advantages of the Classpath exception.

So, how is incorporating Oracle's GPL-plus-Classpath-exception'd JDK different from having an Apache-licensed Java userspace? It's not that much different! Android redistributors already have strong copyleft obligations in kernel space, and, remember that Webkit is LGPL'd; there are already weak copyleft compliance obligations floating around Android, too. So, if a redistributor is already meeting those, it's not much more work to meet the even weaker requirements now added to the incorporated JDK code. I urge you to ask anyone who says that this change will have any serious impact on licensing obligations and analysis for Android redistributors to please prove their claim with an actual example of a piece of code added in that commit under pure GPL that will combine in some way with Android userspace applications. I admit I haven't dug through the commit to prove the negative, but I'd be surprised if some Google engineers didn't do that work before the commit happened.

You may now ask yourself if there is anything of note here at all. There's certainly less here than most are saying about it. In fact, a Java industry analyst (with more than a decade of experience in the area) told me that he believed the decision was primarily technical. Authors of userspace applications on Android (apparently) seek a newer Java language implementation and given that there was a reasonably licensed Free Software one available, Google made a technical switch to the superior codebase, as it gives API users technically what they want while also reducing maintenance burden. This seems very reasonable. While it's less shocking than what the pundits say, technical reasons probably were the primary impetus.

So, for Android redistributors, are there any actual licensing risks to this change? The answer there is undoubtedly yes, but the situation is quite nuanced, and again, the problem is not as bad as the anti-copyleft crowd says. The Classpath exception grants very wide permissions. Nevertheless, some basic copyleft obligations can remain, albeit in a very weak-copyleft manner. It is possible to violate that weak copyleft, particularly if you don't understand the licensing of all third-party materials combined with the JDK. Still, since you have to comply with Linux's license to redistribute Android, complying with the Classpath exception'd stuff will require only a simple afterthought.

Meanwhile, Sun's (now Oracle's) JDK, is likely nearly 100% copyright-held by Oracle. I've written before about the dangers of the consolidation of a copylefted codebase with a single for-profit, commercial entity. I've even pointed out that Oracle specifically is very dangerous in its methods of using copyleft as an aggression.

Copyleft is a tool, not a moral principle. Tools can be used incorrectly with deleterious effect. As an analogy, I'm constantly bending paper clips to press those little buttons on electronic devices, and afterwards, the tool doesn't do what it's intended for (hold papers together); it's bent out of shape and only good for the new, dubious purpose, better served by a different tool. (But, the paper clip was already right there on my desk, you see…)

Similarly, while organizations like Conservancy use copyleft in a principled way to fight for software freedom, others use it in a manipulative, drafter-unintended way to extract revenue with no intention of standing up for users' rights. We already know Oracle likes to use GPL this way, and I really doubt that Oracle will sign a pledge to follow Conservancy's and FSF's principles of GPL enforcement. Thus, we should expect Oracle to aggressively enforce against downstream Android manufacturers who fail to comply with “GPL plus Classpath exception”. Of course, Conservancy's GPL Compliance Project for Linux developers may also enforce, if the violation extends to Linux as well. But, Conservancy will follow those principles and prioritize compliance and community goodwill. Oracle won't. But, saying that means that Oracle has “its hooks” in Android makes no sense. They have as many hooks as any of the other thousands of copyright holders of copylefted material in Android. If anything, this is just another indication that we need more of those copyright holders to agree with the principles, and we should shun codebases where only one for-profit company holds copyright.

Thus, my conclusion about this situation is quite different than the pundits and link-bait news articles. I speculate that Google weighed a technical decision against its own copyleft compliance processes, and determined that Google would succeed in its compliance efforts on Android, and thus won't face compliance problems, and can therefore easily benefit technically from the better code. However, for those many downstream redistributors of Android who fail at license compliance already, the ironic outcome is that you may finally find out how friendly and reasonable Conservancy's Linux GPL enforcement truly is, once you compare it with GPL enforcement from a company like Oracle, who holds avarice, not software freedom, as its primary moral principle.

Finally, the bigger problem in Android with respect to software freedom is that the GPL is widely violated on Linux in Android devices. If this change causes Android redistributors to reevaluate their willful ignorance of GPL's requirements, then some good may come of it all, despite Oracle's expected nastiness.

Categories: FLOSS Project Planets

A Requiem for Ian Murdock

LinuxPlanet - Wed, 2015-12-30 19:00

[ This post was crossposted on Conservancy's website. ]

I first met Ian Murdock gathered around a table at some bar, somewhere, after some conference in the late 1990s. Progeny Linux Systems' founding was soon to be announced, and Ian had invited a group from the Debian BoF along to hear about “something interesting”; the post-BoF meetup was actually a briefing on his plans for Progeny.

Many of the details (such as which conference and where on the planet it was), I've forgotten, but I've never forgotten Ian gathering us around, bending my ear to hear in the loud bar, and getting one of my first insider scoops on something big that was about to happen in Free Software. Ian was truly famous in my world; I felt like I'd won the jackpot of meeting a rock star.

More recently, I gave a keynote at DebConf this year and talked about how long I've used Debian and how much it has meant to me. I've since then talked with many people about how the Debian community is rapidly becoming a unicorn among Free Software projects — one of the last true community-driven, non-commercial projects.

A culture like that needs a huge group to rise to fruition, and there are no specific actions that can ensure creation of a multi-generational project like Debian. But, there are lots of ways to make the wrong decisions early. As near as I can tell, Ian artfully avoided the project-ending mistakes; he made the early decisions right.

Ian cared about Free Software and wanted to make something useful for the community. He teamed up with (for a time in Debian's earliest history) the FSF to help Debian in its non-profit connections and roots. And, when the time came, he did what all great leaders do: he stepped aside and let a democratic structure form. He paved the way for the creation of Debian's strong Constitutional and democratic governance. Debian has had many great leaders in its long history, but Ian was (effectively) the first DPL, and he chose not to be a BDFL.

The Free Software community remains relatively young. Thus, the loss of our community members jars us in the manner that uniquely unsettles the young. In other words, anyone we lose now, as we've lost Ian this week, has died too young. It's a cliché to say, but I say anyway that we should remind ourselves to engage with those around us every day, and to welcome new people gladly. When Ian invited me around that table, I was truly nobody: he'd never met me before — indeed no one in the Free Software community knew who I was then. Yet, the mere fact that I stayed late at a conference to attend the Debian BoF was enough for him — enough for him to even invite me to hear the secret plans of his new company. Ian's trust — his welcoming nature — remains for me unforgettable. I hope to watch that nature flourish in our community for the remainder of all our lives.

Categories: FLOSS Project Planets

I’ve Got A Date

LinuxPlanet - Fri, 2015-12-25 17:31

A Date At Last

Hello all, I have some exciting news. It’s been a long time since I’ve had cause to use this sentence but… I’ve got a date! Sadly in this context I’m only referring to a date for my upcoming surgery. I’ll be going under the knife at The Christie in Manchester on January 14th 2016. Not far away.

If you’ve read my last 2 or 3 posts you’ll know that I’ve had some serious health problems in recent months. After perplexing a good number of medical professionals I was finally diagnosed with a rare condition known as Pseudomyxoma Peritonei, or PMP for short. Sadly not PIMP, which would have sounded much cooler. The treatment involves cutting out all the affected areas and cleaning everything up with a heated chemotherapy liquid. It’ll be a pretty long surgical procedure and take months to recover from, but the prognosis is good. I will have to be scanned yearly to ensure no return of tumours, but with a 75% chance of no recurrence in 10 years it’s well worth it I’d say. I won’t go on at length; I just wanted to share the date for those people who’ve been asking.

I’m looking forward to Christmas and New Year, I can’t wait to get this surgery out of the way and begin down the road to recovery. Get back to work and all the other things I used to do. I went to see Star Wars last night so at least I was able to do that before my op. I’ve also done some techy things lately I’d like to write about, I’ll share those with you soon. I don’t want to spend all my time on medical talk.

I wish you all a Merry Christmas and a Happy New Year! I’ll report in again soon.

Dan

Categories: FLOSS Project Planets