FLOSS Project Planets

Moto 360 Generation 2 Smartwatch Review

LinuxPlanet - Thu, 2016-02-11 17:13

In the next episode of Bad Voltage, I’ll be reviewing the Motorola Moto 360 Generation 2 (2015 edition). Tune in tomorrow to listen to the ensuing discussion and the rest of the show. In the interim, here’s the review:

While I’m both a watch aficionado and a huge tech enthusiast, I’ve not traditionally been super impressed with smartwatches. Sure, I backed the original Pebble, but the first few generations of devices in this category just didn’t impress me. Subpar displays, laggy, unintuitive interfaces and terrible battery life weren’t the only issues; they just weren’t aesthetically pleasing. Shortly before our trip to Germany for Bad Voltage Live, friend of the show Tarus Balog mentioned the translation feature on his original Moto 360. I was intrigued, as I unfortunately don’t speak any German. After taking a look at the Moto 360 generation two (or 2015 version, as it’s sometimes called), I saw a watch that actually appealed to me. Evidently it’s not just me, as I’ve gotten several comments from random enthusiasts since my purchase on how nice the device looks.

The first thing you’ll notice when you start to build your watch using Moto Maker is that there are quite a few options. You can choose from 42mm or 46mm faces in the men’s style, or a 42mm women’s style. There are multiple bezel choices, the case is available in a variety of colors depending on which style you choose, and there are myriad bands in both leather and metal. The price ranges from $299 to $449, depending on which options you choose, but given the large number of variables there should be something for everyone.

Moving on to specs, all models have Gorilla Glass, wireless charging, an ambient light sensor, a heart rate sensor, a gyroscope and an accelerometer; they are IP67* dust and water resistant and have both WiFi and Bluetooth connectivity. The smaller style has a 300mAh battery that should last a little over one day, while the 46mm style has a 400mAh battery that should last almost two. In my experience that estimate is pretty accurate, but it does depend on whether you utilize ambient mode. The wireless charger is a little stand that turns the watch into a small clock while charging, which is a nice touch. The watch works with both Android and iOS. It appears Motorola plans to be a good Android citizen on the upgrade side, as I literally got the Marshmallow upgrade notification as I was writing this review.

With specs out of the way, let’s move on to using and wearing the watch. I’ve already mentioned that I like the look of the watch, but I should add that it’s also well built and comfortable to wear. Getting notifications on your wrist does come in handy at times, and not having to reach for your phone to check your calendar is nice. You can dictate text messages using the watch, but I just don’t *ever* see myself doing that. To be fair, I don’t do that with my phone either. The Google Now card implementation is both intuitive and useful. The translation feature that led me to first look into buying the watch works as advertised and came in handy on multiple occasions. The Google Fit and Moto Body functionality is also there for those who are interested, although keep in mind Motorola has a dedicated Sport Watch. Overall I like the device more than I anticipated, but there are some downsides. I’ve only been using the Marshmallow version of Wear for 15 minutes or so, but overall Wear is not quite where it needs to be. It is getting closer though, and that isn’t specific to the Moto 360. While battery life on the device is acceptable, I think for a watch to get mainstream adoption it will need to be able to last for “a weekend”, and so far I’m not aware of a non-e-ink one that does. I should note that while the original 360 was the first round smartwatch, both it and the generation 2 model have a small notch cut out of the bottom of the display that has derisively been nicknamed the flat tire. While it doesn’t bother me much, it seems to drive some people absolutely bonkers. Competing round watches from LG, Samsung, Huawei and others do not have the tire.

So, what’s the Bad Voltage verdict? The Moto 360 generation 2 is a sleek, well built, reasonably priced device with enough customization options to appeal to traditional watch enthusiasts. If you’ve been holding out on getting a smartwatch, it may well be time to take another look.

–jeremy

Note, I’ve heard good things about the latest Huawei Watch but don’t currently have one. If I get one, I’ll certainly review it here as well as post a comparison to the Moto 360 2. If you think there’s another watch I should be looking at, let me know.

  • IP67 – Withstands immersion in up to 3 feet of fresh water for up to 30 minutes. Not designed to work while submerged underwater. Do not use while swimming, or subject it to pressurized streams of water. Avoid exposure of leather band to water. Not dust proof.

 


Categories: FLOSS Project Planets

Smoke Me A Kipper

LinuxPlanet - Wed, 2016-02-10 13:25

Hello! As that Nirvana song goes, “all apologies”. It doesn’t look like I’ll have time to post my Pixel C review before my operation now. I’m actually sat here in the Christie hospital in Manchester writing this, and the ward has WiFi. How cool is that? I’ve already written over 2000 words of the follow-up to my last post, the server story. I hope to post it tonight, perhaps, before my operation tomorrow, but who knows whether that will happen. I’ll do my best. In the last 2 or 3 weeks I thought I’d have more time but there’s just been so much stuff to sort out.

So tomorrow I will be in surgery most of the day from the early morning. Hopefully I’ll be out by around 6pm, but I will be transferred to intensive care for monitoring. That may last a couple of days, but as soon as possible I’ll transfer back to the regular ward and begin getting myself better. I’ll post a quick update to say I’m OK via this blog, Twitter and FB as soon as I can, probably next week.

A big thanks to everyone who’s sent messages and good wishes over recent months. I appreciate them all. I am totally ready for this operation now, I’m in the best place in the country and I’m very confident about the ability of the doctors. It may take a few days before I can properly communicate again but, in the immortal words of Ace Rimmer:

“Smoke me a kipper, I’ll be back for breakfast”

Smoke me a kipper!

See you soon,

Dan

Categories: FLOSS Project Planets

Philosophy & Servers – Part 1

LinuxPlanet - Tue, 2016-01-26 11:51

Hello everyone, I hope you’re well. I’ve written a lot about my health situation lately but I also promised I would write properly about technology as soon as time allowed. I’m pleased to say that day has finally arrived. I want to tell you about the changes I’ve made to my computing setup in the last 6-9 months and the thinking behind them. I’ll also dish the dirt on this new Pixel C tablet I’m currently toting. That sounds like a lot to cram in so I’ll probably split this into 2 or 3 posts, we’ll see. So let’s start with the new computing philosophy I’ve come to and the reasons for it.

HP Microserver Gen8 – case open

In the last couple of years I’ve been lucky enough to have the use of a Google Nexus 10 tablet belonging to my employers. Prior to this I hadn’t really gotten on board with the whole tablet computing revolution. I remember the previous false dawns and promises of tablets a decade ago. I’d had a couple of Nexus smartphones and maintained an active interest in Android, even trying Android x86 on a netbook, but despite the excitement surrounding iPads and other shiny new devices I just didn’t see the point. “I have a phone and a laptop, what else would I need?” was my mindset. Over time though I noticed I was using the Nexus tablet more and more, rarely even turning on the laptop at home. The purchase of a Bluetooth keyboard case for the Nexus 10 helped a lot with this; it felt much more like a mini laptop. I could already easily do all of the following: email, social media, web browsing, podcast aggregation, media consumption (Netflix, Plex, YouTube, BBC iPlayer etc), casual gaming (only solitaire or Angry Birds but I have a PS4 for serious gaming), time management and organisation (calendar, shopping lists, TODOs etc). The only things I couldn’t do on Android were audio production, web development and running virtual machines. I have a pretty powerful desktop PC that can take care of the podcast production and audio editing, probably the other bits too. I don’t do as much development these days, but when I do it’s carried out in an SSH session to a remote server anyway. That just left VMs, and I figured a server could also take care of that. With the proliferation of faster broadband speeds, cloud computing, thin clients and an effective return to the client/server model it all seemed to be heading one way. I don’t need to carry a full computer when I can have a cool mobile device and use it as a thin client if I need to.

I’ve long thought that our phones will become our main computers eventually. 5 or 10 years ago that seemed far-fetched, but these days you hardly need to be Nostradamus to see that one coming. A few companies have already attempted to create smartphones you can dock into a keyboard and large display; so far it hasn’t really worked out, but sooner or later someone will get it right. My hunch is “sooner”, in the next year or two. I know this is a big part of the Ubuntu phone strategy, running Android and Ubuntu side by side; it remains to be seen whether that works. Tablets are certainly ripe for this kind of market. I also have a feeling Google are looking to merge Android and ChromeOS in the very near future, making a more serious play for the desktop and laptop market. New Google CEO Sundar Pichai is clearly keen. It’s silly to have these 2 separate products confusing the market, and while you could argue that they serve different purposes and different users, I don’t really buy that. “Convergence”, “synergy” and other such PR buzzwords have long been in fashion. Now even Microsoft has realised that having 15 different editions of Windows is stupid. Have the same core OS across phone, tablet, laptop and desktop and just brand it all the same. I can see why servers might still be a different kettle of fish but it makes sense to keep everything together. I’m excited about what Android N could bring in May and I’ve taken a gamble investing in the Pixel C. Fingers crossed.

Anyway, enough marketing and business speculation nonsense, where did all this ruminating really leave me? I’d already been running my own server for backups, media management and other things. It was a Buffalo Linkstation Duo NAS I hacked to allow full SSH access for backups (gotta love rsync), but it was low-powered and ARM-based. It couldn’t handle increasingly essential software for me like Plex, Syncthing and ZeroTier (more on those in the next post). I had an old Lenovo netbook lying around and my makeshift solution was to set this up with Linux Mint MATE edition and then mount the NAS automatically on boot. It worked, but if I was to truly embrace thin client computing and KVM I knew I had to step up my server game.

Tux Likes To Juggle Apparently

My home computing setup this time last year was: a powerful studio desktop PC for podcasting/music, which we can leave out of the equation as I was always going to keep that, this cobbled-together Buffalo NAS and Lenovo netbook server, my ASUS laptop (Core i3, 4GB RAM, not terrifically powerful but enough) and the Nexus 10 for casual stuff. After much experimentation in the last year I now have a powerful server with KVM to suit my desktop needs and a Google Pixel C tablet for most of my mobile computing needs. I still have the laptop of course, I haven’t thrown it away, it just hasn’t been switched on in almost a month.

I figured I should talk you through my transition in these next 2 articles. It’s still an experiment really. In the next post I’ll explain how I got hold of an HP Microserver and upgraded the hardware, installed all the appropriate software, broke it, fixed it, broke it again and so on, and what I’ve learned about Plex, KVM, ZeroTier and other things. Finally, in the 3rd post I’ll talk about my recent purchase of the Google Pixel C and review the device properly; I’m actually writing this on it right now. I plan to get all this done before I go into hospital in 2 weeks so stay tuned.

Until then take care everyone,

Dan

Categories: FLOSS Project Planets

Key Charities That Advance Software Freedom Are Worthy of Your Urgent Support

LinuxPlanet - Mon, 2016-01-25 16:00

[ This blog was crossposted on Software Freedom Conservancy's website. ]

I've had the pleasure and the privilege, for the last 20 years, to be either a volunteer or employee of the two most important organizations for the advance of software freedom and users' rights to copy, share, modify and redistribute software. In 1996, I began volunteering for the Free Software Foundation (FSF) and worked as its Executive Director from 2001–2005. I continued as a volunteer for the FSF since then, and now serve as a volunteer on FSF's Board of Directors. I was also one of the first volunteers for Software Freedom Conservancy when we founded it in 2006, and I was the primary person doing the work of the organization as a volunteer from 2006–2010. I've enjoyed having a day job as a Conservancy employee since 2011.

These two organizations have been the center of my life's work. Between them, I typically spend 50–80 hours every single week doing a mix of paid and volunteer work. Both my hobby and my career are advancing software freedom.

I choose to give my time and work to these organizations because they provide the infrastructure that makes my work possible. The Free Software community has shown that the work of many individuals, who care deeply about a cause but cooperate together toward a common goal, has an impact greater than any individual can ever have working separately. The same is often true for cooperating organizations: charities, like Conservancy and the FSF, that work together with each other amplify their impact beyond the expected.

Both Conservancy and the FSF pursue specific and differing approaches and methods to the advancement of software freedom. The FSF is an advocacy organization that raises awareness about key issues that impact the future of users' freedoms and rights, and finds volunteers and pays staff to advocate about these issues. Conservancy is a fiscal sponsor, which means one of our key activities is operational work, meeting the logistical and organizational needs of volunteers so they can focus on the production of great Free Software and Free Documentation. Meanwhile, both Conservancy and the FSF dedicate themselves to sponsoring software projects: the FSF through the GNU project, and Conservancy through its member projects. And, most importantly, both charities stand up for the rights of users by enforcing and defending copyleft licenses such as the GNU GPL.

Conservancy and the FSF show in concrete terms that two charities can work together to increase their impact. Last year, our organizations collaborated on many projects: we worked together on the proposed FCC rule changes for wireless devices, jointly handled a GPL enforcement action against Canonical, Ltd., published the principles of community-oriented GPL enforcement, and continued our collaboration on copyleft.org. We're already discussing lots of ways that the two organizations can work together in 2016!

[Video: if it does not display here, you can view it on YouTube or download it directly.]

I'm proud to give so much of my time and energy to both these excellent organizations. But, I also give my money as well: I was the first person in history to become an Associate Member of the FSF (back in November 2002), and have gladly paid my monthly dues since then. Today, I also signed up as an annual Supporter of Conservancy, because I want to ensure that Conservancy meets its current pledge match — the next 215 Supporters who sign up before January 31st will double their donation via the match.

For just US$20 each month, you can make sure the excellent work of both these organizations continues. This is quite a deal: if you are an employed, university-educated professional living in the industrialized world, US$20 is probably the same amount you'd easily spend on a meal at a restaurant or other luxuries. Isn't it an even better luxury to know that these two organizations can employ a year's worth of effort standing up for your software freedom in 2016? You can make a real difference by making your charitable contribution to these two organizations today:

Please don't wait: both fundraising deadlines are just six days away!

Categories: FLOSS Project Planets

Adding SQLight as a datasource to SQLeo

LinuxPlanet - Sat, 2016-01-16 14:33

An audio version of this post is available on Hacker Public Radio.

I have been looking for a tool that will graphically and programmatically track identifiers as they pass through systems. I could have done this in Inkscape after following the excellent tutorials on http://screencasters.heathenx.org/; however, I also wanted to be able to describe the relationships programmatically.

This got me to thinking about graphical query builders for databases. The idea is to show each system as a table block and then draw lines between them to show how “Field_X” in “System_A” will map to “Field_Y” in “System_B”. Many of the proprietary and some free database solutions allow this type of view. However, I also want to easily package the entire thing up, so that someone else could access it without needing to pay for or install any specialized software. That limited the choice of database to SQLite, which is small, supported on many platforms and released into the public domain.

SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. The code for SQLite is in the public domain and is thus free for use for any purpose, commercial or private. SQLite is the most widely deployed database in the world with more applications than we can count, including several high-profile projects.

Please follow the instructions on the SQLite site for information on how you can install it on your system. For me on Fedora it’s simple to install via dnf/yum. You might also want to install some GUI managers if that’s your thing.

dnf install sqlite sqlitebrowser sqliteman

I created a small database for demonstration purposes, consisting of two tables and one field in each.
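
To follow along, here is a minimal sketch of how such a demonstration database could be created with Python's built-in sqlite3 module. The file name demo.db and the sample identifier rows are my own assumptions; the table and field names match the ones used in the query later in this post.

import sqlite3

# Create (or open) the demonstration database file.
conn = sqlite3.connect("demo.db")
cur = conn.cursor()

# Two tables with one field each, matching the mapping described above.
cur.execute("CREATE TABLE IF NOT EXISTS System_A (Field_X TEXT)")
cur.execute("CREATE TABLE IF NOT EXISTS System_B (Field_Y TEXT)")

# A sample identifier in each system so the joined query returns a row.
cur.execute("INSERT INTO System_A (Field_X) VALUES ('ID-001')")
cur.execute("INSERT INTO System_B (Field_Y) VALUES ('ID-001')")

conn.commit()
conn.close()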

Next step is to download SQLeo Visual Query Builder which has support for a graphical query builder.

A powerful SQL tool to transform or reverse complex queries (generated by OBIEE, Microstrategy, Cognos, Hyperion, Pentaho …) into diagrams to ease visualization and analysis. A graphical query builder that permits to create complex SQL queries easily. The GUI with multi-connections supports virtually all JDBC drivers, including ODBC bridge, Oracle, MySQL, PostgreSQL, Firebird, HSQLDB, H2, CsvJdbc, SQLite. And top of that, everything is open-source!

SQLeo is a Java tool, and the version available on the web site is limited to 3 tables per graph and 100 rows. As the program is released under the GPLv2.0, you could download the code and remove the restrictions. You can also support the project to the tune of €10 and you will get the full version, ready to rock.

Unzip the file and enter the newly created directory, and run the program as follows:

java -Dfile.encoding=UTF-8 -jar SQLeoVQB.jar

One slightly confusing thing, and the reason for this post, is that I could not find SQLite listed in the list of databases to connect to. A quick search on the support forum turned up the question “Connection to SQLite DB”. I found the answer a bit cryptic until I read the manual section on JDBC drivers, which told me how to add the SQLite library.

SQLeo uses a standard Java sqlite library that is released under the Apache Software License, Version 2.0. You can download it from the SQLite JDBC MVNRepository and save it into the same directory as SQLeo.

Right-click in the Metadata explorer window and select “new driver”.

Click “add library”.

Enter the following information:
Name: SQLite JDBC
Driver: org.sqlite.JDBC
Example: jdbc:sqlite:~/yourdb.db

Next, right-click on the newly created driver and select “new datasource”.

The name can be anything you like, but the URL needs to start with jdbc:sqlite: followed by the path to the SQLite database you created earlier. I selected auto-connect and pressed connect as well.

Now you can press the Query Designer button and drag the tables into the view. Once there you can then join up the fields.

That covers the graphical representation, and we can tackle the programmatic representation by pressing the save button, which gives the structure as defined in SQL.


SELECT
    System_A.Field_X,
    System_B.Field_Y
FROM
    System_B
    INNER JOIN System_A
    ON System_B.Field_Y = System_A.Field_X

So now I can check the SQL into git and have a nice image for adding to documentation any time I like.

Categories: FLOSS Project Planets

not a dynamic executable

LinuxPlanet - Thu, 2016-01-14 12:10

I sometimes have issues running a 32-bit program under 64-bit Linux.

When you run ldd it reports that it’s not a dynamic executable

# ldd /usr/bin/snx
not a dynamic executable

However if you run file, you do see that it is.

# file /usr/bin/snx
/usr/bin/snx: setuid ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, stripped

You can confirm that you are running 64 bit Linux

# uname -i
x86_64

To fix this you need to install 32-bit libraries. On Fedora you can install them using:

dnf install /lib/ld-linux.so.2 libX11.so.6 libpam.so.0 libstdc++.so.5

And on Debian

apt-get update
apt-get install lib32z1 lib32ncurses5 libstdc++5:i386

Worked for me.

Categories: FLOSS Project Planets

The Best of 2015

LinuxPlanet - Thu, 2016-01-14 05:31

 

Another new year has started, and we think it is time to look a bit back and review what became the most popular content in Seravo Blog in 2015.

The five most popular blog posts in Seravo.fi in 2015

1. 10 reasons to migrate to MariaDB (if still using MySQL)

The most popular post last year – and actually the first article written in 2015 – was the one praising MariaDB and providing a list of reasons to start using it as your database. The article also created lively discussion in the comment section!

2. Ubuntu Phone and Unity vs Jolla and SailfishOS

In August we compared two Linux-based mobile operating systems that stand as competitors to Android. As a conclusion we claimed that “Jolla and SailfishOS would be the technically and usabilitywise superior alternative, but then again Ubuntu could be able to leverage on it’s position as the most popular Linux distribution in desktops and servers”. All in all, we hoped for new innovations to come from these (at least for now minor) competitors.

3. Ubuntu Phone review by a non-geek

The new Ubuntu Phone was clearly in the center of the open source community’s attention in the summer of 2015. A not-that-technical review of the phone became the year’s third most visited blog post. Since then the phone has been tested and tried by Seravo geeks too – and has not earned particularly admirable evaluations…

4. Continuous integration testing for WordPress plugins on Github using Travis CI

If that wasn’t technical enough for you, maybe the article on testing WordPress plugins will do? And if you found yourself puzzled after reading it, do not hesitate to contact us. Seravo can help you use PHPUnit, RSpec and Travis in your projects; please feel free to ask us about our WordPress testing via email at wordpress@seravo.fi.

PS. If you identify yourself as a Finn (or for some other odd reason happen to be familiar with Finnish), check out our blog on WP-palvelu.fi that focuses on all things happening within and around WordPress.

5. Fixing black screen after login in Ubuntu 14.04

The fifth most read article in 2015 was our guide to fixing an Ubuntu login problem that was raised by some of our customers. Instead of charging for an hour of work to fix the problem, we provided the instructions for everyone to do it on their own.

 

The all-time favourites

Even though they say that information ages faster than ever, it seems that even ancient articles in the Seravo blog are still valid and popular today. The post on how to turn any computer into a wireless access point with Hostapd from August 2014 is actually still the most popular of all seravo.fi pages with over 67 000 page visits. The more recent post on why to migrate to MariaDB has taken second place with nearly 57 000 visits.

Next in the all-time top five favourites come the following articles:

Free Your Android phone (and upgrade to the latest Android version)!

Optimizing web server performance with Nginx and PHP

Virtualized bridged networking with MacVTap

 

In fact, our front page seravo.fi, with approximately ten thousand visits per year, comes only sixth on the list of our most popular pages. This shows that a blog is a great way to attract attention and to participate in the ongoing technological discussion.

Although located in the northern corner of the world, in this distant land of snow and software, Seravo has with its online presence drawn visitors from all over the world: the United States, India, Germany and the United Kingdom form the top four, and our own dear Finland takes fifth place on this list.

At this point we would like to thank all of our visitors, readers and especially those who have commented on the blog posts! We will keep on posting new topics in 2016 as well, and the comment section is always open for new thoughts and constructive criticism. We are also more than happy to receive ideas for future blog posts from any of you.

Happy new year!
Categories: FLOSS Project Planets

Unhandled error message: Error when getting information for file ‘/media/ntfs’: Input/output error

LinuxPlanet - Wed, 2016-01-13 09:01
On inserting an external drive or a pen drive that is formatted as NTFS, we might come across the error

"Unhandled error message: Error when getting information for file '/media/ntfs': Input/output error"

One of the common causes of the error is a faulty NTFS driver, so updating the driver should help us resolve the issue. We can get the latest NTFS driver from https://www.tuxera.com/community/open-source-ntfs-3g/

At the time of writing, the latest driver is

https://tuxera.com/opensource/ntfs-3g_ntfsprogs-2015.3.14.tgz

Once you have downloaded the driver, run the following commands to install it:

tar -xzvf ntfs-3g_ntfsprogs-2015.3.14.tgz
cd ntfs-3g_ntfsprogs-2015.3.14
./configure
make
sudo make install

Now if you insert the drive, it should work without any errors.

I found this solution at

http://forums.debian.net/viewtopic.php?f=5&t=124856

Categories: FLOSS Project Planets

Bridging a TP-Link ADSL Modem With Web GUI Access from Lan Side

LinuxPlanet - Tue, 2016-01-12 14:53

Sometimes you have to configure large, overly complex "enterprise" grade devices, and other times you have to deal with consumer-grade hardware with confusing interfaces and puzzling feature omissions or bugs.

Setting up Bridge Mode on TP-Link TD8177, ADSL Modem

We usually come face-to-face with consumer-grade devices when configuring branch ADSL connectivity. One device we like for its simplicity, as it is just an ADSL modem with a single LAN port, is the TP-Link TD8177. The device has no wifi access point built in but does have basic router functionality if needed. (We wish we could get the one without the annoying USB port, but it seems it's not available in SA.)

If the customer's budget allows, we prefer to put the device into bridge mode and have a dedicated appliance installed with Linux or FreeBSD do the firewalling and routing. This is something we strongly recommend to our customers rather than relying on the manufacturer to keep the firmware up to date.

Setting the ADSL Modem in Bridge Mode

To put the device in bridge mode, look for the setting under "Interface Settings" -> Internet -> Encapsulation -> "Bridge Mode". Note: this is different from the "Interface Settings" -> Internet -> "Bridge Mode" -> Encapsulation setting, which has to do with how Ethernet frames are encapsulated in DSL frames before being sent down the wire to the DSLAM (as far as I can tell). That latter setting is quite different from the bridge mode of the former configuration.

How to Access ADSL Modem Web GUI in Bridge Mode?

What's great about the TP-Link TD8177 is that putting it in bridge mode and setting up PPPoE on the FreeBSD or Linux box still allows access to the device via its assigned IP address. This address can be assigned via the GUI as normal. The problem, however, is trying to access the ADSL modem via its IP address from the LAN side of the firewall if the LAN side is in a different IP address range to the ADSL modem/router.

Trying to set a static route under "Advanced Setup" -> Routes proved to be impossible due to a bug in the web GUI or the back-end script that applies the configuration made via the GUI. Assume we have the modem with IP address 192.168.80.2, the firewall with address 192.168.80.1, and we want to be able to access the web GUI on 192.168.80.2 from our LAN network of 192.168.55.0/24.

Trying to set up a route to a network such as 192.168.55.0/24 with the IP of the firewall interface (192.168.80.1) as the gateway results in a routing table entry being created with the interface of the ADSL port. The end result is that traffic can reach the ADSL modem but the modem tries to send the response down the ADSL port.

Luckily the TP-Link TD8177 can be configured from the command line; one simply needs to telnet into the device. We were happy to find that adding a route manually worked!

"ip route add 192.168.55.0/24 192.168.80.1 1"

As soon as the routing entry was added we got responses to our ping requests. Sadly the joy was short-lived, as the modem loses the settings on reboot. A bit more googling revealed that we needed to use the "addrom" set of commands to make the changes persistent:

ip route addrom index 1
ip route addrom name lan
ip route addrom set 192.168.55.0/24 192.168.80.1 1
ip route addrom save

Now the device retains its settings on reboot and we get the best of both worlds: our PPPoE connection is managed via our firewall, and we can still access the ADSL web GUI if necessary.

The TP-Link TD8177 is a nice, no-nonsense device!

Categories: FLOSS Project Planets

Using Travis CI to test Docker builds

LinuxPlanet - Mon, 2016-01-11 10:00

In last month's article we discussed "Dockerizing" this blog. What I left out from that article was how I also used Docker Hub's automatic builds functionality to automatically create a new image every time changes are made to the GitHub Repository which contains the source for this blog.

The automatic builds are useful because I can simply make changes to the code or articles within the repository and once pushed, those changes trigger Docker Hub to build an image using the Dockerfile we created in the previous article. As an extra benefit the Docker image will also be available via Docker Hub, which means any system with Docker installed can deploy the latest version by simply executing docker run -d madflojo/blog.

The only gotcha is: what happens if those changes break things? What if a change prevents the build from occurring, or, worse, prevents the static site generator from correctly generating pages? What I need is a way to know whether changes are going to cause issues before they are merged to the master branch of the repository and deployed to production.

To do this, we can utilize Continuous Integration principles and tools.

What is Continuous Integration

Continuous Integration, or CI, is something that has existed in the software development world for a while but it has gained more following in the operations world recently. The idea of CI came up to address the problem of multiple developers creating integration problems within the same code base. Basically, two developers working on the same code create conflicts and don't find those conflicts until much later.

The basic rule is that the later you find issues within code, the more expensive (in time and money) it is to fix those issues. The idea to solve this is for developers to commit their code into source control often, even multiple times a day. With code commits being pushed frequently, the opportunity for code integration problems is reduced, and when they do happen they are often a lot easier to fix.

However, committing code multiple times a day doesn't by itself solve integration issues. There also needs to be a way to ensure the code being committed is quality code and works. This brings us to another concept of CI: every time code is committed, the code is built and tested automatically.

In the case of this blog, the build would consist of building a Docker image, and testing would consist of various tests I've written to ensure the code that powers this blog is working appropriately. To perform these automated builds and test executions we need a tool that can detect when changes happen and perform the necessary steps; we need a tool like Travis CI.

Travis CI

Travis CI is a Continuous Integration tool that integrates with GitHub and performs automated build and test actions. It is also free for public GitHub repositories, like this blog for instance.

In this article I am going to walk through configuring Travis CI to automatically build and test the Docker image being generated for this blog. This will give you (the reader) the basics of how to use Travis CI to test your own Docker builds.

Automating a Docker build with Travis CI

This post is going to assume that we have already signed up for Travis CI and connected it to our public repository. This process is fairly straightforward, as it is part of Travis CI's on-boarding flow. If you find yourself needing a good walkthrough, Travis CI does have a getting started guide.

Since we will be testing our builds and do not wish to impact the main master branch the first thing we are going to do is create a new git branch to work with.

$ git checkout -b building-docker-with-travis

As we make changes to this branch we can push the contents to GitHub under the same branch name and validate the status of Travis CI builds without those changes going into the master branch.

Configuring Travis CI

Within our new branch we will create a .travis.yml file. This file essentially contains configuration and instructions for Travis CI. Within this file we will be able to tell Travis CI what languages and services we need for the build environment as well as the instructions for performing the build.

Defining the build environment

Before starting any build steps we first need to define what the build environment should look like. For example, since the hamerkop application and associated testing scripts are written in Python, we will need Python installed within this build environment.

While we could install Python with a few apt-get commands, since Python is the only language we need within this environment it's better to define it as the base language using the language: python parameter within the .travis.yml file.

language: python
python:
  - 2.7
  - 3.5

The above configuration informs Travis CI to set the build environment to a Python environment; specifically for Python versions 2.7 and 3.5 to be installed and supported.

The syntax used above is in YAML format, which is a fairly popular configuration format. In the above we are essentially defining the language parameter as python and setting the python parameter to a list of versions 2.7 and 3.5. If we wanted to add additional versions it is as simple as appending that version to this list; such as in the example below.

language: python
python:
  - 2.7
  - 3.2
  - 3.5

In the above we simply added version 3.2 by adding it to the list.

Required services

As we will be building a Docker image we will also need Docker installed and the Docker service running within our build environment. We can accomplish this by using the services parameter to tell Travis CI to install Docker and start the service.

services:
  - docker

Like the python parameter, the services parameter is a list of services to be started within our environment. That means we can also include additional services by appending to the list. If we needed Docker and Redis, for example, we could simply append a line after specifying the Docker service.

services:
  - docker
  - redis-server

In this example we do not require any service other than Docker; however, it is useful to know that Travis CI has quite a few services available.

Performing the build

Now that we have defined the build environment we want, we can execute the build steps. Since we wish to validate a Docker build, we essentially need to perform two steps: building a Docker container image and starting a container based on that image.

We can perform these steps by simply specifying the same docker commands we used in the previous article.

install:
  - docker build -t blog .
  - docker run -d -p 127.0.0.1:80:80 --name blog blog

In the above we can see that the two docker commands are specified under the install parameter. This parameter is actually a defined build step for Travis CI.

Travis CI has multiple predefined steps used during builds which can be called out via the .travis.yml file. In the above we are defining that these two docker commands are the steps necessary to install this application.

Testing the build

Travis CI is not just a simple build tool; it is a Continuous Integration tool, which means its primary function is testing. That means we need to add a test to our build; for now we can simply verify that the Docker container is running, which can be done with a simple docker ps command.

script:
  - docker ps | grep -q blog

In the above we defined our basic test using the script parameter. This is yet another build step which is used to call test cases. The script step is a required step; if omitted, the build will fail.

Pushing to GitHub

With the steps above defined we now have a minimal build that we can send to Travis CI; to accomplish this, we simply push our changes to GitHub.

$ git add .travis.yml
$ git commit -m "Adding docker build steps to Travis"
[building-docker-with-travis 2ad7a43] Adding docker build steps to Travis
 1 file changed, 10 insertions(+), 32 deletions(-)
 rewrite .travis.yml (72%)
$ git push origin building-docker-with-travis

During the sign up process for Travis CI, you are asked to link your repositories with Travis CI. This allows it to monitor the repository for any changes. When changes occur, Travis CI will automatically pull down those changes and execute the steps defined within the .travis.yml file. Which in this case, means executing our Docker build and verifying it worked.

As we just pushed new changes to our repository, Travis CI should have detected those changes. We can go to Travis CI to verify whether those changes resulted in a successful build or not.

Travis CI will show a build log for every build; at the end of the log for this specific build we can see that the build was successful.

Removing intermediate container c991de57cced
Successfully built 45e8fb68a440
$ docker run -d -p 127.0.0.1:80:80 --name blog blog
45fe9081a7af138da991bb9e52852feec414b8e33ba2007968853da9803b1d96
$ docker ps | grep -q blog
The command "docker ps | grep -q blog" exited with 0.
Done. Your build exited with 0.

One important thing to know about Travis CI is that most build steps require commands to execute successfully in order for the build to be marked as successful.

The script and install steps are two examples of this: if any of our commands failed and did not return a 0 exit code, then the whole build would be marked as failed.

If this happens during the install step, the build will be stopped at the exact step that failed. With the script step, however, the build will not be stopped. The idea behind this is that if an install step fails, the build will absolutely not work. However, if a single test case fails, only a portion is broken. By showing all testing results, users will be able to identify what is broken vs. what is working as expected.

Adding additional tests

While we now have Travis CI able to verify the Docker build is successful, there are still other ways we could inadvertently break this blog. For example, we could make a change that prevents the static site generator from properly generating pages; this would break the site within the container but not necessarily the container itself. To prevent a scenario like this, we can introduce some additional testing.

Within our repository there is a directory called tests; this directory contains three more directories: unit, integration and functional. These directories contain various automated tests for this environment. The first two types of tests, unit and integration, are designed to specifically test the code within the hamerkop.py application. While useful, these tests are not going to help test the Docker container. However, the last directory, functional, contains automated tests that can be used to test the running Docker container.

$ ls -la tests/functional/
total 24
drwxr-xr-x 1 vagrant vagrant 272 Jan 1 03:22 .
drwxr-xr-x 1 vagrant vagrant 170 Dec 31 22:11 ..
-rw-r--r-- 1 vagrant vagrant 2236 Jan 1 03:02 test_broken_links.py
-rw-r--r-- 1 vagrant vagrant 2155 Jan 1 03:22 test_content.py
-rw-r--r-- 1 vagrant vagrant 1072 Jan 1 03:13 test_rss.py

These tests are designed to connect to the running Docker container and validate the static site's content.

For example, test_broken_links.py will crawl the website being served by the Docker container and check the HTTP status code returned when requesting each page; if the return code is anything but 200 OK, the test will fail. The test_content.py test will also crawl the site and validate that the content returned matches a certain pattern; if it does not, then again the test will fail.
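
Purely as an illustration of the kind of check test_broken_links.py performs, a minimal version could look something like the sketch below. This is not the repository's actual test code; the page paths are hypothetical, and it assumes the container is serving the site on 127.0.0.1:80 and that the requests library is available (it is installed in the before_script step shown later).

import requests

# Hypothetical list of pages to check; a real test would crawl the site.
PAGES = ["/", "/archive.html", "/rss.xml"]

def check_pages(base_url="http://127.0.0.1:80"):
    failures = []
    for path in PAGES:
        resp = requests.get(base_url + path)
        # Anything other than 200 OK counts as a broken page.
        if resp.status_code != 200:
            failures.append((path, resp.status_code))
    return failures

if __name__ == "__main__":
    broken = check_pages()
    if broken:
        raise SystemExit("Broken pages found: %s" % broken)
    print("All pages returned 200 OK")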

What is useful about these tests is that, even though the static site is running within a Docker container, we are still able to test the site's functionality. If we were to add these tests to the Travis CI configuration, they would also be executed for every code change, providing even more confidence about each change being made.

Installing test requirements in before_script

To run these tests via Travis CI we will simply need to add them to the script section as we did with the docker ps command. However, before they can be executed these tests require several Python libraries to be installed. To install these libraries we can add the installation steps into the before_script build step.

before_script:
  - pip install -r requirements.txt
  - pip install mock
  - pip install requests
  - pip install feedparser

The before_script build step is performed before the script step but after the install step, making before_script the perfect location for steps that are required by the script commands but are not part of the overall installation. Since the before_script step is not executing test cases, it, like the install step, requires all commands to succeed before moving to the script build step. If a command within the before_script build step fails, the build will be stopped.

Running additional tests

With the required Python libraries installed we can add the test execution to the script build step.

script:
  - docker ps | grep -q blog
  - python tests.py

These tests can be launched by executing tests.py, which will run all 3 automated test suites: unit, integration and functional.
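
The actual tests.py in the repository may be more involved, but as a rough sketch, under the assumption that the suites live in tests/unit, tests/integration and tests/functional and contain unittest-style test cases, a runner like it could be written as follows.

import sys
import unittest

def run_suite(path):
    # Discover and run every test_*.py file below the given directory.
    suite = unittest.defaultTestLoader.discover(path)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    # Run all three suites so every result is reported, then exit non-zero
    # on any failure so Travis CI marks the build as failed.
    results = [run_suite("tests/" + name)
               for name in ("unit", "integration", "functional")]
    sys.exit(0 if all(results) else 1)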

Testing the build again

With the tests added we can once again push our changes to GitHub.

$ git add .travis.yml
$ git commit -m "Adding tests.py execution"
[building-docker-with-travis 99c4587] Adding tests.py execution
 1 file changed, 14 insertions(+)
$ git push origin building-docker-with-travis

After pushing our updates to the repository we can sit back and wait for Travis to build and test our application.

######################################################################
Test Runner: Functional tests
######################################################################
runTest (test_rss.VerifyRSS)
Execute recursive request ... ok
runTest (test_broken_links.CrawlSite)
Execute recursive request ... ok
runTest (test_content.CrawlSite)
Execute recursive request ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.768s

OK

Once the build completes we will see the above message in the build log, showing that Travis CI has in fact executed our tests.

Summary

With our builds successfully processing let's take a final look at our .travis.yml file.

language: python
python:
  - 2.7
services:
  - docker
install:
  - docker build -t blog .
  - docker run -d -p 127.0.0.1:80:80 --name blog blog
before_script:
  - pip install -r requirements.txt
  - pip install mock
  - pip install requests
  - pip install feedparser
script:
  - docker ps | grep -q blog
  - python tests.py

In the above we can see our Travis CI configuration consists of 3 build steps: install, before_script and script. The install step is used to build and start our Docker container, the before_script step is simply used to install the libraries required by the test scripts, and the script step is used to execute our test scripts.

Overall, this setup is pretty simple and something we could test manually outside of Travis CI. The benefit of having Travis CI though is that all of these steps are performed for every change, no matter how minor they are.

Also since we are using GitHub, this means Travis CI will append build status notifications on every pull request as well, like this one for example. With these types of notifications I can merge pull requests into the master branch with the confidence that they will not break production.

Building a Continuous Integration and Deployment pipeline

In last month's article we explored using Docker to package and distribute the application running this blog. In this article, we have discussed leveraging Travis CI to automatically build that Docker image as well as performing functional tests against it.

In next month's article, we are going to take this setup one step further by automatically deploying these changes to multiple servers using SaltStack. By the end of the next article we will have a full Continuous Integration and Deployment work-flow defined which will allow changes to be tested and deployed to production without human interaction.


Posted by Benjamin Cane
Categories: FLOSS Project Planets

Best Laid Plans

LinuxPlanet - Thu, 2016-01-07 19:38

Hello all, we meet again. In my last post I said I’d be writing about technology here in the future rather than my ongoing health saga. As the title of this post suggests though, “the best laid plans of mice and men often go awry”. Here’s some info on the quote. I was recently informed by The Christie that they are postponing my operation with less than a week to go. I was due to be admitted on Jan 13th and now that’s been put back to Feb 10th, with the actual operation to take place on Feb 11th.

It’s only a slip of 4 weeks I know and it’s not the end of the world but it is frustrating when I just want to get this done so I can recover, get back to work and hopefully get on with my life. Every extra week adds up and it can start to feel like time is dragging on but I’ll get there.

So what’s the reason for the delay? An emergency case they need to deal with first apparently. With such a rare and specialised surgical procedure I suppose there was always a danger of delay. In some ways I should be glad that my case isn’t deemed as critically urgent and they feel I can wait 4 more weeks. There must be other patients in a much worse condition. Every cloud has a silver lining and all that.

Only 500 of these operations have been done at The Christie in the 10 years since it was first pioneered, so that illustrates how rare it is. Right now I can’t say I feel fantastic but I’m not in pain and I am managing to do some things to keep busy. I guess it’s a case of hurry up and wait. So I have to be a patient patient.

The Google Pixel C

However, in other (nicer) news I just got a Google Pixel C last week and I’m actually writing this on it right now. It’s the new flagship 10.2-inch Android tablet from Google and the first to be 100% designed by them, right down to the hardware. The Pixel team developed it alongside their fancy but rather expensive Chromebooks. It has Android 6.0.1 installed at the moment and is effectively a Nexus device in all but name. That means it will be first to receive new versions of Android, and Android N is due in a few months. I expect it will be largely tailored to this device and I expect good things. I needed a new tablet but I also wanted to get something that could replace most of the functions I’d normally do with the laptop. In the interests of fairness I looked at the Microsoft Surface, Apple iPad Pro and a variety of convertible laptops, including the ASUS Transformer series. I decided this was by far the best option right now. It’s something of a personal experiment to see whether a good tablet like this (with a Bluetooth mouse and keyboard) can really cut it as a laptop replacement. I am also helped in this project by the work I’ve done on my server beefing up hardware and configuring KVM so I can use a remote desktop. I’ll write up some proper thoughts on all this to share with you very soon. At least I have a little more time to do that now before I head off to the hospital.

Take care out there, Happy New Year and I’ll speak to you again soon,

Dan

Categories: FLOSS Project Planets

Happy New Year & Browser and OS stats for 2015

LinuxPlanet - Wed, 2016-01-06 12:46

I’d like to wish everyone a happy new year on behalf of the entire LQ team. 2015 has been another great year for LQ and we have quite a few exciting developments in store for 2016, including a major code update that is now *way* overdue. As has become tradition, here are the browser and OS statistics for the main LQ site for all of 2015 (2014 stats for comparison).

Browsers

Chrome 47.37%
Firefox 37.81%
Internet Explorer 6.86%
Safari 4.90%
Opera 1.11%
Edge 0.42%

For the first time in many years, browser stats have not changed in any meaningful way from the previous year. Chrome is very slightly up, and Firefox and IE are very slightly down (although Edge does make its initial appearance in the chart).

Operating Systems

Windows 52.42%
Linux 31.45%
Macintosh 10.75%
Android 3.01%
iOS 1.53%

Similar to the browser, OS shares have remained quite stable over the last year as well. 2015 seems to have been a year of stability in both markets, at least for the technical audience that comprises LinuxQuestions.org. Note that Chrome OS has the highest percentage of any OS not to make the chart.

I’d also like to take this time to thank each and every LQ member. You are what make the site great; without you, we simply wouldn’t exist. I’d like to once again thank the LQ mod team, whose continued dedication ensures that things run as smoothly as they do. Don’t forget to vote in the LinuxQuestions.org Members Choice Awards, which recently opened.

–jeremy


Categories: FLOSS Project Planets

Checking for data validity in libreoffice spreadsheets

LinuxPlanet - Wed, 2016-01-06 11:39
When entering data into a spreadsheet, we might at times want to ensure that the data entered lies within a specified range or is equal to a certain number or value. To ensure this we can use the data validity option in a LibreOffice spreadsheet.

To enable data validity, select the range of cells to which the validity needs to be applied, then choose the Validity option from Data -> Validity.



This will pop up a dialog as shown below.



In the Criteria tab, the "Allow" option lets us choose what type of values are valid. The "Data" option lets us choose how the value should be constrained, i.e. whether it should be greater than or less than a number etc., and the text field allows us to enter the maximum value to be allowed in the cells.

Let us say we want to allow "Whole numbers" which are "less than" 100; the settings will then be as shown below.



Now whenever we enter a value equal to or greater than 100 in the selected range of cells, we will get an error as shown below.



We can add a message next to the cells that have data validity enabled by selecting the Input Help tab and entering the message we wish to display, as shown below.






Categories: FLOSS Project Planets

Sun, Oracle, Android, Google and JDK Copyleft FUD

LinuxPlanet - Wed, 2016-01-06 00:00

I have probably spent more time dealing with the implications and real-world scenarios of copyleft in the embedded device space than anyone. I'm one of a very few people charged with the task of enforcing the GPL for Linux, and it's been well-known for a decade that GPL violations on Linux occur most often in embedded devices such as mobile hand-held computers (aka “phones”) and other such devices.

This experience has left me wondering if I should laugh or cry at the news coverage and pundit FUD that has quickly come forth from Google's decision to move from the Apache-licensed Java implementation to the JDK available from Oracle.

As some smart commenters like Bob Lee have said, there is already at least one essential part of Android, namely Linux itself, licensed as pure GPL. I find it both amusing and maddening that respondents use widespread GPL violation by chip manufacturers as some sort of justification for why Linux is acceptable, but Oracle's JDK is not. Eventually, (slowly but surely) GPL enforcement will adjudicate the widespread problem of poor Linux license compliance — one way or the other. But, that issue is beside the point when we talk of the licenses of code running in userspace. The real issue with that is two-fold.

First, if you think the ecosystem shall collapse because “pure GPL has moved up the Android stack”, and “it will soon virally infect everyone” with copyleft (as you anti-copyleft folks love to say), your fears are just unfounded. Those of us who worked in the early days of reimplementing Java in copyleft communities thought carefully about just this situation. At the time, remember, Sun's Java was completely proprietary, and our goal was to wean developers off Sun's implementation to use a Free Software one. We knew, just as the early GNU developers knew with libc, that a fully copylefted implementation would gain few adopters. So, the earliest copyleft versions of Java were under an extremely weak copyleft called the “GPL plus the Classpath exception”. Personally, I was involved as a volunteer in the early days of the Classpath community; I helped name the project and design the Classpath exception. (At the time, I proposed we call it the “Least GPL” since the Classpath exception carves so many holes in strong copyleft that it's less of a copyleft than even the Lesser GPL and probably the Mozilla Public License, too!)

But, what does the Classpath exception from GNU's implementation have to do with Oracle's JDK? Well, Sun, before Oracle's acquisition, sought to collaborate with the Classpath community. Those of us who helped start Classpath were excited to see the original proprietary vendor seek to release their own formerly proprietary code and want to merge some of it with the community that had originally formed to replace their code with a liberated alternative.

Sun thus released much of the JDK under “GPL with Classpath exception”. The reasons were clearly explained (URL linked is an archived version of what once appeared on Sun's website) on their collaboration website for all to see. You see the outcome of that in many files in the now-infamous commit from last week. I strongly suspect Google's lawyers vetted what was merged to make sure that the Android Java SDK fully gets the appropriate advantages of the Classpath exception.

So, how is incorporating Oracle's GPL-plus-Classpath-exception'd JDK different from having an Apache-licensed Java userspace? It's not that much different! Android redistributors already have strong copyleft obligations in kernel space, and, remember that Webkit is LGPL'd; there's also already weak copyleft compliance obligations floating around Android, too. So, if a redistributor is already meeting those, it's not much more work to meet the even weaker requirements now added to the incorporated JDK code. I urge you to ask anyone who says that this change will have any serious impact on licensing obligations and analysis for Android redistributors to please prove their claim with an actual example of a piece of code added in that commit under pure GPL that will combine in some way with Android userspace applications. I admit I haven't dug through the commit to prove the negative, but I'd be surprised if some Google engineers didn't do that work before the commit happened.

You may now ask yourself whether there is anything of note here at all. There's certainly less here than most are saying about it. In fact, a Java industry analyst (with more than a decade of experience in the area) told me that he believed the decision was primarily technical. Authors of userspace applications on Android apparently want a newer Java language implementation, and given that a reasonably licensed Free Software one was available, Google made a technical switch to the superior codebase: it gives API users what they want technically while also reducing Google's maintenance burden. This seems very reasonable. It's far less shocking than what the pundits say, but technical reasons probably were the primary impetus.

So, for Android redistributors, are there any actual licensing risks in this change? The answer is undoubtedly yes, but the situation is quite nuanced, and again, the problem is not as bad as the anti-copyleft crowd says. The Classpath exception grants very wide permissions. Nevertheless, some basic copyleft obligations can remain, albeit in a very weak-copyleft manner. It is possible to violate that weak copyleft, particularly if you don't understand the licensing of all the third-party materials combined with the JDK. Still, since you already have to comply with Linux's license to redistribute Android, complying with the Classpath-exception'd code should require only a simple afterthought.

Meanwhile, Sun's (now Oracle's) JDK is likely nearly 100% copyright-held by Oracle. I've written before about the dangers of consolidating a copylefted codebase's copyright in a single for-profit, commercial entity. I've even pointed out that Oracle specifically is very dangerous in its methods of using copyleft as an aggression.

Copyleft is a tool, not a moral principle, and tools can be used incorrectly, with deleterious effect. As an analogy: I'm constantly bending paper clips to press those little buttons on electronic devices, and afterwards the tool no longer does what it was intended for (holding papers together); it's bent out of shape and only good for the new, dubious purpose, which a different tool would serve better. (But the paper clip was already right there on my desk, you see…)

Similarly, while organizations like Conservancy use copyleft in a principled way to fight for software freedom, others use it in a manipulative way its drafters never intended: to extract revenue with no intention of standing up for users' rights. We already know Oracle likes to use the GPL this way, and I really doubt that Oracle will sign a pledge to follow Conservancy's and the FSF's principles of GPL enforcement. Thus, we should expect Oracle to enforce aggressively against downstream Android manufacturers who fail to comply with the “GPL plus Classpath exception”. Of course, Conservancy's GPL Compliance Project for Linux developers may also enforce, if the violation extends to Linux as well. But Conservancy will follow those principles and prioritize compliance and community goodwill; Oracle won't. Saying that this means Oracle has “its hooks” in Android, though, makes no sense. Oracle has as many hooks as any of the other thousands of copyright holders of copylefted material in Android. If anything, this is just another indication that we need more of those copyright holders to agree to those principles, and that we should shun codebases where only one for-profit company holds the copyright.

Thus, my conclusion about this situation is quite different from that of the pundits and the link-bait news articles. I speculate that Google weighed the technical decision against its own copyleft compliance processes, determined that it would succeed in its compliance efforts on Android and thus face no new compliance problems, and concluded that it could therefore easily benefit technically from the better code. However, for those many downstream redistributors of Android who already fail at license compliance, the ironic outcome is that you may finally find out how friendly and reasonable Conservancy's Linux GPL enforcement truly is, once you compare it with GPL enforcement from a company like Oracle, which holds avarice, not software freedom, as its primary moral principle.

Finally, the bigger problem in Android with respect to software freedom is that the GPL is widely violated on the Linux side of Android devices. If this change causes Android redistributors to reevaluate their willful ignorance of the GPL's requirements, then some good may come of it all, despite Oracle's expected nastiness.

Categories: FLOSS Project Planets

A Requiem for Ian Murdock

LinuxPlanet - Wed, 2015-12-30 19:00

[ This post was crossposted on Conservancy's website. ]

I first met Ian Murdock gathered around a table at some bar, somewhere, after some conference in the late 1990s. Progeny Linux Systems' founding was soon to be announced, and Ian had invited a group from the Debian BoF along to hear about “something interesting”; the post-BoF meetup was actually a briefing on his plans for Progeny.

Many of the details (such as which conference and where on the planet it was), I've forgotten, but I've never forgotten Ian gathering us around, bending my ear to hear in the loud bar, and getting one of my first insider scoops on something big that was about to happen in Free Software. Ian was truly famous in my world; I felt like I'd won the jackpot of meeting a rock star.

More recently, I gave a keynote at DebConf this year and talked about how long I've used Debian and how much it has meant to me. I've since then talked with many people about how the Debian community is rapidly becoming a unicorn among Free Software projects — one of the last true community-driven, non-commercial projects.

A culture like that needs a huge group of people to bring it to fruition, and there are no specific actions that can guarantee the creation of a multi-generational project like Debian. But there are lots of ways to make the wrong decisions early. As near as I can tell, Ian artfully avoided the project-ending mistakes; he got the early decisions right.

Ian cared about Free Software and wanted to make something useful for the community. For a time in Debian's earliest history, he teamed up with the FSF to help Debian with its non-profit connections and roots. And, when the time came, he did what all great leaders do: he stepped aside and let a democratic structure form. He paved the way for the creation of Debian's strong Constitutional and democratic governance. Debian has had many great leaders in its long history, but Ian was (effectively) the first DPL, and he chose not to be a BDFL.

The Free Software community remains relatively young, so the loss of community members jars us in the manner that uniquely unsettles the young. In other words, anyone we lose now, as we've lost Ian this week, has died too young. It's a cliché, but I'll say it anyway: we should remind ourselves to engage with those around us every day, and to welcome new people gladly. When Ian invited me around that table, I was truly nobody: he'd never met me before — indeed, no one in the Free Software community knew who I was then. Yet the mere fact that I had stayed late at a conference to attend the Debian BoF was enough for him — enough for him to invite me to hear the secret plans of his new company. Ian's trust — his welcoming nature — remains unforgettable to me. I hope to watch that nature flourish in our community for the remainder of all our lives.

Categories: FLOSS Project Planets

I’ve Got A Date

LinuxPlanet - Fri, 2015-12-25 17:31

A Date At Last

Hello all, I have some exciting news. It’s been a long time since I’ve had cause to use this sentence, but… I’ve got a date! Sadly, in this context I’m only referring to a date for my upcoming surgery. I’ll be going under the knife at The Christie in Manchester on January 14th 2016. Not far away.

If you’ve read my last 2 or 3 posts you’ll know that I’ve had some serious health problems in recent months. After perplexing a good number of medical professionals, I was finally diagnosed with a rare condition known as Pseudomyxoma Peritonei, or PMP for short. Sadly not PIMP, which would have sounded much cooler. The treatment involves cutting out all the affected areas and cleaning them up with a heated chemotherapy liquid. It’ll be a pretty long surgical procedure and take months to recover from, but the prognosis is good. I will have to be scanned yearly to ensure the tumours don’t return, but with a 75% chance of no recurrence in 10 years it’s well worth it, I’d say. I won’t go on at length; I just wanted to share the date for those people who’ve been asking.

I’m looking forward to Christmas and New Year, and I can’t wait to get this surgery out of the way and begin down the road to recovery: get back to work and all the other things I used to do. I went to see Star Wars last night, so at least I was able to do that before my op. I’ve also done some techy things lately that I’d like to write about; I’ll share those with you soon. I don’t want to spend all my time on medical talk.

I wish you all a Merry Christmas and a Happy New Year! I’ll report in again soon.

Dan

Categories: FLOSS Project Planets