FSF Blogs: Free Software Directory meeting recap for November 18th, 2016

GNU Planet! - Wed, 2016-11-23 09:46

Every week free software activists from around the world come together in #fsf on irc.freenode.org to help improve the Free Software Directory. This recaps the work we accomplished at the Friday, November 18th, 2016 meeting.

Last week started off with a theme of outreach to maintainers of free software packages. mangeurdenuage and donaldr3 put together a template that volunteers can use to contact package maintainers and get them interested in keeping their own entries up to date. mangeurdenuage then went on to contact many maintainers. We hope to see everyone they contacted at upcoming meetings, or at least in the revision log.

While the theme was maintainer outreach, there was still a lot of excitement about the previous week's discussion on how to deal with certain edge cases. David_Hedlund and IanKelling worked out that using categories to tag potential issues was probably the best system, as tagged categories could still flag an entry with text explaining the potential issue. Many different categories of issues were discussed, with David_Hedlund implementing some of them for review. Those categories, their text, and their implementation are still being reviewed and iterated on. That work will continue at this upcoming meeting as well, so be sure to join us in directing the future of the directory.

If you would like to help update the directory, meet with us every Friday in #fsf on irc.freenode.org from 1 p.m. to 4 p.m. EST (18:00 to 21:00 UTC).

Categories: FLOSS Project Planets

FSF Blogs: Tear the wrapping paper off the 2016 Ethical Tech Giving Guide

GNU Planet! - Wed, 2016-11-23 08:00

An activist sharing the Giving Guide at a Giveaway.

As software permeates more and more aspects of society, the FSF must expand our work to protect and extend computer user freedom. On Monday, we launched our yearly fundraiser with the goal of welcoming 500 new members and raising $450,000 before December 31st. Please support the work that we do: make a donation or -- better yet -- join as a member today.

Electronics are popular gifts for the holidays, but people often overlook the restrictions that manufacturers slip under the wrapping paper. From surveillance to harsh rules about copying and sharing, some gifts take more than they give.

The good news is that there are ethical companies making better devices that your loved ones can enjoy with freedom and privacy. Today, we're launching the 2016 Giving Guide, your key to smarter and more ethical tech gifts.

Explore the Giving Guide online and in print. To sweeten the deal, many of the recommended gifts are specially discounted for the holiday season.

If you appreciate the guide, we invite you to spread the word about it. Here's what you can do:

  • Lead a Giving Guide Giveaway at a local shopping area to show that conscientious giving applies to computers and software. We've prepared a primer to answer common questions and help make your Giveaway a success.

  • Share the guide on social media with the hashtag #givefreely and comment about it on gift or tech-related online articles.

  • Email it to your family and friends – heck, you might even get a gift out of it!

Some translations are already available, but we need more volunteers to port the Giving Guide to their own languages. Check out the primer page for translation instructions.

Millions of people will open tech gifts this holiday season, and most of them will be walled gardens encumbered with nonfree software and DRM. But things are changing. With each year, our message spreads further and more people start thinking critically about technology and voting with their wallets. Join us in fueling the movement for ethical tech – use and spread this guide.

Categories: FLOSS Project Planets

GNUnet News: YBTI / We Fix the Net session at 33c3

GNU Planet! - Tue, 2016-11-22 10:22

At 33c3 the GNUnet & pEp assembly will host a "YBTI/We Fix the Net" session with a series of talks on developing secure alternatives to current Internet protocols. We might hold an organized discussion or panel as well.

More details will be posted here closer to the congress. For now, please contact us at wefixthenet@gnunet.org if you would like to present your work or wish to organize a panel or other activity.

Categories: FLOSS Project Planets

parallel @ Savannah: GNU Parallel 20161122 ('Trump') released [stable]

GNU Planet! - Mon, 2016-11-21 17:30

GNU Parallel 20161122 ('Trump') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/

No new functionality was introduced so this is a good candidate for a stable release.

Haiku of the month:

President elect:
Make parallel great again
Use GNU Parallel
-- Ole Tange

New in this release:

  • --record-env can now be used with env_parallel for bash, ksh, pdksh, and zsh.
  • Bug fixes and man page updates.

GNU Parallel - For people who live life in the parallel lane.

About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.
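As a sketch of that claim (the job names here are made up), the same work can be expressed as a shell loop, with xargs, or with GNU Parallel; the GNU Parallel line is shown as a comment since it requires the tool to be installed:

```shell
# A shell loop runs the jobs one at a time:
for f in a b c; do echo "job $f"; done

# xargs expresses the same thing, also sequentially with -n1:
printf 'a\nb\nc\n' | xargs -n1 -I{} echo "job {}"

# GNU Parallel keeps the xargs option style but runs the jobs
# concurrently, by default one per CPU core:
#   printf 'a\nb\nc\n' | parallel echo "job {}"
```

In real use the echo would be replaced by gzip, wget, or any other command you want run once per input line.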

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with: (wget -O - pi.dk/3 || curl pi.dk/3/) | bash

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://www.gnu.org/s/parallel/merchandise.html
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference

If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

  • (Have your company) donate to FSF https://my.fsf.org/donate/

About GNU SQL

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.
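As an illustrative sketch (the user, host, and database names below are made up; see the sql man page for the authoritative grammar), a DBURL bundles the login information into a single argument:

```
# General form:
#   [sql:]vendor://[user[:password]@][host][:port]/[database]

# Run a query against a MySQL database:
sql mysql://admin:secret@db.example.com/webstore 'SELECT count(*) FROM orders;'

# With no command given, drop into PostgreSQL's interactive shell:
sql postgresql://admin@localhost/testdb
```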

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

About GNU Niceload

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
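For example (a sketch based on my reading of the niceload man page; the jobs are hypothetical), a heavy batch job can be throttled against the load average like this:

```
# Soft limit: suspend the backup whenever the load average goes above 4,
# otherwise let it run in short bursts:
niceload -l 4 tar czf /backup/home.tar.gz /home

# Hard limit: only let updatedb run while the system is below the limit:
niceload --hard -l 4 updatedb
```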

Categories: FLOSS Project Planets

dico @ Savannah: Version 2.4

GNU Planet! - Mon, 2016-11-21 09:21

Version 2.4 is available for download (on both main GNU FTP and Puszcza FTP). New in this release:

  • dico accepts a UNIX socket name as an argument to the open command
  • Fix coredump in gcide module
  • Update translations
Categories: FLOSS Project Planets

FSF Events: Richard Stallman to speak in Rennes, France

GNU Planet! - Thu, 2016-11-17 13:04

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Speech topic, exact time, and location to be determined.

Please fill out our contact form, so that we can contact you about future events in and around Rennes.
Categories: FLOSS Project Planets

FSF Blogs: Friday Maintainers Outreach Directory IRC meetup: November 18th starting at 1 p.m. EST/18:00 UTC

GNU Planet! - Thu, 2016-11-17 11:56

Participate in supporting the FSD by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.

Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the FSD contains a wealth of useful information, from basic categories and descriptions to detailed information about version control, IRC channels, documentation, and licensing, all carefully checked by FSF staff and trained volunteers.

While the FSD has been a great resource to the world over the past decade, it has the potential to be a resource of even greater value. But it needs your help!

This week we're focusing on reaching out to maintainers of packages to help keep their entries up to date. Plenty of maintainers like to add their package to the directory to take advantage of the extra publicity it can bring to a project. But keeping an entry up to date has the most impact, letting users know that the package is still under active development. So this week we'll be reaching out to maintainers to help get their directory entries looking the best they can.

If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today! There are also weekly FSD Meetings pages that everyone is welcome to contribute to before, during, and after each meeting.

Categories: FLOSS Project Planets

FSF Events: Richard Stallman to speak in Kalamazoo, MI

GNU Planet! - Thu, 2016-11-17 11:34

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Speech topic and start time to be determined.

Location: Kalamazoo College, Kalamazoo, MI (detailed location to be determined)

Please fill out our contact form, so that we can contact you about future events in and around Kalamazoo.

Categories: FLOSS Project Planets

FSF Events: Richard Stallman to speak in Grand Rapids, MI

GNU Planet! - Thu, 2016-11-17 11:20
Richard Stallman will speak about the goals and philosophy of the Free Software Movement, and the status and history of the GNU operating system, which in combination with the kernel Linux is now used by tens of millions of users world-wide.

Richard Stallman's speech will be nontechnical, admission is gratis, and the public is encouraged to attend.

Location: Calvin College, Grand Rapids, MI (exact location to be determined)

Please fill out our contact form, so that we can contact you about future events in and around Grand Rapids.

Categories: FLOSS Project Planets

FSF Events: Richard Stallman - "Free Software - Essential for Your Freedom" (Detroit, MI)

GNU Planet! - Thu, 2016-11-17 11:05

This speech by Richard Stallman will be nontechnical, admission is gratis, and the public is encouraged to attend.

Location: Wayne State University, Detroit, MI (exact location to be determined)

Please fill out our contact form, so that we can contact you about future events in and around Detroit.

Categories: FLOSS Project Planets

FSF Blogs: Free Software Directory meeting recap for November 11th, 2016

GNU Planet! - Thu, 2016-11-17 10:31

Every week free software activists from around the world come together in #fsf on irc.freenode.org to help improve the Free Software Directory. This recaps the work we accomplished at the Friday, November 11th, 2016 meeting.

Last week was a live meeting at the Seattle GNU/Linux Conference. Iankelling and donaldr3 were joined by helpful attendees in checking whether their favorite free software packages were already included in the directory. The directory is quite robust these days, and most packages suggested were already included, but a few new entries were added. It was also a great opportunity to meet with people who weren't already involved in the directory and to help them get involved by learning about the project.

On the channel, there was also a long discussion about updating the requirements for the directory. The channel discussed two different scenarios, which we have named 'bait and surrender' and 'freedom betrayed'. In 'bait and surrender', a developer offers an inferior free software version of their work in an attempt to get users to surrender their freedom and switch to a more fully featured proprietary version. In 'freedom betrayed', a formerly free software project changes to a proprietary license. In both cases, we want to make clear to users that, while there may be a free software version available, they have to be wary of the project and understand that there are proprietary versions. The channel came up with a proposal to tag these different situations, which is now being discussed on the mailing list.

The meeting concluded with a decision that the next meeting should focus on contacting maintainers to help them include their packages in the directory or keep their entries up to date.

If you would like to help update the directory, meet with us every Friday in #fsf on irc.freenode.org from 1 p.m. to 4 p.m. EST (18:00 to 21:00 UTC).

Categories: FLOSS Project Planets

kenfallon.com 2016-04-12 01:50:51

LinuxPlanet - Tue, 2016-04-12 03:50

I am trying to mount a cifs share (aka samba/smb/windows share) from a Debian server so I can access log files when needed. To do this automatically I create two mounts: one which is read-only and automatically mounted, and another which is read/write and not mounted automatically. The /etc/fstab file looks a bit like this:

// /mnt/server-d cifs auto,ro,credentials=/root/.ssh/server.credentials,domain= 0 0
// /mnt/server-d-rw cifs noauto,rw,credentials=/root/.ssh/server.credentials,domain= 0 0

To mount all the drives with "auto" in the /etc/fstab file you can use the "-a, --all" option. From the man page: "Mount all filesystems (of the given types) mentioned in fstab (except for those whose line contains the noauto keyword). The filesystems are mounted following their order in fstab."
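For reference, each cifs entry follows fstab's usual six-field layout; the server and share names below are placeholders standing in for the ones elided above:

```
# <source>           <mount point>  <type> <options>                                           <dump> <pass>
//fileserver/share   /mnt/server-d  cifs   auto,ro,credentials=/root/.ssh/server.credentials   0      0
```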

However when I ran the command I get:

root@server:~# mount -a
mount: wrong fs type, bad option, bad superblock on //,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Well it turns out that Debian no longer ships cifs support by default. It can be added easily enough using the command:

root@server:~# aptitude install cifs-utils

Now mount -a works fine:

root@server:~# mount -a
root@server:~#
Categories: FLOSS Project Planets

After a long time I’m back

LinuxPlanet - Fri, 2016-04-01 15:03

After a long time I’m back and I will continue writing at this blog, sorry for the waiting.

Categories: FLOSS Project Planets

Ambient Weather WS-1001-Wifi Observer Review

LinuxPlanet - Thu, 2016-03-24 11:28

In the most recent episode of Bad Voltage, I reviewed the Ambient Weather WS-1001-Wifi Observer Personal Weather Station. Tune in to listen to the ensuing discussion and the rest of the show.

Regular listeners will know I’m an avid runner and sports fan. Add in the fact that I live in a city where weather can change in an instant and a personal weather station was irresistible to the tech and data enthusiast inside me. After doing a bit of research, I decided on the Ambient Weather WS-1001-Wifi Observer. While it only needs to be performed once, I should note that setup is fairly involved. The product comes with three components: An outdoor sensor array which should be mounted on a pole, chimney or other suitable area, a small indoor sensor and an LCD control panel/display console. The first step is to mount the all-in-one outdoor sensor, which remains powered using a solar panel and rechargeable batteries. It measures and transmits outdoor temperature, humidity, wind speed, wind direction, rainfall, and both UV and solar radiation. Next, mount the indoor sensor which measures and transmits indoor temperature, humidity and barometric pressure. Finally, plug in the control panel and complete the setup procedure which will walk you through configuring your wifi network, setting up NTP, syncing the two sensors and picking your units of measurement. Note that all three devices must be within 100-330 feet of each other, depending on layout and what materials are between them.

With everything set up, data will now start collecting on your display console, updated every 14 seconds. In addition to all the data previously mentioned, you will also see wind gusts, wind chill, sunrise, sunset, phases of the moon, dew point, rainfall rate, and some historical graphs. There is a ton of data presented, and while the dense layout works for me, it has been described as unintuitive and overwhelming by some.

While seeing the data in real-time is interesting, you’ll likely also want to see long term trends and historical data. While the device can export all data to an SD card in CSV format, it becomes much more compelling when you connect it with the Weather Underground personal weather station network. Once connected, the unit becomes a public weather station that also feeds data to the Wunderground prediction model. That means you’ll be helping everyone get more accurate data for your specific area and better forecasts for your general area. You can even see how many people are using your PWS to get their weather report. There’s also a very slick Wunderstation app that is a great replacement for the somewhat antiquated display console, although unfortunately it’s currently only available for the iPad.

So, what’s the Bad Voltage verdict? At $289 the Ambient Weather WS-1001-WIFI OBSERVER isn’t cheap. In an era of touchscreens and sleek design, it’s definitely not going to win any design awards. That said, it’s a durable well built device that transmits and displays a huge amount of data. The Wunderground integration is seamless and knowing that you’re improving the predictive model for your neighborhood is surprisingly satisfying. If you’re a weather data junkie, this is a great device for you.


Categories: FLOSS Project Planets

PGDay Asia and FOSS Asia – 2016

LinuxPlanet - Thu, 2016-03-24 00:33

Jumping Bean attended PGDay Asia (17th March 2016) and FOSS Asia (18th-20th March 2016) and delivered a talk at each event. At PGDay Asia we spoke on using Postgres as a NoSQL document store, and at FOSS Asia we gave an introduction to React. It was a great event and nothing beats interacting with the developers of Postgres and the open source community.

Our slides for the "There is JavaScript in my SQL" presentation at PGDay Asia and "An Introduction to React" from FOSS Asia can be found on our SlideShare account.

Categories: FLOSS Project Planets

Create self-managing servers with Masterless Saltstack Minions

LinuxPlanet - Tue, 2016-03-22 10:30

Over the past two articles I've described building a Continuous Delivery pipeline for my blog (the one you are currently reading). The first article covered packaging the blog into a Docker container and the second covered using Travis CI to build the Docker image and perform automated testing against it.

While the first two articles covered quite a bit of the CD pipeline there is one piece missing; automating deployment. While there are many infrastructure and application tools for automated deployments I've chosen to use Saltstack. I've chosen Saltstack for many reasons but the main reason is that it can be used to manage both my host system's configuration and the Docker container for my blog application. Before I can start using Saltstack however, I first need to set it up.

I've covered setting up Saltstack before, but for this article I am planning on setting up Saltstack in a Masterless architecture. A setup that is quite different from the traditional Saltstack configuration.

Masterless Saltstack

A traditional Saltstack architecture is based on a Master and Minion design. With this architecture the Salt Master will push desired states to the Salt Minion. This means that in order for a Salt Minion to apply the desired states it needs to be able to connect to the master, download the desired states and then apply them.

A masterless configuration on the other hand involves only the Salt Minion. With a masterless architecture the Salt state files are stored locally on the Minion, bypassing the need to connect to and download states from a Master. This architecture provides a few benefits over the traditional Master/Minion architecture. The first is removing the need for a Salt Master server, which helps reduce infrastructure costs, an important item as the environment in question is dedicated to hosting a simple personal blog.

The second benefit is that in a masterless configuration each Salt Minion is independent which makes it very easy to provision new Minions and scale out. The ability to scale out is useful for a blog, as there are times when an article is reposted and traffic suddenly increases. By making my servers self-managing I am able to meet that demand very quickly.

A third benefit is that Masterless Minions have no reliance on a Master server. In a traditional architecture if the Master server is down for any reason the Minions are unable to fetch and apply the Salt states. With a Masterless architecture, the availability of a Master server is not even a question.

Setting up a Masterless Minion

In this article I will walk through how to install and configure Salt in a masterless configuration.

Installing salt-minion

The first step to creating a Masterless Minion is to install the salt-minion package. To do this we will follow the official steps for Ubuntu systems outlined at docs.saltstack.com. Which primarily uses the Apt package manager to perform the installation.

Importing Saltstack's GPG Key

Before installing the salt-minion package we will first need to import Saltstack's Apt repository key. We can do this with a simple bash one-liner.

# wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
OK

This GPG key will allow Apt to validate packages downloaded from Saltstack's Apt repository.

Adding Saltstack's Apt Repository

With the key imported we can now add Saltstack's Apt repository to our /etc/apt/sources.list file. This file is used by Apt to determine which repositories to check for available packages.

# vi /etc/apt/sources.list

Once editing the file simply append the following line to the bottom.

deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main

With the repository defined we can now update Apt's repository inventory. A step that is required before we can start installing packages from the new repository.

Updating Apt's cache

To update Apt's repository inventory, we will execute the command apt-get update.

# apt-get update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
Get:2 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:3 http://repo.saltstack.com trusty InRelease [2,813 B]
Get:4 http://repo.saltstack.com trusty/main amd64 Packages [8,046 B]
Get:5 http://security.ubuntu.com trusty-security/main Sources [105 kB]
Hit http://archive.ubuntu.com trusty Release.gpg
Ign http://repo.saltstack.com trusty/main Translation-en_US
Ign http://repo.saltstack.com trusty/main Translation-en
Hit http://archive.ubuntu.com trusty Release
Hit http://archive.ubuntu.com trusty/main Sources
Hit http://archive.ubuntu.com trusty/universe Sources
Hit http://archive.ubuntu.com trusty/main amd64 Packages
Hit http://archive.ubuntu.com trusty/universe amd64 Packages
Hit http://archive.ubuntu.com trusty/main Translation-en
Hit http://archive.ubuntu.com trusty/universe Translation-en
Ign http://archive.ubuntu.com trusty/main Translation-en_US
Ign http://archive.ubuntu.com trusty/universe Translation-en_US
Fetched 3,136 kB in 8s (358 kB/s)
Reading package lists... Done

With the above complete we can now access the packages available within Saltstack's repository.

Installing with apt-get

Specifically we can now install the salt-minion package, to do this we will execute the command apt-get install salt-minion.

# apt-get install salt-minion
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe
  python-msgpack python-mysqldb python-tornado python-zmq salt-common
Suggested packages:
  debtags python-jinja2-doc python-beaker python-mako-doc
  python-egenix-mxdatetime mysql-server-5.1 mysql-server
  python-mysqldb-dbg python-augeas
The following NEW packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe
  python-msgpack python-mysqldb python-tornado python-zmq salt-common
  salt-minion
0 upgraded, 15 newly installed, 0 to remove and 155 not upgraded.
Need to get 4,959 kB of archives.
After this operation, 24.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu/ trusty-updates/main mysql-common all 5.5.47-0ubuntu0.14.04.1 [13.5 kB]
Get:2 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main python-tornado amd64 4.2.1-1 [274 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ trusty-updates/main libmysqlclient18 amd64 5.5.47-0ubuntu0.14.04.1 [597 kB]
Get:4 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main salt-common all 2015.8.7+ds-1 [3,108 kB]
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for ureadahead (0.100.0-16) ...

After a successful installation of the salt-minion package we now have a salt-minion instance running with the default configuration.

Configuring the Minion

With a Traditional Master/Minion setup, this point would be where we configure the Minion to connect to the Master server and restart the running service.

For this setup however, we will be skipping the Master server definition. Instead we need to tell the salt-minion service to look for Salt state files locally. To alter the salt-minion's configuration we can either edit /etc/salt/minion, the default configuration file, or add a new file into /etc/salt/minion.d/; this .d directory is used to override default configurations defined in /etc/salt/minion.

My personal preference is to create a new file within the minion.d/ directory, as this keeps the configuration easy to manage. However, there is no right or wrong method; as this is a personal and environmental preference.

For this article we will go ahead and create the following file /etc/salt/minion.d/masterless.conf.

# vi /etc/salt/minion.d/masterless.conf

Within this file we will add two configurations.

file_client: local

file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The first configuration item above is file_client. By setting this configuration to local we are telling the salt-minion service to search locally for desired state configurations rather than connecting to a Master.

The second configuration is the file_roots dictionary. This defines the location of Salt state files. In the above example we are defining both /srv/salt/base and /srv/salt/bencane. These two directories will be where we store our Salt state files for this Minion to apply.
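As a sketch of what lives inside those directories (this particular state is hypothetical, though it mirrors the wget state applied later in this article), a Salt state file is plain YAML:

```
# /srv/salt/base/wget/init.sls -- keep the wget package at its latest version
wget:
  pkg.latest

# /srv/salt/base/top.sls -- assign states to minions; '*' matches this host
base:
  '*':
    - wget
```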

Stopping the salt-minion service

While in most cases we would need to restart the salt-minion service to apply the configuration changes, in this case, we actually need to do the opposite; we need to stop the salt-minion service.

# service salt-minion stop
salt-minion stop/waiting

The salt-minion service does not need to be running when setup as a Masterless Minion. This is because the salt-minion service is only running to listen for events from the Master. Since we have no master there is no reason to keep this service running. If left running the salt-minion service will repeatedly try to connect to the defined Master server which by default is a host that resolves to salt. To remove unnecessary overhead it is best to simply stop this service in a Masterless Minion configuration.

Populating the desired states

At this point we have a Salt Minion that has been configured to run masterless. However, at this point the Masterless Minion has no Salt states to apply. In this section we will provide the salt-minion agent two sets of Salt states to apply. The first will be placed into the /srv/salt/base directory. This file_roots directory will contain a base set of Salt states that I have created to manage a basic Docker host.

Deploying the base Salt states

The states in question are available via a public GitHub repository. To deploy these Salt states we can simply clone the repository into the /srv/salt/base directory. Before doing so however, we will need to first create the /srv/salt directory.

# mkdir -p /srv/salt

The /srv/salt directory is Salt's default state directory; it is also the parent directory for both the base and bencane directories we defined within the file_roots configuration. Now that the parent directory exists, we will clone the base repository into it using git.

# cd /srv/salt/
# git clone https://github.com/madflojo/salt-base.git base
Cloning into 'base'...
remote: Counting objects: 50, done.
remote: Total 50 (delta 0), reused 0 (delta 0), pack-reused 50
Unpacking objects: 100% (50/50), done.
Checking connectivity... done.

With the salt-base repository cloned into the base directory, the Salt states within it are now available to the salt-minion agent.

# ls -la /srv/salt/base/
total 84
drwxr-xr-x 18 root root 4096 Feb 28 21:00 .
drwxr-xr-x  3 root root 4096 Feb 28 21:00 ..
drwxr-xr-x  2 root root 4096 Feb 28 21:00 dockerio
drwxr-xr-x  2 root root 4096 Feb 28 21:00 fail2ban
drwxr-xr-x  2 root root 4096 Feb 28 21:00 git
drwxr-xr-x  8 root root 4096 Feb 28 21:00 .git
drwxr-xr-x  3 root root 4096 Feb 28 21:00 groups
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iotop
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iptables
-rw-r--r--  1 root root 1081 Feb 28 21:00 LICENSE
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ntpd
drwxr-xr-x  2 root root 4096 Feb 28 21:00 python-pip
-rw-r--r--  1 root root  106 Feb 28 21:00 README.md
drwxr-xr-x  2 root root 4096 Feb 28 21:00 screen
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ssh
drwxr-xr-x  2 root root 4096 Feb 28 21:00 swap
drwxr-xr-x  2 root root 4096 Feb 28 21:00 sysdig
drwxr-xr-x  3 root root 4096 Feb 28 21:00 sysstat
drwxr-xr-x  2 root root 4096 Feb 28 21:00 timezone
-rw-r--r--  1 root root  208 Feb 28 21:00 top.sls
drwxr-xr-x  2 root root 4096 Feb 28 21:00 wget

From the above directory listing we can see that the base directory contains quite a few Salt states. These states are very useful for managing a basic Ubuntu system, performing steps ranging from installing Docker (dockerio) to setting the system timezone (timezone). Everything needed to run a basic Docker host is defined within these base states.

Applying the base Salt states

Even though the salt-minion agent can now use these Salt states, nothing is running to tell the agent that it should do so; therefore, the desired states are not yet being applied.

To apply our new base states we can use the salt-call command to tell the salt-minion agent to read the Salt states and apply the desired states within them.

# salt-call --local state.highstate

The salt-call command is used to interact with the salt-minion agent from the command line. In the example above, the salt-call command was executed with the state.highstate option.

This tells the agent to look for all defined states and apply them. The salt-call command also included the --local option, which is used specifically when running a Masterless Minion. This flag tells the salt-minion agent to look through its local state files rather than attempting to pull from a Salt Master.
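Before applying states for real, it can be helpful to preview what would change. Salt's standard test=True argument performs a dry run, reporting which states would change without actually modifying the system:

```
# salt-call --local state.highstate test=True
```

States that would change are reported with a Result of None; nothing is altered until the command is run again without test=True.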

The output below shows the results of the execution above; within it we can see the various states being applied successfully.

----------
          ID: GMT
    Function: timezone.system
      Result: True
     Comment: Set timezone GMT
     Started: 21:09:31.515117
    Duration: 126.465 ms
     Changes:
              ----------
              timezone:
                  GMT
----------
          ID: wget
    Function: pkg.latest
      Result: True
     Comment: Package wget is already up-to-date
     Started: 21:09:31.657403
    Duration: 29.133 ms
     Changes:

Summary for local
-------------
Succeeded: 26 (changed=17)
Failed:     0
-------------
Total states run:     26

In the above output we can see that all of the defined states were executed successfully. We can validate this further by checking the status of the docker service, which, as shown below, is now running; before executing salt-call, Docker was not even installed on this system.

# service docker status
docker start/running, process 11994

With a successful salt-call execution our Salt Minion is now officially a Masterless Minion. However, even though our server has Salt installed, and is configured as a Masterless Minion, there are still a few steps we need to take to make this Minion "Self Managing".

Self-Managing Minions

In order for our Minion to be Self-Managed, the Minion server should not only apply the base states above but also keep the salt-minion service and configuration up to date. To do this, we will clone yet another git repository.

Deploying the blog specific Salt states

This repository contains Salt states used to manage the salt-minion agent, not only for this Minion but for any other Masterless Minion used to host this blog.

# cd /srv/salt
# git clone https://github.com/madflojo/blog-salt bencane
Cloning into 'bencane'...
remote: Counting objects: 25, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 25 (delta 4), reused 20 (delta 2), pack-reused 0
Unpacking objects: 100% (25/25), done.
Checking connectivity... done.

In the above command we cloned the blog-salt repository into the /srv/salt/bencane directory. Like the /srv/salt/base directory, the /srv/salt/bencane directory is also defined within the file_roots that we set up earlier.

Applying the blog specific Salt states

With these new states copied to the /srv/salt/bencane directory, we can once again run the salt-call command to trigger the salt-minion agent to apply these states.

# salt-call --local state.highstate
[INFO    ] Loading fresh modules for state activity
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://top.sls'
[INFO    ] Fetching file from saltenv 'bencane', ** skipped ** latest already in cache u'salt://top.sls'
----------
          ID: /etc/salt/minion.d/masterless.conf
    Function: file.managed
      Result: True
     Comment: File /etc/salt/minion.d/masterless.conf is in the correct state
     Started: 21:39:00.800568
    Duration: 4.814 ms
     Changes:
----------
          ID: /etc/cron.d/salt-standalone
    Function: file.managed
      Result: True
     Comment: File /etc/cron.d/salt-standalone updated
     Started: 21:39:00.806065
    Duration: 7.584 ms
     Changes:
              ----------
              diff:
                  New file
              mode:
                  0644

Summary for local
-------------
Succeeded: 37 (changed=7)
Failed:     0
-------------
Total states run:     37

Based on the output of the salt-call execution we can see that 37 Salt states were executed successfully, 7 of which resulted in changes. This means that the new Salt states within the bencane directory were applied. But what exactly did these states do?

Understanding the "Self-Managing" Salt states

This second repository has a handful of states that perform various tasks specific to this environment. The "Self-Managing" states are all located within the /srv/salt/bencane/salt directory.

$ ls -la /srv/salt/bencane/salt/
total 20
drwxr-xr-x 5 root root 4096 Mar 20 05:28 .
drwxr-xr-x 5 root root 4096 Mar 20 05:28 ..
drwxr-xr-x 3 root root 4096 Mar 20 05:28 config
drwxr-xr-x 2 root root 4096 Mar 20 05:28 minion
drwxr-xr-x 2 root root 4096 Mar 20 05:28 states

Within the salt directory there are several more directories that have defined Salt states. To get started let's look at the minion directory. Specifically, let's take a look at the salt/minion/init.sls file.

# cat salt/minion/init.sls
salt-minion:
  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub
  pkg:
    - latest
  service:
    - dead
    - enable: False

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

Within the minion/init.sls file there are 5 Salt states defined.

Breaking down the minion/init.sls states

Let's break down some of these states to better understand what actions they are performing.

pkgrepo:
  - managed
  - humanname: SaltStack Repo
  - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
  - dist: {{ grains['lsb_distrib_codename'] }}
  - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub

The first state defined is a pkgrepo state. We can see from its options that this state is used to manage the Apt repository we defined earlier. We can also see from the key_url option that even the GPG key we imported earlier is managed by this state.

pkg:
  - latest

The second state defined is a pkg state, which is used to manage a specific package, in this case the salt-minion package. Since the latest option is present, the salt-minion agent will not only install the latest salt-minion package but also keep it up to date as new versions are released.

service:
  - dead
  - enable: False

The third state is a service state. This state is used to manage the salt-minion service. With the dead and enable: False settings specified the salt-minion agent will stop and disable the salt-minion service.

So far these states are performing the same steps we performed manually above. Let's keep breaking down the minion/init.sls file to understand what other steps we have told Salt to perform.

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

The fourth state is a file state that deploys the /etc/salt/minion.d/masterless.conf file. This happens to be the same file we created earlier. Let's take a quick look at the file being deployed to understand what Salt is doing.

$ cat salt/config/etc/salt/minion.d/masterless.conf
file_client: local
file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The contents of this file are exactly the same as those of the masterless.conf file we created in the earlier steps. This means that, right now, the configuration file being deployed matches what is already in place. In the future, if any changes are made to the masterless.conf within this git repository, those changes will be deployed on the next state.highstate execution.

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

The fifth state is also a file state, but the file it deploys is very different. Let's take a look at this file to understand what it is used for.

$ cat salt/config/etc/cron.d/salt-standalone
*/2 * * * * root su -c "/usr/bin/salt-call state.highstate --local 2>&1 > /dev/null"

The salt-standalone file is an /etc/cron.d-based cron job that runs the same salt-call command we ran earlier to apply the local Salt states. In a masterless configuration there is no built-in scheduled task telling the salt-minion agent to apply all of the Salt states. The above cron job takes care of this by simply executing a local state.highstate run every 2 minutes.
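For quick reference, the fields of that cron entry break down as follows (an annotated copy of the same line; /etc/cron.d entries include a user field that regular crontabs lack):

```
# minute       hour  day-of-month  month  day-of-week  user  command
  */2          *     *             *      *            root  su -c "/usr/bin/salt-call state.highstate --local 2>&1 > /dev/null"
```

The */2 in the minute field is what produces the every-2-minutes schedule; all other time fields are wildcards.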

Summary of minion/init.sls

Based on the contents of the minion/init.sls we can see how this salt-minion agent is configured to be "Self-Managing". From the above we were able to see that the salt-minion agent is configured to perform the following steps.

  1. Configure the SaltStack Apt repository and GPG keys
  2. Install the salt-minion package or update to the newest version if already installed
  3. Deploy the masterless.conf configuration file into /etc/salt/minion.d/
  4. Deploy the /etc/cron.d/salt-standalone file which deploys a cron job to initiate state.highstate executions

These steps ensure that the salt-minion agent is both configured correctly and applying desired states every 2 minutes.

While the above steps are useful for applying the current states, the whole point of continuous delivery is to deploy changes quickly. To do this we need to also keep the Salt states up-to-date.

Keeping Salt states up-to-date with Salt

One way to keep our Salt states up to date is to tell the salt-minion agent to update them for us.

Within the /srv/salt/bencane/salt directory exists a states directory that contains two files, base.sls and bencane.sls. These two files contain similar Salt states. Let's break down the contents of the base.sls file to understand what actions it tells the salt-minion agent to perform.

$ cat salt/states/base.sls
/srv/salt/base:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

base_states:
  git.latest:
    - name: https://github.com/madflojo/salt-base.git
    - target: /srv/salt/base
    - force: True

In the above we can see that the base.sls file contains two Salt states. The first is a file state that is set to ensure the /srv/salt/base directory exists with the defined permissions.

The second state is a bit more interesting as it is a git state which is set to pull the latest copy of the salt-base repository and clone it into /srv/salt/base.

With this state defined, every time the salt-minion agent runs (which is every 2 minutes via the cron.d job), the agent will check for new updates to the repository and deploy them to /srv/salt/base.

The bencane.sls file contains similar states, with the difference being the repository cloned and the location to deploy the state files to.

$ cat salt/states/bencane.sls
/srv/salt/bencane:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

bencane_states:
  git.latest:
    - name: https://github.com/madflojo/blog-salt.git
    - target: /srv/salt/bencane
    - force: True

At this point, we have a Masterless Salt Minion that is configured to "self-manage" not only its own packages, but also the Salt state files that drive it.

As the state files within the git repositories are updated, those updates are pulled by each Minion every 2 minutes. Whether the change is adding the screen package or deploying a new Docker container, it is deployed across many Masterless Minions all at once.
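To illustrate, rolling out a change is as simple as committing a new state file to one of the repositories. For example, a hypothetical state to install and track the htop package (the file name and package are illustrative only, not part of the actual repositories) could look like:

```yaml
# Hypothetical example: htop/init.sls added to the salt-base repository
htop:
  pkg:
    - latest
```

Once committed and pushed, every Masterless Minion would pick this state up within 2 minutes on its next scheduled state.highstate run, with no per-server action required.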

What's next

With the above steps complete, we now have a method for taking a new server and turning it into a Self-Managed Masterless Minion. What we didn't cover however, is how to automate the initial installation and configuration.

In next month's article, we will talk about using salt-ssh to automate the first-time installation and configuration of the salt-minion agent, using the same Salt states we used today.

Posted by Benjamin Cane
Categories: FLOSS Project Planets

Linux and POWER8 microprocessors

LinuxPlanet - Mon, 2016-03-21 04:23

With the enormous amount of data being generated every day, POWER8 was designed specifically to keep up with today's data processing requirements on high-end servers.

POWER8 is a symmetric multiprocessor based on the Power architecture by IBM. It is designed specifically for server environments, with an emphasis on fast execution times and strong performance under heavy server workloads. POWER8 is a very scalable architecture, scaling from 1 to 100+ CPU cores per server. Google was involved in the design of POWER8, and they currently use dual-socket POWER8 system boards internally.

Systems with POWER8 CPUs started shipping in late 2014. CPU clock speeds range from 2.5 GHz all the way up to 5.0 GHz. POWER8 has support for both DDR3 and DDR4 memory controllers; memory support is designed to be future-proof by being as generic as possible.

Photo: Wikimedia Commons

Open architecture

The design is available for licensing via the OpenPOWER Foundation, mainly to support custom-made processors for use in cloud computing and in applications that need to process large amounts of scientific data. POWER8 processor specifications and firmware are available under liberal licensing. A collaborative development model is encouraged, and it is already happening.

Linux has full support for POWER8

IBM began submitting code patches to the Linux kernel in 2012 to support POWER8 features, and Linux has had full POWER8 support since kernel version 3.8.

Many big Linux distributions, including Debian, Fedora, and openSUSE, have installable ISO images available for Power hardware. When it comes to applications, almost all software available for traditional CPU architectures is also available for POWER8. Packages built for it are usually labeled ppc64el/ppc64le, or ppc64 when built for big endian mode. There is plenty of prebuilt software available for Linux distributions; for example, thousands of Debian Linux packages are available. Remember to limit the search results to packages for ppc64el to get a better picture of what's available.

While Power hardware is transitioning from big endian to little endian, POWER8 is actually a bi-endian architecture capable of accessing data in both modes. However, most Linux distributions concentrate on little endian mode, as it has a much wider application ecosystem.

Future of POWER8

Some years ago it seemed that ARM servers were going to be really popular, but as of today POWER8 seems to be the only viable alternative to the Intel Xeon architecture.

Categories: FLOSS Project Planets

The VMware Hearing and the Long Road Ahead

LinuxPlanet - Mon, 2016-02-29 21:00

[ This blog was crossposted on Software Freedom Conservancy's website. ]

Last Thursday, Christoph Hellwig and his legal counsel attended a hearing in Hellwig's VMware case, which Conservancy currently funds. Harald Welte, world famous for his GPL enforcement work in the early 2000s, also attended as an observer and wrote an excellent summary. I'd like to highlight a few parts of his summary, in the context of Conservancy's past litigation experience regarding the GPL.

First of all, in great contrast to the cases here in the USA, the Court fully acknowledged the level of public interest in and importance of the case. Judges who have presided over Conservancy's GPL enforcement cases in USA federal courts take all matters before them quite seriously. However, in our hearings, the federal judges preferred to ignore entirely the public policy implications regarding copyleft; they focused only on the copyright infringement and the claims related to it. Usually, appeals courts in the USA are the first to broadly consider larger policy questions. There are definitely some advantages to the first Court showing interest in the public policy concerns.

However, beyond this initial point, I was struck that Harald's summary sounded so much like the many hearings I attended in the late 2000's and early 2010's regarding Conservancy's BusyBox cases. From his description, it sounds to me like judges around the world aren't all that different: they like to ask leading questions and speculate from the bench. It's their job to dig deep into an issue, separate away irrelevancies, and assure that the stark truth of the matter presents itself before the Court for consideration. In an adversarial process like this one, that means impartially asking both sides plenty of tough questions.

That process can be a rollercoaster for anyone who feels, as we do, that the Court will rule on the specific legal issues around which we have built our community. We should of course not fear the hard questions of judges; it's their job to ask us the hard questions, and it's our job to answer them as best we can. So often, here in the USA, we've listened to Supreme Court arguments (for which the audio is released publicly), and every pundit has speculated incorrectly about how the justices would rule based on their questions. Sometimes, a judge asks a clarification question regarding a matter they already understand to support a specific opinion and help their colleagues on the bench see the same issue. Other times, judges ask questions for the usual reasons: because the judges themselves are truly confused and unsure. Sometimes, particularly in our past BusyBox cases, I've seen the judge ask opposing counsel a question to expose some bit of bluster that counsel sought to pass off as settled law. You never really know why a judge asked a specific question until you see the ruling. At this point in the VMware case, nothing has been decided; this is just the next step forward in a long process. We enforced here in the USA for almost five years, we've been in litigation in Germany for about one year, and the earliest the German case can possibly resolve is this May.

Kierkegaard wrote that it is perfectly true, as the philosophers say, that life must be understood backwards. But they forget the other proposition, that it must be lived forwards. Court cases are a prime example of this phenomenon. We know it is gut-wrenching for our Supporters to watch every twist and turn in the case. It has taken so long for us to reach the point where the question of a combined work of software under the GPL is before a Court; now that it is we all want this part to finish quickly. We remain very grateful to all our Supporters who stick with us, and the new ones who will join today to help us make our funding match on its last day. That funding makes it possible for Conservancy to pursue this and other matters to ensure strong copyleft for our future, and handle every other detail that our member projects need. The one certainty is that our best chance of success is working hard for plenty of hours, and we appreciate that all of you continue to donate so that the hard work can continue. We also thank the Linux developers in Germany, like Harald, who are supporting us locally and able to attend in person and report back.

Categories: FLOSS Project Planets

Give My Regards To Ward 10

LinuxPlanet - Mon, 2016-02-29 10:03

Hello everyone, I’m back!!! Well, partially back I suppose. I just wanted to write a quick update to let you all know that I’m at home now recovering from my operation and all went as well as could be expected. At this early stage all indications are that I could be healthier than I’ve been in over a decade but it’s going to take a long time to recover. There were a few unexpected events during the operation which left me with a drain in my chest for a few days, I wasn’t expecting that but I don’t have the energy to explain it all now. Maybe I will in future. The important thing to remember is it went really well, the surgeon seems really happy and he was practically dancing when I saw him the day after the operation.

Don’t Mess With Me

Right now I am recuperating at home and just about able to shuffle around the house. I'm doing ok though, not in much pain, just overwhelmingly tired all the time and very tender. I have a rather fetching scar which stretches from deep in my groin up to my chest and must be about 15 inches long. I'm just bragging now though hehehe, I will certainly look like a badass judging from my scars. I just need a suitable story to go with them. Have you seen The Revenant? I might go with something like that. I strangled a bear with my bare hands. No pun intended.

I was treated impeccably by the wonderful staff at The Christie and I really can’t praise them highly enough. My fortnight on Ward 10 was made bearable by their humour and good grace. I couldn’t have asked for more.

I will obviously be out of action for many weeks but rest assured I am fine and I’ll see you all again soon.

Take it easy,


Categories: FLOSS Project Planets
Syndicate content