FLOSS Project Planets

Continuum Analytics News: Continuum Analytics Welcomes Mathew Lodge as SVP Products and Marketing

Planet Python - Thu, 2017-08-24 10:56
News Thursday, August 24, 2017

Former VMware VP Joins Anaconda’s Executive Leadership Team to Help Accelerate Data Science Adoption across the Enterprise

AUSTIN, TEXAS—August 24, 2017—Continuum Analytics, the company behind Anaconda, the leading Python data science platform, today announced Mathew Lodge as the company’s new senior vice president (SVP) of products and marketing. Lodge brings extensive experience in the B2B product space, including software and SaaS, to help the company further expand adoption of Anaconda across the enterprise.

“With a proven history of leading product and marketing strategies for some of the biggest brands and hottest startups in technology, Mathew brings a unique perspective that will help take Continuum Analytics to the next level and extend our position as the leading Python data science platform,” said Scott Collison, CEO at Continuum Analytics. “Mathew will lead our product and marketing efforts, helping to ensure that Anaconda continues to meet and exceed the requirements of today’s enterprise, empowering organizations across the globe to build data science-driven applications that deliver measurable business impact.”
The Python community is estimated at more than 30 million members and, according to the most recent O’Reilly Data Science Survey, among data scientists, 72 percent prefer Python as their main tool. Anaconda has more than 4.5 million users and is the world’s most popular and trusted Python data science platform. 

“Data science is foundational to digital and AI strategies, allowing organizations to deliver new products and services faster and cheaper, and respond more quickly to customers and competitors,” said Lodge. “Python is the open source ecosystem of choice for data scientists, and Anaconda is the gold standard platform for Python data science. I look forward to working with customers and partners to realize the huge potential of Anaconda to deliver actionable, automated insight and intelligence to every organization.”

This summer, Continuum Analytics was included in Gartner’s “Cool Vendors in Data Science and Machine Learning, 2017” report and Gartner’s “Hype Cycle for Data Science and Machine Learning for 2017.”
Mathew Lodge comes from Weaveworks, a container and microservices start-up, where he was the chief operating officer. Previously, he was vice president in VMware’s Cloud Services group; notably he was co-founder for what became its vCloud Air IaaS service. Early in his career, Lodge built compilers and distributed systems for projects like the International Space Station, helped connect six countries to the Internet for the first time and managed a $630M product line at Cisco. Lodge holds a Master of Engineering from University of York, where he graduated with honors.
About Anaconda Powered by Continuum Analytics
Anaconda is the leading data science platform powered by Python, the fastest growing data science language with more than 30 million downloads to date. Continuum Analytics is the creator and driving force behind Anaconda, empowering leading businesses across industries worldwide with solutions to identify patterns in data, uncover key insights and transform data into a goldmine of intelligence to solve the world’s most challenging problems. Anaconda puts superpowers into the hands of people who are changing the world. Learn more at continuum.io



Media Contact:
Jill Rosenthal

Categories: FLOSS Project Planets

Krita’s Updated Vision

Planet KDE - Thu, 2017-08-24 10:12

In 2010, during a developer sprint in Deventer, the Krita team sat down together with Peter Sikking to hammer out a vision statement for the project. Our old goal, "be KDE's Gimp/Photoshop", didn't reflect what we really wanted to do. Here are some documents describing the creation of Krita's vision:

Creating the vision took a lot of good, hard work, and this was the result (you need to read this as three paragraphs, giving answers to "what is it", "for whom is it" and "what's the value"):

Krita is a KDE program for sketching and painting, offering an end-to-end solution for creating digital painting files from scratch by masters.

Fields of painting that Krita explicitly supports are concept art, creation of comics and textures for rendering.

Modeled on existing real-world painting materials and workflows, Krita supports creative working by getting out of the way and with a snappy response.

Seven years later, this needed updating. We've added new fields that Krita supports, such as animation. And the modeling on real-world painting materials and workflows never really materialized, because as soon as we sat down with real-world artists, we learned that they couldn't care less: they cared about being productive. So, after discussion on the mailing list and during our weekly meetings, we modified the vision document:

Krita is a free and open source cross-platform application that offers an end-to-end solution for creating digital art files from scratch. Krita is optimized for frequent, prolonged and focused use.

Explicitly supported fields of painting are illustrations, concept art, matte painting, textures, comics and animations.

Developed together with users, Krita is an application that supports their actual needs and workflow. Krita supports open standards and interoperates with other applications.

Let’s go through the changes.

We now mention "free and open source" instead of KDE because, with the expansion of Krita to Windows and OSX, we now have many users who do not know that KDE stands for Free Software that respects your privacy and the way you want to work. We considered "Free Software" instead, but this is really a moment where we need to make clear that "free software" is not "software for free".

We still mention “files” explicitly; we’ve never really been interested in what you do with those files, but, for instance, printing from Krita just doesn’t have any priority for us. Krita is for creating your artwork, not for publishing it.

We replaced the “for masters” with “frequent, prolonged and focused use”. The meaning is the same: to get the most out of Krita you have to really use it. Krita is not for casually adding scribbles to a screenshot. But the “for masters” often made people wonder whether Krita could be used by beginning artists. The answer is of course “yes” — but you’ll have to master an application with thousands of possibilities.

In the second paragraph, we've added animations and matte painting. Animation was introduced for the third time in 2016; it's clearly something a lot of people love to do. Matte painting comes close to photo manipulation, which isn't in our vision, but it is focused on creating a new artwork, and we've always felt that Krita could be used for that as well. Note: no 3d, no webpage design, no product design, no wedding albums, no poster or other print design.

Finally, the last paragraph was almost completely rewritten. Gone are real-world materials as an inspiration; in are our users as inspiration. That doesn't mean we'll let you dictate what Krita can do, or how Krita lets you do stuff: UX design isn't something that can be created by voting. But we do listen, and for the past years we've let you vote for the features you would find most useful, while still keeping the direction of Krita as a whole in our own hands. We want to create an application that lets you get stuff done. We removed the "snappy" response, since that's pretty much a given: we're not going to try to create an application that's ponderous or works against you all the way. Finally, we do care about interoperability and standards, and have spent countless hours of work on improving them, so we felt it needed to be said.

Categories: FLOSS Project Planets

PyCharm: PyCharm 2017.2.2 is now available

Planet Python - Thu, 2017-08-24 09:54

After last week's RC for PyCharm 2017.2.2, we have fixed some issues on the "create new project" screen. Get the new version from our website!

Improvements are in the following areas:

  • An issue in the creation of Angular CLI projects
  • Code insight and inspection fixes: “method may be static” issues, and misidentification of Python 3.7
  • Django: Cache conflict with Jinja template project, and Ctrl+Click on widget templates
  • Docker: Docker Compose environment variable issues
  • JavaScript: go to declaration, go to implementation
  • And much more, check out the release notes for details

We’d like to thank our users who have reported these bugs for helping us to resolve them! If you find a bug, please let us know on YouTrack.

If you use Django, but don’t have PyCharm Professional Edition yet, you may be interested to learn about our Django Software Foundation promotion. You can get a 30% discount on PyCharm, and support the Django Software Foundation at the same time.

-PyCharm Team
The Drive to Develop

Categories: FLOSS Project Planets

Evolving Web: Migrating Aliases and Redirects to Drupal 8

Planet Drupal - Thu, 2017-08-24 09:00

When content URLs change during migrations, it is always a good idea to do something to handle the old URLs to prevent them from suddenly starting to throw 404s which are bad for SEO. In this article, we'll discuss how to migrate URL aliases provided by the path module (part of D8 core) and URL redirects provided by the redirect module.

The Problem

Say we have two CSV files (given to us by the client):
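The files themselves aren't reproduced here, so for the sake of the walkthrough assume hypothetical contents along these lines (the column names are assumptions, not the client's actual data):

```csv
slug,title,category
hello-drupal,Hello Drupal,drupal-planet
```

and a category.csv with, say, `category,name` columns such as `drupal-planet,Drupal Planet`. The `slug` and `category` columns line up with the source properties referenced by the migrations below.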

The project requirement is to:

  • Migrate the contents of article.csv as article nodes.
  • Migrate the contents of category.csv as terms of a category terms.
  • Make the articles accessible at the path blog/{{ category-slug }}/{{ article-slug }}.
  • Make blog/{{ slug }}.php redirect to blog/{{ category-slug }}/{{ article-slug }}.

Here, the term slug refers to a unique URL-friendly and SEO-friendly string.
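The migrations below assume the CSV files already contain such slugs, but for illustration, a slug could be derived from a title with a small helper like this (a hypothetical sketch, not part of the migration code):

```python
import re

def slugify(title):
    """Lowercase the title, drop punctuation, and hyphenate whitespace."""
    slug = title.lower()
    slug = re.sub(r'[^a-z0-9\s-]', '', slug)        # remove punctuation
    slug = re.sub(r'[\s-]+', '-', slug).strip('-')  # collapse separators
    return slug

print(slugify("Migrating Aliases & Redirects to Drupal 8"))
# migrating-aliases-redirects-to-drupal-8
```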

Before We Start

Migrate Node and Category Data

This part consists of two simple migrations:

The article data migration depends on the category data migration to associate each node to a specific category like:

# Migration processes
process:
  ...
  field_category:
    plugin: 'migration_lookup'
    source: 'category'
    migration: 'example_category_data'
    no_stub: true
  ...

So, if we execute this migration, we will have all categories created as category terms and 50 squeaky-clean new nodes belonging to those categories. Here's how it should look if we run the migrations using drush:

$ drush migrate-import example_article_data,example_category_data
Processed 5 items (5 created, 0 updated, 0 failed, 0 ignored) - done with 'example_category_data'
Processed 50 items (50 created, 0 updated, 0 failed, 0 ignored) - done with 'example_article_data'

Additionally, we will be able to access a list of articles in each category at the URL blog/{{ category-slug }}. This is because of the path parameter we set in the category data migration. The path parameter is processed by the path module to create URL aliases during certain migrations. We can also use the path parameter while creating nodes to generate URL aliases for those nodes. However, in this example, we will generate the URL aliases in a stand-alone migration.
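As a sketch of what that looks like (the property names here are assumptions based on the description, not the actual migration files), the category data migration could set the alias directly in its process section:

```yaml
# Hypothetical sketch: the path module picks up the "path" property
# during the migration and creates a URL alias for each term.
process:
  name: 'name'
  path:
    plugin: 'concat'
    source:
      - 'constants/alias_prefix'   # e.g. '/blog/'
      - 'category'                 # the term's slug from category.csv
```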

Generate URL Aliases with Migrations

The next task will be to make the articles available at URLs like /blog/{{ category-slug }}/{{ article-slug }}. We use the example_article_alias migration to generate these additional URL aliases. Important sections of the migration are discussed below.

Source

source:
  plugin: 'csv'
  path: 'article.csv'
  ...
  constants:
    slash: '/'
    source_prefix: '/node/'
    alias_prefix: '/blog/'
    und: 'und'

We use the article.csv file as our source data to iterate over articles. Also, we use source/constants to define certain data which we want to use in the migration, but we do not have in the CSV document.

Destination

destination:
  plugin: 'url_alias'

Since we want to create URL aliases, we need to use the destination plugin url_alias provided by the path module. Reading documentation or taking a quick look at the plugin source at Drupal\path\Plugin\migrate\destination\UrlAlias::fields(), we can figure out the fields and configuration supported by this plugin.

Process

...
temp_nid:
  plugin: 'migration_lookup'
  source: 'slug'
  migration: 'example_article_data'
...
temp_category_slug:
  # First, retrieve the ID of the taxonomy term created during the "category_data" migration.
  - plugin: 'migration_lookup'
    source: 'category'
    migration: 'example_category_data'
  # Use a custom callback to get the category name.
  - plugin: 'callback'
    callable: '_migrate_example_paths_load_taxonomy_term_name'
  # Prepare a url-friendly version for the category.
  - plugin: 'machine_name'

Since we need to point the URL aliases to the nodes we created during the article data migration, we use the migration_lookup plugin (formerly migration) to read the ID of the relevant node created during that migration. We store the node ID in temp_nid. I added the prefix temp_ to the property name because we need it only temporarily for computing another property, not for using it directly.

Similarly, we need to prepare a slug for the category to which the node belongs. We will use this slug to generate the alias property.

source:
  plugin: 'concat'
  source:
    - 'constants/source_prefix'
    - '@temp_nid'

Next, we generate the source, which is the path to which the alias will point. We do that by simply concatenating '/node/' (the source_prefix constant) and '@temp_nid' using the concat plugin.

alias:
  plugin: 'concat'
  source:
    - 'constants/alias_prefix'
    - '@temp_category_slug'
    - 'constants/slash'
    - 'slug'

And finally, we generate the entire alias by concatenating '/blog/' (the alias_prefix constant), '@temp_category_slug', a '/' and the article's 'slug'. After running this migration with drush migrate-import example_article_alias, all the nodes should be accessible at /blog/{{ category-slug }}/{{ article-slug }}.

Generate URL Redirects with Migrations

For the last requirement, we need to generate redirects, which takes us to the redirect module. So, we create another migration named example_article_redirect to generate redirects from /blog/{{ slug }}.php to the relevant nodes. Now, let's discuss some important lines of this migration.

Source

constants:
  # The source path is not supposed to start with a "/".
  source_prefix: 'blog/'
  source_suffix: '.php'
  redirect_prefix: 'internal:/node/'
  uid_admin: 1
  status_code: 301

We use source/constants to define certain data which we want to use in the migration, but we do not have in the CSV document.

Destination

destination:
  plugin: 'entity:redirect'

In Drupal 8, every redirect rule is an entity. Hence, we use the entity plugin for the destination.

Process

redirect_source:
  plugin: 'concat'
  source:
    - 'constants/source_prefix'
    - 'slug'
    - 'constants/source_suffix'

First, we determine the path to be redirected. This will be the path as it was on the old website, for example, blog/{{ slug }}.php, without a leading /.

redirect_redirect:
  plugin: 'concat'
  source:
    - 'constants/redirect_prefix'
    - '@temp_nid'

Just like we did for generating aliases, we read node IDs from the article data migration and use them to generate URIs to which the user should be redirected when they visit one of the /blog/{{ slug }}.php paths. These destination URIs should be in the form internal:/node/{{ nid }}. The redirect module will intelligently use these URIs to determine the URL alias for those paths and redirect the user to /blog/{{ category-slug }}/{{ article-slug }} instead of sending them to /node/{{ nid }}. This way, the redirects will not break even if we change the URL alias for a particular node after running the migrations.
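The '@temp_nid' property referenced in redirect_redirect has to be populated in this migration as well; a sketch, mirroring the temp_nid property from the alias migration, might look like:

```yaml
# Sketch: look up the ID of the node created by the article data migration.
process:
  temp_nid:
    plugin: 'migration_lookup'
    source: 'slug'
    migration: 'example_article_data'
```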

# We want to generate 301 permanent redirects as opposed to 302 temporary redirects.
status_code: 'constants/status_code'

We also specify a status_code and set it to 301. This will create 301 permanent redirects as opposed to 302 temporary redirects. Having done so and having run this third migration as well, we are all set!

Migration dependencies

migration_dependencies:
  required:
    - 'example_article_data'

Since the migration of aliases and the migration of redirects both require access to the ID of the node which was generated during the article data migration, we need to add the above lines to define a migration_dependency. It will ensure that the example_article_data migration is executed before the alias and the redirect migrations. So if we run all the migrations of this example, we should see them executing in the correct order like:

$ drush mi --tag=example_article
Processed 5 items (5 created, 0 updated, 0 failed, 0 ignored) - done with 'example_category_data'
Processed 50 items (50 created, 0 updated, 0 failed, 0 ignored) - done with 'example_article_data'
Processed 50 items (50 created, 0 updated, 0 failed, 0 ignored) - done with 'example_article_alias'
Processed 50 items (50 created, 0 updated, 0 failed, 0 ignored) - done with 'example_article_redirect'
Categories: FLOSS Project Planets

Acquia Developer Center Blog: Decoupled Drupal: POWDR’s Front End Architecture Build

Planet Drupal - Thu, 2017-08-24 08:46

In this article we’ll discuss the three main areas that needed to be addressed during the build of POWDR’s front end architecture: Routing & Syncing with the API, Component Driven Content, and the Build Process & Tools.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Chromatic: Announcing our Drupal Coding Standards Series on Drupalize.me!

Planet Drupal - Thu, 2017-08-24 08:30

The folks at Drupalize.me provide the best Drupal training materials on the web, so we were more than happy to oblige them when they asked if they could release our Coding Standards guide as a free series on their platform.

Categories: FLOSS Project Planets

Bruno Rocha: Simple Login Extension for Flask

Planet Python - Thu, 2017-08-24 06:03

Login Extension for Flask

There are good and recommended options to deal with web authentication in Flask.

I recommend you use:

Those extensions are really complete and production ready!

So why Flask Simple Login?

However, sometimes you need something simple for that small project or for prototyping.

Flask Simple Login

What it provides:

  • Login and Logout forms and pages
  • Function to check if user is logged-in
  • Decorator for views
  • Easy and customizable login_checker

What it does not provide: (but of course you can easily implement by your own)

  • Database Integration
  • Password management
  • API authentication
  • Role or user based access control
How it works

First install it from PyPI.

pip install flask_simplelogin

from flask import Flask
from flask_simplelogin import SimpleLogin

app = Flask(__name__)
SimpleLogin(app)

That's it! Now you have /login and /logout routes in your application.

The username defaults to admin and the password defaults to secret (yeah that's not clever, let's see how to change it)


Simple way

from flask import Flask
from flask_simplelogin import SimpleLogin

app = Flask(__name__)
app.config['SECRET_KEY'] = 'something-secret'
app.config['SIMPLELOGIN_USERNAME'] = 'chuck'
app.config['SIMPLELOGIN_PASSWORD'] = 'norris'
SimpleLogin(app)

That works, but is not so clever. Let's use env vars instead.
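The env var example did not survive formatting; presumably it is something like the following (the variable names are an assumption, inferred from the app.config keys above):

```shell
# Assumed variable names, matching the SIMPLELOGIN_* config keys.
export SIMPLELOGIN_USERNAME=chuck
export SIMPLELOGIN_PASSWORD=norris
```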


then SimpleLogin will read those env vars automatically.

from flask import Flask
from flask_simplelogin import SimpleLogin

app = Flask(__name__)
app.config['SECRET_KEY'] = 'something-secret'
SimpleLogin(app)

But what if you have more users and more complex auth logic? Write a custom login checker.

Using a custom login checker

from flask import Flask
from flask_simplelogin import SimpleLogin

app = Flask(__name__)
app.config['SECRET_KEY'] = 'something-secret'

def only_chuck_norris_can_login(user):
    """user = {'username': 'foo', 'password': 'bar'}"""
    # do the authentication here, it is up to you!
    # query your database, check your user/passwd file,
    # connect to an external service... anything.
    if user.get('username') == 'chuck' and user.get('password') == 'norris':
        return True  # Allowed
    return False  # Denied

SimpleLogin(app, login_checker=only_chuck_norris_can_login)

Checking if user is logged in

from flask_simplelogin import is_logged_in

if is_logged_in():
    pass  # do things if anyone is logged in

if is_logged_in('admin'):
    pass  # do things only if admin is logged in

Decorating your views

from flask_simplelogin import login_required

@app.route('/it_is_protected')
@login_required  # <-- simple decorator
def foo():
    return 'secret'

Protecting Flask Admin views

from flask_admin.contrib.foo import ModelView
from flask_simplelogin import is_logged_in

class AdminView(ModelView):
    def is_accessible(self):
        return is_logged_in('admin')

Customizing templates

There is only one template to customize, and it is called login.html.

Example is:

{% extends 'base.html' %}
{% block title %}Login{% endblock %}
{% block messages %}
  {{ super() }}
  {% if form.errors %}
    <ul class="alert alert-danger">
      {% for field, errors in form.errors.items() %}
        <li>{{ field }}: {% for error in errors %}{{ error }}{% endfor %}</li>
      {% endfor %}
    </ul>
  {% endif %}
{% endblock %}
{% block page_body %}
  <form action="{{ url_for('simplelogin.login', next=request.args.get('next', '/')) }}" method="post">
    <div class="form-group">
      {{ form.csrf_token }}
      {{ form.username.label }}<div class="form-control">{{ form.username }}</div><br>
      {{ form.password.label }}<div class="form-control">{{ form.password }}</div><br>
    </div>
    <input type="submit" value="Send">
  </form>
{% endblock %}

Take a look at the example app.

You can customize it in any way you want and need. It receives a form in the template context; it is a WTForms form, and the submit should be done to request.path, which is the same /login view.

You can also use {% if is_logged_in %} in your template if needed.

Requirements

  • Flask-WTF and WTForms
  • having a SECRET_KEY set in your app.config
Categories: FLOSS Project Planets

PyCharm: Develop Django Under the Debugger

Planet Python - Thu, 2017-08-24 06:00

PyCharm Professional has long had great support for developing Django applications, including a run configuration tailored to the Django server. This winds up being a wonderful workflow, with a tool window showing the server output.

Sometimes, though, you hit a problem and want to debug your code. You stop the server, run it under the debugger, and do your debugging. Which also works really well: PyCharm’s visual debugger is a key selling point and integrates nicely into Django (e.g. template debugging.) And recently, PyCharm’s debugger has undergone dramatic speedups, especially when using Python 3.6. In fact, running under the debugger is approaching the speed of normal run configurations.

But still, it's irritating to stop your server, restart under the debugger, stop the server, and restart under the regular runner. It messes up your flow, and you'll be tempted by the blasphemy of debugging with print().

So let’s break with the past with a crazy idea: always run under the debugger when developing Django.

Note: If you've taken Kenneth Love's fantastic Django Basics course at Treehouse, you might recognize the code in this blog post. I enjoyed watching their approach to teaching.

Regular Running

Let’s start with the “normal” path. PyCharm usually makes a Django run configuration automatically, but if not, here’s how to do so: Run -> Edit Configurations -> + -> Django Server:

With this run configuration selected, click the green play button to start the Django server:

When done, click the red button to stop the server:

You can also restart the server, use keystrokes instead of the mouse, let Django restart automatically on code changes, get linked tracebacks to jump directly to errors, etc.


You’re likely familiar with all that. Let’s now do the same thing, but running under the debugger. Click the green button with the bug on it to start that Run/Debug configuration, but running under the debugger instead of the normal runner:

Note: If you get a link saying you can compile the debugger speedups under Cython (on Windows, we ship it by default), click the link, the speedups are worth it.

Let’s compare the restart-on-edit times side-by-side:

As you can see, the debugger’s restart hit isn’t that big, certainly compared to the savings versus other parts of your development workflow (e.g. moving print statements and reload versus moving breakpoints without restarting.)

I can proceed as normal, writing new views and trying them in the browser or the built-in REST client. But when I hit a problem, I don’t reach for print() — no, I set a breakpoint!

In fact, I can set a breakpoint in a template and then poke around at that line of template execution:

Much better than random print statements, which require a restart each time I want to inspect something.

Testing…Under the Debugger

Always develop under the debugger? Sounds weird…the debugger is for debugging. Let’s make it weirder: always do testing under the debugger.

For example, you might run Django tests under PyCharm’s handy Django Test run configuration:

But you can also launch your test runner under the debugger:

Test running under the debugger is an interesting case. Often with testing you are exploring, and sometimes, you are actually intending to produce bugs when writing a test case. Having the debugger close at hand can fit well with TDD:


No free lunch, of course. Even though performance has increased, dramatically so for Python 3.6, there is still a speed hit: not just on startup, but during execution.

You may run into quirks in obscure usages. For example, when running pytest tests under the debugger, you can’t get to the console (known issue slated to be fixed in 2017.3.) Also, tracebacks might be longer since the code is running under pydevd.


At two recent conferences I mentioned this — do your Django development under the debugger — to people who visited the booth. The reactions were fascinating, as a look of horror turned to confusion then curiosity ending with interest.

For some, it might never fit your brain. For others, it might make all the sense in the world. Want to tell me I’m crazy? It’s possible. Leave me a comment below with any feedback.

Categories: FLOSS Project Planets

Hundreds of visual surveys in KStars!

Planet KDE - Thu, 2017-08-24 05:16
With the KStars "Hipster" 2.8.1 release, I introduced Hierarchical Progressive Survey (HiPS) in KStars with three sample catalogs in the optical, infrared, and gamma regions of the electromagnetic spectrum.

Now users can browse hundreds of online HiPS surveys and enable them for overlay in KStars. Everything from radio, infrared and optical up to gamma rays is available, along with a short description of each survey of interest.

Since these surveys literally take hundreds of gigabytes of storage space, they are downloaded on-demand and stored in a local cache. The disk cache is set by default to consume 1 GB while the RAM cache is set to 300 MB. These settings are now configurable from the HiPS Settings to provide users the flexibility to balance system resources with catalog usage.

By default, overlays use a nearest-neighbor algorithm to map 2D images onto the celestial sphere. Drawing of HiPS overlays can be further improved by enabling Bilinear Interpolation, at the expense of increased CPU usage.

These selections shall be available in the next KStars 2.8.2 release coming up soon.

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: Web Accessibility in Drupal 8 – part 2

Planet Drupal - Thu, 2017-08-24 04:48
This blog post is the second part of the session Web Accessibility in Drupal 8 from our development director Bostjan Kovac. We will look at the most common mistakes developers make, Drupal contrib modules and other tools that will help you out when it comes to web accessibility. If you have missed the first part, you can read it here. Most common mistakes developers make Simple markup: And then there is a classy theme: The Ignorant theme (as Bostjan called it) with no HTML elements, no roles for each element, nothing for screen readers to work with. That means that… READ MORE
Categories: FLOSS Project Planets

kenfallon.com 2016-04-12 01:50:51

LinuxPlanet - Tue, 2016-04-12 02:50

I am trying to mount a CIFS share (aka samba/smb/windows share) from a Debian server so I can access log files when needed. To do this automatically I create two mounts, one which is read only and is automatically mounted, and another that is read/write which is not mounted. The /etc/fstab file looks a bit like this:

// /mnt/server-d cifs auto,ro,credentials=/root/.ssh/server.credentials,domain= 0 0
// /mnt/server-d-rw cifs noauto,rw,credentials=/root/.ssh/server.credentials,domain= 0 0

To mount all the drives marked "auto" in the /etc/fstab file you can use the "-a, --all" option. From the man page: "Mount all filesystems (of the given types) mentioned in fstab (except for those whose line contains the noauto keyword). The filesystems are mounted following their order in fstab."

However, when I ran the command I got:

root@server:~# mount -a
mount: wrong fs type, bad option, bad superblock on //,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Well, it turns out that Debian is no longer shipping CIFS support as a default option. It can be added easily enough using the command:

root@server:~# aptitude install cifs-utils

Now mount -a works fine:

root@server:~# mount -a
root@server:~#
Categories: FLOSS Project Planets

After a long time I’m back

LinuxPlanet - Fri, 2016-04-01 14:03

After a long time, I'm back and will continue writing on this blog. Sorry for the wait.

Categories: FLOSS Project Planets

Ambient Weather WS-1001-Wifi Observer Review

LinuxPlanet - Thu, 2016-03-24 10:28

In the most recent episode of Bad Voltage, I reviewed the Ambient Weather WS-1001-Wifi Observer Personal Weather Station. Tune in to listen to the ensuing discussion and the rest of the show.

Regular listeners will know I’m an avid runner and sports fan. Add in the fact that I live in a city where weather can change in an instant and a personal weather station was irresistible to the tech and data enthusiast inside me. After doing a bit of research, I decided on the Ambient Weather WS-1001-Wifi Observer. While it only needs to be performed once, I should note that setup is fairly involved. The product comes with three components: An outdoor sensor array which should be mounted on a pole, chimney or other suitable area, a small indoor sensor and an LCD control panel/display console. The first step is to mount the all-in-one outdoor sensor, which remains powered using a solar panel and rechargeable batteries. It measures and transmits outdoor temperature, humidity, wind speed, wind direction, rainfall, and both UV and solar radiation. Next, mount the indoor sensor which measures and transmits indoor temperature, humidity and barometric pressure. Finally, plug in the control panel and complete the setup procedure which will walk you through configuring your wifi network, setting up NTP, syncing the two sensors and picking your units of measurement. Note that all three devices must be within 100-330 feet of each other, depending on layout and what materials are between them.

With everything set up, data will now start collecting on your display console, updated every 14 seconds. In addition to all the data previously mentioned, you will see wind gusts, wind chill, sunrise, sunset, phases of the moon, dew point, rainfall rate and some historical graphs. There is a ton of data presented, and while the dense layout works for me, it has been described as unintuitive and overwhelming by some.

Seeing the data in real time is interesting, but you'll likely also want to see long-term trends and historical data. The device can export all data to an SD card in CSV format, but it becomes much more compelling when you connect it with the Weather Underground personal weather station network. Once connected, the unit becomes a public weather station that also feeds data to the Wunderground prediction model. That means you'll be helping everyone get more accurate data for your specific area and better forecasts for your general area. You can even see how many people are using your PWS to get their weather report. There's also a very slick Wunderstation app that is a great replacement for the somewhat antiquated display console, although unfortunately it's currently only available for the iPad.

So, what’s the Bad Voltage verdict? At $289 the Ambient Weather WS-1001-Wifi Observer isn’t cheap. In an era of touchscreens and sleek design, it’s definitely not going to win any design awards. That said, it’s a durable, well-built device that transmits and displays a huge amount of data. The Wunderground integration is seamless, and knowing that you’re improving the predictive model for your neighborhood is surprisingly satisfying. If you’re a weather data junkie, this is a great device for you.


Categories: FLOSS Project Planets

PGDay Asia and FOSS Asia – 2016

LinuxPlanet - Wed, 2016-03-23 23:33

Jumping Bean attended PGDay Asia on 17th March 2016 and FOSS Asia from 18th-20th March 2016, delivering a talk at each event. At PGDay Asia we spoke on using Postgres as a NoSQL document store, and at FOSS Asia we gave an introduction to React. They were great events, and nothing beats interacting with the developers of Postgres and the open source community.

Our slides for the "There is JavaScript in my SQL" presentation at PGDay Asia and "An Introduction to React" from FOSS Asia can be found on our SlideShare account.

Categories: FLOSS Project Planets

Create self-managing servers with Masterless Saltstack Minions

LinuxPlanet - Tue, 2016-03-22 09:30

Over the past two articles I've described building a Continuous Delivery pipeline for my blog (the one you are currently reading). The first article covered packaging the blog into a Docker container and the second covered using Travis CI to build the Docker image and perform automated testing against it.

While the first two articles covered quite a bit of the CD pipeline, there is one piece missing: automating deployment. While there are many infrastructure and application tools for automated deployments, I've chosen to use Saltstack. I've chosen Saltstack for many reasons, but the main one is that it can be used to manage both my host system's configuration and the Docker container for my blog application. Before I can start using Saltstack however, I first need to set it up.

I've covered setting up Saltstack before, but for this article I am planning on setting up Saltstack in a Masterless architecture, a setup that is quite different from the traditional Saltstack configuration.

Masterless Saltstack

A traditional Saltstack architecture is based on a Master and Minion design. With this architecture the Salt Master will push desired states to the Salt Minion. This means that in order for a Salt Minion to apply the desired states it needs to be able to connect to the master, download the desired states and then apply them.

A masterless configuration on the other hand involves only the Salt Minion. With a masterless architecture the Salt state files are stored locally on the Minion, bypassing the need to connect to and download states from a Master. This architecture provides a few benefits over the traditional Master/Minion architecture. The first is removing the need for a Salt Master server, which helps reduce infrastructure costs; an important item, as the environment in question is dedicated to hosting a simple personal blog.

The second benefit is that in a masterless configuration each Salt Minion is independent which makes it very easy to provision new Minions and scale out. The ability to scale out is useful for a blog, as there are times when an article is reposted and traffic suddenly increases. By making my servers self-managing I am able to meet that demand very quickly.

A third benefit is that Masterless Minions have no reliance on a Master server. In a traditional architecture if the Master server is down for any reason the Minions are unable to fetch and apply the Salt states. With a Masterless architecture, the availability of a Master server is not even a question.

Setting up a Masterless Minion

In this article I will walk through how to install and configure Salt in a masterless configuration.

Installing salt-minion

The first step to creating a Masterless Minion is to install the salt-minion package. To do this we will follow the official steps for Ubuntu systems outlined at docs.saltstack.com, which primarily use the Apt package manager to perform the installation.

Importing Saltstack's GPG Key

Before installing the salt-minion package we will first need to import Saltstack's Apt repository key. We can do this with a simple bash one-liner.

# wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
OK

This GPG key will allow Apt to validate packages downloaded from Saltstack's Apt repository.

Adding Saltstack's Apt Repository

With the key imported we can now add Saltstack's Apt repository to our /etc/apt/sources.list file. This file is used by Apt to determine which repositories to check for available packages.

# vi /etc/apt/sources.list

Once the file is open, simply append the following line to the bottom.

deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main

With the repository defined we can now update Apt's repository inventory, a step that is required before we can start installing packages from the new repository.

Updating Apt's cache

To update Apt's repository inventory, we will execute the command apt-get update.

# apt-get update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
Get:2 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:3 http://repo.saltstack.com trusty InRelease [2,813 B]
Get:4 http://repo.saltstack.com trusty/main amd64 Packages [8,046 B]
Get:5 http://security.ubuntu.com trusty-security/main Sources [105 kB]
Hit http://archive.ubuntu.com trusty Release.gpg
Ign http://repo.saltstack.com trusty/main Translation-en_US
Ign http://repo.saltstack.com trusty/main Translation-en
Hit http://archive.ubuntu.com trusty Release
Hit http://archive.ubuntu.com trusty/main Sources
Hit http://archive.ubuntu.com trusty/universe Sources
Hit http://archive.ubuntu.com trusty/main amd64 Packages
Hit http://archive.ubuntu.com trusty/universe amd64 Packages
Hit http://archive.ubuntu.com trusty/main Translation-en
Hit http://archive.ubuntu.com trusty/universe Translation-en
Ign http://archive.ubuntu.com trusty/main Translation-en_US
Ign http://archive.ubuntu.com trusty/universe Translation-en_US
Fetched 3,136 kB in 8s (358 kB/s)
Reading package lists... Done

With the above complete we can now access the packages available within Saltstack's repository.

Installing with apt-get

Specifically, we can now install the salt-minion package. To do this we will execute the command apt-get install salt-minion.

# apt-get install salt-minion
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common
Suggested packages:
  debtags python-jinja2-doc python-beaker python-mako-doc
  python-egenix-mxdatetime mysql-server-5.1 mysql-server python-mysqldb-dbg
  python-augeas
The following NEW packages will be installed:
  dctrl-tools libmysqlclient18 libpgm-5.1-0 libzmq3 mysql-common
  python-dateutil python-jinja2 python-mako python-markupsafe python-msgpack
  python-mysqldb python-tornado python-zmq salt-common salt-minion
0 upgraded, 15 newly installed, 0 to remove and 155 not upgraded.
Need to get 4,959 kB of archives.
After this operation, 24.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu/ trusty-updates/main mysql-common all 5.5.47-0ubuntu0.14.04.1 [13.5 kB]
Get:2 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main python-tornado amd64 4.2.1-1 [274 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ trusty-updates/main libmysqlclient18 amd64 5.5.47-0ubuntu0.14.04.1 [597 kB]
Get:4 http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/ trusty/main salt-common all 2015.8.7+ds-1 [3,108 kB]
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for ureadahead (0.100.0-16) ...

After a successful installation of the salt-minion package we now have a salt-minion instance running with the default configuration.

Configuring the Minion

With a traditional Master/Minion setup, this would be the point where we configure the Minion to connect to the Master server and restart the running service.

For this setup however, we will skip the Master server definition. Instead we need to tell the salt-minion service to look for Salt state files locally. To alter the salt-minion's configuration we can either edit /etc/salt/minion, the default configuration file, or add a new file into /etc/salt/minion.d/; this .d directory is used to override the default configurations defined in /etc/salt/minion.

My personal preference is to create a new file within the minion.d/ directory, as this keeps the configuration easy to manage. However, there is no right or wrong method, as this is a personal and environmental preference.

For this article we will go ahead and create the following file /etc/salt/minion.d/masterless.conf.

# vi /etc/salt/minion.d/masterless.conf

Within this file we will add two configurations.

file_client: local

file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The first configuration item above is file_client. By setting this configuration to local we are telling the salt-minion service to search locally for desired state configurations rather than connecting to a Master.

The second configuration is the file_roots dictionary. This defines the location of Salt state files. In the above example we are defining both /srv/salt/base and /srv/salt/bencane. These two directories will be where we store our Salt state files for this Minion to apply.

Stopping the salt-minion service

While in most cases we would need to restart the salt-minion service to apply the configuration changes, in this case, we actually need to do the opposite; we need to stop the salt-minion service.

# service salt-minion stop
salt-minion stop/waiting

The salt-minion service does not need to be running when setup as a Masterless Minion. This is because the salt-minion service is only running to listen for events from the Master. Since we have no master there is no reason to keep this service running. If left running the salt-minion service will repeatedly try to connect to the defined Master server which by default is a host that resolves to salt. To remove unnecessary overhead it is best to simply stop this service in a Masterless Minion configuration.
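Stopping the service only covers the running instance; on Ubuntu 14.04 the salt-minion service will start again at the next boot. A sketch of one way to prevent that, assuming the Upstart init system that Ubuntu 14.04 ships with (on systemd-based distributions, `systemctl disable salt-minion` serves the same purpose):

```shell
# Tell Upstart never to start salt-minion automatically; the service can
# still be started by hand with "service salt-minion start" if ever needed.
echo manual | sudo tee /etc/init/salt-minion.override
```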

Populating the desired states

At this point we have a Salt Minion that has been configured to run masterless. However, the Masterless Minion has no Salt states to apply yet. In this section we will provide the salt-minion agent two sets of Salt states to apply. The first will be placed into the /srv/salt/base directory. This file_roots directory will contain a base set of Salt states that I have created to manage a basic Docker host.

Deploying the base Salt states

The states in question are available via a public GitHub repository. To deploy these Salt states we can simply clone the repository into the /srv/salt/base directory. Before doing so however, we will need to first create the /srv/salt directory.

# mkdir -p /srv/salt

The /srv/salt directory is Salt's default state directory; it is also the parent directory for both the base and bencane directories we defined within the file_roots configuration. Now that the parent directory exists, we will clone the base repository into it using git.

# cd /srv/salt/
# git clone https://github.com/madflojo/salt-base.git base
Cloning into 'base'...
remote: Counting objects: 50, done.
remote: Total 50 (delta 0), reused 0 (delta 0), pack-reused 50
Unpacking objects: 100% (50/50), done.
Checking connectivity... done.

With the salt-base repository cloned into the base directory, the Salt states within that repository are now available to the salt-minion agent.

# ls -la /srv/salt/base/
total 84
drwxr-xr-x 18 root root 4096 Feb 28 21:00 .
drwxr-xr-x  3 root root 4096 Feb 28 21:00 ..
drwxr-xr-x  2 root root 4096 Feb 28 21:00 dockerio
drwxr-xr-x  2 root root 4096 Feb 28 21:00 fail2ban
drwxr-xr-x  2 root root 4096 Feb 28 21:00 git
drwxr-xr-x  8 root root 4096 Feb 28 21:00 .git
drwxr-xr-x  3 root root 4096 Feb 28 21:00 groups
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iotop
drwxr-xr-x  2 root root 4096 Feb 28 21:00 iptables
-rw-r--r--  1 root root 1081 Feb 28 21:00 LICENSE
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ntpd
drwxr-xr-x  2 root root 4096 Feb 28 21:00 python-pip
-rw-r--r--  1 root root  106 Feb 28 21:00 README.md
drwxr-xr-x  2 root root 4096 Feb 28 21:00 screen
drwxr-xr-x  2 root root 4096 Feb 28 21:00 ssh
drwxr-xr-x  2 root root 4096 Feb 28 21:00 swap
drwxr-xr-x  2 root root 4096 Feb 28 21:00 sysdig
drwxr-xr-x  3 root root 4096 Feb 28 21:00 sysstat
drwxr-xr-x  2 root root 4096 Feb 28 21:00 timezone
-rw-r--r--  1 root root  208 Feb 28 21:00 top.sls
drwxr-xr-x  2 root root 4096 Feb 28 21:00 wget

From the above directory listing we can see that the base directory has quite a few Salt states. These states are very useful for managing a basic Ubuntu system, performing steps from installing Docker (dockerio) to setting the system timezone (timezone). Everything needed to run a basic Docker host is available and defined within these base states.

Applying the base Salt states

Even though the salt-minion agent can now use these Salt states, there is nothing running to tell the salt-minion agent it should do so. Therefore the desired states are not being applied.

To apply our new base states we can use the salt-call command to tell the salt-minion agent to read the Salt states and apply the desired states within them.

# salt-call --local state.highstate

The salt-call command is used to interact with the salt-minion agent from the command line. Above, salt-call was executed with the state.highstate option, which tells the agent to look for all defined states and apply them.

The salt-call command also included the --local option, which is specifically used when running a Masterless Minion: it tells the salt-minion agent to look through its local state files rather than attempting to pull from a Salt Master.
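Two related invocations can be handy while iterating on states; these are standard salt-call options, shown here as command fragments assuming the same masterless setup:

```shell
# Dry run: report what state.highstate WOULD change, without applying anything.
salt-call --local state.highstate test=True

# Apply a single state file (here the base repository's timezone state)
# instead of the full highstate.
salt-call --local state.sls timezone
```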

The below shows the results of the execution above, within this output we can see the various states being applied successfully.

----------
          ID: GMT
    Function: timezone.system
      Result: True
     Comment: Set timezone GMT
     Started: 21:09:31.515117
    Duration: 126.465 ms
     Changes:
              ----------
              timezone:
                  GMT
----------
          ID: wget
    Function: pkg.latest
      Result: True
     Comment: Package wget is already up-to-date
     Started: 21:09:31.657403
    Duration: 29.133 ms
     Changes:

Summary for local
-------------
Succeeded: 26 (changed=17)
Failed:     0
-------------
Total states run:     26

In the above output we can see that all of the defined states were executed successfully. We can validate this further by checking the status of the docker service, which we can see below is now running; before executing salt-call, Docker was not even installed on this system.

# service docker status
docker start/running, process 11994

With a successful salt-call execution our Salt Minion is now officially a Masterless Minion. However, even though our server has Salt installed, and is configured as a Masterless Minion, there are still a few steps we need to take to make this Minion "Self Managing".

Self-Managing Minions

In order for our Minion to be Self-Managed, the Minion server should not only apply the base states above but also keep the salt-minion service and configuration up to date. To do this, we will be cloning yet another git repository.

Deploying the blog specific Salt states

This repository contains Salt states used to manage the salt-minion agent, not only for this Minion but for any other Masterless Minion used to host this blog.

# cd /srv/salt
# git clone https://github.com/madflojo/blog-salt bencane
Cloning into 'bencane'...
remote: Counting objects: 25, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 25 (delta 4), reused 20 (delta 2), pack-reused 0
Unpacking objects: 100% (25/25), done.
Checking connectivity... done.

In the above command we cloned the blog-salt repository into the /srv/salt/bencane directory. Like the /srv/salt/base directory, the /srv/salt/bencane directory is also defined within the file_roots configuration that we set up earlier.

Applying the blog specific Salt states

With these new states copied to the /srv/salt/bencane directory, we can once again run the salt-call command to trigger the salt-minion agent to apply these states.

# salt-call --local state.highstate
[INFO    ] Loading fresh modules for state activity
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache u'salt://top.sls'
[INFO    ] Fetching file from saltenv 'bencane', ** skipped ** latest already in cache u'salt://top.sls'
----------
          ID: /etc/salt/minion.d/masterless.conf
    Function: file.managed
      Result: True
     Comment: File /etc/salt/minion.d/masterless.conf is in the correct state
     Started: 21:39:00.800568
    Duration: 4.814 ms
     Changes:
----------
          ID: /etc/cron.d/salt-standalone
    Function: file.managed
      Result: True
     Comment: File /etc/cron.d/salt-standalone updated
     Started: 21:39:00.806065
    Duration: 7.584 ms
     Changes:
              ----------
              diff:
                  New file
              mode:
                  0644

Summary for local
-------------
Succeeded: 37 (changed=7)
Failed:     0
-------------
Total states run:     37

Based on the output of the salt-call execution we can see that 37 Salt states were applied successfully, 7 of which resulted in changes. This means that the new Salt states within the bencane directory were applied. But what exactly did these states do?

Understanding the "Self-Managing" Salt states

This second repository has a handful of states that perform various tasks specific to this environment. The "Self-Managing" states are all located within the /srv/salt/bencane/salt directory.

$ ls -la /srv/salt/bencane/salt/
total 20
drwxr-xr-x 5 root root 4096 Mar 20 05:28 .
drwxr-xr-x 5 root root 4096 Mar 20 05:28 ..
drwxr-xr-x 3 root root 4096 Mar 20 05:28 config
drwxr-xr-x 2 root root 4096 Mar 20 05:28 minion
drwxr-xr-x 2 root root 4096 Mar 20 05:28 states

Within the salt directory there are several more directories that have defined Salt states. To get started let's look at the minion directory. Specifically, let's take a look at the salt/minion/init.sls file.

# cat salt/minion/init.sls
salt-minion:
  pkgrepo:
    - managed
    - humanname: SaltStack Repo
    - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
    - dist: {{ grains['lsb_distrib_codename'] }}
    - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub
  pkg:
    - latest
  service:
    - dead
    - enable: False

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

Within the minion/init.sls file there are 5 Salt states defined.

Breaking down the minion/init.sls states

Let's break down some of these states to better understand what actions they are performing.

pkgrepo:
  - managed
  - humanname: SaltStack Repo
  - name: deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest {{ grains['lsb_distrib_codename'] }} main
  - dist: {{ grains['lsb_distrib_codename'] }}
  - key_url: https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub

The first state defined is a pkgrepo state. We can see based on the options that this state is used to manage the Apt repository we defined earlier. We can also see from the key_url option that even the GPG key we imported earlier is managed by this state.

pkg:
  - latest

The second state defined is a pkg state. This is used to manage a specific package, specifically in this case the salt-minion package. Since the latest option is present the salt-minion agent will not only install the latest salt-minion package but also keep it up to date with the latest version if it is already installed.

service:
  - dead
  - enable: False

The third state is a service state. This state is used to manage the salt-minion service. With the dead and enable: False settings specified the salt-minion agent will stop and disable the salt-minion service.

So far these states are performing the same steps we performed manually above. Let's keep breaking down the minion/init.sls file to understand what other steps we have told Salt to perform.

/etc/salt/minion.d/masterless.conf:
  file.managed:
    - source: salt://salt/config/etc/salt/minion.d/masterless.conf

The fourth state is a file state, this state is deploying a /etc/salt/minion.d/masterless.conf file. This just happens to be the same file we created earlier. Let's take a quick look at the file being deployed to understand what Salt is doing.

$ cat salt/config/etc/salt/minion.d/masterless.conf
file_client: local

file_roots:
  base:
    - /srv/salt/base
  bencane:
    - /srv/salt/bencane

The contents of this file are exactly the same as the masterless.conf file we created in the earlier steps. This means that right now the configuration file being deployed matches what is already in place; in the future, if any changes are made to masterless.conf within this git repository, those changes will be deployed on the next state.highstate execution.

/etc/cron.d/salt-standalone:
  file.managed:
    - source: salt://salt/config/etc/cron.d/salt-standalone

The fifth state is also a file state; while it too deploys a file, the file in question is very different. Let's take a look at it to understand what it is used for.

$ cat salt/config/etc/cron.d/salt-standalone
*/2 * * * * root su -c "/usr/bin/salt-call state.highstate --local 2>&1 > /dev/null"

The salt-standalone file is an /etc/cron.d-based cron job that runs the same salt-call command we ran earlier to apply the local Salt states. In a masterless configuration there is otherwise no scheduled task to tell the salt-minion agent to apply all of the Salt states. The above cron job takes care of this by simply executing a local state.highstate run every 2 minutes.
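One caveat with a fixed two-minute schedule is that a slow highstate run can overlap the next one. A hedged variation (not part of the repository) that guards the entry with flock so only one run executes at a time:

```shell
# /etc/cron.d entry: flock -n skips the run entirely if the previous
# highstate still holds the lock, so executions never pile up.
*/2 * * * * root flock -n /var/lock/salt-highstate.lock /usr/bin/salt-call state.highstate --local > /dev/null 2>&1
```

Note the redirection order: `> /dev/null 2>&1` silences both streams, whereas `2>&1 > /dev/null` still lets stderr through to cron's mail.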

Summary of minion/init.sls

Based on the contents of the minion/init.sls we can see how this salt-minion agent is configured to be "Self-Managing". From the above we were able to see that the salt-minion agent is configured to perform the following steps.

  1. Configure the Saltstack Apt repository and GPG keys
  2. Install the salt-minion package or update to the newest version if already installed
  3. Deploy the masterless.conf configuration file into /etc/salt/minion.d/
  4. Deploy the /etc/cron.d/salt-standalone file which deploys a cron job to initiate state.highstate executions

These steps ensure that the salt-minion agent is both configured correctly and applying desired states every 2 minutes.

While the above steps are useful for applying the current states, the whole point of continuous delivery is to deploy changes quickly. To do this we need to also keep the Salt states up-to-date.

Keeping Salt states up-to-date with Salt

One way to keep our Salt states up to date is to tell the salt-minion agent to update them for us.

Within the /srv/salt/bencane/salt directory exists a states directory that contains two files, base.sls and bencane.sls, which contain similar Salt states. Let's break down the contents of the base.sls file to understand what actions it's telling the salt-minion agent to perform.

$ cat salt/states/base.sls
/srv/salt/base:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

base_states:
  git.latest:
    - name: https://github.com/madflojo/salt-base.git
    - target: /srv/salt/base
    - force: True

In the above we can see that the base.sls file contains two Salt states. The first is a file state that is set to ensure the /srv/salt/base directory exists with the defined permissions.

The second state is a bit more interesting as it is a git state which is set to pull the latest copy of the salt-base repository and clone it into /srv/salt/base.

With this state defined, every time the salt-minion agent runs (every 2 minutes via the cron.d job), it will check for new updates to the repository and deploy them to /srv/salt/base.

The bencane.sls file contains similar states, with the difference being the repository cloned and the location to deploy the state files to.

$ cat salt/states/bencane.sls
/srv/salt/bencane:
  file.directory:
    - user: root
    - group: root
    - mode: 700
    - makedirs: True

bencane_states:
  git.latest:
    - name: https://github.com/madflojo/blog-salt.git
    - target: /srv/salt/bencane
    - force: True

At this point, we now have a Masterless Salt Minion that is configured to "self-manage" not only its own packages but also the Salt state files that drive it.

As the state files within the git repositories are updated, those updates are pulled down by each Minion every 2 minutes. Whether that change is adding the screen package or deploying a new Docker container, it is deployed across many Masterless Minions all at once.

What's next

With the above steps complete, we now have a method for taking a new server and turning it into a Self-Managed Masterless Minion. What we didn't cover however, is how to automate the initial installation and configuration.

In next month's article, we will talk about using salt-ssh to automate the first-time installation and configuration of the salt-minion agent using the same Salt states we used today.

Posted by Benjamin Cane
Categories: FLOSS Project Planets

Linux and POWER8 microprocessors

LinuxPlanet - Mon, 2016-03-21 03:23

With the enormous amount of data being generated every day, POWER8 was designed specifically to keep up with today’s data processing requirements on high end servers.

POWER8 is a symmetric multiprocessor based on the Power architecture by IBM. It's designed specifically for server environments, with a focus on fast execution times and performing well under heavy server workloads. POWER8 is a very scalable architecture, scaling from 1 to more than 100 CPU cores per server. Google was involved when POWER8 was designed, and they currently use dual-socket POWER8 system boards internally.

Systems with POWER8 CPUs started shipping in late 2014. CPU clocks range from 2.5GHz all the way up to 5.0GHz, and there is support for both DDR3 and DDR4 memory controllers. Memory support is designed to be future proof by being as generic as possible.


Open architecture

The design is available for licensing via the OpenPOWER Foundation, mainly to support custom-made processors for use in cloud computing and in applications that need to process large amounts of scientific data. POWER8 processor specifications and firmware are available under liberal licensing, and a collaborative development model is encouraged and already happening.

Linux has full support of POWER8

IBM began submitting code patches for the Linux kernel in 2012 to support POWER8 features, and Linux has had full support for POWER8 since kernel version 3.8.

Many big Linux distributions, including Debian, Fedora and openSUSE, have installable ISO images available for Power hardware. When it comes to applications, almost all software available for traditional CPU architectures is also available for POWER8. Packages built for it are usually labelled ppc64el/ppc64le, or ppc64 when built for big-endian mode. There is plenty of prebuilt software available for Linux distributions; for example, thousands of Debian Linux packages are available. Remember to limit search results to packages for ppc64el to get a better picture of what's available.

While Power hardware is transitioning from big endian to little endian, POWER8 is actually a bi-endian architecture capable of accessing data in both modes. However, most Linux distributions concentrate on little-endian mode as it has a much wider application ecosystem.
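To check which of these modes a given machine presents, the kernel's machine name is enough. A small sketch using the usual uname machine-name conventions (anything else simply isn't Power hardware):

```shell
# Map the kernel's machine name to the POWER endianness it implies.
arch=$(uname -m)
case "$arch" in
  ppc64le) echo "POWER running little-endian (ppc64el/ppc64le packages)" ;;
  ppc64)   echo "POWER running big-endian (ppc64 packages)" ;;
  *)       echo "not POWER hardware: $arch" ;;
esac
```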

Future of POWER8

Some years ago it seemed that ARM servers were going to be really popular, but as of today POWER8 looks like the only viable alternative to the Intel Xeon architecture.

Categories: FLOSS Project Planets

The VMware Hearing and the Long Road Ahead

LinuxPlanet - Mon, 2016-02-29 20:00

[ This blog was crossposted on Software Freedom Conservancy's website. ]

Last Thursday, Christoph Hellwig and his legal counsel attended a hearing in Hellwig's VMware case, which Conservancy currently funds. Harald Welte, world famous for his GPL enforcement work in the early 2000s, also attended as an observer and wrote an excellent summary. I'd like to highlight a few parts of his summary, in the context of Conservancy's past litigation experience regarding the GPL.

First of all, in great contrast to the cases here in the USA, the Court acknowledged fully the level of public interest and importance of the case. Judges who have presided over Conservancy's GPL enforcement cases in USA federal court take all matters before them quite seriously. However, in our hearings, the federal judges preferred to ignore entirely the public policy implications regarding copyleft; they focused only on the copyright infringement and claims related to it. Usually, appeals courts in the USA are the first to broadly consider larger policy questions. There are definitely some advantages to the first Court showing interest in the public policy concerns.

However, beyond this initial point, I was struck that Harald's summary sounded so much like the many hearings I attended in the late 2000's and early 2010's regarding Conservancy's BusyBox cases. From his description, it sounds to me like judges around the world aren't all that different: they like to ask leading questions and speculate from the bench. It's their job to dig deep into an issue, separate away irrelevancies, and assure that the stark truth of the matter presents itself before the Court for consideration. In an adversarial process like this one, that means impartially asking both sides plenty of tough questions.

That process can be a rollercoaster for anyone who feels, as we do, that the Court will rule on the specific legal issues around which we have built our community. We should of course not fear the hard questions of judges; it's their job to ask us the hard questions, and it's our job to answer them as best we can. So often, here in the USA, we've listened to Supreme Court arguments (for which the audio is released publicly), and every pundit has speculated incorrectly about how the justices would rule based on their questions. Sometimes, a judge asks a clarification question regarding a matter they already understand, to support a specific opinion and help their colleagues on the bench see the same issue. Other times, judges ask questions for the usual reasons: because the judges themselves are truly confused and unsure. Sometimes, particularly in our past BusyBox cases, I've seen the judge ask opposing counsel a question to expose some bit of bluster that counsel sought to pass off as settled law. You never really know why a judge asked a specific question until you see the ruling. At this point in the VMware case, nothing has been decided; this is just the next step forward in a long process. We enforced here in the USA for almost five years, we've been in litigation in Germany for about one year, and the earliest the Germany case can possibly resolve is this May.

Kierkegaard wrote that it is perfectly true, as the philosophers say, that life must be understood backwards. But they forget the other proposition, that it must be lived forwards. Court cases are a prime example of this phenomenon. We know it is gut-wrenching for our Supporters to watch every twist and turn in the case. It has taken so long for us to reach the point where the question of a combined work of software under the GPL is before a Court; now that it is we all want this part to finish quickly. We remain very grateful to all our Supporters who stick with us, and the new ones who will join today to help us make our funding match on its last day. That funding makes it possible for Conservancy to pursue this and other matters to ensure strong copyleft for our future, and handle every other detail that our member projects need. The one certainty is that our best chance of success is working hard for plenty of hours, and we appreciate that all of you continue to donate so that the hard work can continue. We also thank the Linux developers in Germany, like Harald, who are supporting us locally and able to attend in person and report back.

Categories: FLOSS Project Planets

Give My Regards To Ward 10

LinuxPlanet - Mon, 2016-02-29 09:03

Hello everyone, I’m back!!! Well, partially back I suppose. I just wanted to write a quick update to let you all know that I’m at home now recovering from my operation and all went as well as could be expected. At this early stage all indications are that I could be healthier than I’ve been in over a decade, but it’s going to take a long time to recover. There were a few unexpected events during the operation which left me with a drain in my chest for a few days. I wasn’t expecting that, but I don’t have the energy to explain it all now. Maybe I will in future. The important thing to remember is it went really well; the surgeon seems really happy and he was practically dancing when I saw him the day after the operation.

Don’t Mess With Me

Right now I am recuperating at home and just about able to shuffle around the house. I’m doing ok though, not in much pain, just overwhelmingly tired all the time and very tender. I have a rather fetching scar which stretches from deep in my groin up to my chest and must be about 15 inches long. I’m just bragging now though, hehehe. I will certainly look like a badass judging from my scars. I just need a suitable story to go with them. Have you seen The Revenant? I might go with something like that. I strangled a bear with my bare hands. No pun intended.

I was treated impeccably by the wonderful staff at The Christie and I really can’t praise them highly enough. My fortnight on Ward 10 was made bearable by their humour and good grace. I couldn’t have asked for more.

I will obviously be out of action for many weeks but rest assured I am fine and I’ll see you all again soon.

Take it easy,


Categories: FLOSS Project Planets

Big Panda’s community panel on Cloud monitoring

LinuxPlanet - Fri, 2016-02-26 06:17
On Wednesday, February 10th, I participated in an online panel on the subject of Cloud Monitoring, as part of MonitoringScape Live (#MonitoringScape), a series of community panels about everything that matters to DevOps, ITOps, and the modern NOC.

 Watch a recording of the panel:

Points to note from the session above:

  • What is cloud?
    Most of the panelists agreed that the cloud is a way to get resources on demand. I personally think that a scalable and practically infinite pool of resources with high availability can be termed a cloud.
  • How have cloud-based architectures impacted user experience?
    There are mixed feelings about this. While a lot of clutter and noise is generated because getting resources to build and host applications has become easier by virtue of the cloud, I don't think that is a bad thing. The cloud has reduced the barrier to entry for a lot of application developers. It helps shield users from bad experiences during high volumes of requests or processing. In a way, the cloud has helped to serve users more consistently.
  • What is the business case for moving to the cloud?
    It is easy to scale, not only out and up but also down and in. Scaling out and up helps in delivering a consistent user experience and ensuring that the app does not die under high load. Scaling down and in reduces the expense of underutilized resources lying around.
  • What is different about monitoring cloud applications?
    The cloud is dynamic. So, in my opinion, monitoring hosts is less important than monitoring services. One should focus on figuring out the health of the service, rather than the health of individual machines. Alerting was a pain point that every panelist pointed out. I think we need to change the way we alert for cloud systems. We need to measure parameters like the response time of the application, rather than CPU cycles on an individual machine.
  • What technology will impact cloud computing the most in the next 5 years?
    This is a tricky question. While I would bet that containers are going to change the way we deploy and run our applications, it was pointed out, and I accept, that predicting technology is hard. So we just need to wait and watch, and be prepared to adapt and evolve to whatever comes.
  • Will we ever automate people out of datacenters?
    I think we are almost there. As I see it, there are only two manual tasks left to get a server online: connecting it to the network and powering it on. From there, thanks to network boot and technologies like kickstart, taking things forward is not too difficult and does not need a human inside the datacenter.
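The hands-off provisioning described in the last point can be illustrated with a minimal kickstart file of the sort fetched during a PXE/network boot. This fragment is purely illustrative; the mirror URL and password are placeholders, not a working configuration:

```
# Minimal illustrative kickstart: once PXE boot fetches this file,
# the install proceeds with no human inside the datacenter.
install
url --url=http://mirror.example.com/centos/7/os/x86_64/
lang en_US.UTF-8
keyboard us
# Placeholder password; a real file would use an encrypted one.
rootpw --plaintext changeme
timezone UTC
reboot
%packages
@core
%end
```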
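The service-level monitoring idea above can be sketched as a simple response-time check. This is an illustrative sketch only; the URL and two-second threshold are placeholder assumptions, not values discussed in the panel:

```python
# Illustrative sketch: alert on service response time rather than
# per-host metrics. The URL and threshold are placeholder assumptions.
import time
import urllib.request

def is_healthy(elapsed_seconds, threshold_seconds=2.0):
    """Decide health purely from observed response time."""
    return elapsed_seconds <= threshold_seconds

def check_response_time(url, threshold_seconds=2.0):
    """Time one HTTP request and report (healthy, elapsed_seconds)."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=threshold_seconds * 2) as resp:
        resp.read()
    elapsed = time.monotonic() - start
    return is_healthy(elapsed, threshold_seconds), elapsed

# Example (requires network access):
# healthy, elapsed = check_response_time("https://example.com/")
```

In a real deployment this check would run against the service's public endpoint (behind the load balancer), so it measures what users actually experience rather than the state of any one machine.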
This was a summary of the panel discussion. I would recommend that everyone go through the video and listen to what the different panelists had to say about cloud monitoring.
I would like to thank Big Panda for organizing this. More community panels with different panelists are coming up. Do check them out.
Categories: FLOSS Project Planets