Feeds

Riku Voipio: Cross-compiling with debian stretch

Planet Debian - Sat, 2017-06-24 12:03
Debian stretch comes with cross-compiler packages for selected architectures:
$ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick guide with exact steps. But first - while you can do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:
sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "strech_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
Then we set up a cross-building environment for arm64 inside the container:
# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
Now that we have a nice build environment, let's choose something more complicated than the usual kernel/BusyBox to cross-build - qemu:
# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
Now that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests. Or some of the build-dependencies may not be multiarch-enabled. So work continues :)
Categories: FLOSS Project Planets

Peter Bengtsson: How to do performance micro benchmarks in Python

Planet Python - Sat, 2017-06-24 09:50

Suppose that you have a function and you wonder, "Can I make this faster?" Well, you might already have thought that and you might already have a theory. Or two. Or three. Your theory might be sound and likely to be right, but before you go anywhere with it you need to benchmark it first. Here are some tips and scaffolding for doing Python function benchmark comparisons.

Tenets
  1. Internally, Python will warm up and it's likely that your function depends on other things such as databases or IO. So it's important that you don't test function1 first and then function2 immediately after because function2 might benefit from a warm up painfully paid for by function1. So mix up the order of them or cycle through them enough that they all pay for or gain from warm ups.

  2. Look at the median first. The mean (aka. average) is often tainted by spikes, and these spikes of slow-down can be caused by your local Spotify client deciding to reindex itself or some such thing. Sometimes those spikes matter. For example, garbage collection is inevitable and will have an effect that matters. (See the short illustration after this list.)

  3. Run your functions many times. So many times that the whole benchmark takes a while. Like tens of seconds or more. Also, if you run it for significantly long, it's likely that all candidates get punished by the same environmental effects, such as garbage collection or the CPU being reassigned to something else intensive on your computer.

  4. Try to take your benchmark into different, and possibly more realistic environments. For example, don't rely on reading a file like /Users/peterbe/only/on/my/macbook when, likely, the end destination for your code is an Ubuntu server in AWS. Write your code so that it's easy to copy and paste around, like into a vi/jed editor in an ssh session somewhere.

  5. Sanity check each function before benchmarking them. No need for pytest or anything fancy but just make sure that you test them in some basic way. But the assertion testing is likely to add to the total execution time so don't do it when running the functions.

  6. Avoid "prints" inside the time measured code. A print() is I/O and an "external resource" that can become very unfair to compare CPU bound performance.

  7. Don't fear testing many different functions. If you have multiple ideas of doing a function differently, it's cheap to pile them on. But be careful how you "report" because if there are many different ways of doing something you might accidentally compare different fruit without noticing.

  8. Make sure your functions take at least one parameter. I'm no Python core developer or C hacker, but I know there are "murks" within the compiler and interpreter that might do what a regular memoizer would do. Also, the performance difference can be reversed on tiny inputs compared to really large ones.

  9. Be humble with the fact that 0.01 milliseconds difference when doing 10,000 iterations is probably not worth writing a more complex and harder-to-debug function.
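
To make tenet 2 concrete, here is a tiny illustration (mine, not from the original post) of how a single slow outlier drags the mean while barely moving the median:

import statistics

# 99 "normal" timings of about 1 ms plus one 50 ms spike,
# e.g. a garbage collection pause or a busy laptop
timings = [1.0] * 99 + [50.0]

print(statistics.median(timings))  # 1.0  - unaffected by the spike
print(statistics.mean(timings))    # 1.49 - pulled up by the single outlier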

The Boilerplate

Let's demonstrate with an example:

# The functions to compare
import math


def f1(degrees):
    return math.cos(degrees)


def f2(degrees):
    e = 2.718281828459045
    return (
        (e**(degrees * 1j) + e**-(degrees * 1j)) / 2
    ).real


# Assertions
assert f1(100) == f2(100) == 0.862318872287684
assert f1(1) == f2(1) == 0.5403023058681398


# Reporting
import time
import random
import statistics

functions = f1, f2
times = {f.__name__: [] for f in functions}

for i in range(100000):  # adjust accordingly so whole thing takes a few sec
    func = random.choice(functions)
    t0 = time.time()
    func(i)
    t1 = time.time()
    times[func.__name__].append((t1 - t0) * 1000)

for name, numbers in times.items():
    print('FUNCTION:', name, 'Used', len(numbers), 'times')
    print('\tMEDIAN', statistics.median(numbers))
    print('\tMEAN ', statistics.mean(numbers))
    print('\tSTDEV ', statistics.stdev(numbers))

Let's break that down a bit.

  • The first area (# The functions to compare) is all up to you. This silly example tries to peg Python's builtin math.cos against your own arithmetic expression.

  • The second area (# Assertions) is where you do some basic sanity checks/tests. This comes in handy if you keep modifying the functions more and more to try to squeeze out some extra juice, to make sure they still return the right results.

  • The last area (# Reporting) is the boilerplat'y area. You obviously have to change the line functions = f1, f2 to include all the named functions you have in the first area. And the number of iterations totally depends on how long the functions take to run. Here it's 100,000 times which is kinda ridiculous but I just needed a dead simple function to demonstrate.

  • Note that each measurement is in milliseconds.

You run that and get something like this:

FUNCTION: f1 Used 49990 times
    MEDIAN 0.0
    MEAN  0.00045161219591330375
    STDEV 0.0011268475946446341
FUNCTION: f2 Used 50010 times
    MEDIAN 0.00095367431640625
    MEAN  0.0009188626294516487
    STDEV 0.000642871632138125

More Examples

The example above already broke one of the tenets in that these functions were simply too fast. Doing rather basic mathematics is just too fast to compare with such a trivial benchmark. Here are some other examples:

Remove duplicates from list without losing order

# The functions to compare
def f1(seq):
    checked = []
    for e in seq:
        if e not in checked:
            checked.append(e)
    return checked


def f2(seq):
    checked = []
    seen = set()
    for e in seq:
        if e not in seen:
            checked.append(e)
            seen.add(e)
    return checked


def f3(seq):
    checked = []
    [checked.append(i) for i in seq if not checked.count(i)]
    return checked


def f4(seq):
    seen = set()
    return [x for x in seq if x not in seen and not seen.add(x)]


def f5(seq):
    def generator():
        seen = set()
        for x in seq:
            if x not in seen:
                seen.add(x)
                yield x
    return list(generator())


# Assertion
import random


def _random_seq(length):
    seq = []
    for _ in range(length):
        seq.append(random.choice(
            'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
        ))
    return seq


L = list('abca')
assert f1(L) == f2(L) == f3(L) == f4(L) == f5(L) == list('abc')
L = _random_seq(10)
assert f1(L) == f2(L) == f3(L) == f4(L) == f5(L)

# Reporting
import time
import statistics

functions = f1, f2, f3, f4, f5
times = {f.__name__: [] for f in functions}

for i in range(3000):
    seq = _random_seq(i)
    for _ in range(len(functions)):
        func = random.choice(functions)
        t0 = time.time()
        func(seq)
        t1 = time.time()
        times[func.__name__].append((t1 - t0) * 1000)

for name, numbers in times.items():
    print('FUNCTION:', name, 'Used', len(numbers), 'times')
    print('\tMEDIAN', statistics.median(numbers))
    print('\tMEAN ', statistics.mean(numbers))
    print('\tSTDEV ', statistics.stdev(numbers))

Results:

FUNCTION: f1 Used 3029 times
    MEDIAN 0.6871223449707031
    MEAN  0.6917867380307822
    STDEV 0.42611748137761174
FUNCTION: f2 Used 2912 times
    MEDIAN 0.054955482482910156
    MEAN  0.05610262627130026
    STDEV 0.03000829926668248
FUNCTION: f3 Used 2985 times
    MEDIAN 1.4472007751464844
    MEAN  1.4371055654145566
    STDEV 0.888658217522005
FUNCTION: f4 Used 2965 times
    MEDIAN 0.051975250244140625
    MEAN  0.05343245816673035
    STDEV 0.02957275548477728
FUNCTION: f5 Used 3109 times
    MEDIAN 0.05507469177246094
    MEAN  0.05678296204202234
    STDEV 0.031521596461048934

Winner:

def f4(seq):
    seen = set()
    # set.add() returns None, so "not seen.add(x)" is always true
    # and adds x to the set as a side effect
    return [x for x in seq if x not in seen and not seen.add(x)]

Fastest way to count the number of lines in a file

# The functions to compare
import codecs
import subprocess


def f1(filename):
    count = 0
    with codecs.open(filename, encoding='utf-8', errors='ignore') as f:
        for line in f:
            count += 1
    return count


def f2(filename):
    with codecs.open(filename, encoding='utf-8', errors='ignore') as f:
        return len(f.read().splitlines())


def f3(filename):
    return int(subprocess.check_output(['wc', '-l', filename]).split()[0])


# Assertion
filename = 'big.csv'
assert f1(filename) == f2(filename) == f3(filename) == 9999

# Reporting
import time
import statistics
import random

functions = f1, f2, f3
times = {f.__name__: [] for f in functions}
filenames = 'dummy.py', 'hacker_news_data.txt', 'yarn.lock', 'big.csv'

for _ in range(200):
    for fn in filenames:
        for func in functions:
            t0 = time.time()
            func(fn)
            t1 = time.time()
            times[func.__name__].append((t1 - t0) * 1000)

for name, numbers in times.items():
    print('FUNCTION:', name, 'Used', len(numbers), 'times')
    print('\tMEDIAN', statistics.median(numbers))
    print('\tMEAN ', statistics.mean(numbers))
    print('\tSTDEV ', statistics.stdev(numbers))

Results:

FUNCTION: f1 Used 800 times
    MEDIAN 5.852460861206055
    MEAN  25.403797328472137
    STDEV 37.09347378640582
FUNCTION: f2 Used 800 times
    MEDIAN 0.45299530029296875
    MEAN  2.4077045917510986
    STDEV 3.717931526478758
FUNCTION: f3 Used 800 times
    MEDIAN 2.8804540634155273
    MEAN  3.4988239407539368
    STDEV 1.3336427480808102

Winner:

def f2(filename):
    with codecs.open(filename, encoding='utf-8', errors='ignore') as f:
        return len(f.read().splitlines())

Conclusion

No conclusion really. Just wanted to point out that this is just a hint of a decent start when doing performance benchmarking of functions.

There is also the timeit built-in, which "provides a simple way to time small bits of Python code", but it has the disadvantage that your functions are not allowed to be as complex. Also, it's harder to generate multiple different fixtures to feed your functions without that fixture generation affecting the times.
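
For completeness, here is a minimal timeit sketch (not part of the original boilerplate) that times the two cosine functions from the first example. It assumes f1 and f2 live in a module named bench_funcs, which is purely a placeholder name:

import timeit

# Hypothetical module name; adjust to wherever f1 and f2 are defined
setup = "from bench_funcs import f1, f2"

for name in ("f1", "f2"):
    # Run 100,000 calls, repeated 5 times, and keep the best run
    best = min(timeit.repeat("{}(123.4)".format(name), setup=setup,
                             repeat=5, number=100000))
    print(name, "best of 5:", round(best, 4), "seconds per 100,000 calls")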

There are a lot of things that this boilerplate could improve on, such as sorting by winner, showing percentage comparisons against the fastest, ASCII graphs, memory allocation differences, etc. That's up to you.
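
As a small sketch of the "percentages against the fastest" idea (my addition, not the author's), this helper post-processes a times dict shaped like the one built in the boilerplate above:

import statistics

def report(times):
    # times maps each function name to a list of per-call durations in ms
    medians = {name: statistics.median(numbers)
               for name, numbers in times.items()}
    # Guard against a 0.0 median from a too-coarse timer
    fastest = min(medians.values()) or 1e-9
    for name, median in sorted(medians.items(), key=lambda kv: kv[1]):
        print(name, 'median', round(median, 5), 'ms',
              '({:.0f}% of the fastest)'.format(median / fastest * 100))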

Categories: FLOSS Project Planets

FOSSASIA SUMMIT 2017 and KDE

Planet KDE - Sat, 2017-06-24 03:24



I got an opportunity to represent KDE in FOSSASIA 2017 held in mid-March at Science Center, Singapore. There were many communities showcasing their hardware, designs, graphics, and software.
Science Center, Singapore

I talked about KDE, what it aims at, the various programs that KDE organizes to help budding developers, and how these are mentored. I walked through all the necessities to start contributing to KDE by introducing the audience to KDE Bugzilla, the IRC channels, various application domains, and the SoK (Season of KDE) proposal format.
                                  

Then I shared my journey in KDE and gave a brief overview of my projects under Season of KDE and Google Summer of Code. The audience was really enthusiastic and curious to start contributing to KDE. I sincerely thank FOSSASIA for giving me this wonderful opportunity.
Overall, working in KDE has been a very enriching experience. I wish to continue contributing to KDE and to share my experiences to help budding developers get started.

Categories: FLOSS Project Planets

Norbert Preining: Calibre 3 for Debian

Planet Debian - Fri, 2017-06-23 23:42

I have updated my Calibre Debian repository to include packages of the current Calibre 3.1.1. As with the previous packages, I kept RAR support in to allow me to read comic books. I have also forwarded my changes to the maintainer of Calibre in Debian, so maybe we will soon have official packages, too.

The repository location hasn’t changed, see below.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13

Enjoy

Categories: FLOSS Project Planets

Joachim Breitner: The perils of live demonstrations

Planet Debian - Fri, 2017-06-23 19:54

Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I gave the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target1 in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith <cdsmith@gmail.com>
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.

    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling. Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly2. In the end, I could save my face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.

  1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

  2. I hope the video is going to be online soon, then you can check for yourself.

Categories: FLOSS Project Planets

Dale McGladdery: Site Reset BASH Script

Planet Drupal - Fri, 2017-06-23 16:57

I'm experimenting with Drupal 8 and some of its new features like migration and configuration management. A reset script is a convenient timesaver, and many people have shared their techniques for doing this. Having benefited from others' generosity, I wanted to return the favour by sharing my current work-in-progress.

This script:

  • Leaves the codebase as-is
  • Deletes and reinstalls the database
  • Deletes the 'files' directory
  • Deletes the specified configuration management directory
  • Enables additional modules, as required
  • Deletes settings.php and allows Drupal install to recreate
  • Updates the new settings.php $config_directories['sync'] entry
  • Adds a settings-migrate.php include to settings.php

I call the script via Drush with an entry in drushrc.php:

$options['shell-aliases']['428-reset'] = '!sh /Users/dale/.drush/sha-scripts/g428-reset.sh';

The script is evolving as I learn more about Drupal 8 and refine my workflow. Comments and suggestions are welcome.

drupal-reset.sh:

#!/bin/bash

# Reinstall a Drupal instance to reset it back to a known state.
# A file base and Drush alias must already be configured.

DRUSH8='/Users/dale/bin/drush8/vendor/bin/drush'
DRUPALDIR='/Users/dale/Sites/group428'
CONFIGDIR='sites/default/group42config/sync'
DRUSHID='@g428'
SITE_NAME='Group 428'
ACCOUNT='admin'
PASS='staring-password'
EMAIL='no-reply@group42.ca'
DB_URL='mysql://group428:group428@localhost/group428'


# Nuke the database
$DRUSH8 $DRUSHID sql-drop --yes

# Nuke the filebase
echo "Resetting files"
chmod -R u+w $DRUPALDIR/*
rm $DRUPALDIR/sites/default/settings.php
rm -r $DRUPALDIR/sites/default/files
rm -r $DRUPALDIR/$CONFIGDIR

# Fresh Drupal install
cd $DRUPALDIR
$DRUSH8 site-install standard --db-url=$DB_URL --site-name=$SITE_NAME --account-name=$ACCOUNT --account-pass=$PASS --account-mail=$EMAIL --yes

# Base configuration
$DRUSH8 $DRUSHID en admin_toolbar,admin_toolbar_tools --yes

# Allow upcoming changes to settings.php
chmod u+w $DRUPALDIR/sites/default
chmod u+w $DRUPALDIR/sites/default/settings.php

# Configuration Management
sed -i '' "/config\_directories\['sync'\]/d" $DRUPALDIR/sites/default/settings.php
echo "\$config_directories['sync'] = '$CONFIGDIR';" >> $DRUPALDIR/sites/default/settings.php

# Migrate
echo "\ninclude 'settings-migrate.php';" >> $DRUPALDIR/sites/default/settings.php
$DRUSH8 $DRUSHID en migrate,migrate_drupal,migrate_plus,migrate_tools,migrate_upgrade --yes

# Login
$DRUSH8 $DRUSHID uli
Categories: FLOSS Project Planets

Joey Hess: PV array is hot

Planet Debian - Fri, 2017-06-23 16:43

Only took a couple hours to wire up and mount the combiner box.

Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to.

And the new PV array is hot!

Update: The panels have an open circuit voltage of 35.89 V and are in strings of 2, so I'd expect to see 71.78 V with only my multimeter connected. So I'm losing 0.07 volts to wiring, which is less than I designed for.

Categories: FLOSS Project Planets

Bruce Snyder: Annual Spinal Cord Injury Re-evaluation

Planet Apache - Fri, 2017-06-23 12:42
Recently I went back to Craig Hospital for an annual spinal cord injury re-evaluation and the results were very positive. It was really nice to see some familiar faces of the people for whom I have such deep admiration like my doctors, physical therapists and administrative staff. My doctor and therapists were quite surprised to see how well I am doing, especially given that I'm still seeing improvements three years later. Mainly because so many spinal cord injury patients have serious issues even years later. I am so lucky to no longer be taking any medications and to be walking again.
It has also been nearly one year since I have been back to Craig Hospital, and it seems like such a different place to me now. Being back there again feels odd for a couple of reasons. First, due to the extensive construction/remodel, the amount of change to the hospital makes it seem like a different place entirely. It used to be much smaller, which encouraged closer interaction between patients and staff. Now the place is so big (i.e., big hallways, larger individual rooms, etc.) that patients can have more privacy if they want, or even avoid some forms of interaction. Second, although I am comfortable being around so many folks who have been so severely injured (not everyone is), I have noticed that some folks are confused by me. I can tell by the way they look at me that they are wondering what I am doing there because, outwardly, I do not appear as someone who has experienced a spinal cord injury. I have been lucky enough to make it out of the wheelchair and to walk on my own. Though my feet are still paralyzed, I wear flexible, carbon fiber AFO braces on my legs and walk with one arm crutch; the braces are covered by my pants, so it's puzzling to many people.
The folks who I wish I could see more are the nurses and techs. These are the folks who helped me the most when I was so vulnerable and confused and to whom I grew very attached. To understand just how attached I was, simply moving to a more independent room as I was getting better was upsetting to me because I was so emotionally attached to them. I learned that these people are cut from a unique cloth and possess very big hearts to do the work they do every day. Because they are so involved with the acute care of in-patients, they are very busy during the day and not available for much socializing as past patients come through. Luckily, there was one of my nurses who I ran into and was able to spend some time speaking with him. I really enjoyed catching up with him and hearing about new adventures in his career. He was one of the folks I was attached to at the time and he really made a difference in my experience. I will be eternally thankful for having met these wonderful people during such a traumatic time in my life.
Today I am walking nearly 100% of the time with the leg braces and have been for over two years. I am working to rebuild my calves and my glutes, but this is a very, very long and slow process due to severe muscle atrophy after not being able to move my glutes for five months and my calves for two years. Although my feet are not responding yet, we will see what the future holds. I still feel so very lucky to be alive and continuing to make progress.
Although I cannot run at all or cycle the way I did previously, I am very thankful to be able to work out as much as I can. I am now riding the stationary bike regularly, using my Total Gym (yes, I have a Chuck Norris Total Gym) to build my calves, using a Bosu to work on balance and strength in my lower body, doing ab roller workouts and walking as much as I can both indoors on a treadmill and outside. I'd like to make time for swimming laps again, but all of this can be time consuming (and tiring!). I am not nearly as fit as I was at the time of my injury, but I continue to work hard and to see noticeable improvements for which I am truly thankful.
Thank you to everyone who continues to stay in touch and check in on me from time-to-time. You may not think it's much to send a quick message, but these messages have meant a lot to me through this process. The support from family and friends has been what has truly kept me going. The patience displayed by Bailey, Jade and Janene is pretty amazing.
Next month will mark the three year anniversary of my injury. It seems so far away and yet it continues to affect my life every day. My life will never be the same, but I do believe I have found peace with this entire ordeal.
Categories: FLOSS Project Planets

Colm O hEigeartaigh: SSO support for Apache Syncope REST services

Planet Apache - Fri, 2017-06-23 11:32
Apache Syncope has recently added SSO support for its REST services in the 2.0.3 release. Previously, access to the REST services of Syncope was via HTTP Basic Authentication. From the 2.0.3 release, SSO support is available using JSON Web Tokens (JWT). In this post, we will look at how this works and how it can be configured.

1) Obtaining an SSO token from Apache Syncope

As stated above, in the past it was necessary to supply HTTP Basic Authentication credentials when invoking on the REST API. Let's look at an example using curl. Assume we have a running Apache Syncope instance with a user "alice" with password "ecila". We can make a GET request to the user self service via:
  • curl -u alice:ecila http://localhost:8080/syncope/rest/users/self
It may be inconvenient to supply user credentials on each request, or the authentication process might not scale very well if we are authenticating the password against a backend resource. From Apache Syncope 2.0.3, we can instead get an SSO token by sending a POST request to "accessTokens/login" as follows:
  • curl -I -u alice:ecila -X POST http://localhost:8080/syncope/rest/accessTokens/login
The response contains two headers:
  • X-Syncope-Token: A JWT token signed according to the JSON Web Signature (JWS) spec.
  • X-Syncope-Token-Expire: The expiry date of the token
The token in question is signed using the (symmetric) "HS512" algorithm. It contains the subject "alice" and the issuer of the token ("ApacheSyncope"), as well as a random token identifier, and timestamps that indicate when the token was issued, when it expires, and when it should not be accepted before.

The signing key and the issuer name can be changed by editing 'security.properties' and specifying new values for 'jwsKey' and 'jwtIssuer'. Please note that it is critical to change the signing key from the default value! It is also possible to change the signature algorithm from the next 2.0.4 release via a custom 'securityContext.xml' (see here). The default lifetime of the token (120 minutes) can be changed via the "jwt.lifetime.minutes" configuration property for the domain.
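
As an aside (mine, not from the Syncope documentation), the claims carried by the token - subject, issuer, identifier and timestamps - can be inspected by base64-decoding the middle segment of the token. Note that this does not verify the signature; it only shows what the payload contains:

import base64
import json

def jwt_claims(token):
    # A JWS compact token is three dot-separated base64url segments:
    # header.payload.signature - we only decode the payload here
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded).decode("utf-8"))

# token = value of the X-Syncope-Token response header
# print(jwt_claims(token))  # shows the subject, issuer, token id and timestamps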

2) Using the SSO token to invoke on a REST service

Now that we have an SSO token, we can use it to invoke on a REST service instead of specifying our username and password as before. For Syncope 2.0.3 only, the header name is the same as the header name above "X-Syncope-Token". From Syncope 2.0.4 onwards, the header name is "Authorization: Bearer <token>", e.g.:
  • curl -H "Authorization: Bearer eyJ0e..." http://localhost:8080/syncope/rest/users/self
The signature is first checked on the token, then the issuer is verified so that it matches what is configured, and then the expiry and not-before dates are checked. If the identifier matches that of a saved access token then authentication is successful.

Finally, SSO tokens can be seen in the admin console under "Dashboard/Access Token", where they can be manually revoked by the admin user:


Categories: FLOSS Project Planets

Patrick Kennedy: Using Docker for Flask Application Development (not just Production!)

Planet Python - Fri, 2017-06-23 11:31
Introduction

I’ve been using Docker for my staging and production environments, but I’ve recently figured out how to make Docker work for my development environment as well.

When I work on my personal web applications, I have three environments:

  • Production – the actual application that serves the users
  • Staging – a replica of the production environment on my laptop
  • Development – the environment where I write source code, unit/integration test, debug, integrate, etc.

While having a development environment that is significantly different (ie. not using Docker) from the staging/production environments is not an issue, I’ve really enjoyed the switch to using Docker for development.

The key aspects that were important to me when deciding to switch to Docker for my development environment were:

  1. Utilize the Flask development server instead of a production web server (Gunicorn)
  2. Allow easy access to my database (Postgres)
  3. Maintain my unit/integration testing capability

This blog post shows how to configure Docker and Docker Compose for creating a development environment that you can easily use on a day-to-day basis for developing a Flask application.

For reference, my Flask project that is the basis for this blog post can be found on GitLab.

Architecture

The architecture for this Flask application is illustrated in the following diagram:

Each key component has its own sub-directory in the repository:

$ tree
.
├── docker-compose.yml
├── nginx
│   ├── Dockerfile
├── postgresql
│   └── Dockerfile    * Not included in git repository
└── web
    ├── Dockerfile
    ├── create_postgres_dockerfile.py
    ├── instance
    ├── project
    ├── requirements.txt
    └── run.py

Configuration of Dockerfiles and Docker Compose for Production

The setup for my application utilizes separate Dockerfiles for the web application, Nginx, and Postgres; these services are integrated together using Docker Compose.

Web Application Service

Originally, I had been using the python-*:onbuild image for my web application image, as this seemed like a convenient and reasonable option (it provided the standard configurations for a python project). However, reading the notes on the python page on Docker Hub, the use of the python-*:onbuild images is not recommended anymore.

Therefore, I created a Dockerfile that I use for my web application:

FROM python:3.6.1
MAINTAINER Patrick Kennedy <patkennedy79@gmail.com>

# Create the working directory
RUN mkdir -p /usr/src/app/web
WORKDIR /usr/src/app/web

# Install the package dependencies (this step is separated
# from copying all the source code to avoid having to
# re-install all python packages defined in requirements.txt
# whenever any source code change is made)
COPY requirements.txt /usr/src/app/web
RUN pip install --no-cache-dir -r requirements.txt

# Copy the source code into the container
COPY . /usr/src/app/web

It may seem odd or out of sequence to copy the requirements.txt file from the local system into the container separately from the entire repository, but this is intentional. If you copy over the entire repository and then ‘pip install’ all the packages in requirements.txt, any change in the repository will cause all the packages to be re-installed (this can take a long time and is unnecessary) when you build this container. A better approach is to first just copy over the requirements.txt file and then run ‘pip install’. If changes are made to the repository (not to requirements.txt), then the cached intermediate container (or layer in your service) will be utilized. This is a big time saver, especially during development. Of course, if you make a change to requirements.txt, this will be detected during the next build and all the python packages will be re-installed in the intermediate container.

Nginx Service

Here is the Dockerfile that I use for my Nginx service:

FROM nginx:1.11.3

RUN rm /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/

RUN rm /etc/nginx/conf.d/default.conf
COPY family_recipes.conf /etc/nginx/conf.d/

There is a lot of complexity when it comes to configuring Nginx, so please refer to my blog post entitled ‘How to Configure Nginx for a Flask Web Application‘.

Postgres Service

The Dockerfile for the postgres service is very simple, but I actually use a python script (create_postgres_dockerfile.py) to auto-generate it based on the credentials of my postgres database. The structure of the Dockerfile is:

FROM postgres:9.6

# Set environment variables
ENV POSTGRES_USER <postgres_user>
ENV POSTGRES_PASSWORD <postgres_password>
ENV POSTGRES_DB <postgres_database>

Docker Compose

Docker Compose is a great tool for connecting different services (ie. containers) to create a fully functioning application. The configuration of the application is defined in the docker-compose.yml file:

version: '2'

services:

  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    volumes:
      - /usr/src/app/web/project/static
    command: /usr/local/bin/gunicorn -w 2 -b :8000 project:app
    depends_on:
      - postgres

  nginx:
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web

  data:
    image: postgres:9.6
    volumes:
      - /var/lib/postgresql
    command: "true"

  postgres:
    restart: always
    build: ./postgresql
    volumes_from:
      - data
    expose:
      - "5432"

The following commands need to be run to build and then start these containers:

docker-compose build
docker-compose -f docker-compose.yml up -d

Additionally, I utilize a script to re-initialize the database, which is frequently used in the staging environment:

docker-compose run --rm web python ./instance/db_create.py

To see the application, utilize your favorite web browser and navigate to http://ip_of_docker_machine/ to access the application; this will often be http://192.168.99.100/. The command ‘docker-machine ip’ will tell you the IP address to use.

Changes Needed for Development Environment

The easiest way to make the necessary changes for the development environment is to create the changes in the docker-compose.override.yml file.

Docker Compose automatically checks for docker-compose.yml and docker-compose.override.yml when the ‘up’ command is used. Therefore, in development use ‘docker-compose up -d’ and in production or staging use ‘docker-compose -f docker-compose.yml up -d’ to prevent the loading of docker-compose.override.yml.

Here are the contents of the docker-compose.override.yml file:

version: '2'

services:

  web:
    build: ./web
    ports:
      - "5000:5000"
    environment:
      - FLASK_APP=run.py
      - FLASK_DEBUG=1
    volumes:
      - ./web/:/usr/src/app/web
    command: flask run --host=0.0.0.0

  postgres:
    ports:
      - "5432:5432"

Each setting in docker-compose.override.yml overrides the applicable setting from docker-compose.yml.

Web Application Service

For the web application container, the web server is switched from Gunicorn (used in production) to the Flask development server. The Flask development server allows auto-reloading of the application whenever a change is made and has debugging capability right in the browser when exceptions occur. These are great features to have during development. Additionally, port 5000 is now accessible from the web application container. This allows the developer to gain access to the Flask web server by navigating to http://ip_of_docker_machine:5000.

Postgres Service

For the postgres container, the only change that is made is to allow access to port 5432 by the host machine instead of just other services. For reference, here is a good explanation of the use of ‘ports’ vs. ‘expose’ from Stack Overflow.

This change allows direct access to the postgres database using the psql shell. When accessing the postgres database, I prefer specifying the URI:

psql postgresql://<username>:<password>@192.168.99.100:5432/<postgres_database>

This allows you access to the postgres database, which will come in really handy at some point during development (almost a guarantee).
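
If psql is not available on the host, a quick sanity check that the exposed port is actually reachable can be done with a few lines of Python (a small sketch of my own; the IP is the docker-machine address used above):

import socket

# Address taken from the psql URI above; adjust for your docker-machine IP
HOST, PORT = "192.168.99.100", 5432

with socket.create_connection((HOST, PORT), timeout=3) as sock:
    print("Postgres port is reachable:", sock.getpeername())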

Nginx Service

While there are no override commands for the Nginx service, this service will be basically ignored during development, as the web application is accessed directly through the Flask web server by navigating to http://ip_of_docker_machine:5000/. I have not found a clear way to disable a service, so the Nginx service is left untouched.

Running the Development Application

The following commands should be run to build and run the containers:

docker-compose stop   # If there are existing containers running, stop them
docker-compose build
docker-compose up -d

Since you are running in a development environment with the Flask development server, you will need to navigate to http://ip_of_docker_machine:5000/ to access the application (for example, http://192.168.99.100:5000/). The command ‘docker-machine ip’ will tell you the IP address to use.

Another helpful command that allows quick access to the logs of a specific container is:

docker-compose logs <service>

For example, to see the logs of the web application, run ‘docker-compose logs web’. In the development environment, you should see something similar to:

$ docker-compose logs web
Attaching to flaskrecipeapp_web_1
web_1  | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1  | * Restarting with stat
web_1  | * Debugger is active!
web_1  | * Debugger pin code: ***-***-***

Conclusion

Docker is an amazing product that I have really come to enjoy using for my development environment. I really feel that using Docker makes you think about your entire architecture, as Docker provides such an easy way to start integrating complex services, like web services, databases, etc.

Using Docker for a development environment does require a good deal of setup, but once you have the configuration working, it’s a great way for quickly developing your application while still having that one foot towards the production environment.

References

Docker Compose File (version 2) Reference
https://docs.docker.com/compose/compose-file/compose-file-v2/

Dockerizing Flask With Compose and Machine – From Localhost to the Cloud
NOTE: This was the blog post that got me really excited to learn about Docker!
https://realpython.com/blog/python/dockerizing-flask-with-compose-and-machine-from-localhost-to-the-cloud/

Docker Compose for Development and Production – GitHub – Antonis Kalipetis
https://github.com/akalipetis/docker-compose-dev-prod
Also, check out Antonis’ talk from DockerCon17 on YouTube.

Overview of Docker Compose CLI
https://docs.docker.com/compose/reference/overview/

Docker Command Reference

Start or Re-start Docker Machine:
$ docker-machine start default
$ eval $(docker-machine env default)

Build all of the images in preparation for running your application:
$ docker-compose build

Using Docker Compose to run the multi-container application (in daemon mode):
$ docker-compose up -d
$ docker-compose -f docker-compose.yml up -d

View the logs from the different running containers:
$ docker-compose logs
$ docker-compose logs web # or whatever service you want

Stop all of the containers that were started by Docker Compose:
$ docker-compose stop

Run a command in a specific container:
$ docker-compose run --rm web python ./instance/db_create.py
$ docker-compose run web bash

Check the containers that are running:
$ docker ps

Stop all running containers:
$ docker stop $(docker ps -a -q)

Delete all running containers:
$ docker rm $(docker ps -a -q)

Delete all untagged Docker images
$ docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

Categories: FLOSS Project Planets

Elena 'valhalla' Grandi: On brokeness, the live installer and being nice to people

Planet Debian - Fri, 2017-06-23 10:57
On brokeness, the live installer and being nice to people

This morning I've read this https://blog.einval.com/2017/06/22#troll.

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.
Categories: FLOSS Project Planets

Jonathan Dowland: WD drive head parking update

Planet Debian - Fri, 2017-06-23 10:28

An update for my post on Western Digital Hard Drive head parking: disabling the head-parking completely stopped the Load_Cycle_Count S.M.A.R.T. attribute from incrementing. This is probably at the cost of power usage, but I am not able to assess the impact of that as I'm not currently monitoring the power draw of the NAS (Although that's on my TODO list).

Categories: FLOSS Project Planets

Bits from Debian: Hewlett Packard Enterprise Platinum Sponsor of DebConf17

Planet Debian - Fri, 2017-06-23 10:15

We are very pleased to announce that Hewlett Packard Enterprise (HPE) has committed support to DebConf17 as a Platinum sponsor.

"Hewlett Packard Enterprise is excited to support Debian's annual developer conference again this year", said Steve Geary, Senior Director R&D at Hewlett Packard Enterprise. "As Platinum sponsors and member of the Debian community, HPE is committed to supporting Debconf. The conference, community and open distribution are foundational to the development of The Machine research program and will our bring our Memory Driven Computing agenda to life."

HPE is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, storage, networking, consulting and support, software, and financial services.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

With this additional commitment as Platinum Sponsor, HPE contributes to making our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Hewlett Packard Enterprise, for your support of DebConf17!

Become a sponsor too!

DebConf17 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf17 website at https://debconf17.debconf.org.

Categories: FLOSS Project Planets

GSoC’17-Week #2

Planet KDE - Fri, 2017-06-23 10:09

Week 2 of GSoC's coding period was pretty dope :D. After all the hard work from the last week, I got my downloader to pull data from the website share.krita.org. As discussed in the Week #1 work status update, we had worked out which classes and functions were required to get this running. I was able to get it done, and the downloader started pulling data from the website.

PS: To get my project up and running we need the KNewStuff framework version to be 5.29+. The KNS team has done a lot of work in this area to make things move along nicely (they have since split KNSCore and KNS3 apart).

Before I proceed, I would love to mention the immense support and help given to me by Leinir in understanding how KNS and KNSCore work. If he hadn't noticed the blog post I published at the start of my project and the official coding period, I would have been lost at every single point of my project :P. The same goes for my Krita community people.

Besides the different classes I have created for the project, as discussed earlier, we have used certain core KDE frameworks/APIs in order to complete the GUI and get things working as planned.

Some of them are listed below.

  • KConfig

The KConfig framework offers functionality around reading and writing configuration. KConfig consists of two parts, KConfigCore and KConfigGui. KConfigCore offers an abstract API to configuration files. It allows grouping and transparent cascading and (de-)serialization. KConfigGui offers, on top of KConfigCore, shortcuts, and configuration for some common cases, such as session and window restoration.

  • KWidgetsAddons

KWidgetsAddons contains higher-level user interface elements for common tasks, with widgets for the following areas:

  • Keyboard accelerators
  • Action menus and selections
  • Capacity indicator
  • Character selection
  • Color selection
  • Drag decorators
  • Fonts
  • Message boxes
  • Passwords
  • Paging of e.g. wizards
  • Popups and other dialogs
  • Rating
  • Ruler
  • Separators
  • Squeezed labels
  • Titles
  • URL line edits with drag and drop
  • View state serialization
  • KRatingWidget

This class is part of KWidgetsAddons. It displays a rating value as a row of pixmaps. The KRatingWidget displays a range of stars or other arbitrary pixmaps and allows the user to select a certain number by mouse.

Hence, till now I have implemented "order by" functionality to sort the data items, as well as sorting by category, with the categories populated from the knsrc file. We have the option to rate each item with stars according to the user's liking, and the option to see the expanded details of each item. There are different view modes, such as icon mode and list mode, and the function to search between the items is also working just fine.

To see all the changes and test them visually, I created a test UI that shows how things work out and pulls in data from the site. I will attach it here:

The content downloader with the basic test UI, which does look like the existing KNewStuff. The next step is to change it to our own customizable UI.

Plans for Week #3

Start creating the UI for the resource downloader which will be customizable hereafter. We just need to tweak the existing UI to our need.

Here is what we actually need.

After that, this week is followed by the first evaluation of our work. I have mostly done my part well and completed the required tasks in time, as has my Krita community. So, after the evaluation for the first phase is over, I will be doing the following.

  1. Give the work done till now a test run with the new and revised GUI for the content downloader.
  2. Fix any bugs that exist or are noticed during the testing phase of the content downloader, and fix some of the bugs that might exist in the Resource Manager after discussing them with the Krita community.
  3. While these are going on, I will be documenting the functions and classes that were created and changed.

Here is my branch where all the work I have done is going:

https://cgit.kde.org/krita.git/?h=Aniketh%2FT6108-Integrate-with-share.krita.org

Will be back with more updates later next week.

Cheers.


Categories: FLOSS Project Planets

Understanding Xwayland - Part 1 of 2

Planet KDE - Fri, 2017-06-23 10:00

In this week’s article for my ongoing Google Summer of Code (GSoC) project I planned on writing about the basic idea behind the project, but I reconsidered and decided to first give an overview on how Xwayland functions on a high-level and in the next week take a look at its inner workings in detail. The reason for that is, that there is not much Xwayland documentation available right now. So these two articles are meant to fill this void in order to give interested beginners a helping hand. And in two weeks I’ll catch up on explaining the project’s idea.

As we go high level this week, the first question is: what is Xwayland supposed to achieve at all? You may know this. It's something in a Wayland session ensuring that applications, which don't support Wayland but only the old Xserver, still function normally, i.e. it ensures backwards compatibility. But how does it do this? Before we go into this, there is one more thing to talk about, since so far I have only called Xwayland "something". What is Xwayland exactly? How does it look to you on your Linux system? We'll see in the next week that it's not as easy to answer as the following simple explanation makes it appear, but for now this is enough: it's a single binary containing an Xserver with a special backend written to communicate with the Wayland compositor active on your system - for example with KWin in a Plasma Wayland session.

To make it more tangible let’s take a look at Debian: There is a package called Xwayland and it consists of basically only the aforementioned binary file. This binary gets copied to /usr/bin/Xwayland. Compare this to the normal Xserver provided by X.org, which in Debian you can find in the package xserver-xorg-core. The respective binary gets put into /usr/bin/Xorg together with a symlink /usr/bin/X pointing to it.

While the latter is the central building block in an X session and therefore gets launched before anything else with graphical output, the Xserver in the Xwayland binary works differently: it is embedded in a Wayland session. And in a Wayland session the Wayland compositor is the central building block. This means in particular that the Wayland compositor also takes up the role of being the server, which talks to Wayland native applications with graphical output as its clients. They send requests to it in order to present their painted stuff on the screen. The Xserver in the Xwayland binary is only a necessary link between applications, which are only able to speak to an Xserver, and the Wayland compositor/server. Therefore the Xwayland binary gets launched later on by the compositor or some other process in the workspace. In Plasma it's launched by KWin after the compositor has initialized the rendering pipeline. You find the relevant code here.

Although in this case KWin also establishes some communication channels with the newly created Xwayland process, in general the communication between Xwayland and a Wayland server is done via the normal Wayland protocol, in the same way other native Wayland applications talk to the compositor/server. This means the windows requested by possibly several X based applications and provided by Xwayland acting as an Xserver are translated at the same time by Xwayland into Wayland compatible objects and, with Xwayland acting as a native Wayland client, sent to the Wayland compositor via the Wayland protocol. These windows look to the Wayland compositor just like the windows - in Wayland terminology surfaces - of every other Wayland native application. When reading this, keep in mind that an application in Wayland is not limited to using only one window/surface but can create multiple at the same time, so Xwayland as a native Wayland client can do the same for all the windows created for all of its X clients.

In the second part next week we’ll have a close look at the Xwayland code to see how Xwayland fills its role as an Xserver in regards to its X based clients and at the same time acts as a Wayland client when facing the Wayland compositor.

Categories: FLOSS Project Planets

Made with Krita: Bird Brains

Planet KDE - Fri, 2017-06-23 08:51

Look what we got today by snail mail:

It’s a children’s nonfiction book, nice for adults too, by Jeremy Hyman (text) and Haude Levesque (art). All the art was made with Krita!

Jeremy:

One of my favorite illustrations is the singing White-throated sparrow (page 24-25). The details of the wing feathers, the boldness of the black and white stripes, and the shine in the eye all make the bird leap off the page.

I love the picture of the long tailed manakins (page 32-33). I think this illustration captures the velvety black of the body plumage, and the soft texture of the blue cape, and the shining red of the cap. I also like the way the unfocused background makes the birds in the foreground seem so crisp. It reminds me of seeing these birds in Costa Rica – in dark and misty tropical forests, the world often seems a bit out of focus until a bright bird, flower, or butterfly focuses your attention.

I also love the picture of the red-knobbed hornbill (page 68-69). You can see the texture and detail of the feathers, even in the dark black feathers of the wings and back. The illustration combines the crispness and texture of the branches, leaves and fruits in the foreground, with the softer focus on leaves in the background and a clear blue sky. Something about this illustration reminds me of the bird dioramas at the American Museum of Natural History – a place I visited many times with my grandfather (to whom the book is dedicated). The realism of those dioramas made me fantasize about seeing those birds and those landscapes someday. Hopefully, good illustrations will similarly inspire some children to see the birds of the world.

Haude:

My name is Haude Levesque and I am a scientific illustrator, writer and fish biologist. I have always been interested in both animal sciences and art, and it has been hard to choose between both careers, until I started illustrating books as a side job, about ten years ago while doing my post doc. My first illustration job was a book about insect behavior (Bug Butts), which I did digitally after taking an illustration class at the University of Minnesota. Since then, I have been both teaching biology, illustrating and writing books, while raising my two kids. The book “Bird Brains” belongs to a series with two other books that I illustrated, and I wanted the illustrations to look similar, which means full double-page illustrations of a main animal in its natural habitat. I started using Krita only a year ago when illustrating “Bird Brains”, upon a suggestion from my husband, who is a software engineer and into open source software. I was getting frustrated with the software I had used previously, because it did not allow me to render life-like drawings, and required too many steps and time to do what I wanted. I also wanted my drawings to look like real paintings and to get the feeling that I am painting, and Krita’s brushes do just that. It is hard for me to choose a favorite illustration in “Bird Brains”; I like them all and I know how many hours I spent on each. But, if I had to, I would say the superb lyrebird, page 28 and 29. I like how this bird is walking and singing at the same time, and I like how I could render its plumage while giving him a real life posture.

I also like the striated heron, page 60 and 61. Herons are my favorite birds and I like the contrast between the pink and the green of the lilypads. Overall I am very happy with the illustrations in this book and I am planning on doing more scientific books for kids and possibly try fiction as well.

You can get it here from Amazon or here from Book Depository.

Categories: FLOSS Project Planets

Django Weekly: Django Weekly Issue 44 - Django vs Flask, Translation, Kubernetes, Google Authentication and more

Planet Python - Fri, 2017-06-23 08:50
Worthy Read
Django project optimization guide (part 1) - This is the first part of a series about Django performance optimization. It will cover logging, debug toolbar, locust.io for testing, Silk etc.
profiling
Django vs Flask - This analysis is a comparison of 2 python frameworks, Flask and Django. It discusses their features and how their technical philosophies impact software developers. It is based on my experience using both, as well as time spent personally admiring both codebases.
web framework
Is Your DevOps Pipeline Leaking? - Gartner's Recommendations for Long-Term Pipeline Success.
sponsor
Getting started with translating a Django Application - After reading this article, you have a basic understanding of translating a Django app.
translation
Kubernetes Health Checks in Django - By Ian Lewis - Health checks are a great way to help Kubernetes help your app to have high availability, and that includes Django apps.
kubernetes
How web requests are processed in a typical Django application - By Arun Ravindran - Illustration of request processing in Django from browser to back.
illustration
Simple Google Authentication in Django - By Sahil Jain - How to build a simple google authentication app on Django framework.
authentication
Celery 4 Periodic Task in Django - By Yehan Djoehartono - Automation in Django is a developer's dream. Tedious work such as creating database backups, reporting annual KPIs, or even blasting emails could be made a breeze. Through Celery, a well-known Python tool for delegating tasks, such automation is made possible.
celery
Django For Beginners Book
book

Projects
django-admin-env-notice - 24 Stars, 0 Fork - Visually distinguish environments in Django Admin. Based on great advice from post: 5 ways to make Django Admin safer by hakibenita.
Scrum - 0 Stars, 0 Fork - Now work in a far more efficient and organized manner! The project allows users to list their tasks in a scrum order, monitor and update their progress.
django-base - 0 Stars, 0 Fork - A Dockerized Django project template with NGINX and PostgreSQL ready to go
Categories: FLOSS Project Planets

EuroPython: PyData EuroPython 2017

Planet Python - Fri, 2017-06-23 06:29

We are excited to announce a complete PyData track at EuroPython 2017 in Rimini, Italy from the 9th to 16th July.

PyData EuroPython 2017

The PyData track will be part of EuroPython 2017, so you won't need to buy an extra ticket to attend. Most talks and trainings are scheduled for Wednesday, Thursday and Friday (July 12-14), with a few on other days as well.

We will have over 40 talks, 5 trainings, and 2 keynotes dedicated to PyData. If you’d like to attend PyData EuroPython 2017, please register for EuroPython 2017 soon.

Enjoy,

EuroPython 2017 Team
EuroPython Society
EuroPython 2017 Conference

Categories: FLOSS Project Planets

Fabio Zadrozny: mu-repo: Dealing with multiple git repositories

Planet Python - Fri, 2017-06-23 05:07
It's been a while since I've commented about mu-repo, so, now that 1.6.0 is available, I decided to give some more details on the latest additions ;)

-- if you're reading this and don't know what mu-repo is, it's a tool (done in Python) which helps when dealing with multiple git repositories (providing a way to call git commands on multiple repositories at once, along some other bells and whistles). http://fabioz.github.io/mu-repo has more details.

The last 2 major things that were introduced were:

1. A workflow for creating code-reviews in multiple repositories at once.

2. The possibility of executing non-git commands on multiple repositories at once.

For #1, the command mu open-url was created. Mostly, it'll compare the current branch against a different branch and open browser tabs, replacing placeholders in the URL passed with the name of each repository (http://fabioz.github.io/mu-repo/open_url has more info and examples on how to use this for common git hosting platforms).

For #2, it's possible to execute a given command in the multiple tracked repositories by using the mu sh command. Mostly, call mu sh and pass the command you want to issue in the multiple tracked repositories.

e.g.: calling mu sh python setup.py develop will call python setup.py develop on each of the tracked repository directories.

That's it... enjoy!
Categories: FLOSS Project Planets

Arturo Borrero González: Backup router/switch configuration to a git repository

Planet Debian - Fri, 2017-06-23 04:10

Most routers/switches out there store their configuration in plain text, which is nice for backups. I'm talking about Cisco, Juniper, HPE, etc. The configuration of our routers is changed several times a day by the operators, and in this case we lacked a proper way of tracking these changes.

Some of these routers come with their own mechanisms for doing backups, and depending on the model and version perhaps they include changes-tracking mechanisms as well. However, they mostly don’t integrate well into our preferred version control system, which is git.

After some internet searching, I found rancid, which is a suite for doing tasks like this. But it seemed rather complex and feature-full for what we required: simply fetch the plain text config and put it into a git repo.

Worth noting that the most important drawback of not triggering the change-tracking from the router/switch is that we have to follow a polling approach: logging into each device, getting the plain text and then committing it to the repo (if changes are detected). This can be hooked into cron, but as I said, we lose the sync behaviour and won't see any changes until the next cron run.

In most cases, we lose authorship information as well, but that is not important for us right now. In the future this is something that we will have to solve.

Also, some routers/switches lack some basic SSH security improvements, like public-key authentication, so we end up having to hard-code the user/pass in our worker script.

Since we have several devices of the same type, we just iterate over their names.

For example, this is what we use for hp comware devices:

#!/bin/bash
# run this script by cron

USER="git"
PASSWORD="readonlyuser"
DEVICES="device1 device2 device3 device4"
FILE="flash:/startup.cfg"
GIT_DIR="myrepo"
GIT="/srv/git/${GIT_DIR}.git"

TMP_DIR="$(mktemp -d)"
if [ -z "$TMP_DIR" ] ; then
    echo "E: no temp dir created" >&2
    exit 1
fi

GIT_BIN="$(which git)"
if [ ! -x "$GIT_BIN" ] ; then
    echo "E: no git binary" >&2
    exit 1
fi

SCP_BIN="$(which scp)"
if [ ! -x "$SCP_BIN" ] ; then
    echo "E: no scp binary" >&2
    exit 1
fi

SSHPASS_BIN="$(which sshpass)"
if [ ! -x "$SSHPASS_BIN" ] ; then
    echo "E: no sshpass binary" >&2
    exit 1
fi

# clone git repo
cd $TMP_DIR
$GIT_BIN clone $GIT
cd $GIT_DIR

for device in $DEVICES; do
    mkdir -p $device
    cd $device

    # fetch cfg
    CONN="${USER}@${device}"
    $SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:${FILE} .

    # commit
    $GIT_BIN add -A .
    $GIT_BIN commit -m "${device}: configuration change" \
        -m "A configuration change was detected" \
        --author="cron <cron@example.com>"
    $GIT_BIN push -f

    cd ..
done

# cleanup
rm -rf $TMP_DIR

You should create a read-only user ‘git’ in the devices. And beware that each device model has the config file stored in a different place.

For reference, in HP comware, the file to scp is flash:/startup.cfg. And you might try creating the user like this:

local-user git class manage
 password hash xxxxx
 service-type ssh
 authorization-attribute user-role security-audit
#

In Junos/Juniper, the file you should scp is /config/juniper.conf.gz, and the script should gunzip the data before committing. For the read-only user, try something like this:

system {
    [...]
    login {
        [...]
        class git {
            permissions maintenance;
            allow-commands scp.*;
        }
        user git {
            uid xxx;
            class git;
            authentication {
                encrypted-password "xxx";
            }
        }
    }
}

The file to scp in HP procurve is /cfg/startup-config. And for the read-only user, try something like this:

aaa authorization group "git user" 1 match-command "scp.*" permit
aaa authentication local-user "git" group "git user" password sha1 "xxxxx"

What would be the ideal situation? Get the device controlled directly by git (i.e. commit --> git hook --> device update), or at least have the device commit the changes to git by itself. I'm open to suggestions :-)

Categories: FLOSS Project Planets