FLOSS Project Planets

Amazee Labs: Amazee Agile Agency Survey Results - Part 5

Planet Drupal - Fri, 2017-12-08 09:51
Amazee Agile Agency Survey Results - Part 5

Welcome to part five of our series processing the results of the Amazee Agile Agency Survey. Previously, I wrote about forming, discovery and planning. This time let’s focus on team communication and process.

Josef Dabernig Fri, 12/08/2017 - 15:51

Team Communication

When it comes to ways of communicating, the options that most often got the highest rating of “mostly practised” were “Written communication in tickets”, “Written communication via chat (e.g. Slack)” and “Group meetings for the entire team”. The options that most often got selected as “Not practised” were “Written communication in blog or wiki” and “Written communication in pull requests”.

For us at Amazee Labs Zurich, a variety of communication channels is essential. Regular 1-on-1 meetings between managers and their employees allow us to continuously talk about what’s important to either side and to work on improvements. We communicate a lot via Slack, where we have various team channels, project channels shared with clients, channels for work-related topics, and channels just for fun. Each morning we start with a short stand-up for the entire company where we check in with each other, followed by a more in-depth stand-up for the Scrum teams where we cover what has been done, what will be done, and what’s blocking us. Written communication between the team and customers happens in Jira tickets. As part of our 4-eyes-principle peer review process, we also give feedback on code within pull requests, which helps ensure code quality and lets us train each other.


We talked about iteration length in part 1 of this series. Now let’s look into how much time we spend on which things.

According to the survey, the majority of stand-ups take 15 minutes, followed by 5 minutes and 10 minutes, with a few taking up to 30 minutes.

This also reflects our own practice: we take 10 minutes for the company-wide stand-up amongst 24 team members and another 15 minutes for the Scrum-team-specific stand-ups.

For the review phase, teams selected 2 hours and 1 hour equally often as the top-rated options, followed closely by 30 minutes. A few teams chose 4 hours, and the least common answer was one day. For retrospectives, the top-rated option was 30 minutes, followed by 1 hour. Far fewer teams take 2 hours or even up to 4 hours for the retrospective. For planning, we saw the most significant spread among top-rated options: 30 minutes was followed by 4 hours, and then 2 hours and 1 hour were selected.

In the teams I work with, we usually spend half a day doing sprint review, retrospective and planning altogether. Our reviews typically take 45 minutes, the retrospective about 1.5 hours and the planning another 30 minutes. We currently don’t hold these meetings together with customers because the Scrum teams are stable teams that usually work for multiple customers. Instead, we do demos with the clients individually outside of these meetings. Also, our plannings are quite fast because the team has already split up stories as part of grooming sessions beforehand, and we only estimate the smaller tasks that don’t get split up later on, as is usually done in sprint planning 2.

When looking at how much time is being spent on client work (billable, unbillable) and internal work, we got a good variety of results. The top-rated option for “Client work (billable)” was 50-75%, “Client work (unbillable)” was usually rated below 10%, and “Internal work” defaulted to 10-25%. Our internal statistics match the options most often selected across the industry.

I also asked what is most important to you and your team when it comes to scheduling time. Providing value while keeping tech debt in a reasonable place was mentioned, which holds true for us as well. Over the last year, we started introducing our global maintenance team, which puts a dedicated focus on maintaining existing sites and keeping customer satisfaction high. By using a Kanban approach there, we can prioritise time-critical bug fixes when they are needed and work on maintenance-related tasks such as module updates in a coordinated way. We found it particularly helpful that the Scrum teams are well connected with the maintenance team to provide know-how transfer and domain knowledge where needed.

Another respondent mentioned: “We still need a good time tracker.” At Amazee we bill by the hour that we work, so accurate time tracking is a must. We do so using Tempo Timesheets for Jira combined with the Toggl app.

How do you communicate and what processes do you follow? Please leave us a comment below. If you are interested in Agile Scrum training, don’t hesitate to contact us.

Stay tuned for the next post where we’ll look at defining work.

Categories: FLOSS Project Planets

Ned Batchelder: Iter-tools for puzzles: oddity

Planet Python - Fri, 2017-12-08 08:52

It’s December, which means Advent of Code is running again. It provides a new two-part puzzle every day until Christmas. They are a lot of fun, and usually are algorithmic in nature.

One of the things I like about the puzzles is they often lend themselves to writing unusual but general-purpose helpers. As I have said before, abstraction of iteration is a powerful and under-used feature of Python, so I enjoy exploring it when the opportunity arises.

For yesterday’s puzzle I needed to find the one unusual value in an otherwise uniform list. This is the kind of thing that might be in itertools if itertools had about ten times more functions than it does now. Here was my definition of the needed function:

def oddity(iterable, key=None):
    """Find the element that is different.

    The iterable has at most one element different than the others. If a
    `key` function is provided, it is a function used to extract a comparison
    key from each element, otherwise the elements themselves are compared.

    Two values are returned: the common comparison key, and the different
    element.

    If all of the elements are equal, then the returned different element is
    None.  If there is more than one different element, an error is raised.

    """


The challenge I set for myself was to implement this function in as general and useful a way as possible. The iterable might not be a list, it could be a generator, or some other iterable. There are edge cases to consider, like if there are more than two different values.
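To make that contract concrete, here is a naive sketch of one possible implementation (an illustration for this post, not my actual code):

from collections import defaultdict

def oddity(iterable, key=None):
    # Naive approach: bucket every element by its comparison key.
    # This stores all the values and requires the keys to be hashable --
    # two limitations discussed below.
    groups = defaultdict(list)
    for element in iterable:
        k = key(element) if key is not None else element
        groups[k].append(element)
    if not groups:
        raise ValueError("oddity() with an empty iterable")
    if len(groups) == 1:
        [(common, _)] = groups.items()
        return common, None
    if len(groups) == 2:
        (k1, v1), (k2, v2) = groups.items()
        if len(v1) == 1 and len(v2) > 1:
            return k2, v1[0]
        if len(v2) == 1 and len(v1) > 1:
            return k1, v2[0]
    raise ValueError("more than one different element")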

If you want to take a look, my code is on GitHub (with tests, natch). Fair warning: that repo has my solutions to all of the Advent of Code problems so far this year.

One problem with my implementation: it stores all the values from the iterable. For the actual Advent of Code puzzle, that was fine, since it only had to deal with fewer than 10 values. But how would you change the code so that it didn’t store them all?

My code also assumes that the comparison values are hashable. What if you didn’t want to require that?

Suppose the iterable could be infinite? This changes the definition somewhat. You can’t detect the case of all the values being the same, since there’s no such thing as “all” the values. And you can’t detect having more than two distinct values, since you’d have to read values forever on the possibility that it might happen. How would you change the code to handle infinite iterables?
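Here is an equally hedged sketch of a streaming variant: it keeps only a count and one representative element per comparison key, so it never stores the whole input, and it returns as soon as the answer is knowable, which is what an infinite iterable requires (it still assumes hashable keys, though):

def oddity_streaming(iterable, key=None):
    # Sketch only: assumes at most two distinct comparison keys, as in
    # the puzzle, and that the common value occurs more than once.
    # Returns as soon as one key has been seen twice, so it can cope
    # with infinite iterables where the odd element eventually appears.
    counts = {}  # comparison key -> [representative element, count]
    for element in iterable:
        k = key(element) if key is not None else element
        entry = counts.setdefault(k, [element, 0])
        entry[1] += 1
        if len(counts) == 2:
            (k1, (e1, n1)), (k2, (e2, n2)) = counts.items()
            if n1 > 1:
                return k1, e2  # k1 is the common key, e2 the odd element
            if n2 > 1:
                return k2, e1
    raise ValueError("no single different element found")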

These are the kind of considerations you have to take into account to write truly general-purpose itertools functions. It’s an interesting programming exercise to work through how each version would differ.

BTW: it might be that there is a way to implement my oddity function with a clever combination of things already in itertools. If so, let me know!

Categories: FLOSS Project Planets

More Calamares Releases

Planet KDE - Fri, 2017-12-08 08:07

Another month passed, just like that. I spent last week holed up with some KDE people in the hills, watching the snow come down. While they did impressive things to the KDE codebase, I hacked on Calamares. Since my last post on the topic, I’ve been running a roughly every-other-week release schedule for the Calamares 3.1-stable branch. We’ve just reached 3.1.10. These stable releases bring small bugfixes, minor polishing, and occasional non-interfering features.

Each release is announced on the Calamares site, and can be found on the Calamares GitHub page.

Calamares isn’t a KDE project, and aims to support whatever Linux distro wants to use it, and to configure the stuff that is needed for that distro. But when feature requests show up for KDE integration, there’s no special reason for me to reject them — as long as things can remain modular, the SKIP_MODULES mechanism in Calamares can avoid unwanted KDE Frameworks dependencies.

One new module in development is called “Plasma Look-and-Feel” (plasmalnf); its purpose is to configure the look-and-feel of Plasma. No surprise there, but the point is that this can be done during installation, before Plasma is started by the (new) user on the target system. So it’s kind of like the Look-and-Feel KCM, but entirely outside of the target environment. The UI is more primitive and more limited, but that’s the use-case that was asked for. I’d like to thank the Plasma developers for helping out with tooling and guiding me through some of the Plasma code. Deployers of Calamares (that is, distros) can look forward to this feature in the 3.2 series.

Speaking of Calamares 3.2, I’m intending to put out an RC fairly soon, with the features as developed so far. There will probably be a few RCs, each of which integrates a new feature branch (e.g. a giant requirements-checking rewrite), with 3.2.0 showing up for real in the new year.

Categories: FLOSS Project Planets

Agaric Collective: A blooming community - Lakes and Volcanoes DrupalCamp 2017 Recap

Planet Drupal - Fri, 2017-12-08 08:00

Over 8 years have passed since the last DrupalCamp in tropical Nicaragua. With the help of a diverse group of volunteers, sponsors, and university faculty staff, we held our second one. DrupalCamp Lagos y Volcanes ("Lakes & Volcanoes") was a great success, with over 100 people attending over 2 days. It was a big undertaking, so we followed in giants' footsteps to prepare for our event. Many of the ideas were taken from the organizers' experience attending Drupal events. Others came from local free software communities who have organized events before us. Let me share what we did, how we did it, and what the results were.


In line with DrupalCon, we used the "Big Eight" social identifiers to define diversity and encourage everyone to have a chance to present. Among other statistics, we are pleased that 15% of the sessions and 33% of the trainings were presented by women. We would have liked higher percentages, but it was a good first step. Another related fact is that no speaker presented more than one session. We had the opportunity to learn from people with different backgrounds and expertise.


Ticket cost

BADCamp, Drupal's largest event outside of DrupalCons, is truly an inspiration when it comes to making events affordable. It is free! We got close: for $1, attendees had access to all the sessions and trainings, lunch on both days, a t-shirt, and unlimited swag while supplies lasted. Of course, they also had the networking opportunities that are always present at Drupal events. Even though the camp was almost free, we wanted to give all interested people a chance to come and learn, so we provided scholarships to many attendees.



The camp offered four types of scholarships:

  • Ticket cost: we would waive the $1 entry fee.
  • Transportation: we would cover any expense for someone to come from any part of the country.
  • Lodging: we would provide a room for people to stay overnight if they would come from afar.
  • Food: we would pay for meals during the two days of the camp.

About 40% of the people who attended did not pay the entry fee. We also had people traveling from different parts of the country. Some stayed over; others travelled back and forth each day. Everyone who requested a scholarship received it. It felt good to provide these kinds of opportunities, and recipients were grateful for them.



As you can imagine, events like these need funding, and we are extremely grateful to our sponsors.

These are people who attended from afar. Some were scholarship recipients. Others got educational memberships.


Session recordings

Although we worked hard to make it possible for interested people to attend, we knew that some would not be able to make it. In fact, having sessions recorded would make it possible for anyone who understands Spanish to benefit from what was presented at the camp.

We used Kevin Thull’s recommended kit to record sessions. My colleague Micky Metts donated the equipment, and I did the recording. I had the opportunity to be at some camps that Kevin recorded this year, and he was very kind in teaching me how to use the equipment. Unfortunately, the audio is not clear in some sessions, and I completely lost one. I have learned from the mistakes, and next time it should be better. Check out the camp playlist on Drupal Nicaragua’s YouTube channel for the recordings.

Thank you Kevin. It was through session recordings that I improved my skills when I could not afford to travel to events. I’m sure I am not the only one. Your contributions to the Drupal community are invaluable!


Sprints and live commit!

Lucas Hedding led a sprint on Saturday morning. Most sprinters were people who had never worked with Drupal before the camp. They learned how to contribute to Drupal and worked on a few patches. One pleasant surprise was when Lucas went on stage with one of the sprinters and proceeded with the live commit ceremony. I was overjoyed that even with a short sprint, an attendee’s contribution was committed. Congrats to Jorge Morales for getting a patch committed on his first sprint! And thanks to Holger Lopez, Edys Meza, and Lucas Hedding for mentoring and working on the patch.



Northern Lights DrupalCamp decided to swap the (physical) swag for experiences. What we experienced was epic! For our camp, we went for low-cost swag. The only thing we had to pay for was t-shirts; other local communities recommended that we have them, and so we did. The rest was a buffet of the things I have collected since my first DrupalCon, Austin 2014: stickers, pins, temporary tattoos. It was fun trying to explain where I had collected each item. I could not remember them all, but it was nice to bring back those memories. We also had hand sanitizer and notebooks provided by local communities. Can you spot your organization/camp/module/theme logo on our swag table?


Free software communities

We were very lucky to have the support of different local communities. We learned a lot from their experiences organizing events. They also sent an army of volunteers and took the microphone to present on different subjects. A special thank you to the WordPress Nicaragua community who helped us immensely before, during, and after the event. It showed that when communities work together, we make a bigger impact.


Keeping momentum

Two weeks after the camp, we held two Global Training Days workshops. More than 20 people attended. I felt honored when some attendees shared that they had travelled from distant places to participate. One person travelled almost 8 hours. But more than distance, it was their enthusiasm and engagement during the workshops that inspired us. The last month has been very exhausting, but the local community is thrilled with the result.


A blooming community

The community has come a long way since I got involved in 2011. We have had highs and lows. Since Lucas and I kickstarted the Global Training Days workshops in 2014, we have seen more interest in Drupal. By the way, this edition marked our third anniversary facilitating the workshops! But despite all efforts, people would not stay engaged for long after initially interacting with the community. Things have changed.

In the last year, interest in Drupal has increased. We have organized more events, and more people have attended. Universities and other organizations are approaching us to request trainings. And what makes me smile most… the number of volunteers is at its all-time peak. In the last month alone, the number of volunteers has almost doubled. The DrupalCamp and the Global Training Days workshops contributed a lot to this.

We recognize that the job is far from complete, and we already have plans for 2018. One of the things we need to do is find job opportunities. Even if people enjoy working with Drupal, they need to make a living. If you are an organization looking for talent, consider Nicaragua. We have great developers. Feel free to contact me and I will put you in touch with them.


A personal thank you

I would like to take this opportunity to say thanks to Felix Delattre. He started the Drupal community in Nicaragua almost a decade ago. He was my mentor. He gave me my first Drupal gig. At a time when there was virtually no demand for Drupal talent in my country, that project helped me realize that I could make a living working with Drupal. But most importantly, Felix taught me the value of participating in the community. I remember creating my drupal.org account after he suggested it in a local meetup.

His efforts had a profound effect on the lives of many, even beyond the borders of my country or those of a single project. Felix was instrumental in the development of local communities across Central and South America. He also started the OpenStreetMap (OSM) community in Nicaragua. I still find it impressive how OSM Nicaragua has mapped so many places and routes. In some cities, their maps are more accurate and complete than those of large Internet corporations. Thank you, Felix, for all you did for us!

We hope to have you in 2018!

The land of lakes and volcanoes awaits you next year. Nicaragua has a lot to offer, and a DrupalCamp can be the perfect excuse to visit. ;-) Active volcanoes, beaches to surf, and forests rich in flora and fauna are some of the charms of this tropical paradise.

Let’s focus on volcanoes for a moment. Check out this website for a sneak peek into one of our active volcanoes. That is Masaya, where you can walk to the rim of the crater and see the flow of lava. Active volcanoes, dormant volcanoes, volcanoes around a lake, volcanoes in the middle of a lake, lagoons on top of volcanoes, volcanoes where you can “surf” down the slope... you name it, we have it.

We would love to have you in 2018!


More photos of the event will be added to this album.

Categories: FLOSS Project Planets

Craig Small: Back Online

Planet Debian - Fri, 2017-12-08 05:58

I now have Internet access back! This means I can try to get the Debian WordPress packages bashed into shape. Unfortunately, they still have the problem with the horrible JSON “no evil” license, which causes so many problems all over the place.

I’m hoping there is a simple way of just removing that component and going from there.

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: Meet Marko, our managing director

Planet Drupal - Fri, 2017-12-08 03:33
When did you start working at AGILEDROP and what were your initial responsibilities? I am one of three founders of the company. In the beginning, I was doing Drupal theming and site building. But it wasn’t long before we needed a full-time manager, so I took on the role of managing director. What are your responsibilities as managing director? At AGILEDROP, we only have great people – all of them experts in their own fields. So, my job is to make sure they do what they do best and then leave them to unleash their potential. I also give them some help with self-improvement, organizing mentors,… READ MORE
Categories: FLOSS Project Planets

Codementor: Building Notification Service with AWS Lambda

Planet Python - Fri, 2017-12-08 00:06
The post walks you through the simple steps required to build a notification service with AWS Lambda.
Categories: FLOSS Project Planets

Bryan Pendleton: In the Valley of Gods

Planet Apache - Thu, 2017-12-07 23:51

Oh boy!

Oh boy oh boy oh boy oh boy oh boy!!!

Campo Santo return!

Campo Santo, makers of the astonishingly great Firewatch (you know, the game with that ending), have started to reveal some of the information about their next game: In the Valley of Gods.

In the Valley of Gods is a single-player first person video game set in Egypt in the 1920s. You play as an explorer and filmmaker who, along with your old partner, has traveled to the middle of the desert in the hopes of making a seemingly-impossible discovery and an incredible film.

Here's the In the Valley of Gods "reveal trailer".

Looking forward to 2019 already!

Categories: FLOSS Project Planets

Thomas Goirand: Testing OpenStack using tempest: all is packaged, try it yourself

Planet Debian - Thu, 2017-12-07 18:00

tl;dr: this post explains how the new openstack-tempest-ci-live-booter package configures a machine to PXE boot a Debian Live system running on KVM in order to run functional testing of OpenStack. It may be of interest to you if you want to learn how to PXE boot a KVM virtual machine running Debian Live, even if you aren’t interested in OpenStack.

Moving my CI from one location to another led me to package it fully

After packaging a release of OpenStack, it’s kind of mandatory to functionally test the set of packages. This is done by running the tempest test suite on an already-deployed OpenStack installation. I used to do that on real hardware provided by my employer. But since I’ve lost my job (I’m still looking for a new employer at this time), I also lost access to the hardware they were providing to me.

As a consequence, I searched for a sponsor to provide the hardware to run tempest on. I first sent a mail to the openstack-dev list asking for such hardware. Then Rochelle Grober and Stephen Li from Huawei got me in touch with Zachary Smith, the CEO of Packet.net, and packet.net gave me an account on their system. I am amazed at how good their service is. They provide baremetal servers around the world (15 data centers), provisioned using an API (meaning, fully automatically). A big thanks to them!

Anyway, even if I had planned for a few weeks to give a big thanks to the above people (they really deserve it!), that isn’t the only goal of this post. The goal is to introduce how to run your own tempest CI on your own machine. Having been in the situation where my CI had to move twice, I decided to industrialize it and fully automate the setup of the CI server. And what does a DD do when writing software? Package it, of course. So I packaged it all and uploaded it to the archive. Here’s how to use all of this.

General principle

The best way to run an OpenStack tempest CI is to run it on a Debian Live system. Why? Because setting up a full OpenStack environment takes a lot of time, mostly spent on disk I/O, and on a live system everything runs on a RAM disk, so installing in this environment is the fastest way one could hope for. This is what I did when working with Mirantis: I had a real baremetal server which I PXE booted into a Debian Live system. However nice, this imposes having access to 2 servers: one for running the Live system, and one running the dhcp/pxe/tftp server. Also, this means the boot server needs 2 NICs, one on the internet and one for booting the second server that will run the Live system. It was not possible to have such a specific setup at Packet, so I decided to replicate it using KVM, making it portable. And since the servers at packet.net are very fast, it isn’t much of an issue anymore not to run on baremetal.

Anyway, let’s dive into setting-up all of this.

Network topology

We’ll assume that one of your interfaces has internet access, let’s say eth0. Since we don’t want to destroy any of your network config, the openstack-tempest-ci-live-booter package will use a dummy network interface (ie: modprobe dummy) and bridge it to the network interface of the KVM virtual machine. That dummy network interface will be configured with a static private IP address, and the Debian Live KVM will use another address on the same subnet. This convenient default can be changed, but then you’ll have to pass your specific network configuration to each and every script (just read the beginning of each script to see the parameters).

Configure the host machine

First, install the openstack-tempest-ci-live-booter package. It runtime-depends on isc-dhcp-server, tftpd-hpa, apache2, qemu-kvm and all that’s needed to run a Debian Live machine, booting it over PXE / iPXE (the package supports both; more on iPXE later). So, let’s do it:

apt-get install openstack-tempest-ci-live-booter

The package, once installed, doesn’t do much. To respect the Debian policy, it can’t touch configuration files of other packages in maintainer scripts. Therefore, you have to manually run:

openstack-tempest-ci-live-booter-config --configure-dummy-nick

Running this script will:

  • configure the kvm-intel module to allow nested virtualization (by unloading the module, adding “options kvm-intel nested=y” to /etc/modprobe.d, and reloading the module)
  • modprobe the dummy kernel module and run “ip link set name tempestnic0 dev dummy0” to create a tempestnic0 dummy interface
  • create a tempestbr bridge, set the bridge IP, and bridge the tempestnic0 and tempesttap interfaces
  • configure tftpd-hpa to listen on the bridge address
  • configure isc-dhcp-server to reply to DHCP requests on the tempestbr, so that the KVM machine can boot up with an IP
  • configure apache2 to serve the filesystem.squashfs root filesystem, loaded by the Linux kernel at boot time. Note that you may need to manually start and/or reload apache after this setup, though.

Again, you can change the IP addresses if you like. You can also use a real interface if you intend to boot real hardware rather than a KVM machine (in which case, just omit the --configure-dummy-nick option and manually configure your 2nd interface).

Also, openstack-tempest-ci-live-booter provides an /etc/init.d/openstack-tempest-ci-live-booter script which will configure NAT on your server, so that the Debian Live machine has internet access (needed for apt-get operations). Edit the file if you need to change the subnet to something else. The script will pick up the interface that is connected to the default gateway by itself.

The dhcp server is configured to support both legacy PXE and the new iPXE standard. I had to support iPXE, because that’s what the standard KVM ROM does, and I also wanted to keep legacy support for older baremetal hardware. The way iPXE works is that dhcpd tells the client where to fetch the iPXE script, which itself chains to lpxelinux.0 (instead of the standard pxelinux.0). It’s rather easy to set up once you understand how it works.

Build the live image

Now that the PXE server is configured, it’s time to build the Debian Live image. Simply do this to build the image and copy its resulting files into the PXE server folder (ie: /var/lib/tempest-live-booter):

mkdir live
cd live
openstack-tempest-ci-build-live-image --debian-mirror-addr http://ftp.nl.debian.org/debian

Since we need to log into that server later on, the script will create an ssh key-pair. If you want your own keys, simply drop the id_rsa and id_rsa.pub files in your current folder before running the script. Then make it so that this key-pair can later be used by default by the user who will run the tempest script (ie: copy id_rsa and id_rsa.pub into the ~/.ssh folder).

Running the openstack-tempest-ci

What the openstack-tempest-ci script does is (re-)start your KVM virtual machine, ssh into it, upgrade it to sid, install OpenStack, and eventually run the whole tempest suite. There are 2 ways to run it: either install the openstack-tempest-ci package, optionally configure it (in /etc/default/openstack-tempest-ci), and simply run the “openstack-tempest-ci” command; or skip the installation of the package and run it from source:

git clone http://anonscm.debian.org/git/openstack/debian/openstack-meta-packages.git
cd openstack-meta-packages/src
./openstack-tempest-ci

Indeed, the script is designed to copy all scripts from source into the Debian Live machine before using them. It does this because we want to avoid the situation where a modification needs to be uploaded to Debian before it can be tested, and also because it needed to be possible to run the openstack-tempest-ci script without installing a package (which would need root access that I don’t have on casulana.debian.org, where running tempest is needed to test official OpenStack Debian images). So, definitely feel free to hack everything in openstack-meta-packages/src before running the tempest script. Also, openstack-tempest-ci will look for a sources.list file in the current directory and upload it to the Debian Live system before doing the upgrade/install. This way, it is easy to use the closest mirror.

Categories: FLOSS Project Planets

Chris Lamb: Simple media cachebusting with GitHub pages

Planet Debian - Thu, 2017-12-07 17:10

GitHub Pages makes it really easy to host static websites, including sites with custom domains or even with HTTPS via CloudFlare.

However, one typical annoyance with static site hosting in general is the lack of cachebusting, so updating an image or stylesheet does not result in any change in your users' browsers until they perform an explicit refresh.

One easy way to add cachebusting to your Pages-based site is to use GitHub's support for Jekyll-based sites. To start, we add some scaffolding to use Jekyll:

$ cd "$(git rev-parse --show-toplevel)"
$ touch _config.yml
$ mkdir _layouts
$ echo '{{ content }}' > _layouts/default.html
$ echo /_site/ >> .gitignore

Then in each of our HTML files, we prepend the following header:

---
layout: default
---

This can be performed on your index.html file using sed:

$ sed -i '1s;^;---\nlayout: default\n---\n;' index.html

Alternatively, you can run this against all of your HTML files in one go with:

$ find -not -path './[._]*' -type f -name '*.html' -print0 | \
    xargs -0r sed -i '1s;^;---\nlayout: default\n---\n;'

Due to these new headers, we can obviously no longer simply view our site by pointing our web browser directly at the local files. Thus, we now test our site by running:

$ jekyll serve --watch

... and navigate to http://localhost:4000, the default address for jekyll serve.

Finally, we need to append the cachebusting string itself. For example, if we had the following HTML to include a CSS stylesheet:

<link href="/css/style.css" rel="stylesheet">

... we should replace it with:

<link href="/css/style.css?{{ site.time | date: '%s%N' }}" rel="stylesheet">

This adds the current "build" timestamp to the file, resulting in the following HTML once deployed:

<link href="/css/style.css?1507450135153299034" rel="stylesheet">

Don't forget to apply it to all your other static media, including images and Javascript:

<img src="image.jpg?{{ site.time | date: '%s%N' }}">
<script src="/js/scripts.js?{{ site.time | date: '%s%N' }}"></script>

To ensure that transitively-linked images are cachebusted, you can specify them directly in the HTML instead of referencing them in the CSS:

<header style="background-image: url(/img/bg.jpg?{{ site.time | date: '%s%N' }})">
Categories: FLOSS Project Planets

Colan Schwartz: Aegir: Your open-source hosting platform for Drupal sites

Planet Drupal - Thu, 2017-12-07 16:15

If you need an open-source solution for hosting and managing Drupal sites, there's only one option: the Aegir Hosting System. While it's possible to find a company that will host Drupal sites for you, Aegir helps you maintain control whether you want to use your own infrastructure or manage your own software-as-a-service (SaaS) product. Plus, you get all the benefits of open source.

Aegir turns ten (10) today. The first commit occurred on December 7th, 2007. We've actually produced a timeline including all major historical events. While Aegir had a slow uptake (the usability wasn't great in the early days), it's now being used by all kinds of organizations, including NASA.

I got involved in the project a couple of years ago, when I needed a hosting solution for a project I was working on. I started by improving the documentation, working on contributed modules, and then eventually the core system. I've been using it ever since for all of my SaaS projects, and have been taking the lead on Drupal 8 e-commerce integration. I became a core maintainer of the project about a year and a half ago.

So what's new with the project? We've got several initiatives on the go. While Aegir 3 is stable and usable now (Download it!), we've started moving away from Drush, which traditionally handles the heavy lifting (see Provision: Drupal 8.4 support for details), and in a couple of different directions. We've got an Aegir 4 branch based on Symfony, which is also included in Drupal core. This is intended as a medium-term solution until Aegir 5 (codenamed AegirNG), a complete rewrite for hosting any application, is ready. Neither of these initiatives is stable yet, but development is ongoing. Feel free to peruse the AegirNG architecture document, which is publicly available.

Please watch this space for future articles on the subject. I plan on writing about the following Aegir-related topics:

  • Managing your development workflow across Aegir environments
  • Automatic HTTPS-enabled sites with Aegir
  • Remote site management with Aegir Services
  • Preventing clients from changing Aegir site configurations

Happy Birthday Aegir! It's been a great ten years.

This article, Aegir: Your open-source hosting platform for Drupal sites, appeared first on the Colan Schwartz Consulting Services blog.

Categories: FLOSS Project Planets

Python Engineering at Microsoft: What’s new for Python in Visual Studio 2017 15.6 Preview 1

Planet Python - Thu, 2017-12-07 15:00

Today we have released the first preview of our next update to Visual Studio 2017. You will see a notification in Visual Studio within the next few days, or you can download the new installer from visualstudio.com.

In this post, we're going to take a look at some of the new features we have added for Python developers. As always, the preview is a way for us to get features into your hands early so you can provide feedback and we can identify issues with a smaller (and hopefully more forgiving!) audience. If you encounter any trouble, please use the Report a Problem tool to let us know.

Immediate IntelliSense updates with no database

Remember how every time you installed or updated a package we would make you wait for hours while we "refresh" our "completion DB"? No more! In this update we are fundamentally changing how we handle this for installed Python environments, including virtual environments, so that we can provide IntelliSense immediately without the refresh.

This has been available as an experimental feature for a couple of releases, and we think it's ready to turn on by default. When you open the Python Environments window, you'll see the "IntelliSense" view is disabled and there is no longer a way to refresh the database -- because there is no database!

The new system works by doing a lightweight analysis of Python modules as you import them in your code. This includes .pyd files, and if you have .pyi files alongside your original sources then we will prefer those (see PEP 484 for details of .pyi files; in essence, these are Python "include" files for editors to obtain information about Python modules, but they do not actually contain any code, just function stubs with type annotations).
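To give a flavor of what such a stub looks like (this one is made up for illustration, not taken from any real package), a .pyi file is ordinary Python syntax with the bodies elided:

# mymodule.pyi -- a hypothetical stub describing a compiled mymodule.pyd
from typing import List

def scale(values: List[float], factor: float) -> List[float]: ...

class Accumulator:
    total: float
    def add(self, value: float) -> None: ...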

You should notice some improvements in IntelliSense for packages like pandas and scikit-learn, though there will likely be some packages that do not work as well as before. We are actively working on improving results for various code constructs, and you will also see better IntelliSense results as packages start including .pyi type hint files. We encourage you to post on this github issue to let us know about libraries that still do not work well.

(NOTE: If you install this preview alongside an earlier version of Visual Studio 2017, the preview of this feature will also be enabled in the earlier version. You can go back to the old model by disabling the feature in the Preview. To do this, open Tools, Options, find the Python/Experimental page, deselect "Use new style IntelliSense", and restart both versions of Visual Studio.)

conda integration

If you use Anaconda, you likely already manage your environments and packages using the conda tool. This tool installs pre-built packages from the Anaconda repository (warning: long page) and manages compatibility with your environment and the other packages you have installed.

For this preview of Visual Studio, we have added two experimental options to help you work with Anaconda:

  • Automatically detect when conda is a better option for managing packages
  • Automatically detect any Anaconda environments you have created manually

To enable either or both of these features, open Tools, Options, find the Python/Experimental page, and select the check box. For this preview we are starting with both disabled to avoid causing unexpected trouble, but we intend to turn them on by default in a future release.

With "Automatically detect Conda environments" enabled, any environments created by the conda tool will be detected and listed in the Python Environments window automatically. You can open interactive windows for these environments, assign them in projects or make them your default environment.

With the "Use Conda package manager when available" option enabled, any environments that have conda installed will use that for search, install and updating instead of pip. Very little will visibly change, but we hope you'll be more successful when adding or removing packages to your environment.

Notice that these two options work independently: you can continue to use pip to manage packages if you like, even if you choose to detect environments that were created with conda. If you are an Anaconda user, you will likely want to enable both options. However, if you do this and encounter issues, disabling each one in turn and then reporting any differences will help us quickly narrow down the source.

Other improvements

We have made a range of other minor improvements and bug fixes throughout all of our Python language support and there are more to come.

Our "IPython interactive mode" is now using the latest APIs, with improved IntelliSense and the same module and class highlighting you see in the editor.

There are new code snippets for the argparse module. Start typing "arg" in the editor to see what is available.
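For a rough idea of the kind of scaffold such a snippet expands to (illustrative only; the actual snippet contents may differ):

import argparse

# A typical argparse scaffold: declare the interface, then parse.
parser = argparse.ArgumentParser(description="Process some files.")
parser.add_argument("paths", nargs="+", help="files to process")
parser.add_argument("--verbose", action="store_true", help="enable verbose output")
args = parser.parse_args()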

We've also added new color customization options for docstrings and regular expression literals (under Tools, Options, Fonts and Colors). Doc strings have a new default color.

If you encounter any issues, please use the Report a Problem tool to let us know (this can be found under Help, Send Feedback) or continue to use our github page. Follow our blog to make sure you hear about our updates first, and thanks for using Visual Studio!

Categories: FLOSS Project Planets

Steinar H. Gunderson: Thoughts on AlphaZero

Planet Debian - Thu, 2017-12-07 14:35

The chess world woke up to something of an earthquake two days ago, when DeepMind (a Google subsidiary) announced that they had adapted their AlphaGo engine to play chess with only minimal domain knowledge—and it was already beating Stockfish. (It also plays shogi, but who cares about shogi. :-) ) Granted, the shock wasn't as huge as what the Go community must have felt when the original AlphaGo came in from nowhere and swept with it the undisputed Go throne and a lot of egos in the Go community over the course of a few short months—computers have been better at chess than humans for a long time—but it's still a huge event.

I see people are trying to make sense of what this means for the chess world. I'm not a strong chess player, an AI expert or a top chess programmer, but I do play chess, I've worked in AI (at Google, briefly in the same division as the DeepMind team), and I run what is likely the strongest chess analysis website online whenever Magnus Carlsen is playing (next game 17:00 UTC tomorrow!), so I thought I should share some musings.

First some background: We've been trying to make computers play chess for almost 70 years now; originally in the hopes that it would lead us to general AI, although we sort of abandoned that eventually. In almost all of that time, we've used the same basic structure; you have an evaluation function that can look at a specific position and say “I think this is good for white”, and then search that sees what happens with that evaluation function by playing all possible moves and countermoves (“oh wow, no matter what happens black is going to take white's queen, so maybe this wasn't so good after all”). The evaluation function roughly consists of a few hundred of hand-crafted features (everything from “the queen is worth nine points and rooks are five” to more complex issues around king safety, pawn structure and piece mobility) which are more or less added together, and the search tries very hard to prune out uninteresting lines so it can go deeper into the more interesting ones. In the end, you're left with a single “principal variation” (PV) consisting of a series of chess moves (presumably the best the engine can find within the allotted time), and the evaluation of the position at the end of the PV is the final evaluation of the position.
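For the curious, that classical structure fits in a few lines of toy Python (a bare-bones negamax formulation of alpha-beta search; real engines add move ordering, transposition tables, quiescence search and much more):

def alphabeta(position, depth, alpha, beta, evaluate, legal_moves, play):
    # Toy sketch: search `depth` plies deep, scoring from the side to
    # move; `evaluate` is the hand-crafted evaluation function.
    # Terminal positions (checkmate/stalemate) are ignored for brevity.
    if depth == 0:
        return evaluate(position)
    best = float("-inf")
    for move in legal_moves(position):
        score = -alphabeta(play(position, move), depth - 1,
                           -beta, -alpha, evaluate, legal_moves, play)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # prune: the opponent can already avoid this line
    return best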

AlphaZero is different. Instead of a hand-crafted evaluation function, it just throws the raw information about the position (where the pieces are, and a few other tidbits like right-to-castle) into a neural network and gets out something like an expected win percentage. And instead of searching for the best line, it uses Monte Carlo tree search to make sort-of a weighted average of possible outcomes, explored in a stochastic way. The neural network is simply optimized through reinforcement learning under self-play; it starts off playing what's essentially random moves (it's restricted from playing illegal ones—that's one of the very few pieces of domain-specific knowledge), but rapidly gets better as the neural network learns what works or not.

These are not new ideas (in fact, I'm hard pressed to find a single new thing in the paper), and the basic structure has been applied to chess in the past with master-level results, but it hasn't really produced something approaching the top before now. The idea of numerical optimization through self-play is widely used, though, mostly to tune things like piece-square tables and other evaluation function weights. So I think that it's mainly through great engineering and tons of computing power, not a radical breakthrough, that DeepMind has managed to make what's now probably the strongest chess entity on the planet. (I say “probably” because it “only” won 64–36 against Stockfish 8, which is about 100 Elo, and that's probably possible to do with a few hardware doublings and/or Stockfish improvements. Granted, it didn't lose a single game, and it's highly likely that AlphaZero's approach has a lot more room for further improvement than classical alpha-beta has.)

So what do I think AlphaZero will change? In the short term: Nothing. The paper contains ten games (presumably cherry-picked wins) of the 100-game match, and while those show beautiful chess that at times makes Stockfish seem cramped and limited, they don't seem to show any radically new chess ideas like AlphaGo did with Go. Nobody knows when or if DeepMind will release more games, although they have released a fair amount of Go games in the past, and also done Go exhibition matches. People are trying to pick out information from its opening choices (for instance, it prefers the infamous Berlin defense as black), which is interesting, but right now, there's just too little data to kill entire lines or openings.

We're also not likely to stop playing chess anytime soon, for the same reason that Magnus Carlsen nearly hitting 3000 Elo in blitz didn't stop me from playing online. AlphaZero hasn't solved chess by any means, and even though checkers has been weakly solved (Chinook) provably never loses a game from the opening position, although it won't win every won position), people still play it even on the top level. Most people simply are not at the level where the existence of perfect play matters, nor are their primary motivation to try to explore the frontiers of it.

So the primary question is whether top players can use this to improve their game. Now, DeepMind is not in the business of selling software; they're an AI research company, and AlphaZero runs on hardware (TPUs) you can't buy at this moment, and can hardly even rent in the cloud. (Granted, you could probably make AlphaZero run efficiently on GPUs, especially the newer ones that are starting to get custom blocks for accelerating neural networks, although probably slower and with higher power usage.) Thus, it's unlikely that they will be selling or open-sourcing AlphaZero anytime soon. You could imagine top players wanting to go into talks to pay for exclusive access, but if you look at the costs of developing such a thing (just the training time alone has to be significant), it's obvious that they didn't do this in the hope of recouping the development costs. If anything, you would imagine that they'd sell it as a cloud service, but nothing like that has emerged for AlphaGo, where they have a much larger competitive lead, so it seems unlikely.

Could anyone take their paper and reimplement it? The answer is: maybe. AlphaGo came out two years ago, has been backed up with several papers, and we still don't have anything public that's really close. Tencent's AI lab has made their own clone (Fine Art), and then there's DeepZenGo and others, but nothing nearly as strong that you can download or buy at this stage (as far as I know, anyway). Chess engines are typically made by teams of one or two people, and so far, deep learning-based approaches seem to require larger teams and a fair amount of (expensive) computing time, and most chess programmers are not deep learning experts anyway. It's hard to make a living off of selling chess engines even in a small team; one could again imagine a for-hire project, but I don't think even most of the top players have the money to hire someone for a year or two to do a speculative project building an entirely new kind of engine. It's limited how much a 100 Elo stronger engine will help you during opening preparation/training anyway; knowing how to work effectively with the computer is much more valuable. After all, it's not like you can use it while playing (unless it's freestyle chess).

The good news is that DeepMind's approach seems to become simpler and simpler over time. The first version of AlphaGo had all sorts of complexities and relied partially on hand-crafted features (although that wasn't very widely publicized), while the latest versions have removed a lot of the fluff. Make no mistake, though; the devil is in the details, and writing a top-class chess engine is a huge undertaking. My hunch is two to three years before you can buy something that beats Stockfish on the same hardware. But I will hedge my bet here; it's hard to make predictions, especially about the future. Even with a world-class neural network in your brain.

Categories: FLOSS Project Planets

Aegir Dispatch: Ægir Turns 10!

Planet Drupal - Thu, 2017-12-07 13:49
My tenure with the Ægir Project only dates back about 7 or 8 years. I can’t speak first-hand about its inception and those early days. So, I’ll leave that to some of the previous core team members, many of whom are publishing blog posts of their own. I’ll try to maintain an up-to-date list of links to blog posts about Ægir’s 10-year anniversary here: Aegir is ten! from Steven Jones at ComputerMinds.
Categories: FLOSS Project Planets

Lullabot: Building a Sustainable Model for Drupal Contrib Module Development

Planet Drupal - Thu, 2017-12-07 11:50
Matt and Mike talk with Webform 8.5.x creator Jacob Rockowitz, #D8Rules initiative member Josef Dabernig, and WordPress (and former Drupal) developer Chris Wiegman about keeping Drupal's contrib ecosystem sustainable by enabling module creators to benefit financially from their development.
Categories: FLOSS Project Planets

Jonathan Dowland: Three Minimalism reads

Planet Debian - Thu, 2017-12-07 11:26

"The Life-Changing Magic of Tidying Up" by Marie Kondo is a popular (New York Times best selling) book by lifestyle consultant Mari Kondo about tidying up and decluttering. It's not strictly about minimalism, although her approach is informed by her own preferences which are minimalist. Like all self-help books, there's some stuff in here that you might find interesting or applicable to your own life, amongst other stuff you might not. Kondo believes, however, that her methods only works if you stick to them utterly.

Next is "Goodbye, Things" by Fumio Sasaki. The end-game for this book really is minimalism, but the book is structured in such a way that readers at any point on a journey to minimalism (or coinciding with minimalism, if that isn't your end-goal) can get something out of it. A large proportion of the middle of the book is given over to a general collection of short, one-page-or-less tips on decluttering, minimising, etc. You can randomly flip through this section a bit like randomly drawing a card from a deck. I started to wonder whether there's a gap in the market for an Oblique Strategies-like minimalism product. The book recommended several blogs for further reading, but they are all written in Japanese.

Finally, issue #18 of New Philosopher is the "Stuff" issue, featuring several articles from modern philosophers (as well as some pertinent material from classical ones) on the nature of materialism. I've been fascinated by Philosophy from a distance ever since my brother read it as an Undergraduate, so I occasionally buy the philosophical equivalent of pop science books or magazines, but this was the most accessible I've read to date.

Categories: FLOSS Project Planets

Django Weblog: What it's like to serve on the DSF Board

Planet Python - Thu, 2017-12-07 09:27

I am currently the Vice-President of the Django Software Foundation, and have served as a member of the DSF Board for two years. This article is intended to help give a clearer picture of what's involved in being on the DSF Board, and might help some people decide whether they wish to stand for election.

What we do

Each month we - the six directors - have a board meeting, via Hangout. This lasts about an hour. We follow an agenda, discuss questions that have arisen, hear a report on the state of our finances, and vote on any questions that have come up.

Each month a number of the questions we vote on are about grant applications for events (conferences, Django Girls and so on) and nominations for new members.

Mostly it's fairly routine business, and doesn't require much deliberation.

Occasionally there are trickier questions, for example that might concern:

  • matters where we are not sure what the best way forward is
  • legal questions about what the DSF is and isn't allowed to do
  • disagreements or contentious questions within the DSF or Django community

On the whole we find that when something is a matter of judgement, we come to agreement pretty quickly.

At each meeting we'll each agree to take on certain administrative tasks that follow on from the discussion.

During the month a number of email messages come in that need to be answered - mostly enquiries about support for events, use of the Django logo, and so on, and also several for technical help with Django that we refer elsewhere.

Any one of us will answer those, if we can.

Some members of the board have special duties or interests - for example the Treasurer and Secretary have official duties, while I often take up enquiries about events.

Overall, it's a few hours' work each month.

What you need to be a board member

The board members are officially "Directors of the Django Software Foundation", which might make it sound more glamorous and/or difficult than it really is. It's neither...

If you can:

  • spare a few hours each month
  • spare some personal energy for the job
  • take part in meetings and help make decisions
  • answer email
  • read proposals, requests, applications and other documents carefully
  • help write documents (whether it's composing or proof-reading)
  • listen to people and voices in the Django community

then you probably have everything that's required to make a genuine, valuable contribution to Django by serving on the board.

Obviously, to serve as the Treasurer or Secretary requires some basic suitable skills for those roles - but you don't need to be a qualified accountant or have formal training.

In any case, no-one is born a DSF board member, and it's perfectly reasonable that in such a role you will learn to do new things if you don't know them already.

What it's like

I can only speak for myself - but I enjoy the work very much. Everyone on the board has a common aim of serving Django and its community, and the way the board works is friendly, collaborative and supportive. There's room for a variety of skills, special knowledge and experience. Different perspectives are welcomed.

There's also a very clear Django ethos and direction, that aims at inclusivity and generosity. The sustainability of the project and the well-being of people involved in it are always concerns that are visibly and explicitly on the table in board discussions.

It's a very good feeling each month to have our board meeting and be reminded how true the "boring means stable" equation is. Django is a big ship, and it sails on month after month, steadily. It requires some steering, and a shared vision of the way ahead, but progresses without big dramas. As a member of the board, this makes me feel that I am involved in something safe and sustainable.

I've been on the DSF board for nearly two years. Serving on the board does require some extra energy and time in my life, but it very rarely, if ever, feels like wasted or useless expenditure of energy. What we do makes sense, and has actual, tangible, useful results.

If you have some energy that you would like to do something useful with to help Django and all the individuals and organisations involved in it, I think that serving as DSF board member is an excellent way to use it, because the DSF is a machine that works well and your time and energy won't be wasted.

All of this discussion has been wholly from my own perspective, and even then it's quite incomplete. I'm just one board member of six, and other board members might have things they feel are important to add that I have not mentioned. Even so, I hope this account reassures anyone who had any doubts that:

  • they don't need special skills or credentials to be a board member
  • being a board member is a rewarding way to spend their time and energy
  • serving on the board makes a genuine contribution to Django

Daniele Procida

Categories: FLOSS Project Planets

GUIX Project news: GNU Guix and GuixSD 0.14.0 released

GNU Planet! - Thu, 2017-12-07 08:00

We are pleased to announce the new release of GNU Guix and GuixSD, version 0.14.0!

The release comes with GuixSD ISO-9660 installation images, a virtual machine image of GuixSD, and with tarballs to install the package manager on top of your GNU/Linux distro, either from source or from binaries.

It’s been 6 months since the previous release, during which 88 people contributed code and packages.

See the release announcement for details.

About GNU Guix

GNU Guix is a transactional package manager for the GNU system. The Guix System Distribution or GuixSD is an advanced distribution of the GNU system that relies on GNU Guix and respects the user's freedom.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. Guix uses low-level mechanisms from the Nix package manager, except that packages are defined as native Guile modules, using extensions to the Scheme language. GuixSD offers a declarative approach to operating system configuration management, and is highly customizable and hackable.

GuixSD can be used on an i686 or x86_64 machine. It is also possible to use Guix on top of an already installed GNU/Linux system, including on mips64el, armv7, and aarch64.

Categories: FLOSS Project Planets

Wouter Verhelst: Adding subtitles with FFmpeg

Planet Debian - Thu, 2017-12-07 07:52

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
    -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0 \
    language=eng -metadata:s:s:1 language=nld -c copy -y \
    foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output stream. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: this means to take the video stream from the first file (this is the default if you do not specify any mapping at all; but if you do specify a mapping, you need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This too has the English language set.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This has Dutch set as the language.
  • The -c copy option tells ffmpeg to not transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you do not have an input subtitle format that is valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy.
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, and the name of the output file.
Categories: FLOSS Project Planets