FLOSS Project Planets

Stanford Web Services Blog: Drupal 8 REST Requests

Planet Drupal - Tue, 2016-02-02 11:25

In November, 2015, the Stanford Web Services team got to dive into Drupal 8 during a weeklong sprint. I was excited to look at the RESTful web services that Drupal 8 gives out-of-the-box; what follows is my documentation of the various types of requests supported, required headers, responses, and response codes.

This is not intended to be an exhaustive documentation of RESTful web services in Drupal 8. However, I have pulled information from various posts around the Web, and my own experimentation, into this post.
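As a flavor of what such requests involve, here is a small Python sketch (mine, not from the original post) that assembles the headers Drupal 8's REST module typically expects. The header names follow Drupal 8's out-of-the-box configuration (HAL+JSON format negotiation, an X-CSRF-Token for write operations), but exact endpoints and token handling vary per site, so treat this as an assumption-laden illustration.

```python
# Hypothetical helper: build the headers for a Drupal 8 REST request.
# GET requests only negotiate the response format; write operations
# (POST/PATCH/DELETE) additionally need a CSRF token, which a real client
# would first fetch from the site's session token endpoint.

def rest_headers(method, csrf_token=None):
    headers = {"Accept": "application/hal+json"}
    if method.upper() != "GET":
        headers["Content-Type"] = "application/hal+json"
        if csrf_token is None:
            raise ValueError("write operations require an X-CSRF-Token")
        headers["X-CSRF-Token"] = csrf_token
    return headers

print(rest_headers("GET"))
print(rest_headers("POST", csrf_token="abc123"))
```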

Categories: FLOSS Project Planets

PyTennessee: PyTN Profiles: Dave Forgac (@tylerdave)

Planet Python - Tue, 2016-02-02 09:02

Dave Forgac has been a FOSS enthusiast ever since installing Linux for the first time in the late ‘90s. He got a taste of Python in the early 00’s and was hooked. He currently works as a Sr. Software Engineer at American Greetings where he is the API development team lead and also has a hand in Python application packaging and deployment. He has previously worked as a Linux Systems Administrator and web hosting Support Engineer, automating as much as possible using Python.

Dave will be presenting the tutorial “Writing Command Line Applications that Click” at 1PM Sunday. Click is a Python package that helps you create beautiful command line interfaces with minimal code. In this tutorial you will exercise the most commonly-used features of Click and get an overview of the more advanced functionality available. You will leave with an example application that you can use as a basis for your own command line development.
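To give a feel for the minimal-code claim, here is a sketch of a tiny Click application (my own illustration, not material from the tutorial; it assumes the click package is installed, and the command and option names are invented):

```python
import click

@click.command()
@click.option("--count", default=1, help="Number of greetings.")
@click.argument("name")
def hello(name, count):
    """Greet NAME the given number of times."""
    for _ in range(count):
        click.echo("Hello, {}!".format(name))

if __name__ == "__main__":
    hello()
```

Running this as a script with `--count 2 World` would print the greeting twice; Click derives the `--help` output from the decorators and the docstring automatically.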

Categories: FLOSS Project Planets

Caktus Consulting Group: Writing Unit Tests for Django Migrations

Planet Python - Tue, 2016-02-02 08:00

Testing in a Django project ensures the latest version of a project is as bug-free as possible. But when deploying, you’re dealing with multiple versions of the project through the migrations.

The test runner is extremely helpful in its creation and cleanup of a test database for our test suite. In this temporary test database, all of the project's migrations are run before our tests. This means our tests are running the latest version of the schema and are unable to verify the behavior of those very migrations because the tests cannot set up data before the migrations run or assert conditions about them.

We can teach our tests to run against those migrations with just a bit of work. This is especially helpful for migrations that are going to include significant alterations to existing data.

The Django test runner begins each run by creating a new database and running all migrations in it. This ensures that every test is running against the current schema the project expects, but we'll need to work around this setup in order to test those migrations. To accomplish this, we'll need to have the test runner step back in the migration chain just for the tests against them.

Ultimately, we're going to try to write tests against migrations that look like this:

    class TagsTestCase(TestMigrations):

        migrate_from = '0009_previous_migration'
        migrate_to = '0010_migration_being_tested'

        def setUpBeforeMigration(self, apps):
            BlogPost = apps.get_model('blog', 'Post')
            self.post_id = BlogPost.objects.create(
                title="A test post with tags",
                body="",
                tags="tag1 tag2",
            ).id

        def test_tags_migrated(self):
            BlogPost = self.apps.get_model('blog', 'Post')
            post = BlogPost.objects.get(id=self.post_id)

            self.assertEqual(post.tags.count(), 2)
            self.assertEqual(post.tags.all()[0].name, "tag1")
            self.assertEqual(post.tags.all()[1].name, "tag2")

Before explaining how to make this work, we'll break down how this test is actually written.

We're inheriting from TestMigrations, a TestCase helper we will write to make testing migrations possible, and defining two attributes on this class that configure the migrations we want to test. migrate_from is the last migration we expect to have been run on the machines we want to deploy to, and migrate_to is the new migration we're testing before deploying.

    class TagsTestCase(TestMigrations):

        migrate_from = '0009_previous_migration'
        migrate_to = '0010_migration_being_tested'

Because our test is about a migration, data modifying migrations in particular, we want to do some setup before the migration in question (0010_migration_being_tested) is run. An extra setup method is defined to do that kind of data setup after 0009_previous_migration has run but before 0010_migration_being_tested.

    def setUpBeforeMigration(self, apps):
        BlogPost = apps.get_model('blog', 'Post')
        self.post_id = BlogPost.objects.create(
            title="A test post with tags",
            body="",
            tags="tag1 tag2",
        ).id

Once our test runs this setup, we expect the final 0010_migration_being_tested migration to be run. At that time, one or more test_*() methods we define can do the sort of assertions tests would normally do. In this case, we're making sure data was converted to the new schema correctly.

    def test_tags_migrated(self):
        BlogPost = self.apps.get_model('blog', 'Post')
        post = BlogPost.objects.get(id=self.post_id)

        self.assertEqual(post.tags.count(), 2)
        self.assertEqual(post.tags.all()[0].name, "tag1")
        self.assertEqual(post.tags.all()[1].name, "tag2")

Here we've fetched a copy of this Post model's after-migration version and confirmed the value we set up in setUpBeforeMigration() was converted to the new structure.

Now, let's look at that TestMigrations base class that makes this possible. First, the pieces from Django we'll need to import to build our migration-aware test cases.

    from django.apps import apps
    from django.test import TransactionTestCase
    from django.db.migrations.executor import MigrationExecutor
    from django.db import connection

We'll be extending the TransactionTestCase class. In order to control migration running, we'll use MigrationExecutor, which needs the database connection to operate on. Migrations are tied pretty intrinsically to Django applications, so we'll be using django.apps.apps and, in particular, get_containing_app_config() to identify the current app our tests are running in.

    class TestMigrations(TransactionTestCase):

        @property
        def app(self):
            return apps.get_containing_app_config(type(self).__module__).name

        migrate_from = None
        migrate_to = None

We're starting with a few necessary properties.

  • app is a dynamic property that'll look up and return the name of the current app.
  • migrate_to will be defined on our own test case subclass as the name of the migration we're testing.
  • migrate_from is the migration we want to set up test data in, usually the latest migration that has already been deployed in the project.
    def setUp(self):
        assert self.migrate_from and self.migrate_to, \
            "TestCase '{}' must define migrate_from and migrate_to properties".format(type(self).__name__)
        self.migrate_from = [(self.app, self.migrate_from)]
        self.migrate_to = [(self.app, self.migrate_to)]
        executor = MigrationExecutor(connection)
        old_apps = executor.loader.project_state(self.migrate_from).apps

After asserting that the test case class has defined migrate_to and migrate_from migrations, we use the internal MigrationExecutor utility to get the state of the applications as of the older of the two migrations.

We'll use old_apps in our setUpBeforeMigration() to work with old versions of the models from this app. First, we'll run our migrations backwards to return to this original migration and then call the setUpBeforeMigration() method.

        # Reverse to the original migration
        executor.migrate(self.migrate_from)
        self.setUpBeforeMigration(old_apps)

Now that we've set up the old state, we simply run the migrations forward again. If the migrations are correct, they should update any test data we created. Of course, we're validating that in our actual tests.

        # Run the migration to test
        executor.migrate(self.migrate_to)

And finally, we store the current version of the app configuration that our tests can access, and define a no-op setUpBeforeMigration().

        self.apps = executor.loader.project_state(self.migrate_to).apps

    def setUpBeforeMigration(self, apps):
        pass

Here's a complete version:

    from django.apps import apps
    from django.test import TransactionTestCase
    from django.db.migrations.executor import MigrationExecutor
    from django.db import connection


    class TestMigrations(TransactionTestCase):

        @property
        def app(self):
            return apps.get_containing_app_config(type(self).__module__).name

        migrate_from = None
        migrate_to = None

        def setUp(self):
            assert self.migrate_from and self.migrate_to, \
                "TestCase '{}' must define migrate_from and migrate_to properties".format(type(self).__name__)
            self.migrate_from = [(self.app, self.migrate_from)]
            self.migrate_to = [(self.app, self.migrate_to)]
            executor = MigrationExecutor(connection)
            old_apps = executor.loader.project_state(self.migrate_from).apps

            # Reverse to the original migration
            executor.migrate(self.migrate_from)

            self.setUpBeforeMigration(old_apps)

            # Run the migration to test
            executor.migrate(self.migrate_to)

            self.apps = executor.loader.project_state(self.migrate_to).apps

        def setUpBeforeMigration(self, apps):
            pass


    class TagsTestCase(TestMigrations):

        migrate_from = '0009_previous_migration'
        migrate_to = '0010_migration_being_tested'

        def setUpBeforeMigration(self, apps):
            BlogPost = apps.get_model('blog', 'Post')
            self.post_id = BlogPost.objects.create(
                title="A test post with tags",
                body="",
                tags="tag1 tag2",
            ).id

        def test_tags_migrated(self):
            BlogPost = self.apps.get_model('blog', 'Post')
            post = BlogPost.objects.get(id=self.post_id)

            self.assertEqual(post.tags.count(), 2)
            self.assertEqual(post.tags.all()[0].name, "tag1")
            self.assertEqual(post.tags.all()[1].name, "tag2")
Categories: FLOSS Project Planets

Dirk Eddelbuettel: Like peanut butter and jelly: x13binary and seasonal

Planet Debian - Tue, 2016-02-02 07:58

This post was written by Dirk Eddelbuettel and Christoph Sax and will be posted on both authors' respective blogs.

The seasonal package by Christoph Sax brings a very featureful and expressive interface for working with seasonal data to the R environment. It uses the standard tool of the trade: X-13ARIMA-SEATS. This powerful program is provided by the statisticians of the US Census Bureau based on their earlier work (named X-11 and X-12-ARIMA) as well as the TRAMO/SEATS program by the Bank of Spain. X-13ARIMA-SEATS is probably the best known tool for de-seasonalization of timeseries, and used by statistical offices around the world.

Sadly, it also has a steep learning curve. One interacts with a basic command-line tool which users have to download, install and properly reference (by environment variables or related means). Each model specification has to be prepared in a special 'spec' file that uses its own, cumbersome syntax.

While seasonal provides all the required functionality to use X-13ARIMA-SEATS from R --- see the very nice seasonal demo site --- it still required the user to deal with the X-13ARIMA-SEATS installation manually.

So we decided to do something about this. A pair of GitHub repositories provide both the underlying binary in a per-operating-system form (see x13prebuilt) as well as a ready-to-use R package (see x13binary) which uses the former to provide binaries for R. The latter is now on CRAN as package x13binary, ready to be used on Windows, OS X or Linux. And the seasonal package (in version 1.2.0 -- now on CRAN -- or later) automatically makes use of it. Installing seasonal and x13binary in R is now as easy as:

    install.packages("seasonal")
which opens the door for effortless deployment of powerful de-seasonalization. By default, the principal function of the package employs a number of automated techniques that work well in most circumstances. For example, the following code produces a seasonal adjustment of the latest US retail sales data (from the Census Bureau) downloaded from Quandl:

    library(seasonal)
    library(Quandl)  ## not needed for seasonal but has some niceties for Quandl data

    rs <- Quandl(code = "USCENSUS/BI_MARTS_44000_SM", type = "ts") / 1e3
    m1 <- seas(rs)
    plot(m1, main = "Retail Trade: U.S. Total Sales", ylab = "USD (in Billions)")

This tests for log-transformation, performs an automated ARIMA model search, applies outlier detection, tests and adjusts for trading day and Easter effects, and invokes the SEATS method to perform seasonal adjustment. And this is what the adjusted series looks like:

Of course, you can access all available options of X-13ARIMA-SEATS as well. Here is an example where we adjust the latest data for Chinese exports (as tallied by the US FED), taking into account the different effects of Chinese New Year before, during and after the holiday:

    xp <- Quandl(code = "FRED/VALEXPCNM052N", type = "ts") / 1e9
    m2 <- seas(window(xp, start = 2000),
               xreg = cbind(
                   genhol(cny, start = -7, end = -1, center = "calendar"),
                   genhol(cny, start = 0, end = 7, center = "calendar"),
                   genhol(cny, start = 8, end = 21, center = "calendar")
               ),
               regression.aictest = c("td", "user"),
               regression.usertype = "holiday")
    plot(m2, main = "Goods, Value of Exports for China", ylab = "USD (in Billions)")

which generates the following chart demonstrating a recent flattening in export activity measured in USD.

We hope these simple examples illustrate both how powerful a tool X-13ARIMA-SEATS is and just how easy it is to use X-13ARIMA-SEATS from R now that we provide the x13binary package automating its installation.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Deeson: Warden: Monitoring the Security of a Web Estate

Planet Drupal - Tue, 2016-02-02 07:30

Warden is a solution for in-house development teams and agencies who need to keep track of the status of many Drupal websites, hosted on a variety of different platforms.

Warden gives you a central dashboard which lists all your Drupal websites and highlights any which have issues, for example needing security updates.

Hosting companies, like Acquia and Pantheon, have their own reporting tools but these only work if you host on their platforms. If you have an estate of websites which run on multiple platforms you need a tool which can report on them all.

The Warden application is composed of two parts: a Warden module which you need to install on each of your websites, and the central Warden dashboard which you will need to host on a web server. The Warden dashboard is an application written in Symfony and is freely available on GitHub.

At present only a Drupal integration exists, but work is underway to produce a pluggable system which will allow new modules to be created for WordPress and pure Symfony sites. Others may then wish to contribute additions for their own needs, for example by providing different kinds of reports for the sites.

Warden Dashboard

After correctly configuring the Warden Symfony application you will be presented with the Warden dashboard. This lists all the sites in your estate with high-level details of each. Sites requiring a security update are highlighted in red, sites with non-security module updates are yellow, and sites with no problems are white.

Drupal modules listing screen

The Drupal plugin for the Warden application provides a modules listing screen. This lists all Drupal modules installed across your whole estate and lets you see which Drupal websites do and do not have a particular module installed. This helps when you need to know how many sites must be updated as a result of a module change, or how many of your Drupal sites might be missing a best-practice module.


The Warden application uses OpenSSL to encrypt data which is sent between it and the Drupal website. The PHP OpenSSL Cryptography extension is required for both Warden and the Drupal sites it will take data from. You can also restrict, by IP, which servers can request data from your Drupal websites in the module configuration.

In normal operation the Warden dashboard will poll the sites periodically to request the sites data be refreshed. You can alternatively configure it so that the sites push the data to the Warden dashboard. In either configuration, the site will only send data to the configured dashboard and not to the site making the request for data.
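As an illustration of why protecting that data in transit matters, here is a stdlib Python sketch of one way a site report could be authenticated with a shared secret, so the dashboard can reject tampered or unsolicited payloads. To be clear, this is not Warden's actual scheme (Warden encrypts with OpenSSL, as noted above); all names and the HMAC approach here are hypothetical.

```python
# Illustrative only: sign a site report with a shared secret so the
# receiving dashboard can detect tampering. Not Warden's real protocol.
import hashlib
import hmac
import json

SHARED_SECRET = b"example-secret"  # would be per-site configuration

def sign_report(report):
    """Serialize a site report and attach an HMAC-SHA256 signature."""
    body = json.dumps(report, sort_keys=True).encode("utf-8")
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body.decode("utf-8"), "signature": signature}

def verify_report(envelope):
    """Dashboard side: recompute the signature and compare in constant time."""
    expected = hmac.new(SHARED_SECRET, envelope["body"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

envelope = sign_report({"site": "https://example.com", "core": "7.41"})
print(verify_report(envelope))  # True for an untampered payload
```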

It is also recommended that you use a signed SSL certificate on your Drupal websites and your Warden dashboard.

Where to get Warden

You can download the Warden central applications from GitHub here: https://github.com/teamdeeson/warden 

The Drupal module is available on drupal.org here: https://www.drupal.org/project/warden

What next?

We welcome contributions to the Drupal module or the Symfony application codebase, let us know what you think! 

If you are interested in integrating Warden into other web tools then you'll need a copy of the PHP API which is available here: https://github.com/teamdeeson/wardenapi

Categories: FLOSS Project Planets

Kristján Valur Jónsson: What is Stackless?

Planet Python - Tue, 2016-02-02 07:27
I sometimes get this question. And instead of starting a rant about microthreads, co-routines, tasklets and channels, I present the essential piece of code from the implementation: The Code: /* the frame dispatcher will execute frames and manage the frame stack until the "previous" frame reappears. The "Mario" code if you know that game :-) */ […]
Categories: FLOSS Project Planets

Norbert Preining: Gaming: The Talos Principle – Road to Gehenna

Planet Debian - Tue, 2016-02-02 06:30

After finishing the Talos Principle I immediately started to play the expansion Road to Gehenna, but was derailed near completion by the incredible Portal Stories: Mel. Now that I have finally managed to escape from the test chambers, my attention has returned to the Road to Gehenna. As with the pair Portal 2 and Portal Stories: Mel, the challenges go up considerably from the original Talos Principle to the Road to Gehenna. Checking the hours of game play, it took me about 24 hours to get through all the riddles in Road to Gehenna, though I have to admit there were some riddles where I needed to cheat.

The Road to Gehenna does not bring many new game play elements, but loads of new riddles. And best of all, it is playable on Linux! As with the original game, the graphics are really well done, while still being playable on my Vaio Pro laptop with its integrated Intel graphics card – a plus that is rare in a world of computer games where everyone is expected to have a high-end nVidia or Radeon card. Admittedly, there is not much action going on where quick graphics computations are necessary, but still, the impression the game makes is great.

The riddles contain the well-known elements (connectors, boxes, jammers, etc.), but the settings are often spectacular: sometimes very small and narrow, just a few moves if done in the right order, sometimes like wide open fields with lots of space to explore. Transportation between the various islands suspended in the air is via vents, giving you a lot of nice flight time!

If one searches a lot, or uses a bit of cheating, one can find good old friends from the Portal series buried in the sand in one of the worlds. This is not the only easter egg hidden in the game; there are actually a lot, some of which I have not seen but only read about afterwards. Guess I need to replay the whole game.

Coming back to the riddles, I really believe that the makers have been ingenious in using the few items at hand to create challenging and surprising riddles. As is so often the case, many of the riddles look completely impossible at first glance, and often even after staring at them for tens and tens of minutes. Until (and if) one has the a-ha effect and understands the trick. This often still needs a lot of handwork and trial-and-error rounds, but all in all the game is well balanced. What is a bit of a pain – similar to the original game – is collecting the stars to reach the hidden world and free the admin. There the developers overdid it in my opinion, with some rather absurd and complicated stars.

The end of the game, the ascension of the messengers, is rather unspectacular. A short discussion on who remains, and then a big closing scene with the messengers being beamed up a la Starship Enterprise, and a closing black screen. But well, the fun was with the riddles.

All in all, an expansion that is well worth the investment if one enjoyed the original Talos and is looking for rather challenging riddles. Now that I have finished all the Portal and Talos titles, I am thinking hard about what comes next … looking into Braid …


Categories: FLOSS Project Planets

Michal Čihař: Weekly phpMyAdmin contributions 2016-W04

Planet Debian - Tue, 2016-02-02 06:00

As I've already mentioned in a separate blog post, we mostly had fun with security issues in the past weeks, but besides that some other work has been done as well.

I've still focused on code cleanups and identified several pieces of code which are no longer needed (given our required PHP version). Another task related to the security updates was to set up testing of the 4.0 branch using PHP 5.2, as this is what we messed up in the security release (which is quite bad, as this is the only branch supporting PHP 5.2).

In addition to this, I've updated phpMyAdmin packages in both Debian and Ubuntu PPA.

All handled issues:

Filed under: Debian English phpMyAdmin | 0 comments

Categories: FLOSS Project Planets

Jeremy Quinn: The Shrine [Flickr]

Planet Apache - Tue, 2016-02-02 05:05

sharkbait posted a photo:

The Bowie mural in Brixton
Still popular, still growing
Very touching

Categories: FLOSS Project Planets

Kay Hayen: Nuitka Release 0.5.19

Planet Python - Tue, 2016-02-02 03:04

This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler. Please see the page "What is Nuitka?" for an overview.

This release brings optimization improvements for dictionary-using code. Subscripts are now lowered to dictionary accesses where possible, and there is new code generation for known dictionary values. Besides this there is the usual range of bug fixes.

Bug Fixes
  • Fix, attribute assignments or deletions where the assigned value or the attribute source statically raised an exception crashed the compiler.
  • Fix, the order of evaluation of attribute assignment source and value was considered in the wrong order during optimization.
  • Windows: Fix, when g++ is in the path, it was not used automatically, but now it is.
  • Windows: Detect the 32 bit variant of MinGW64 too.
  • Python3.4: The finalize of compiled generators could corrupt reference counts for shared generator objects. This was fixed already.
  • Python3.5: The finalize of compiled coroutines could corrupt reference counts for shared coroutine objects.
  • When a variable is known to have dictionary shape (assigned from a constant value, result of dict built-in, or a general dictionary creation), or the branch merge thereof, we lower subscripts from expecting mapping nodes to dictionary specific nodes. These generate more efficient code, and some are then known to not raise an exception.

    def someFunction(a, b):
        value = {a: b}
        value["c"] = 1
        return value

    The above function is not yet fully optimized (dictionary key/value tracing is not yet finished); however, it at least knows that no exception can be raised by assigning value["c"] anymore, and it creates more efficient code for the typical result = {} pattern.

  • The use of "logical" sharing during optimization has been replaced with checks for actual sharing. So closure variables that were only written to in dead code no longer inhibit optimization of the local variable, which by then is no longer shared.

  • Global variable traces are now faster to decide definite writes without need to check traces for this each time.

  • No longer using "logical sharing" allowed that function to be removed entirely.
  • Using "technical sharing" less often for decisions during optimization and instead rely more often on proper variable registry.
  • Connected variables with their global variable trace statically avoid the need to check in variable registry for it.
  • Removed old and mostly unused "assume unclear locals" indications, we use global variable traces for this now.

This release aimed at dictionary tracing. As a first step, the assigned value is now traced to have a dictionary shape, and this is then used to lower what used to be generic subscript operations on mappings into dictionary-specific operations.

Making use of the dictionary values knowledge by tracing keys and values is not yet in scope, but is expected to follow. We got the first signs of type inference here, but to really take advantage, more specific shape tracing will be needed.

Categories: FLOSS Project Planets

S. Lott: Why I don't want to share your screen -- OR -- What I learned from stackoverflow

Planet Python - Tue, 2016-02-02 03:00
I know it sounds arrogant, but I don't want to share your screen to sort out a Python programming problem. I have two reasons and I think one of them is a good one.

It's both pedagogical and personal. 
Personally, I'm often left breathless by demos. Watching the cursor fly around the screen is -- well -- dizzying. What was I supposed to be watching? Whose IM messages are popping up? What meeting reminders are you ignoring?

It may seem helpful to wave the cursor around, and show me your whole desktop world. And for some people, the discussion may actually be helpful. Sometimes they have an epiphany while they're explaining stuff to me. That's good. For me, it's bewildering. Sorry. I'm only going to read the visible fragments of your emails in the background window.

From a pedagogical perspective, there's this point:

I think that it's very important to learn how to focus on the details that matter.

This breaks down into several related skills:

  1. I think everyone needs to be able to copy and paste text. Screenshot images are hard to work with. On Stack Overflow, a 4-space indent is mandatory. It's not hard. A surprising number of programmers struggle with it.
  2. Articulate the actual problem. "Doesn't work" really is not sensible. I think it's important to insist on a concrete statement of the problem. Asking me to deduce it while looking at your screen isn't building any of your skills. 
  3. Find the relevant portion of the Python traceback. Yes, that's hard. But it's part of coding. Asking me to read the traceback doesn't build your skills.
  4. Find the relevant portions of the code that's broken. Again, when I pinpoint the line of code from reading the traceback, your skills haven't grown. I'm well aware that it's confusing when there's a long traceback from a framework that only seems to include your module 6 levels in. If you aspire to mastering code, that has to be part of your aspiration.
  5. Hypothesize a root cause. This is perhaps the hardest skill. The confirmation bias problem leads many people to write wrong code and complain that it's "broken" in a vague way. During screen sharing they scroll past their assumptions as if they're always correct. I have sympathy. But, it's essential to understand the semantics of a language. More importantly, it's essential to learn to judge where our assumptions might deviate from reality. Overcoming confirmation bias is hard. Maybe a long conversation is the only way to realize this; I hope not.
  6. Experiment. Python offers the >>> prompt at which you can experiment. Use it. This is the best way to explore your assumptions and see what the actual language semantics are.
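As an example of point 6 (my example, not the author's), a few lines at the >>> prompt are enough to test the common assumption that a default argument list is rebuilt on every call:

```python
# Testing an assumption at the prompt: is a default bucket=[] fresh per call?
def append_to(item, bucket=[]):
    bucket.append(item)
    return bucket

first = append_to(1)
second = append_to(2)
print(second)  # the default list is shared across calls: [1, 2]
```

The experiment overturns the assumption in seconds, which is exactly the habit being advocated.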
Maybe I'm just being hypersensitive, but there's little to really talk about. If we could focus on the relevant code, perhaps through copy-and-paste, I can help. Otherwise, I feel like I'm just watching helplessly while an amusement park ride spins me around for a while, leaving me dizzy and confused. And not having offered any concrete help.
Categories: FLOSS Project Planets

Talk Python to Me: #44 Project Jupyter and IPython

Planet Python - Tue, 2016-02-02 03:00
One of the fastest growing areas in Python is scientific computing. In scientific computing with Python, there are a few key packages that make it special. These include NumPy / SciPy / and related packages. The one that brings it all together, visually, is IPython (now known as Project Jupyter). That's the topic of episode 44 of Talk Python To Me.

You'll learn about "the big split", the plans for the recent $6 million in funding, Jupyter at CERN and the LHC, and more with Min RK & Matthias Bussonnier.

Links from the show:

Project Jupyter: jupyter.org
Min RK: @minrk
Matthias Bussonnier: @mbussonn
Complexity graph: grokcode.com/864/snakefooding-python-code-for-complexity-visualization
Jess Hamrick deployment: developer.rackspace.com/blog/deploying-jupyterhub-for-education
My Binder: mybinder.org
Try Jupyter: try.jupyter.org
Lorena Barba's AeroPython course: github.com/barbagroup/AeroPython
Jessica Hamrick's Ansible scripts: github.com/compmodels/jupyterhub-deploy
Jake Vanderplas blogging with notebooks: jakevdp.github.io
Peter Norvig's regex golf notebook: nbviewer.jupyter.org/url/norvig.com/ipython/xkcd1313.ipynb
SageMathCloud: cloud.sagemath.com
First version of IPython: gist.github.com/fperez/1579699
Historical perspective: blog.fperez.org/2012/01/ipython-notebook-historical.html
Categories: FLOSS Project Planets

Russell Coker: Compatibility and a Linux Community Server

Planet Debian - Tue, 2016-02-02 00:44

Compatibility/interoperability is a good thing. It’s generally good for systems on the Internet to be capable of communicating with as many systems as possible. Unfortunately it’s not always possible as new features sometimes break compatibility with older systems. Sometimes you have systems that are simply broken, for example all the systems with firewalls that block ICMP so that connections hang when the packet size gets too big. Sometimes to take advantage of new features you have to potentially trigger issues with broken systems.

I recently added support for IPv6 to the Linux Users of Victoria server. I think that adding IPv6 support is a good thing due to the lack of IPv4 addresses even though there are hardly any systems that are unable to access IPv4. One of the benefits of this for club members is that it’s a platform they can use for testing IPv6 connectivity with a friendly sysadmin to help them diagnose problems. I recently notified a member by email that the callback that their mail server used as an anti-spam measure didn’t work with IPv6 and was causing mail to be incorrectly rejected. It’s obviously a benefit for that user to have the problem with a small local server than with something like Gmail.

In spite of the fact that at least one user had problems and others potentially had problems I think it’s clear that adding IPv6 support was the correct thing to do.

SSL Issues

Ben wrote a good post about SSL security [1] which links to a test suite for SSL servers [2]. I tested the LUV web site and got A-.

This blog post describes how to set up PFS (Perfect Forward Secrecy) [3]; after following its advice I got a score of B!

From the comments on this blog post about RC4 etc [4] it seems that the only way to have PFS and not be vulnerable to other issues is to require TLS 1.2.

So the issue is what systems can’t use TLS 1.2.
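On the server side, the policy under discussion is only a few lines with a modern Python `ssl` module (3.7 or later). This is an illustration of the trade-off, not the LUV server's actual configuration:

```python
import ssl

def restrict_to_tls12(ctx):
    """Clamp an SSLContext so only TLS 1.2 handshakes are accepted."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.maximum_version = ssl.TLSVersion.TLSv1_2
    # Prefer ECDHE suites so every handshake gets forward secrecy.
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
    return ctx

server_ctx = restrict_to_tls12(ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER))
```

Any client that cannot negotiate TLS 1.2 with an ECDHE suite simply fails the handshake against such a context, which is precisely the compatibility question the rest of this post weighs up.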

TLS 1.2 Support in Browsers

This Wikipedia page has information on SSL support in various web browsers [5]. If we require TLS 1.2 we break support of the following browsers:

The default Android browser before Android 5.0. Admittedly that browser always sucked badly and probably has lots of other security issues, and there are alternate browsers. One problem is that many people who install better browsers on Android devices (such as Chrome) will still have their OS configured to use the default browser for URLs opened by other programs (e.g. email and IM).

Chrome versions before 30 didn’t support it. But version 30 was released in 2013 and Google does a good job of forcing upgrades. A Debian/Wheezy system I run is now displaying warnings from the google-chrome package saying that Wheezy is too old and won’t be supported for long!

Firefox before version 27 didn’t support it (the Wikipedia page is unclear about versions 27-31). 27 was released in 2014. Debian/Wheezy has version 38; Debian/Squeeze has Iceweasel 3.5.16, which doesn’t support it. I think it is reasonable to assume that anyone who’s still using Squeeze is using it for a server, given its age and the fact that LTS is based on packages related to being a server.

IE version 11 supports it and runs on Windows 7+ (all supported versions of Windows). IE 10 doesn’t support it and runs on Windows 7 and Windows 8. Are the free upgrades from Windows 7 to Windows 10 going to solve this problem? Do we want to support Windows 7 systems that haven’t been upgraded to the latest IE? Do we want to support versions of Windows that MS doesn’t support?

Windows mobile doesn’t have enough users to care about.

Opera supports it from version 17. This is noteworthy because Opera used to be good for devices running older versions of Android that aren’t supported by Chrome.

Safari has supported it since iOS version 5; I think that’s a solved problem given the way Apple makes it easy for users to upgrade and strongly encourages them to do so.

Log Analysis

For many servers the correct thing to do before even discussing the issue is to look at the logs and see how many people use the various browsers. One problem with that approach on a Linux community site is that the people who visit the site most often will be more likely to use recent Linux browsers, while older Windows systems will be more common among people visiting the site for the first time. Another issue is that there isn’t an easy way of determining who is a serious user, unlike for example a shopping site where one could search for log entries about sales.

I did a quick search of the Apache logs and found many entries for browsers that purport to be IE6 and other versions of IE before 11. But most of those log entries were from other countries; while some people from other countries visit the club web site, it’s not very common. Most access from outside Australia would be from bots, and the bots probably fake their user agent.
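A rough tally like this is easy to script against the Apache "combined" log format, where the User-Agent is the final quoted field. A sketch (the helper name is mine; it makes no attempt to separate bots from real browsers, which is exactly the caveat above):

```python
import re
from collections import Counter

# Combined log format ends with "referer" "user-agent";
# grab the last quoted field on each line.
UA_RE = re.compile(r'"([^"]*)"\s*$')

def user_agent_counts(lines):
    """Tally User-Agent strings from Apache 'combined' log lines."""
    counts = Counter()
    for line in lines:
        match = UA_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts
```

Feeding the access log through this and printing `counts.most_common(20)` gives a quick picture of which old browsers actually show up.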

Should We Do It?

Is breaking support for Debian/Squeeze, the built in Android browser on Android <5.0, and Windows 7 and 8 systems that haven’t upgraded IE as a web browsing platform a reasonable trade-off for implementing the best SSL security features?

For the LUV server as a stand-alone issue the answer would be no as the only really secret data there is accessed via ssh. For a general web infrastructure issue it seems that the answer might be yes.

I think that it benefits the community to allow members to test against server configurations that will become more popular in the future. After implementing changes in the server I can advise club members (and general community members) about how to configure their servers for similar results.

Does this outweigh the problems caused by some potential users of ancient systems?

I’m blogging about this because I think that the issues of configuration of community servers have a greater scope than my local LUG. I welcome comments about these issues, as well as about the SSL compatibility issues.

Categories: FLOSS Project Planets

Twisted Matrix Labs: January 2016 - SFC Sponsored Development

Planet Python - Tue, 2016-02-02 00:18
This is my report for the work done in January 2016 as part of the Twisted Maintainer Fellowship program.
It is my last report of the Twisted Maintainer Fellowship 2015 program.

With this fellowship the review queue size was reduced and review round-trips were completed much more quickly.
This fellowship has produced the Git/GitHub migration plan but has failed to finalize its execution.

Tickets reviewed and merged

* #7671 - It is way too hard to specify a trust root combining multiple certificates, especially to HTTP
* #7993 - Port twisted.web.wsgi to Python 3
* #8140 - twisted.web.test.requesthelper.DummyRequest is out of sync with the real Request
* #8148 - Deprecate twisted.protocols.mice
* #8173 - Twisted does not have a code of conduct
* #8180 - Conch integration tests fail because DSA is deprecated in OpenSSH 7.
* #8187 - Use a less ancient OpenSSL method in twisted.test.test_sslverify

Tickets reviewed and not merged yet

* #7889 - replace win32api.OpenProcess and win32api.FormatMessage with cffi
* #8150 - twisted.internet.ssl.KeyPair should provide loadPEM
* #8159 - twisted.internet._win32serialport incompatible with pyserial 3.x
* #8169 - t.w.static.addSlash does not work on Python 3
* #8188 - Advertise H2 via ALPN/NPN when available.

Thanks to the Software Freedom Conservancy and all of the sponsors who made this possible, as well as to all the other Twisted developers who helped out by writing or reviewing code.
Categories: FLOSS Project Planets

Justin Mason: Links for 2016-02-01

Planet Apache - Mon, 2016-02-01 18:58
Categories: FLOSS Project Planets

Lunar: Reproducible builds: week 40 in Stretch cycle

Planet Debian - Mon, 2016-02-01 18:39

What happened in the reproducible builds effort between January 24th and January 30th:

Media coverage

Holger Levsen was interviewed by the FOSDEM team to introduce his talk on Sunday 31st.

Toolchain fixes

Jonas Smedegaard uploaded d-shlibs/0.63 which makes the order of dependencies generated by d-devlibdeps stable across locales. Original patch by Reiner Herrmann.
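The class of bug fixed here is easy to demonstrate: sorting with the active locale's collation rules can order the same names differently on different build machines, while a plain codepoint sort cannot. A toy illustration in Python (d-shlibs itself is not Python, and these package names are made up):

```python
import locale

def stable_sort(names):
    """Sort by codepoint, independent of the build machine's LC_COLLATE."""
    return sorted(names)

def collated_sort(names):
    """Sort with the active locale's collation rules; the order can
    differ between machines, which breaks reproducibility."""
    return sorted(names, key=locale.strxfrm)
```

In the C locale uppercase sorts before lowercase, so `stable_sort` always puts "Libc6-dev" first; under, say, a German locale `collated_sort` may interleave cases, producing a different generated file on that machine.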

Packages fixed

The following 53 packages have become reproducible due to changes in their build dependencies: appstream-glib, aptitude, arbtt, btrfs-tools, cinnamon-settings-daemon, cppcheck, debian-security-support, easytag, gitit, gnash, gnome-control-center, gnome-keyring, gnome-shell, gnome-software, graphite2, gtk+2.0, gupnp, gvfs, gyp, hgview, htmlcxx, i3status, imms, irker, jmapviewer, katarakt, kmod, lastpass-cli, libaccounts-glib, libam7xxx, libldm, libopenobex, libsecret, linthesia, mate-session-manager, mpris-remote, network-manager, paprefs, php-opencloud, pisa, pyacidobasic, python-pymzml, python-pyscss, qtquick1-opensource-src, rdkit, ruby-rails-html-sanitizer, shellex, slony1-2, spacezero, spamprobe, sugar-toolkit-gtk3, tachyon, tgt.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

  • gnubg/1.05.000-4 by Russ Allbery.
  • grcompiler/4.2-6 by Hideki Yamane.
  • sdlgfx/2.0.25-5 fix by Felix Geyer, uploaded by Gianfranco Costamagna.

Patches submitted which have not made their way to the archive yet:

  • #812876 on glib2.0 by Lunar: ensure that functions are sorted using the C locale when giotypefuncs.c is generated.

diffoscope development

diffoscope 48 was released on January 26th. It fixes several issues introduced by the retrieval of extra symbols from Debian debug packages. It also restores compatibility with older versions of binutils which do not support readelf --decompress.

strip-nondeterminism development

strip-nondeterminism 0.015-1 was uploaded on January 27th. It fixes the handling of signed JAR files, which are now ignored so that their signatures stay intact.
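The usual way to recognise a signed JAR is the presence of signature entries (*.SF, *.RSA, *.DSA, *.EC) under META-INF/. A sketch of that check in Python (strip-nondeterminism itself is written in Perl; this only illustrates the heuristic):

```python
import zipfile

def is_signed_jar(jar_path):
    """Return True if the archive carries JAR signature entries,
    i.e. META-INF/*.SF plus a signature block file."""
    with zipfile.ZipFile(jar_path) as jar:
        for name in jar.namelist():
            upper = name.upper()
            if upper.startswith("META-INF/") and upper.endswith(
                    (".SF", ".RSA", ".DSA", ".EC")):
                return True
    return False
```

A normalisation tool would run this first and skip the archive entirely when it returns True, since rewriting any member would invalidate the signature.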

Package reviews

54 reviews have been removed, 36 added and 17 updated in the previous week.

30 new FTBFS bugs have been submitted by Chris Lamb, Michael Tautschnig, Mattia Rizzolo, Tobias Frost.


Alexander Couzens and Bryan Newbold have been busy fixing more issues in OpenWrt.

Version 1.6.3 of FreeBSD's package manager pkg(8) now supports SOURCE_DATE_EPOCH.
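The convention SOURCE_DATE_EPOCH defines is simple: when the variable is set, a build tool stamps its artifacts with that value instead of the wall clock. A minimal sketch in Python (an illustration of the convention, not pkg(8)'s implementation):

```python
import os
import time

def build_timestamp():
    """Return the timestamp to embed in build output: SOURCE_DATE_EPOCH
    when set (so rebuilds are bit-identical), wall-clock time otherwise."""
    sde = os.environ.get("SOURCE_DATE_EPOCH")
    if sde is not None:
        return int(sde)
    return int(time.time())
```

Two builds run with the same SOURCE_DATE_EPOCH then embed the same timestamp, removing one common source of non-determinism.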

Ross Karchner did a lightning talk about reproducible builds at his work place and shared the slides.

Categories: FLOSS Project Planets

Nacho Digital: Drupal Planet (RSS spanish & portugues)

Planet Drupal - Mon, 2016-02-01 16:31
If we generate enough movement and content we will have drupal.org/planeta in Spanish and Portuguese

Since DrupalCamp Chile I have been pushing, along with a bunch of nice people, for a space within drupal.org to share content in Spanish and Portuguese.

We are almost ready; we just need to generate enough movement to make it public. Right now it exists, but it is not publicly available: drupal.org/planeta.

The main purpose of this post was to share how to do it, so I invite you to read the Spanish version for the steps and details. I see no reason to share them in English at this time. If there is interest I can do a full translation; just leave a comment with your request.


Categories: FLOSS Project Planets

Commercial Progression: The Future of Decoupled Drupal & other Bold 2016 Predictions (E13)

Planet Drupal - Mon, 2016-02-01 16:09

Commercial Progression presents Hooked on Drupal, “Episode 13: Future Predictions of Drupal, Technology, and Powerball Winners".  That's right, you heard it here folks, someone has already won the Powerball. Ok, so maybe that is old news... but what about the future of Decoupled Drupal website architecture, static site generators, and the next revolution in IOT technology? To receive these clairvoyant predictions along with many important highlights from 2015, you will need to tune into the future with our latest podcast. 

Hooked on Drupal Content Team


CHRIS KELLER - Developer




Tags:  podcast, Hooked on Drupal, Decoupled Drupal, Planet Drupal, IOT, Static Sites
Categories: FLOSS Project Planets

Bryan Pendleton: Stuff I'm reading, early February edition

Planet Apache - Mon, 2016-02-01 16:07

If the groundhog were to look today, she would DEFINITELY see her shadow, at least here in my neighborhood.

  • All Change Please: The combined changes in networking, memory, storage, and processors that are heading towards our data centers will bring about profound changes to the way we design and build distributed systems, and our understanding of what is possible. Today I’d like to take a moment to summarise what we’ve been learning over the past few weeks and months about these advances and their implications.
  • High-Availability at Massive Scale: Building Google’s Data Infrastructure for Ads: While most distributed systems handle machine-level failures well, handling datacenter-level failures is less common. In our experience, handling datacenter-level failures is critical for running true high availability systems. Most of our systems (e.g. Photon, F1, Mesa) now support multi-homing as a fundamental design property. Multi-homed systems run live in multiple datacenters all the time, adaptively moving load between datacenters, with the ability to handle outages of any scale completely transparently. This paper focuses primarily on stream processing systems, and describes our general approaches for building high availability multi-homed systems, discusses common challenges and solutions, and shares what we have learned in building and running these large-scale systems for over ten years.
  • Immutability Changes Everything: It wasn't that long ago that computation was expensive, disk storage was expensive, DRAM (dynamic random access memory) was expensive, but coordination with latches was cheap. Now all these have changed using cheap computation (with many-core), cheap commodity disks, and cheap DRAM and SSDs (solid-state drives), while coordination with latches has become harder because latch latency loses lots of instruction opportunities. Keeping immutable copies of lots of data is now affordable, and one payoff is reduced coordination challenges.
  • To Trie or not to Trie – a comparison of efficient data structures: I have been reading up a bit by bit on efficient data structures, primarily from the perspective of memory utilization. Data structures that provide constant lookup time with minimal memory utilization can give a significant performance boost since access to CPU cache is considerably faster than access to RAM. This post is a compendium of a few data structures I came across and salient aspects about them.
  • POPL 2016: Last month saw the 43rd edition of the ACM SIGPLAN-SIGACT Symposium on the Principles of Programming Languages (POPL). Gabriel Scherer did a wonderful job of gathering links to all of the accepted papers in a GitHub repo. For this week, I’ve chosen five papers from the conference that caught my eye.
  • NSA’s top hacking boss explains how to protect your network from his attack squads: NSA tiger teams follow a six-stage process when attempting to crack a target, he explained. These are reconnaissance, initial exploitation, establish persistence, install tools, move laterally, and then collect, exfiltrate and exploit the data.
  • Amazon’s Experiment with Profitability: Amazon Chief Executive Officer Jeff Bezos has spent more than two decades reinvesting earnings back into the company. That steadfast refusal to strive for profitability never seemed to hurt the company or its stock price, and Amazon’s market value (now about $275 billion) passed Wal-Mart’s last year. All the cash it generated went into infrastructure development, logistics and technology; it experimented with new products and services, entered new markets, tried out new retail segments, all while capturing a sizable share of the market for e-commerce.
  • I Hate the Lord of the Rings: A software developer explains why the Lord of the Rings is too much like work, and why Middle Earth exists in every office.
  • Startup Interviewing is (redacted): Silicon Valley is full of startups who fetishize the candidate that comes into the interview, answers a few clever fantasy coding challenges, and ultimately ends up the award-winning hire that will surely implement the elusive algorithm that will herald a new era of profitability for the fledgling VC-backed company.
  • Inverting Binary Trees Considered Harmful: he was like - wait a minute I read this really cute puzzle last week and I must ask you this - there are n sailors and m beer bottles and something to do with bottles being passed around and one of the bottles containing a poison and one of the sailors being dishonest and something about identifying that dishonest sailor before you consume the poison and die. I truly wished I had consumed the poison instead of laboring through that mess.
  • "Can you solve this problem for me on the whiteboard?" Programmers use computers. It's what we do and where we spend our time. If you can't at least give me a text editor and the toolchain for the language(s) you're interested in me using, you're wasting both our time. While I'm not afraid of using a whiteboard to help illustrate general problems, if you're asking me to write code on a whiteboard and judging me based on that, you're taking me out of my element and I'm not giving you a representative picture of who I am or how I code.

    Don't get me wrong, whiteboards can be a great interview tool. Some of the best questions I've been asked have been presented to me on a whiteboard, but a whiteboard was used to explain the concept, the interviewer wrote down what I said and used that to help frame a solution to the problem. It was a very different situation than "write code for me on the whiteboard."

Categories: FLOSS Project Planets

OpenStack Summit Austin: CFS period extended

Planet KDE - Mon, 2016-02-01 15:35
Just a small update on the Call for Speakers for the OpenStack Austin summit. The submission period has been extended; the new deadline is February 2nd, 2016, 11:59 PM PST (February 3rd, 8:59 CET). You can find more information about the submission and speaker selection process here.
Categories: FLOSS Project Planets