FLOSS Project Planets

Django Weblog: DSF board election 2015 results

Planet Python - Thu, 2014-11-20 12:20

We're happy to announce the winners of the DSF board elections 2015:

President: Russell Keith-Magee

Board members: Karen Tracey and Ola Sitarska

Secretary: Andy McKay

Treasurer: Stacey Haysler

Feel free to let us know if you'd like to see the full voting results.

The board of the Django Software Foundation (DSF) has just met and voted unanimously to confirm Russell, Karen, Ola, Andy and Stacey for their seats on the DSF board.

We want to thank all the other candidates again for their participation and hope to see them running at the next annual board election.

Ola is a co-organizer of many community events, such as DjangoCon Europe in Warsaw (DjangoCircus) and Django: Under The Hood, and a co-founder of Django Girls, where she helps run free, non-profit events for hundreds of women who want to learn about building the web. She cares about bringing more inclusivity into the Django community and making it easier for beginners to start using and developing Django. She works as a Django developer at Potato in London.

Stacey brings an incredible amount of experience with corporate administration and financial management to the board. She has experience managing 501(c)(3) organizations and has been involved in organising Open Source conferences such as PgCon. She works as a Client Services Director at PostgreSQL Experts, an OSS consultancy.

We look forward to the contributions that Ola and Stacey will bring as new members of the DSF board.

The DSF would like to thank Adrian, who has literally been there since its inception back in 2008, for his many years of service. He has been an amazing influence and in many cases the driving force behind what Django is today.

Our thanks also to Joseph Kocherhans who has served as DSF Treasurer similarly since 2008. Joseph has done an amazing job over the last 6 years, and especially over the last 12 months.

Categories: FLOSS Project Planets

Mediacurrent: The Weather Channel’s Journey to Drupal

Planet Drupal - Thu, 2014-11-20 12:00

When my business partner, Paul Chason, and I joined forces over seven years ago, we had a rather simple vision for Mediacurrent. We were convinced that open-source software offered a superior value proposition over proprietary, license-based solutions. We had an ambitious goal of starting a digital agency that was going to revolutionize how companies thought about the way they managed their web properties. As Simon Sinek so eloquently describes, this was our "why" and purpose.

Categories: FLOSS Project Planets

Drupal Watchdog: Different, Not Difficult

Planet Drupal - Thu, 2014-11-20 11:36

As AppNeta’s developer evangelist, I work with customers in five different programming languages to monitor application performance. Drupal is just one part of one language, but I’ll always have a soft spot for it because it’s where I learned to program. When I get a chance, I like to keep my skills sharp by contributing to the community-maintained TraceView integration module. Last spring, I decided to port it and learn Drupal 8 the hard way.

Like most Drupal developers, I’d never tried writing Symfony code or using Composer to manage packages. Before attempting it, I decided to research both Symfony in its own right and how it is being leveraged to rewrite Drupal. Thankfully, there were many rich tutorials on “the basics” even then, and, after a relatively painless porting process, I had the module running with a skeletal Symfony bundle inside it.

Initially, I relied on the same strategy as the Drupal 7 version of the TraceView module, which monitors hook execution time by installing two additional modules: an “early” module with a very low weight and a “late” module with a very high weight. As each hook was removed from core, I moved its implementations from the modules into the bundle and tagged that event with listeners at maximum and minimum priority.

Categories: FLOSS Project Planets

Dries Buytaert: Weather.com using Drupal

Planet Drupal - Thu, 2014-11-20 11:06
Topics: Drupal, Acquia, Drupal sites

One of the world's most trafficked websites, with more than 100 million unique visitors every month and more than 20 million different pages of content, is now using Drupal. Weather.com is a top 20 U.S. site according to comScore. As far as I know, this is currently the biggest Drupal site in the world.

Weather.com has been an active Drupal user for the past 18 months; it started with a content creation workflow on Drupal to help its editorial team publish content to its existing website faster. With Drupal, Weather.com was able to dramatically reduce the number of steps required to publish content from 14 to just a few. Speed is essential in reporting the weather, and Drupal's content workflow provided much-needed velocity. The success of that initial project is what led to this week's migration of Weather.com from Percussion to Drupal.

The company has moved the entire website to Acquia Cloud, giving the site a resilient platform that can withstand sudden onslaughts of demand as unpredictable as the weather itself. As we learned from our work with New York City's MTA during Superstorm Sandy in 2012, “weather-proofing” the delivery of critical information to ensure the public stays informed during catastrophic events is really important and can help save lives.

The team at Weather.com worked with Acquia and Mediacurrent for its site development and migration.

Categories: FLOSS Project Planets

Acquia: Meet Cal Evans ... Meet Jeffrey A. "jam" McGuire

Planet Drupal - Thu, 2014-11-20 09:14

Voices of the ElePHPant / Acquia Podcast Ultimate Showdown Part 1 - Cal Evans and I got the chance to sit down at DrupalCon Amsterdam and talk (a lot!) about a range of topics we have in common. In this first part of a 2-part series, we talk Drupal, PHP convergence and the "PHP Renaissance", open source communities, proprietary vs. open source business, the ethics of helping, and more.

Why PHP?

According to Cal, PHP has three things going for it:

Categories: FLOSS Project Planets

PyTennessee: Keynote: Chris Fonnesbeck

Planet Python - Thu, 2014-11-20 08:53

Chris Fonnesbeck

Chris Fonnesbeck is an Assistant Professor in the Department of Biostatistics at the Vanderbilt University School of Medicine. He specializes in computational statistics, Bayesian methods, evidence-based medicine, and infectious disease modeling. Chris created and continues to contribute to PyMC, a Python package for Bayesian statistical modeling. He originally hails from Vancouver, BC and received his Ph.D. from the University of Georgia.

I’m super excited to have Chris keynote for us this year after his fantastic presentation last year. The CFP closed last night, and Chris finishes out the powerful quartet of keynotes this year. Get your tickets soon!

Categories: FLOSS Project Planets

Steve Kemp: An experiment in (re)building Debian

Planet Debian - Thu, 2014-11-20 08:28

I've rebuilt many Debian packages over the years, largely to fix bugs which affected me, or to add features which didn't make the cut in various releases. For example I made a package of fabric available for Wheezy, since it wasn't in the release. (Happily in that case a wheezy-backport became available. Similar cases involved repackaging gtk-gnutella when the protocol changed and the official package in the lenny release no longer worked.)

I generally release a lot of my own software as Debian packages, although I'll admit I've started switching to publishing Perl-based projects on CPAN instead - from which they can be debianized via dh-make-perl.

One thing I've not done for many years is a mass-rebuild of Debian packages. I did that once upon a time when I was trying to push for the stack-smashing-protection inclusion all the way back in 2006.

Having had a few interesting emails this past week I decided to do the job for real. I picked a random server of mine, rsync.io, which stores backups, and decided to rebuild it using "my own" packages.

The host has about 300 packages installed upon it:

root@rsync ~ # dpkg --list | grep ^ii | wc -l
294

I got the source to every package, patched the changelog to bump the version, and rebuilt every package from source. That took about three hours.
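In Python, the loop looks something like this - a minimal sketch of the procedure described above, not the actual script used; it assumes devscripts is installed, deb-src entries are configured, and that each source package unpacks into a directory named after the binary package:

import glob
import subprocess

def rebuild(package):
    # Fetch and unpack the source package into the current directory.
    subprocess.check_call(['apt-get', 'source', package])
    srcdir = sorted(glob.glob(package + '-*'))[0]
    # Append a local version suffix: 1.6-1 becomes 1.6-1skx1.
    subprocess.check_call(['dch', '--local', 'skx', 'Local rebuild.'],
                          cwd=srcdir)
    # Build unsigned binary packages.
    subprocess.check_call(['dpkg-buildpackage', '-us', '-uc', '-b'],
                          cwd=srcdir)

installed = subprocess.check_output(
    "dpkg --list | awk '/^ii/ { print $2 }'", shell=True).split()
for pkg in installed:
    rebuild(pkg)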

Every package has a "skx1" suffix now, and all the build-dependencies were also determined by magic and rebuilt:

root@rsync ~ # dpkg --list | grep ^ii | awk '{ print $2 " " $3}' | head -n 4
acpi 1.6-1skx1
acpi-support-base 0.140-5+deb7u3skx1
acpid 1:2.0.16-1+deb7u1skx1
adduser 3.113+nmu3skx1

The process was pretty quick once I started getting more and more of the packages built. The only shortcut was not explicitly updating the dependencies to rely upon my updates. For example bash has a Debian control file that contains:

Depends: base-files (>= 2.1.12), debianutils (>= 2.15)

That should have been updated to say:

Depends: base-files (>= 2.1.12skx1), debianutils (>= 2.15skx1)

However I didn't do that, because I suspect that if I wanted to do this decently, and wanted to share the source trees and the generated packages, the way to go would not be messing about with Debian versions; instead I'd create a new Debian release: "alpha-apple", "beta-banana", "crunchy-carrot", "dying-dragonfruit", "easy-elderberry", or similar.

In conclusion: Importing Debian packages into git, much like Ubuntu did with bzr, is a fun project, and it doesn't take much to mass-rebuild if you're not making huge changes. Whether it is worth doing is an entirely different question of course.

Categories: FLOSS Project Planets

ShiningPanda: Production Server Monitoring

Planet Python - Thu, 2014-11-20 07:58

Today requires.io introduces Site Monitoring, a security feature to check that the dependencies of the Python apps deployed on your production servers are up-to-date and secure.

Requires.io can already monitor the requirements of your projects from their source code. We expanded the API so that by adding two lines to your deployment scripts you can now check that your production apps are secure:

$ pip install -U requires.io
$ requires.io update-site -t $MY_SECRET_TOKEN -r $MY_REPO

Step-by-step Tutorial

In this small tutorial we will set up Site Monitoring for the project requires/myapp. This tutorial assumes that you already have an account on requires.io... If you don't, just register!

1. Plan upgrade

First ensure that your plan supports the Site Monitoring feature. This can be done from the settings page. In this case I need an Indie+ account.

2. Upgrade your deployment script

Go to the "monitoring" section of your settings. There you can just copy the necessary line. In this case it is:

requires.io update-site -t 6ade5eb345d8a79ad69a9f868021e0210522aceb -r REPO

The token is valid for the account requires, so for the project requires/myapp we just need to replace REPO by myapp.

requires.io update-site -t 62717a87341c8500d316bf52635a9e40ced04ace -r myapp

For an app deployed with a simple fabric script (using fabtools to handle the virtualenv), the resulting script would look similar to this:

with fabtools.python.virtualenv(virtualenv):
    run('pip install -r requirements.txt')
    run('pip install requires.io')
    run('requires.io update-site -t 6ade5eb345d8a79ad69a9f868021e0210522aceb -r myapp')

Adapt for your own deployment scripts!

4. Check the result

Just go to your requirements page on requires.io: you will see a new section called "Sites" in the right column.

Notifications

Notifications for the Site Monitoring feature are coming very soon... The requires.io notification system is being thoroughly updated, but it is not quite ready yet.

Heroku

We are currently testing the requires.io Heroku app. So if you want to hook requires.io up to your Heroku account to use the Site Monitoring feature, let us know!

Categories: FLOSS Project Planets

Daniel Pocock: Is Amnesty giving spy victims a false sense of security?

Planet Debian - Thu, 2014-11-20 07:48

Amnesty International is getting a lot of attention with the launch of a new tool to detect government and corporate spying on your computer.

I thought I would try it myself. I went to a computer running Microsoft Windows, an operating system that does not publish its source code for public scrutiny. I used the Chrome browser; users often express concern about Chrome sending data back to the vendor about the web sites they visit.

Without even installing the app, I would expect the Amnesty web site to recognise that I was accessing the site from a combination of proprietary software. Instead, I found a different type of warning.

Beware of Amnesty?

Instead, the only warning I received was from Amnesty's own cookies:

Even before I install the app to find out if the government is monitoring me, Amnesty is keen to monitor my behaviour themselves.

While cookies are used widely, their presence on a site like Amnesty's only further desensitizes Internet users to the downside risks of tracking technologies. By using cookies, Amnesty is effectively saying a little bit of tracking is justified for the greater good. Doesn't that sound eerily like the justification we often hear from governments too?

Is Amnesty part of the solution or part of the problem?

Amnesty is a well known and widely respected name when human rights are mentioned.

However, their advice that you can install an app onto a Windows computer or iPhone to detect spyware is like telling people that putting a seatbelt on a motorbike will eliminate the risk of death. It would be much more credible for Amnesty to tell people to start by avoiding cloud services altogether, browse the web with Tor and only use operating systems and software that come with fully published source code under a free license. Only when 100% of the software on your device is genuinely free and open source can independent experts exercise the freedom to study the code and detect and remove backdoors, spyware and security bugs.

It reminds me of the advice Kim Kardashian gave after the Fappening, telling people they can continue trusting companies like Facebook and Apple with their private data just as long as they check the privacy settings (reality check: privacy settings in cloud services are about as effective as a band-aid on a broken leg).

Write to Amnesty

Amnesty became famous for their letter writing campaigns.

Maybe now is the time for people to write to Amnesty themselves, thank them for their efforts and encourage them to take more comprehensive action.

Feel free to cut and paste some of the following potential ideas into an email to Amnesty:

I understand you may not be able to respond to every email personally but I would like to ask you to make a statement about these matters on your public web site or blog.

I understand it is Amnesty's core objective to end grave abuses of human rights. Electronic surveillance, due to its scale and pervasiveness, has become a grave abuse in itself, and in a disturbing number of jurisdictions it is an enabler for other types of grave violations of human rights.

I'm concerned that your new app Detekt gives people a false sense of security and that your campaign needs to be more comprehensive to truly help people and humanity in the long term.

If Amnesty is serious about solving the problems of electronic surveillance by government, corporations and other bad actors, please consider some of the following:

  • Instead of displaying a cookie warning on Amnesty.org, display a warning to users who access the site from a computer running closed-source software and give them a link to download an open source web browser like Firefox.
  • Redirect all visitors to your web site to the HTTPS encrypted version of the site.
  • Use spyware-free open source software such as the GNU/Linux operating system (one of the Debian, Fedora or Ubuntu systems is one of the more common ways to achieve this) and LibreOffice for all Amnesty's own operations, make a public statement about your use of free open source software, and mention this in the closing paragraph of all press releases relating to surveillance topics.
  • Encourage Amnesty donors, members and supporters to choose similar software, especially when engaging in any political activities.
  • Make a public statement that Amnesty will not use cloud services such as SalesForce or Facebook to store, manage or interact with data relating to members, donors or other supporters.
  • Encourage the public to move away from centralized cloud services such as those provided by their smartphone or social networks and to use de-centralized or federated services such as XMPP chat.

Given the immense threat posed by electronic surveillance, I'd also like to call on Amnesty to allocate at least 10% of annual revenue towards software projects releasing free and open source software that offers the public an alternative to the centralized cloud.

While publicity for electronic privacy is great, I hope Amnesty can go a step further and help people use trustworthy software from the ground up.

Categories: FLOSS Project Planets

Paul Booker: Creating your own API endpoint using Services

Planet Drupal - Thu, 2014-11-20 06:53
/**
 * Implements hook_services_resources().
 */
function mymodule_services_services_resources() {
  $api = array(
    'frontpage' => array(
      'operations' => array(
        'retrieve' => array(
          'help' => 'Retrieves front page',
          'callback' => '_mymodule_services_frontpage_retrieve',
          'access callback' => 'user_access',
          'access arguments' => array('access content'),
          'access arguments append' => FALSE,
          'args' => array(
            array(
              'name' => 'fn',
              'type' => 'string',
              'description' => 'Function to perform',
              'source' => array('path' => '0'),
              'optional' => TRUE,
              'default' => '0',
            ),
            array(
              'name' => 'nitems',
              'type' => 'int',
              'description' => 'Number of latest items to get',
              'source' => array('param' => 'nitems'),
              'optional' => TRUE,
              'default' => '0',
            ),
            array(
              'name' => 'since',
              'type' => 'int',
              'description' => 'Posts from the last number of days',
              'source' => array('param' => 'since'),
              'optional' => TRUE,
              'default' => '0',
            ),
          ),
        ),
      ),
    ),
  );
  return $api;
}

/**
 * Callback function for blog retrieve.
 */
function _mymodule_services_frontpage_retrieve($fn, $nitems, $timestamp) {
  // Check for mad values.
  $nitems = intval($nitems);
  $timestamp = intval($timestamp);
  return _mymodule_services_blog_items($nitems, $timestamp);
}

/**
 * Gets frontpage blog posts.
 */
function _mymodule_services_blog_items($nitems, $timestamp) {
  // Compose query.
  $query = db_select('node', 'n');
  $query->join('node_revision', 'v', '(n.nid = v.nid) AND (n.vid = v.vid)');
  $query->join('comment', 'c', 'c.nid = n.nid');
  $query->join('users', 'u', 'n.uid = u.uid');
  $query->fields('v', array('timestamp', 'title'));
  $query->addField('u', 'name', 'author');
  $query->addField('n', 'nid');
  $query->addField('n', 'title');
  $query->addField('n', 'uid');
  $query->addField('n', 'created');
  $query->addField('n', 'changed');
  $query->addField('u', 'picture');
  $query->addExpression('COUNT(c.cid)', 'comments');
  $query->condition('n.type', 'blog', '=');
  $query->groupBy('n.nid');
  // How many days ago?
  if ($timestamp) {
    $query->condition('v.timestamp', time() - ($timestamp * 60 * 60 * 24), '>');
  }
  $query->orderBy('v.timestamp', 'DESC');
  // Limited by items?
  if ($nitems) {
    $query->range(0, $nitems);
  }
  $items = $query->execute()->fetchAll();
  return $items;
}
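Once the module is enabled and the resource is attached to an endpoint in the Services UI, it can be queried over plain HTTP. A hypothetical client call - assuming the endpoint path is /api, the JSON response format is enabled, and the first path component supplies the 'fn' argument - might look like this:

import requests

# 'fn' comes from the path (source: path 0); 'nitems' and 'since'
# travel as query parameters, matching the 'source' definitions above.
resp = requests.get('http://example.com/api/frontpage/0.json',
                    params={'nitems': 5, 'since': 7})
for item in resp.json():
    print(item['title'])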
Categories: FLOSS Project Planets

Kubuntu CI: the replacement for Project Neon

Planet KDE - Thu, 2014-11-20 05:46


[thanks to jens for the lovely logo]

Many years ago Ubuntu had a plan for Grumpy Groundhog, a version of Ubuntu which was made from daily packages of free software development versions. This never happened, but Kubuntu has long provided Project Neon (and later Project Neon 5), which used Launchpad to build all of KDE Software Compilation and make weekly installable images. This is great for developers who want to check their software works in a final distribution or want to develop against the latest libraries without having to compile them, but it didn't help us packagers much because the packaging was monolithic and unrelated to the packages we use in Kubuntu proper.

Recently Harald has been working on a replacement, Kubuntu Continuous Integration (Kubuntu CI), which makes packages fresh each day from KDE Git for Frameworks and Plasma and, crucially, uses the Kubuntu packaging branches. There are three PPAs: unstable (unchecked), unstable daily (with some automated checking to ensure it builds) and unstable weekly (with some manual checking).

At the same time he's been hard at work making weekly Kubuntu CI images which can be run as a live image or installed. They include the latest KDE Frameworks, Plasma 5 and a few other apps.

We've moved our packaging into Debian Git because it'll make merges ever so much easier and mean we can share fixes faster.

The Kubuntu CI Jenkins setup has the reports on what has built and what needs fixing.

Now is the hour.

Categories: FLOSS Project Planets

Jonathan Wiltshire: Getting things into Jessie (#5)

Planet Debian - Thu, 2014-11-20 05:30
Don’t assume another package’s unblock is a precedent for yours

Sometimes we’ll use our judgement when granting an unblock to a less-than-straightforward package. Lots of factors go into that, including the regression risk, desirability, impact on other packages (of both acceptance and refusal) and trust.

However, a judgement call on one package doesn’t automatically mean that the same decision will be made for another. Every single unblock request we get is judged on its own merits.

Do by all means ask about your package in light of another. There may be cross-over that makes your change desirable as well.

Don’t take it personally if the judgement call ends up being not what you expected.

Getting things into Jessie (#5) is a post from jwiltshire.org.uk.

Categories: FLOSS Project Planets

PyPy Development: Tornado without a GIL on PyPy STM

Planet Python - Thu, 2014-11-20 05:10

This post is by Konstantin Lopuhin, who tried PyPy STM during the Warsaw sprint.

Python has a GIL, right? Not quite - PyPy STM is a Python implementation without a GIL, so it can scale CPU-bound work across several cores. PyPy STM is developed by Armin Rigo and Remi Meier, and supported by community donations. You can read more about it in the docs.

Although PyPy STM is still a work in progress, in many cases it can already run CPU-bound code faster than regular PyPy when using multiple cores. Here we will see how to slightly modify the Tornado IO loop to use the transaction module. This module is described in the docs and is really simple to use - please see the example there. An event loop of Tornado, or any other asynchronous web server, looks like this (with some simplifications):

while True:
    for callback in list(self._callbacks):
        self._run_callback(callback)
    event_pairs = self._impl.poll()
    self._events.update(event_pairs)
    while self._events:
        fd, events = self._events.popitem()
        handler = self._handlers[fd]
        self._handle_event(fd, handler, events)

We get IO events and run handlers for all of them; these handlers can also register new callbacks, which we run too. When using such a framework, it is very nice to have a guarantee that all handlers are run serially, so you do not have to put any locks. This is an ideal case for the transaction module - it guarantees that things appear to run serially, so in user code we do not need any locks. We just need to change the code above to something like:

while True:
    for callback in list(self._callbacks):
        transaction.add(                      # added
            self._run_callback, callback)
    transaction.run()                         # added
    event_pairs = self._impl.poll()
    self._events.update(event_pairs)
    while self._events:
        fd, events = self._events.popitem()
        handler = self._handlers[fd]
        transaction.add(                      # added
            self._handle_event, fd, handler, events)
    transaction.run()                         # added

The actual commit is here; we had to extract a little function to run the callback.

Part 1: a simple benchmark: primes

Now we need a simple benchmark. Let's start with this - just calculate a list of primes up to the given number, and return it as JSON:

def is_prime(n):
    for i in xrange(2, n):
        if n % i == 0:
            return False
    return True

class MainHandler(tornado.web.RequestHandler):
    def get(self, num):
        num = int(num)
        primes = [n for n in xrange(2, num + 1) if is_prime(n)]
        self.write({'primes': primes})
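For context, the handler needs the usual Tornado wiring before it will serve anything; a minimal sketch (assuming the route and port that the benchmark below uses) would be:

import tornado.ioloop
import tornado.web

application = tornado.web.Application([
    (r'/(\d+)', MainHandler),   # the captured number becomes 'num' in get()
])

if __name__ == '__main__':
    application.listen(8888)
    tornado.ioloop.IOLoop.instance().start()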

We can benchmark it with siege:

siege -c 50 -t 20s http://localhost:8888/10000

But this does not scale. The CPU load is at 101-104%, and we handle 30% fewer requests per second. The reason for the slowdown is STM overhead, which needs to keep track of all writes and reads in order to detect conflicts. And the reason for using only one core is, obviously, conflicts! Fortunately, we can see what these conflicts are if we run code like this (here 4 is the number of cores to use):

PYPYSTM=stm.log ./primes.py 4

Then we can use print_stm_log.py to analyse this log. It lists the most expensive conflicts:

14.793s lost in aborts, 0.000s paused (1258x STM_CONTENTION_INEVITABLE)
File "/home/ubuntu/tornado-stm/tornado/tornado/httpserver.py", line 455, in __init__
    self._start_time = time.time()
File "/home/ubuntu/tornado-stm/tornado/tornado/httpserver.py", line 455, in __init__
    self._start_time = time.time()
...

There are only three kinds of conflicts; they are described in the stm source. Here we see that two threads call into an external function to get the current time, and we cannot roll back either of them, so one of them must wait till the other transaction finishes. For now we can hack around this by disabling this timing - it is only needed for internal profiling in tornado.

If we do it, we get the following results (but see caveats below):

Impl.         req/s
PyPy 2.4      14.4
CPython 2.7    3.2
PyPy-STM 1     9.3
PyPy-STM 2    16.4
PyPy-STM 3    20.4
PyPy-STM 4    24.2

As we can see, in this benchmark PyPy STM using just two cores can beat regular PyPy! This is not linear scaling - there are still conflicts left, and this is a very simple example - but still, it works!

But it's not that simple yet :)

First, these are best-case numbers after a long (much longer than for regular PyPy) warmup. Second, it can sometimes crash (although removing old pyc files fixes it). Third, benchmark meta-parameters are also tuned.

Here we get relatively good results only when there are a lot of concurrent clients - as a result, a lot of requests pile up, the server is not keeping up with the load, and the transaction module is busy running these piled-up requests. If we decrease the number of concurrent clients, results get slightly worse. Another thing we can tune is how heavy each request is - again, if we ask for primes up to a lower number, then less time is spent doing calculations, more time is spent in tornado, and results get much worse.

Besides the time.time() conflict described above, there are a lot of others. The bulk of time is lost in these two conflicts:

14.153s lost in aborts, 0.000s paused (270x STM_CONTENTION_INEVITABLE)
File "/home/ubuntu/tornado-stm/tornado/tornado/web.py", line 1082, in compute_etag
    hasher = hashlib.sha1()
File "/home/ubuntu/tornado-stm/tornado/tornado/web.py", line 1082, in compute_etag
    hasher = hashlib.sha1()

13.484s lost in aborts, 0.000s paused (130x STM_CONTENTION_WRITE_READ)
File "/home/ubuntu/pypy/lib_pypy/transaction.py", line 164, in _run_thread
    got_exception)

The first one is presumably calling into some C function from stdlib, and we get the same conflict as for time.time() above, but it can be fixed on the PyPy side, as we can be sure that computing sha1 is pure.

It is easy to hack around this one too, by just removing etag support, but if we do that, performance is much worse - only slightly faster than regular PyPy - with the top conflict being:

83.066s lost in aborts, 0.000s paused (459x STM_CONTENTION_WRITE_WRITE)
File "/home/arigo/hg/pypy/stmgc-c7/lib-python/2.7/_weakrefset.py", line 70, in __contains__
File "/home/arigo/hg/pypy/stmgc-c7/lib-python/2.7/_weakrefset.py", line 70, in __contains__

Comment by Armin: It is unclear why this happens so far. We'll investigate...

The second conflict (without etag tweaks) originates in the transaction module, from this piece of code:

while True:
    self._do_it(self._grab_next_thing_to_do(tloc_pending),
                got_exception)
    counter[0] += 1

Comment by Armin: This is a conflict in the transaction module itself; ideally, it shouldn't have any, but in order to do that we might need a little bit of support from RPython or C code. So this is pending improvement.

The Tornado modification used in this blog post is based on 3.2.dev2. As of now, the latest version is 4.0.2, and if we apply the same changes to that version, then we no longer get any scaling on this benchmark, and there are no conflicts that take any substantial time.

Comment by Armin: There are two possible reactions to a conflict. We can either abort one of the two threads, or (depending on the circumstances) just pause the current thread until the other one commits, after which the thread will likely be able to continue. The tool ``print_stm_log.py`` did not report conflicts that cause pauses. It has been fixed very recently. Chances are that on this test it would report long pauses and point to locations that cause them.

Part 2: a more interesting benchmark: A-star

Although we have seen that PyPy STM is not all moonlight and roses, it is interesting to see how it works on a more realistic application.

astar.py is a simple game where several players move on a map (represented as a list of lists of integers), build and destroy walls, and ask the server to give them shortest paths between two points using A-star search, adapted from an ActiveState recipe.

The benchmark bench_astar.py simulates players, and tries to put the main load on A-star search, but also does some wall building and destruction. There are no locks around map modifications, as normal tornado executes all callbacks serially, and we can keep this guarantee with the atomic blocks of PyPy STM. This is also an example of a program that is not trivial to scale to multiple cores with separate processes (assuming more interesting shared state and logic).

This benchmark is very noisy due to the randomness of client interactions (also, it might not be linear), so just lower and upper bounds for the number of requests are reported.

Impl.         req/s
PyPy 2.4      5 .. 7
CPython 2.7   0.5 .. 0.9
PyPy-STM 1    2 .. 4
PyPy-STM 4    2 .. 6

Clearly this is a very bad benchmark, but still we can see that scaling is worse and STM overhead is sometimes higher. The bulk of conflicts come from the transaction module (we have seen it above):

91.655s lost in aborts, 0.000s paused (249x STM_CONTENTION_WRITE_READ)
File "/home/ubuntu/pypy/lib_pypy/transaction.py", line 164, in _run_thread
    got_exception)

Although it is definitely not ready for production use, you can already try to run things, report bugs, and see what is missing in user-facing tools and libraries.

Benchmarks setup:

Categories: FLOSS Project Planets

Drupal Commerce: Commerce 2.x Stories: Taxes

Planet Drupal - Thu, 2014-11-20 04:40

"Why doesn’t Commerce/Magento/$otherSolution handle my taxes properly? That’s the most basic feature!” - many people, often.

When it comes to eCommerce, nobody likes taxes. We expect taxes to “just work”, so we can finish our projects and get on with our lives. At the same time, no other topic is as complex.

Selling online puts us at the crossroads of different (and sometimes conflicting) laws with many rules and even more exceptions. All eCommerce systems provide the basic tools (“Define your tax rates and specify when to apply them”) and make the site developer responsible for tax compliance. The developer usually passes that responsibility to the client, sometimes implicitly. The client consults an accountant, sometimes. But the buck has to stop somewhere, and it often comes back to the developer, 5 days after launch.

As taxes become more and more complex, there is a need for smarter tax handling, where the application does more and the site administrator less. In the Commerce 1.x lifecycle we built the commerce_vat module to handle increasingly complex VAT taxes. For 2.x, we’re bringing this approach back into core, and releasing several libraries to share the solution with the wider PHP community.

Read more...

Categories: FLOSS Project Planets

Stefano Zacchiroli: Thoughts on the Init System Coupling GR

Planet Debian - Thu, 2014-11-20 03:59
on perceived hysteria and silent sanity

As you probably already know by now, the results of the Debian init system coupling general resolution (GR) look like this:

Init system coupling GR: results (arrow from A to B means that voters preferred A to B by that margin)

Some random thoughts about them:

  • The turnout has been the highest since the 2010 DPL elections and the 2nd highest among all GRs (!= DPL elections) ever. The highest among all GRs dates back to 2004 and was about dropping non-free. In absolute terms this vote scores even better: it is the GR with the highest number of voters ever.

    Clearly there was a lot of interest within the project about this vote. The results appear to be as representative of the views of project members as we have been able to get in the second half of Debian history.

  • There is a total ordering of options (which is not always the case with our voting system). Starting with the winning option, each option in the results beats every subsequent option. The winning option ("General resolution is not required") beats the runner-up ("Support for other init systems is recommended, i.e., "you SHOULD NOT require a specific init") by a large margin: 100 votes, ~20.7% of the voters. The winning option wins over further options by increasingly large margins: 173 votes (~35.8%) against "Packages may require specific init systems if maintainers decide" (the MAY option); 176 (~36.4%) against "Packages may not require a specific init system" (the MUST NOT option); 263 (~54.5%) against "Further discussion" (the "let's keep on flaming" option).

While judging from Debian mailing lists and news sites you might have gotten the impression that the project was evenly split on init system matters, at least w.r.t. the matter on the ballot, that doesn't seem to be the case.

  • The winning option is not as crazy as its label might imply (voting to declare that the vote was not required? WTH?). What the winning option actually says is more articulated than that; quoting from the ballot (highlight mine):

    Regarding the subject of this ballot, the Project affirms that the procedures for decision making and conflict resolution are working adequately and thus a General Resolution is not required.

With this GR the Debian Project affirms that the procedures we have used to decide the default init system for Jessie and to arbitrate the ensuing conflicts are just fine. People might flame and troll debian-devel as much as they want (actually, I'm pretty sure we would all like them to stop, but that matter wasn't on the ballot, so you'll have to take my word for it). People might write blog posts and make headlines corroborating the impression that Debian is still being torn apart by ongoing init system battles. But this vote says instead that the large majority of project members thinks our decision making and conflict-arbitration procedures, which most prominently include the Debian Technical Committee, have served us "adequately" well over the past troubled months.

    That of course doesn't mean that everyone in Debian is happy about every single recent decision, otherwise we wouldn't have had this GR in the first place. But it does mean that we consider our procedures good enough to (a) avoid getting in their way with a project-wide vote, and (b) keep on trusting them for the foreseeable future.

  • [ It is not the main focus of this post, but if you care specifically about the implications of this GR on systemd adoption in Debian, I recommend reading this excellent GR commentary by Russ Allbery. ]

My take home message is that we are experiencing a huge gap between the public perception of the state of Debian (both from within and from without the project) and the actual beliefs of the silent majority of people that make Debian with their work, day after day.

In part this is old news. The most "senior" members of the project will remember that the topic of "vocal minorities vs silent majority" was a recurrent one in Debian 10+ years ago, when flames were periodically ravaging the project. Since then Debian has grown a lot, though, and we are now part of a much larger and varied ecosystem. We are now at a scale at which there are plenty of FOSS "mass-media" covering daily what happens in Debian, inducing feedback loops with our own perception of ourselves which we do not fully grok yet. This is a new factor in the perception gap. This situation is not intrinsically bad, nor is there blame to assign here: after all, influential bloggers, news sites, etc., just do their job. And their attention also testifies to the huge interest that there is around Debian and our choices.

But we still need to adapt and learn to take perceived hysteria with a pinch (or two) of salt. It might just be time for our decennial check-up. Time to remind ourselves that our ways of doing things might in fact still be much more sane than sometimes we tend to believe.

We went on 10+ years ago, after monumental flames. It looks like we are now ready to move on again, putting The Era of the Great systemd Hysteria™ behind us.

Categories: FLOSS Project Planets

Stefan Behnel: lxml christmas funding

Planet Python - Thu, 2014-11-20 01:59

My bicycle was recently stolen and since I now have to get a new one, here's a proposal.

From today on until December 24th, I will divert all donations that I receive for my work on lxml to help in restoring my local mobility.

If you do not like this 'misuse', do not donate in this time frame. I do hope, however, that some of you like the idea that the money they give for something they value is used for something that is of value to the receiver.

All the best -- Stefan

Categories: FLOSS Project Planets

Matthew Palmer: Multi-level prefix delegation is not a myth! I've seen it!

Planet Debian - Thu, 2014-11-20 00:00

Unless you’ve been living under a firewalled rock, you know that IPv6 is coming. There’s also a good chance that you’ve heard that IPv6 doesn’t have NAT. Or, if you pay close attention to the minutiae of IPv6 development, you’ve heard that IPv6 does have NAT, but you don’t have to (and shouldn’t) use it.

So let’s say we’ll skip NAT for IPv6. Fair enough. However, let’s say you have this use case:

  1. A bunch of containers that need Internet access…

  2. That are running in a VM…

  3. On your laptop…

  4. Behind your home router!

For IPv4, you’d just layer on the NAT, right? While SIP and IPsec might have kittens trying to work through three layers of NAT, for most things it’ll Just Work.

In the Grand Future of IPv6, without NAT, how the hell do you make that happen? The answer is “Prefix Delegation”, which allows routers to “delegate” management of a chunk of address space to downstream routers, and allows those downstream routers to, in turn, delegate pieces of that chunk to routers further downstream.

In the case of our not-so-hypothetical containers-in-VM-on-laptop-at-home scenario, it would look like this:

  1. My “border router” (a DNS-323 running Debian) asks my ISP for a delegated prefix, using DHCPv6. The ISP delegates a /56 [1]. One /64 out of that is allocated to the network directly attached to the internal interface, and the rest goes into “the pool”, as /60 blocks (so I’ve got 15 of them to delegate, if required).

  2. My laptop gets an address on the LAN between itself and the DNS-323 via stateless auto-addressing (“SLAAC”). It also uses DHCPv6 to request one of the /60 blocks from the DNS-323. The laptop uses one /64 from that block as the address space for the “virtual LAN” (actually a Linux bridge) that connects the laptop to all my VMs, and puts the other 15 /64 blocks into a pool for delegation.

  3. The VM that will be running the set of containers under test gets an address on the “all VMs virtual LAN” via SLAAC, and then requests a delegated /64 to use for the “all containers virtual LAN” (another bridge, this one running on the VM itself) to which the containers will each connect.

Now, almost all of this Just Works. The current releases of ISC DHCP support prefix delegation just fine, and a bit of shell script plumbing between the client and server seals the deal – the client needs to rewrite the server’s config file to tell it the netblock from which it can delegate.

Except for one teensy, tiny problem – routing. When the DHCP server delegates a netblock to a particular machine, the routing table needs to get updated so that packets going to that netblock actually get sent to the machine the netblock was delegated to. Without that, traffic destined for the containers (or the VM) won’t actually make it to its destination, and a one-way Internet connection isn’t a whole lot of use.

I cannot understand why this problem hasn’t been tripped over before. It’s absolutely fundamental to the correct operation of the delegation system. Some people advocate running a dynamic routing protocol, but that’s a sledgehammer to crack a nut if ever I saw one.

Actually, I know this problem has been tripped over before, by OpenWrt. Their solution, however, was to use a PHP script to scan logfiles and add routes. Suffice it to say, that wasn’t an option I was keen on exploring.

Instead, I decided to patch ISC DHCP so that the server can run an external script to add the necessary routes, and perhaps modify firewall rules – and also to reverse the process when the delegation is released (or expired). If anyone else wants to play around with it, I’ve put it up on Github. I don’t make any promises that it’s the right way to do it, necessarily, but it works, and the script I’ve added in contrib/prefix-delegation-routing.rb shows how it can be used to good effect. By the way, if anyone knows how pull requests work over at ISC, drop me a line. From the look of their website, they don’t appear to accept (or at least encourage) external contributions.
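To give a flavour of what such a hook can look like, here is an illustrative sketch (the argument names and order are hypothetical; see contrib/prefix-delegation-routing.rb in the repository for the real interface). On delegation it installs a route for the prefix via the requesting client; on release it removes it again:

#!/usr/bin/python
import subprocess
import sys

# Hypothetical invocation: routing-hook.py {add|del} PREFIX NEXTHOP IFACE
action, prefix, nexthop, iface = sys.argv[1:5]

subprocess.check_call(['ip', '-6', 'route',
                       'add' if action == 'add' else 'del',
                       prefix, 'via', nexthop, 'dev', iface])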

So, that’s one small patch for DHCP, one giant leap for my home network.

  [1] The standard recommendation is for ISPs to delegate each end-user customer a /48 (giving 65,536 /64 networks); my ISP is being a little conservative in “only” giving me 256 /64s. It works fine for my purposes, but if you’re an ISP getting set for deploying IPv6, make life easy on your customers and give them a /48.

Categories: FLOSS Project Planets

Russ Allbery: Interpreting the Debian init system GR

Planet Debian - Wed, 2014-11-19 23:42

I originally posted this in a thread on debian-private, but on further reflection it seems appropriate for a broader audience. So I'm posting it here, as well as on debian-project.

There is quite a lot of discussion in various places about what the recent GR result means. Some are concluding that systemd won in some way that implies Debian is not going to support other init systems, or at least that support for other init systems is in immediate danger. A lot of that analysis concludes that the pro-systemd "side" in Debian won some sort of conclusive victory.

I have a different perspective.

I think we just had a GR in which the Debian developer community said that we, as a community, would like to work through all of the issues around init systems together, as a community, rather than having any one side of the argument win unambiguously and impose its views on those who disagree.

There were options on the ballot that clearly required loose coupling and options that clearly required tight coupling. The top two options did neither of those things. The second-highest option said, effectively, that we should feel free to exercise our technical judgement for our own packages, but should do so with an eye to enabling people to make different choices, and should merge their changes and contributions where possible. The highest option said that we don't even want to say that, and would instead prefer to work this whole thing out through discussion, respect, consensus, and mutual support, without giving *anyone* a clear mandate or project-wide blessing for their approach.

In other words, the way I choose to look at this GR is that the project as a whole just voted to take away the sticks that we were using to beat each other with.

In a way, we just chose the *hardest* option. We didn't make a simplifying technical decision that provides clear guidance to everyone. Instead, we made a complicating social decision that says that, sorry, there's no short cut to avoid having to talk to each other, respect each other's views, and try to reach workable collaborative compromises. Even though it's really hard, even though everyone is raw and upset, that's what the project as a whole is asking us to do.

Are we up to the challenge?

Categories: FLOSS Project Planets

Vasudev Ram: Find if a Python string is an anagram of a palindrome

Planet Python - Wed, 2014-11-19 22:05

By Vasudev Ram

I saw this interesting thread on Hacker News some 10-odd days ago:

HN: Rust and Go

Apart from being generally of interest, it had a sub-thread that was about finding if a given string is an anagram of a palindrome. A few people replied in the thread, giving solutions in different languages, such as Scala, JavaScript, Go and Python.

Some of the Python solutions were already optimized to some extent (e.g. using collections.Counter and functools.partial - it was a thread about the merits of programming languages, after all), so I decided to write one or two simple or naive solutions instead, and then see if those could be optimized some, maybe differently from the solutions in the HN thread.
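For reference, a condensed Counter-based check along the lines of those optimized solutions might look like this (my sketch, not code quoted from the thread):

from collections import Counter

def anagram_of_palindrome_counter(s):
    # A string can be rearranged into a palindrome if and only if
    # at most one character occurs an odd number of times.
    return sum(count % 2 for count in Counter(s).values()) <= 1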

Here is one such simple solution to the problem, of finding out if a string is an anagram of a palindrome. I've named it iaop_01.py (for Is Anagram Of Palindrome, version 01). The solution includes a scramble() function, to make an anagram of a palindrome, so that we have input for the test, and a main function to run the rest of the code to exercise things, for both the case when the string is an anagram of a palindrome, and when it is not.

The logic I've used is this (in pseudocode, even though Python is executable pseudocode, ha ha):
For each character c in the string s:
If c already occurs as a key in dict char_counts,
increment its count (the value corresponding to the key),
else set its count to 1.
After the loop, the char_counts dict will contain the counts
of all the characters in the string, keyed by character.
Then we check how many of those counts are odd.
If at most one count is odd, the string is an anagram of
a palindrome, else not.

And here is the Python code for iaop_01.py:
"""
Program to find out whether a string is an anagram of a palindrome.
Based on the question posed in this Hacker News thread:
https://news.ycombinator.com/item?id=8575589
"""

from random import shuffle

def anagram_of_palindrome(s):
char_counts = {}
for c in s:
char_counts[c] = char_counts.get(c, 0) + 1
odd_counts = 0
for v in char_counts.values():
if v % 2 == 1:
odd_counts += 1
return odd_counts = 1

def scramble(s):
lis = [ c for c in s ]
shuffle(lis)
return ''.join(lis)

def main():
# First, test with a list of strings which are anagrams of palindromes.
aops = ['a', 'bb', 'cdc', 'foof', 'madamimadam', 'ablewasiereisawelba']
for s in aops:
s2 = scramble(s)
print "{} is an anagram of palindrome ({}): {}".format(s2, \
s, anagram_of_palindrome(s2))
print
# Next, test with a list of strings which are not anagrams of palindromes.
not_aops = ['ab', 'bc', 'cde', 'fool', 'padamimadam']
for s in not_aops:
s2 = scramble(s)
print "{} is an anagram of a palindrome: {}".format(s2, \
anagram_of_palindrome(s2))

main()
And here is the output of running it:

$ python iaop_01.py
a is an anagram of palindrome (a): True
bb is an anagram of palindrome (bb): True
ccd is an anagram of palindrome (cdc): True
ffoo is an anagram of palindrome (foof): True
daadmamimma is an anagram of palindrome (madamimadam): True
srewaeaawbeilebials is an anagram of palindrome (ablewasiereisawelba): True

ba is an anagram of a palindrome: False
bc is an anagram of a palindrome: False
dec is an anagram of a palindrome: False
loof is an anagram of a palindrome: False
ampdaaiammd is an anagram of a palindrome: False
One simple optimization that can be made is to add these two lines:

if odd_counts > 1:
    return False

just after the line "odd_counts += 1". What that does is stop early if it finds that the number of characters with odd counts is greater than 1, even if there are many more counts to be checked, since our rule can no longer be satisfied.
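With that change, the counting loop in anagram_of_palindrome() becomes:

    odd_counts = 0
    for v in char_counts.values():
        if v % 2 == 1:
            odd_counts += 1
            # More than one odd count: cannot be an anagram of a palindrome.
            if odd_counts > 1:
                return False
    return odd_counts <= 1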

If I think up more optimizations to the above solution, or any alternative solutions, I'll show them in a future post. Update: Since it is on a related topic, you may also like to check out this other post I wrote a while ago: A simple text file indexing program in Python.

BTW, the two longer palindromes are lower-cased, scrunched-together versions of these well-known palindromes:

Madam, I'm Adam.

Able was I ere I saw Elba

(attributed to Napoleon).

- Vasudev Ram - Dancing Bison Enterprises

Signup for news about products from me.

Categories: FLOSS Project Planets

PreviousNext: Community gathering at DrupalCamp Melbourne

Planet Drupal - Wed, 2014-11-19 21:51

It's been a while since the last DrupalCamp in Melbourne, so the community came together recently to share what they know. Here's a brief wrap-up of the two-day event.

Categories: FLOSS Project Planets