FLOSS Project Planets

Christine Spang: PyCon 2014 retrospective

Planet Debian - Mon, 2014-04-14 12:15

PyCon 2014 happened. (Sprints are still happening.)

This was my 3rd PyCon, but my first year as a serious contributor to the event, which led to an incredibly different feel. I also came as a person running a company building a complex system in Python, and I loved having the overarching mission of what I'm building driving my approach to what I chose to do. PyCon is one of the few conferences I go to where the feeling of acceptance and at-homeness mitigates the introvert overwhelm at nonstop social interaction. It's truly a special event and community.

Here are some highlights:

  • I gave a tutorial about search, which was recorded in its entirety... if you watch this video, I highly recommend skipping the hands-on parts where I'm just walking around helping people out.
  • I gave a talk! It's called Subprocess to FFI, and you can find the video here. Through three full iterations of dry runs with feedback, I had a ton of fun preparing this talk. I'd like to give more like it in the future as I continue to level up my speaking skills.
  • Allen Downey came to my talk and found me later to say hi. Omg amazing, made my day.
  • Aux Vivres and Dieu du Ciel, amazing eats and drink with great new and old friends. Special shout out to old Debian friends Micah Anderson, Matt Zimmerman, and Antoine Beaupré for a good time at Dieu du Ciel.
  • The Geek Feminism open space was a great place to chill out and always find other women to hang with, much thanks to Liz Henry for organizing it.
  • Talking to the community from the Inbox booth on Startup Row in the Expo hall on Friday. Special thanks to Don Sheu and Yannick Gingras for making this happen; it was awesome!
  • The PyLadies lunch. Wow, was that amazing. Not only did I get to meet Julia Evans (who also liked meeting me!), but there was an amazing lineup of women telling everyone about what they're doing. This and Naomi Ceder's touching talk about openly transitioning while being a member of the Python community really show how the community walks the walk when it comes to diversity and is always improving.
  • Catching up with old friends like Biella Coleman, Selena Deckelmann, Deb Nicholson, Paul Tagliamonte, Jessica McKellar, Adam Fletcher, and even friends from the bay area who I don't see often. It was hard to walk places without getting too distracted running into people I knew; I got really good at waving and continuing on my way.

I didn't get to go to a lot of talks in person this year since my personal schedule was so full, but the PyCon video team is amazing as usual, so I'm looking forward to checking out the archive. It really is a gift to get the videos up while energy from the conference is still so high and people want to check out things they missed and share the talks they loved.

Thanks to everyone, hugs, peace out, et cetera!

Categories: FLOSS Project Planets

Rich Bowen: ApacheCon North America 2014

Planet Apache - Mon, 2014-04-14 12:03

Last week I had the honor of chairing ApacheCon North America 2014 in Denver Colorado. I could hardly be any prouder of what we were able to do on such an incredibly short timeline. Most of the credit goes to Angela Brown and her amazing team at the Linux Foundation who handled the logistics of the event.

My report to the Apache Software Foundation board follows:

ApacheCon North America 2014 was held April 7-9 in Denver, Colorado, USA. Despite the very late start, we had higher attendance than last year, and almost everyone that I have spoken with has declared it an enormous success. Attendees, speakers and sponsors have all expressed approval of the job that Angela and the Linux Foundation did in the production of the event. Speaking personally, it was the most stress-free ApacheCon I have ever had.

Several projects had dedicated hackathon spaces, but the main hackathon room was unfortunately well off the beaten path and went unnoticed by many attendees. In Budapest we plan to have the main hackathon space much more prominently located, in a main traffic area where it cannot be missed, as I feel that the hackathon should remain a central part of the event for its community-building opportunities.

Speaking of Budapest, on the first day of the event, we announced ApacheCon Europe, which will be held November 17-21 2014 in Budapest. The website for that is up at http://apachecon.eu/ and the CFP is open, and will close June 25, 2014. We plan to announce the schedule on July 28, 2014, giving us nearly 4 months lead time before the conference. We have already received talk submissions, and a few conference registrations. I will try to provide statistics each month between now and the conference.

As with ApacheCon NA, there will be a CloudStack Collaboration Conference co-located with ApacheCon. We are also discussing the possibility of a co-located Apache OpenOffice user-focused event on the 20th and 21st, or possibly just one day.

We eagerly welcome proposals from other projects which wish to have similar co-located events, or other more developer- or PMC-focused events like the Traffic Server Summit, which was held in Denver.

Discussion has begun regarding a venue for ApacheCon North America 2015, with Austin and Las Vegas as early favorites, but several other cities are also being considered.

I'll be posting several more things about it, because they deserve individual attention. Also, we'll be posting video and audio from the event on the ApacheCon website in the very near future.

Categories: FLOSS Project Planets

Appnovation Technologies: 12 Best Designed College Websites

Planet Drupal - Mon, 2014-04-14 11:08
Here's a look at 12 of the best designed college websites.
Categories: FLOSS Project Planets

Machinalis: Migrating data into your Django project

Planet Python - Mon, 2014-04-14 10:52

There are times when we have an existing legacy DB and need to migrate its data into our Django application. In this post I'll share a technique that we successfully applied for this.

On a big project, our client had an existing application using a MySQL DB. Our objective was to develop a new, more modern, feature-rich, Django 1.5-based version of this tool. At a certain stage of development our client requested that we migrate some of the current users' data into the new system, so we could move to a beta-testing phase.

The method we applied not only let us effectively migrate dozens of users to the new system, but also let us keep doing migrations as the application continued its development.

General description

We based our work on two very powerful Django features:

  1. Multiple databases and
  2. Integrating Django with a legacy database

So, the general procedure would be:

  1. Add a new, legacy database to your project.

  2. Create a legacy app.
    • Automatically generate the models
    • Set up a DB router.
  3. Write your migration script.

Let’s describe each step a little bit more:

1. A legacy database

We assume here that you have access to the legacy DB. In our particular case, before each migration our client would give us a MySQL dump of the legacy DB, so each time we created a fresh legacydb on our own DB server and imported the dump.

However, it doesn’t matter how you access the legacy DB as long as you can do it from Django. So, following the Multiple databases approach, you must edit the project’s settings.py and add the legacy database. For example like this:

DATABASES = {
    'default': {
        'NAME': 'projectdb',
        'ENGINE': 'django.db.backends.mysql',
        'USER': 'some_user',
        'PASSWORD': '123',
    },
    'legacy': {
        'NAME': 'legacydb',
        'ENGINE': 'django.db.backends.mysql',
        'USER': 'other_user',
        'PASSWORD': '456',
    },
}

Depending on your objectives regarding the migration, these settings can live either in your standard project settings.py file or in a different, special settings file used only during extraordinary migrations.

2. A legacy app

The general idea here is that you start a new app that will represent your legacy data. All the work (other than the settings) will be done within this app. Thus, you can keep it in a different branch (maintain the migration feature isolated) and continue the development process normally.

inspectdb

Now, the key for this step is to follow the Integrating Django with a legacy database document. By using the admin's inspectdb command, the models.py file can be generated automatically!

$ mkdir apps/legacy
$ python manage.py startapp legacy apps/legacy/
$ python manage.py inspectdb --database=legacy > apps/legacy/models.py

Anyway, as the documentation says:

This feature is meant as a shortcut, not as definitive model generation. After you run it, you’ll want to look over the generated models yourself to make customizations.

In our particular case, it worked like a charm and only cosmetic modifications were needed!

Database router

Next, a database router must be provided. It is Django’s mechanism to match objects with their original database.

Django’s default routing scheme ensures that if a database isn’t specified, all queries fall back to the default database. In our case, we will make sure that objects from the legacy app are taken from its corresponding DB (and make it read-only). An example router would be:

# Specific router to point all read-operations on legacy models to the
# 'legacy' DB.
# Forbid write-operations and syncdb.
class LegacyRouter(object):

    def db_for_read(self, model, **hints):
        """Point all operations on legacy models to the 'legacy' DB."""
        if model._meta.app_label == 'legacy':
            return 'legacy'
        return 'default'

    def db_for_write(self, model, **hints):
        """Our 'legacy' DB is read-only."""
        return False

    def allow_relation(self, obj1, obj2, **hints):
        """Forbid relations from/to Legacy to/from other app."""
        obj1_is_legacy = (obj1._meta.app_label == 'legacy')
        obj2_is_legacy = (obj2._meta.app_label == 'legacy')
        return obj1_is_legacy == obj2_is_legacy

    def allow_syncdb(self, db, model):
        return db != 'legacy' and model._meta.app_label != 'legacy'

Finally, to use the router you’ll need to add it to your settings.py file.

DATABASE_ROUTERS = ['apps.legacy.router.LegacyRouter']

Now you are ready to access your legacy data using Django’s ORM. Open the shell, import your legacy models and play around!
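For instance, a quick sanity check from python manage.py shell could look like this (the LegacyUser model name is hypothetical; use whatever inspectdb generated for your schema):

# Reads on legacy models are routed to the 'legacy' DB by LegacyRouter.
from apps.legacy.models import LegacyUser

print LegacyUser.objects.count()
print LegacyUser.objects.all()[:5]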

For a more detailed example of this technique applied, check this other blog post. It is based on Django 1.3 but still useful.

3. Your migration script

At this point you have access to the legacy data using Django’s ORM. Now it is time to write the actual migration script. There is no magic nor much automation here: you know your data model and (hopefully) the legacy DB structure. It is in your hands to create your system’s models instances and their relations.

In our case, we wrote an export.py script that we manually run from the command line whenever we need.

It’s a really good idea to perform the migration inside a single transaction. Otherwise, any error while running the migration script will let you with a partial (and possible inconsistent migration) and will force you to write complex logic to be able to resume it. The @transaction.commit_on_success decorator is a good way to achieve the desired effect. As a helpful side effect, it will also be faster to do a single commit.

Conclusions

As a general data-migration technique for Django applications, it has several advantages:

  • it allows you to migrate large amounts of data,
  • it can be used with immature or changing data models,
  • it relies on standard Django features (the ORM, multiple databases),
  • the project's testing infrastructure can be used normally,
  • it works for one-time-only migration scripts as well as for continuous-migration features,
  • it can be applied to multiple, heterogeneous data sources.

On the other hand, as usual, this is no silver bullet. The main problem is that the complexity of the task is directly proportional to the difference between the two data models. Since the actual data manipulation must be programmed manually, very different data models potentially mean a lot of work.

So, as stated at the beginning of the post: the method allowed us to successfully migrate a considerable amount of data into our system, while letting us accommodate changes as the application continued its development.

Categories: FLOSS Project Planets

Drupal Association News: Drupal Association Board Meeting this Wednesday

Planet Drupal - Mon, 2014-04-14 10:29

The month of March was pretty huge for the Association - we tackled a lot! Join us for the next Drupal Association board meeting where we will review the work we accomplished and set the stage for even more. In addition to our review of March, we'll be discussing a new Marketing Committee charter and a new Procurement Policy, and reviewing some branding updates for the Association.

Categories: FLOSS Project Planets

Craig Small: mutt ate my i key

Planet Debian - Mon, 2014-04-14 09:11

I did a large upgrade tonight and noticed there was a mutt upgrade, no biggie really… except I have for years (incorrectly?) used the “i” key when reading a specific email to jump back to the list of emails, or from pager to index in mutt speak.

Instead of my list of mails, I got “No news servers defined!” The fix is rather simple: in muttrc put

bind pager i exit

and you’re back to using the i key the wrong way again like me.


Categories: FLOSS Project Planets

Chris Lamb: Race report: Cambridge Duathlon 2014

Planet Debian - Mon, 2014-04-14 08:59

(This is my first race of the 2014 season.)


I had entered this race in 2013 and found it was effective for focusing winter training. As triathlons do not typically start until May in the UK, scheduling earlier races can be motivating in the colder winter months.

I didn't have any clear goals for the race except to blow out the cobwebs and improve on my 2013 time. I couldn't set reasonable or reliable target times after considerable "long & slow" training in the off-season, but I did want to test some new equipment and strategies, especially race pacing with a power meter, but also a new wheelset, crankset and helmet.

Preparation was both accidentally and deliberately compromised: I did very little race-specific training as my season is based around an entirely different intensity of race, but compounding this I was confined to bed the weekend before.

Sleep was acceptable in the preceding days and I felt moderately fresh on race morning. Nutrition-wise, I had porridge and bread with jam for breakfast, a PowerGel before the race, 750ml of PowerBar Perform on the bike along with a "Hydro" PowerGel with caffeine at approximately 30km.

Run 1 (7.5km)

A few minutes before the start my race number belt—the only truly untested equipment that day—refused to tighten. However, I decided that once the race began I would either ignore it or even discard it, risking disqualification.

Despite letting everyone go up the road, my first km was still too fast so I dialed down the effort, settling into a "10k" pace and began overtaking other runners. The Fen winds and drag-strip uphill from 3km provided a bit of a pacing challenge for someone used to shelter and shorter hills, but I kept a metered effort through into transition.

Time: 33:01 (4:24/km, T1: 00:47) — Last year: 37:47 (5:02/km)
Bike (40km)

Although my 2014 bike setup features a power meter, I had not yet had the chance to perform an FTP test outdoors. I was thus not able to calculate a definitive target power for the bike leg. However, data from my road bike suggested I set a power ceiling of 250W on the longer hills.

This was extremely effective in avoiding going "into the red" and compromising the second run. This lends yet more weight to the idea that a power meter in multisport events is "almost like cheating".

I was not entirely comfortable with my bike position: not only were my thin sunglasses making me raise my head more than I needed to, I found myself creeping forward onto the nose of my saddle. This is sub-optimal, even if only considering that I am not training in that position.

Overall, the bike was uneventful with the only memorable moment provided by a wasp that got stuck between my head and a helmet vent. Coming into transition I didn't feel like I had really pushed myself that hard—probably a good sign—but the time difference from last year's bike leg (1:16:11) was a little underwhelming.

Time: 1:10:45 (T2: 00:58)
Run 2 (7.5km)

After leaving transition, my legs were extremely uncooperative and I had great difficulty in pacing myself in the first kilometer. Concentrating hard on reducing my cadence as well as using my rehearsed mental cue, I managed to settle down.

The following 4 kilometers were a mental struggle rather than a physical one, modulo having to force a few burps to ease some discomfort, possibly from drinking too much or too fast on the bike.

I had planned to "unload" as soon as I reached 6km but I didn't really have it in me. Whilst I am physiologically faster compared to last year, I suspect the lack of threshold-level running over the winter meant the mental component required for digging deep will require some coaxing to return.

However, it is said that you have successfully paced a duathlon if the second run is faster than the first. On this criterion, this was a success, but it would have been a bonus to have really felt completely drained at the end of the day, if only from a neo-Calvinist perspective.

Time: 32:46 (4:22/km) / Last year: 38:10 (5:05/km)
Overall
Total time: 2:18:19

A race that goes almost entirely to plan is a bit of a paradox – there's certainly satisfaction in setting goals and hitting them without issue, but this is a gratification of slow-burning fire rather than the jubilation of a fireworks display.

However, it was nice to learn that I managed to finish 5th in my age group despite this race attracting an extremely strong field: as an indicator, the age-group athlete finishing immediately before me was seven minutes faster and the overall winner finished in 1:54:53 (!).

The race identified the following areas to work on:

  • Perform an FTP test outdoors on my time-trial bike to develop an optimum power plan.
  • Do a few more brick runs, at least to re-acclimatise to the feeling.
  • Schedule another bike fit.

Although not strictly race-related, I also need to find techniques to ensure transporting a bike on public transport is less stressful. (Full results & full 2014 race schedule)

Categories: FLOSS Project Planets

Acquia: The best kind of learning technology

Planet Drupal - Mon, 2014-04-14 08:31

Our training is hands-on, but what that means has changed through the years we’ve run Drupal training. Now you’re just as likely to see learners drawing on paper, collaborating with someone, giving a quick demo, or of course, working hard on their computers. I was reminded of this recently while looking at some photos of a client training run by our partner Cegeka, with Laurens Vandeput, Senior Drupal developer and team coach.

Categories: FLOSS Project Planets

Martijn Faassen: The Call of Python 2.8

Planet Python - Mon, 2014-04-14 07:38
Introduction

Guido recently felt he needed to re-emphasize that there will be no Python 2.8. The Python developers have been very clear for years that there will never be a Python 2.8.

http://legacy.python.org/dev/peps/pep-0404/

At the Python language summit there were calls for a Python 2.8. Guido reports:

We (I) still don't want to do a 2.8 release, and I don't want to accelerate 3.5, but I do think we should make things better for people who have to straddle Python 2 and 3 in a single codebase, by developing more tools, and by security and possibly installer updates to 2.7 (PEP 466).

At his keynote at PyCon, he said it again:

A very good thing happened in recognition of the reality that Python 2.7 is still massively popular: Guido changed the end-of-life date for Python 2.7 from 2015 to 2020. In the same change he felt he should repeat that there will be no Python 2.8:

+There will be no Python 2.8.

The call for Python 2.8 is strong. Even Guido feels it!

People talk about a Python 2.8, and are for it, or, like Guido, against it, but rarely talk about what it should be. So let's actually have that conversation.

Why talk about something that will never be? Because we can't call for something, nor reject it, if we don't know what it is.

What is Python 2.8 for?

Python 2.8 could be different things. It could be a Python 2.x release that reduces some pain points and adds features for Python 2 developers, independently of what's going on in Python 3. It makes sense, really: we haven't had a new Python 2 feature release since 2010 now. Those of us with existing large Python 2 codebases haven't benefited from the work the language developers have done in those years. Even polyglot libraries that support Python 2 and 3 both can't use the new features, so are also stuck with a 2010 Python. Before Python 2.7, the release cycle of Python had seen a new compatible release every 2 years or less. The reality of Python for many of its users is that there has been no feature update of the language for years now.

But I don't want to talk about that. I want to talk about Python 2.8 as an incremental upgrade path to Python 3. If we are going to add features to Python 2, let's take them from Python 3. I want to talk about bringing Python 2.x closer to Python 3. Python 2 might never quite reach Python 3 parity, but it could still help a lot if it can get closer incrementally.

Why an incremental upgrade?

In the discussion about Python 3 there is a lot of discussion about the need to port Python libraries to Python 3. This is indeed important if you want the ability to start new projects on Python 3. But many of us in the trenches are working on large Python 2 code bases. This isn't just maintenance. A large code base is alive, so we're building new features in Python 2.

Such a large Python codebase is:

  • Important to some organization. Important enough for people to actually pay developers money to work on Python code.
  • Cannot be easily ported in a giant step to Python 3, even if all external open source libraries are ported.
  • Porting would not see any functional gain, so the organization won't see it as a worthwhile investment.
  • Porting would entail bugs and breakages, which is what the organization would want to avoid.

You can argue that I'm overstating the risks of porting. But we need to face it: many codebases written in Python 2 have low automatic test coverage. We don't like to talk about it because we think everybody else is better at automated testing than we are, but it's the reality in the field.

We could say, fine, they can stay on Python 2 forever then! Well, at least until 2020. I think this would be unwise, as these organizations are paying a lot of developers money to work on Python code. This has an effect on the community as a whole. It contributes to the gravity of Python 2.

Those organizations, and thus the wider Python community, would be helped if there was an incremental way to upgrade their code bases to Python 3, with easy steps to follow. I think we can do much more to support such incremental upgrades than Python 2.7 offers right now.

Python 2.8 for polyglot developers

Besides helping Python 2 code bases go further step by step, Python 2.8 can also help those of us who are maintaining polyglot libraries, which work in both Python 2 and Python 3.

If a Python 2.8 backported Python 3 features, it means that polyglot authors can start using those features if they drop Python 2.7 support right there in their polyglot libraries, without giving up Python 2 compatibility. Python 2.8 would actually help encourage those on Python 2.7 codebases to move towards Python 3, so they can use the library upgrades.

Of course dropping Python 2.x support entirely for a polyglot library will also make that possible. But I think it'll be feasible to drop Python 2.7 support in favor of Python 2.8 much faster than it is possible to drop Python 2 support entirely.

But what do we want?

I've seen Python 3 developers say: but we've done all we could with Python 2.7 already! What do you want from a Python 2.8?

And that's a great question. It's gone unanswered for far too long. We should get a lot more concrete.

What follows are just ideas. I want to get them out there, so other people can start thinking about them. I don't intend to implement any of it myself; just blogging about it is already breaking my stress-reducing policy of not worrying about Python 3.

Anyway, I might have it all wrong. But at least I'm trying.

Breaking code

Here's a paradox: I think that in order to make an incremental upgrade possible for Python 2.x we should actually break existing Python 2.x code in Python 2.8! Some libraries will need minor adjustments to work in Python 2.8.

I want to do what the from __future__ pattern was introduced for in the first place: introduce a new incompatible feature in a release but make it optional, and then later make the incompatible feature the default.

The Future is Required

Python 2.7 lets you do from __future__ import something to get the interpreter behave a bit more like Python 3. In Python 2.8, those should be the default behavior.

In order to encourage this and make it really obvious, we may want to consider requiring these in Python 2.8. That means that the interpreter raises an error unless it has such a from __future__ import there.

If we go for that, it means you have to have this on the top of all your Python modules in Python 2.8:

  • from __future__ import division
  • from __future__ import absolute_import
  • from __future__ import print_function

absolute_import appears to be uncontroversial, but I've seen people complain about both division and print_function. If people reject Python 3 for those reasons, I want to make clear I'm not in the same camp. I believe that is confusing what is at most a minor inconvenience with a dealbreaker. I think discussion about these is pretty pointless, and I'm not going to engage in it.

I've left out unicode_literals. This is because I've seen both Nick Coghlan and Armin Ronacher argue against them. I have a different proposal. More below.

What do we gain by this measure? It's ugly! Yes, but we've made the upgrade path a lot more obvious. If an organisation wants to upgrade to Python 2.8, they have to review their imports and divisions and change their print statements to function calls. That should be doable enough, even in large code bases, and is an upgrade path a developer can do incrementally, maybe even without having to convince their bosses first. Compare that to an upgrade to Python 3.
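Concretely, every module in a Python 2.8 codebase would then have to start like this (the requirement is speculative, of course, but the imports and the behavior they enable are exactly those of Python 2.7 today):

from __future__ import division
from __future__ import absolute_import
from __future__ import print_function

# With these in effect, Python 2 already behaves like Python 3 here:
print(1 / 2)                      # true division: prints 0.5, not 0
print("spam", "eggs", sep=", ")   # print is a function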

from __future3__ import new_classes

We can't do everything with the old future imports. We want to allow more incremental upgrading. So let's introduce a new future import.

New-style classes, that is, classes that derive from object, were introduced in Python 2 many years ago, but old-style classes are still supported. Python 3 only has new-style classes. Python 2.8 can help here by making new-style classes the default. If you have from __future3__ import new_classes at the top of your module, any class definition in that module that looks like this:

class Foo:
    pass

is interpreted as a new-style class.

This might break the contract of the module, as people may subclass from this class and expect an old-style class, and in some (rare) cases this can break code. But at least those problems can be dealt with incrementally. And the upgrade path is really obvious.

__future3__?

Why did I write __future3__ and not __future__? Because otherwise we can't write polyglot code that is compatible in Python 2 and Python 3.

Python 3.4 doesn't support from __future__ import new_classes. We don't want to wait for a Python 3.5 or Python 3.6 to support this, if there is even any interest in supporting it among the Python language developers at all. Because after all, there won't be a Python 2.8.

That problem doesn't exist for __future3__. We can easily fake a __future3__ module in Python 3 without being dependent on the language developers. So polyglot code can safely use this.

from __future3__ import explicit_literals

Back to the magic moment of Nick Coghlan and Armin Ronacher agreeing.

Let's have a from __future3__ import explicit_literals.

This forces the author to be entirely explicit with string literals in the module that imports it. "foo" and 'foo' are now errors; the module won't import. Instead the module has to be explicit and use b'foo' and u'foo' everywhere.

What does that get us? It forces a developer to think about string literals everywhere, and that helps the codebase become incrementally more compatible with Python 3.
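In code, a module that opts in would read something like this (the __future3__ import is hypothetical, of course; the prefixed literals themselves are valid Python 2.7 today):

from __future3__ import explicit_literals  # hypothetical import

label = u'name'       # explicitly text
header = b'\x89PNG'   # explicitly bytes
# title = 'oops'      # bare literal: the module would refuse to import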

from __future3__ import str

This import line does two things:

  • you get a str function that creates a Python 3 str. This string has unicode text in it and cannot be combined with Python 2 style bytes and Python 3 style bytes without error (which I'll discuss later).
  • if from __future3__ import explicit_literals is in effect, a bare literal now creates a Python 3 str. Or maybe explicit_literals is a prerequisite, and from __future3__ import str should raise an error if it isn't there.

I took this idea from the Python future module, which makes Python 3 style str and bytes (and much more) available in Python 2.7. I've modified the idea as I have the imaginary power to change the interpreter in Python 2.8. Of course anything I got wrong is my own fault, not the fault of Ed Schofield, the author of the future module.

from __past__ import bytes

To ensure you still have access to Python 2 bytes (really str) just in case you still need it, we need an additional import:

from __past__ import bytes as oldbytes

oldbytes can be called with a Python 2 str, a Python 2 bytes or a Python 3 bytes. It rejects a Python 3 str. I'll talk about why it can be needed in a bit.

Yes, __past__ is another new namespace we can safely support in Python 3. It would get more involved in Python 3: it contains a forward port of the Python 2 bytes object. Python 3 bytes have less features than Python 2 bytes, and this has been a pain point for some developers who need to work with bytes a lot. Having a more capable bytes object in Python 3 would not hurt existing Python 3 code, as combining it with a Python 3 string would still result in an error. It's just an alternative implementation of bytes with more methods on it.

from __future3__ import bytes

This is the equivalent import for getting the Python 3 bytes object.

Combining Python 3 str/bytes with Python 2 unicode/str

So what happens when we somehow combine a Python 3 str/bytes with a Python 2 str/bytes/unicode? Let's think about it.

The future module by Ed Schofield forbids py3bytes + py2unicode, but supports other combinations and upcasts them to their Python 3 version. So, for instance, py3str + py2unicode -> py3str. This is a consequence of the way it tries to make Python 2 string literals work a bit like they're Python 3 unicode literals. There is a big drawback to this approach; a Python 3 bytes is not fully compatible with APIs that expect a Python 2 str, and a library that tried to use this approach would suffer API breakage. See this issue for more information on that.

I think since we have the magical power to change the interpreter, we can do better. We can make real Python 3 string literals exist in Python 2 using __future3__.

I think we need these rules:

  • py3str + py2unicode -> py3str
  • py3str + py2str: UnicodeError
  • py3bytes + py2unicode: TypeError
  • py3bytes + py2str: TypeError

So while we upcast existing Python 2 unicode strings to Python 3 str we refuse any other combination.

Why not let people combine Python 2 str/bytes with Python 3 bytes? Because the Python 3 bytes object is not compatible with the Python 2 bytes object, and we should refuse to guess and immediately bail out when someone tries to mix the two. We require an explicit Python 2 str call to convert a Python 3 bytes to a str.

This is assuming that the Python 3 str is compatible with Python 2 unicode. I think we should aim for making a Python 3 string behave like a subclass of a Python 2 unicode.

What have we gained?

We can now start using Python 3 str and Python 3 bytes in our Python 2 codebases, incrementally upgrading, module by module.

Libraries could upgrade their internals to use Python 3 str and bytes entirely, and start using Python 3 str objects in any public API that returns Python 2 unicode strings now. If you're wrong and the users of your API actually do expect str-as-bytes instead of unicode strings, you can go deal with these issues one by one, in an incremental fashion.

For compatibility you can't return Python 3 bytes where Python 2 str-as-bytes is used, so judicious use of __past__.str would be needed at the boundaries in these cases.

After Python 2.8

People who have ported their code to Python 2.8 and have turned on all the __future3__ imports incrementally will be in a better place to port their code to Python 3. But to offer a more incremental step, we can have a Python 2.9 that requires the __future3__ imports introduced by Python 2.8. And by then we might have thought of some other ways to smoothen the upgrade path.

Summary
  • There will be no Python 2.8. There will be no Python 2.8! Really, there will be no Python 2.8.
  • Large code bases in Python need incremental upgrades.
  • The upgrade from Python 2 to Python 3 is not incremental enough.
  • A Python 2.8 could help smoothen the way.
  • A Python 2.8 could help polyglot libraries.
  • A Python 2.8 could let us drop support for Python 2.7 with an obvious upgrade path in place that brings everybody closer to Python 3.
  • The old __future__ imports are mandatory in Python 2.8 (except unicode_literals).
  • We introduce a new __future3__ in Python 2.8. __future3__ because we can support it in Python 3 today.
  • We introduce from __future3__ import new_classes, mandating new style objects for plain class statements.
  • We introduce from __future3__ import explicit_literals, str, bytes to support a migration to use Python 3 style str and bytes.
  • We introduce from __past__ import bytes to be able to access the old-style bytes object.
  • A forward port of the Python 2 bytes object to Python 3 would be useful. It would error if combined with a Python 3 str, just like the Python 3 bytes does.
  • A future Python 2.9 could introduce more incremental upgrade steps. But there will be no Python 2.9.
  • I'm not going to do the work, but at least now we have something to talk about.
Categories: FLOSS Project Planets

Nick Kew: Bleeding Heart

Planet Apache - Mon, 2014-04-14 06:11

The fallout from heartbleed seems to be manifesting itself in a range of ways.  I’ve been required to set new passwords for a small number of online services, and expect I may encounter others as and when I next access them.

The main contrast seems to be between admins who tell you what’s happening, vs services that just stop working.  Contrast Apache and Google:

Apache: email arrives from the infrastructure folks: all system passwords will have to be reset.  Then a second email: if you haven’t already, you’ll have to set a new password via the “forgot my password” mechanism (which sends you PGP-encrypted email instructions).  All very smooth and maximally secure – unless some glitch has yet to manifest itself.

Google: @employer email address, which is hosted on gmail, just stopped working without explanation.  But this is the weekend, and similar things have happened before at weekends, so I ignore it.  But when it’s still not back on Monday, I try logging in with my web browser.  It allows me that, and insists I set a new password, whereupon normal imap access is also restored.  Hmmm … In the first place, no explanation or warning.  In the second place, if the password had been compromised then anyone who had it could trivially have reset it.  Bottom of the class both for insecurity and for the user experience.

There is also secondary fallout: worried users of products that link OpenSSL asking or wondering what they have to upgrade: for example, here.  For most, the answer is that you just upgrade your OpenSSL installation and then restart any services that link it (or reboot the whole system if you favour the sledgehammer approach).  Exceptions to that will be cases where you have custom builds with statically linked OpenSSL, or multiple OpenSSL installations (as might reasonably be the case on a developer’s machine).  If in doubt, restart your services and check for the OpenSSL version appearing in its startup messages: for example, with Apache HTTPD you’ll see it in the error log at startup.


Categories: FLOSS Project Planets

Future Foundries: Crochet 1.2.0, now with a better API!

Planet Python - Mon, 2014-04-14 04:52
Crochet is a library for using Twisted more easily from blocking programs and libraries. The latest version, released here at PyCon 2014, includes a much improved API for calling into Twisted from threads. In particular, a timeout is passed in - if it is hit the underlying operation is cancelled, and an exception is raised. Not all APIs in Twisted support cancellation, but for those that do (or APIs you implement) this is a really nice feature. You get high level timeouts (instead of blocking sockets' timeout-per-socket-operation) and automatic cleanup of resources if something takes too long.

#!/usr/bin/python
"""
Do a DNS lookup using Twisted's APIs.
"""
from __future__ import print_function

# The Twisted code we'll be using:
from twisted.names import client

from crochet import setup, wait_for
setup()


# Crochet layer, wrapping Twisted's DNS library in a blocking call.
@wait_for(timeout=5.0)
def gethostbyname(name):
    """Lookup the IP of a given hostname.

    Unlike socket.gethostbyname() which can take an arbitrary amount
    of time to finish, this function will raise crochet.TimeoutError
    if more than 5 seconds elapse without an answer being received.
    """
    d = client.lookupAddress(name)
    d.addCallback(lambda result: result[0][0].payload.dottedQuad())
    return d


if __name__ == '__main__':
    # Application code using the public API - notice it works in a normal
    # blocking manner, with no event loop visible:
    import sys
    name = sys.argv[1]
    ip = gethostbyname(name)
    print(name, "->", ip)
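Saved as, say, dnslookup.py (the filename is arbitrary), it runs as a perfectly ordinary blocking program, printing the hostname and its IP:

$ python dnslookup.py python.org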
Categories: FLOSS Project Planets

Web Omelette: 3 ways to prompt for user input in Drush

Planet Drupal - Mon, 2014-04-14 03:07

Drush is awesome. It makes Drupal development much easier. Not only that it comes already packed with a bunch of useful commands, but you can declare your own with great ease. So if you need to call some of your module's functionality from Drush, all you have to do is declare a simple command that integrates with it.
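For context, such a command is declared in your module's module_name.drush.inc file via hook_drush_command(); here is a minimal sketch for the example command used in this tutorial (Drush derives the callback name drush_module_name_example_command() from the commandfile and command names):

/**
 * Implements hook_drush_command().
 */
function module_name_drush_command() {
  $items = array();
  // Maps to the callback drush_module_name_example_command() below.
  $items['example-command'] = array(
    'description' => 'Says hello, asking for user input along the way.',
    'bootstrap' => DRUSH_BOOTSTRAP_DRUPAL_FULL,
  );
  return $items;
}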

In this tutorial I am going to show you how to get user feedback for such a command. I do not mean arguments or options here, but rather how you can ask for confirmation on whether or not the command should proceed as requested, and how you can ask the user to choose from a list of options. Additionally, we'll quickly look at how to get free text back from the user.

So let's dive in with an example command callback function called drush_module_name_example_command():

/**
 * Callback function for the example command.
 */
function drush_module_name_example_command() {
  // Command code we will look at.
  drush_print('Hello world!');
}

Confirmation

The first thing we'll look at is how to get the user to confirm the action. So in our case, we'll ask the user if they really want this string to be printed to the screen. Drush provides a great API for this:

if (drush_confirm('Are you sure you want \'Hello world\' printed to the screen?')) {
  drush_print('Hello world!');
}
else {
  drush_user_abort();
}

You'll notice 2 new functions. The drush_confirm() function prints a question to the screen with the intent of getting one of two answers back from the user: y or n. If the response is y, the function returns true, which means our print statement proceeds. If the answer is n, the drush_user_abort() function gets called instead. This is the recommended way to stop executing a Drush command.

Select option

Now let's see how you can make the user choose an option from a list you provide. For our super Hello world use case, we will give the user the choice to select from a list who Drush should say hello to. It can be implemented like this:

$options = array(
  'world' => 'World',
  'universe' => 'Universe',
  'planet' => 'Planet',
);

$choice = drush_choice($options, dt('Who do you want to say hello to?'));
if ($choice) {
  drush_print(dt('Hello ' . $options[$choice] . '!'));
}

So what happens above? First, we create an array called $options to store the choices. The array keys are the machine names and the values are the human-friendly versions. Then we call the drush_choice() function, to which we pass 2 arguments: the $options array and the question we ask the user.

When the command is run, this function is called and returns the machine name of the option the user chooses. We then check that this value exists and print the concatenated string to the screen, using the human-readable value extracted from the $options array with the returned key.

Free text values

A third type of user input is in the form of free text that you can ask the user to input. Of course the validation of this kind of input must be much stricter so as to not break your application somehow. But let's ask our user exactly who they want to say hello to.

$value = drush_prompt(dt('Who do you want to say hello to?'));
drush_print(dt('Hello ' . $value . '!'));

This one is very simple. When the command is run, the drush_prompt() function is called to which we pass a string of text to be displayed in the terminal. The return value is given by the user and we use that for concatenation. But do remember that this is example code only so if you do use this function, make sure you validate the user input properly.
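For instance, a minimal guard might look like this (the letters-and-spaces rule is just an illustration):

$value = drush_prompt(dt('Who do you want to say hello to?'));
// Illustrative validation: accept only letters and spaces.
if (!preg_match('/^[A-Za-z ]+$/', $value)) {
  return drush_set_error('EXAMPLE_INVALID_INPUT', dt('Letters and spaces only, please.'));
}
drush_print(dt('Hello ' . $value . '!'));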

Conclusion

So there you have it: three different ways to get user input in the terminal using Drush. The first two are the most common ones, I believe, but it's good to know the last one is also available in case we need it.

Drush safely!

Categories: FLOSS Project Planets

Bits from Debian: DPL election is over, Lucas Nussbaum re-elected

Planet Debian - Mon, 2014-04-14 02:10

The Debian Project Leader election has concluded and the winner is Lucas Nussbaum. Of a total of 1003 developers, 401 voted using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2014 page.

The new term for the project leader will start on April 17th and expire on April 17th 2015.

Categories: FLOSS Project Planets

Propeople Blog: Propeople Wins Gold at the Danish Drupal Awards

Planet Drupal - Mon, 2014-04-14 01:55

Propeople was the big winner at the first ever Danish Drupal Awards. This new competition acknowledges the agencies and companies that excel in Drupal web design and development. Propeople won gold in 5 of the 7 award categories, one in every category for which we were nominated!

Drupal agencies in Denmark were the ones who nominated, and voted for, each other (with individual companies not able to vote for themselves). It is, of course, a great recognition for the winners to have been chosen by those that make up the industry itself. As a Drupal company that started in Denmark, Propeople is incredibly proud to have received this acknowledgement and seal of approval from our colleagues in the Danish industry.

 

Propeople walked away from the ceremony with awards in the following categories: Best Drupal Website, Best Drupal Media site, Best Drupal NGO Site, Best Drupal Intranet, and Best Public Drupal Site. The last three awards were won in collaboration with Bysted, one of our sister companies who, like Propeople, is a part of the Intellecta Group. The awards bestowed upon Propeople are a testament to the quality and professionalism of our team of web specialists and Drupal experts, and we couldn’t be happier about them! See below for a video recap of the awards ceremony, and a list of the winning websites. 

 

Video of Drupal Award 2014 - Propeople

 

The Winning Websites

Best Drupal Website:
Gold Award: NFBIO.dk, created for Nordisk Film by Propeople

Best Drupal NGO Site:
Gold Award: visitcopenhagen.com, created for Wonderful Copenhagen by Propeople and Bysted

Best Drupal Intranet:
Gold Award: KK intranet, created for the Municipality of Copenhagen by Propeople and Bysted

Best Public Drupal website:
Gold Award: visitcopenhagen.com, created for Wonderful Copenhagen by Propeople and Bysted
Bronze Award: roskilde.dk, created for the Municipality of Roskilde by Propeople and Bysted

Best Drupal Media site:
Gold: NFBIO.dk, created for Nordisk Film by Propeople

If you want to learn about how Propeople can make your next project a winning website, make sure to contact us.

Tags: Propeople, Drupal, Awards, Denmark
Topics: Business & Strategy
Categories: FLOSS Project Planets

Andrew Pollock: [life] Day 76: Dora + Fever

Planet Debian - Mon, 2014-04-14 01:46

We had a bit of a rough night last night. I noticed Zoe was pretty hot when she had a nap yesterday after not really eating much lunch. She still had a mild fever after her nap, so I gave her some paracetamol (aka acetaminophen, that one weirded me out when I moved to the US) and called for a home doctor to check her ears out.

Her ears were fine, but her throat was a little red. The doctor said it was probably a virus. Her temperature wasn't so high at bed time, so I skipped the paracetamol, and she went to bed fine.

She did wake up at about 1:30am and it took me until 3am to get her back to bed. I think it was a combination of the fever and trying to phase out her white noise, but she just didn't want to sleep in her bed or her room. At 3am I admitted defeat and let her sleep with me.

She had only a slightly elevated temperature this morning, and otherwise seemed in good spirits. We were supposed to go to a family lunch today, because my sister and brother are in town with their respective families, but I figured we'd skip that on account that Zoe may have still had something, and coupled with the poor night's sleep, I wasn't sure how much socialising she was going to be up for.

My ear has still been giving me grief, and I had a home doctor check it yesterday as well, and he said the ear canal was 90% blocked. First thing this morning I called up to make an appointment with my regular doctor to try and get it flushed out. The earliest appointment I could get was 10:15am.

So we trundled around the corner to my doctor after a very slow start to the day. I got my ear cleaned out and felt like a million bucks afterwards. We went to Woolworths to order an undecorated mud slab cake, so I can try doing a trial birthday cake. I've given up on trying to do the sitting minion, and significantly scaled back to just a flat minion slab cake. That should be ready tomorrow.

The family thing was originally supposed to be tomorrow, and was only moved to today yesterday. My original plan had been to take Zoe to a free Dora the Explorer live show that was on in the Queen Street Mall.

I decided to revert back to the original plan, but by this stage it was too late to catch the 11am show, so the 1pm show was the only other option. We had a "quick" lunch at home, which involved Zoe refusing to eat the sandwich I made for her and me convincing her otherwise.

Then I got a time-sensitive phone call from a friend, and once I'd finished dealing with that, there wasn't enough time to take any form of public transport and get there in time, so I decided to just drive in.

We parked in the Myer Centre car park, and quickly made our way up to the mall, and made it there comfortably with 5 minutes to spare.

The show wasn't anything much to phone home about. It was basically just 20 minutes of someone in a giant Dora suit acting out what was essentially a typical episode of Dora the Explorer, on stage, with a helper. Zoe started out wanting to sit on my lap, but made a few brief forays down to the "mosh pit" down the front with the other kids, dancing around.

After the show finished, we had about 40 minutes to kill before we could get a photo with Dora, so we wandered around the Myer Centre. I let Zoe choose our destinations initially, and we browsed a cheap accessories store that was having a sale, and then we wandered downstairs to one of the underground bus station platforms.

After that, we made our way up to Lincraft, and browsed. We bought a $5 magnifying glass, and I let Zoe do the whole transaction by herself. After that it was time to make our way back down for the photo.

Zoe made it first in line, so we were in and out nice and quick. We got our photos, and they gave her a little activity book as well, which she thought was cool, and then we headed back down the car park.

In my haste to park and get top side, I hadn't really paid attention to where we'd parked, and we came down via different elevators than we went up, so by the time I'd finally located the car, the exit gate was trying to extract an extra $5 parking out of me. Fortunately I was able to use the intercom at the gate and tell my sob story of being a nincompoop, and they let us out without further payment.

We swung by the Valley to clear my PO box, and then headed home. Zoe spontaneously announced she'd had a fun day, so that was lovely.

We only had about an hour and a half to kill before Sarah was going to pick up Zoe, so we just mucked around. Zoe looked at stuff around the house with her magnifying glass. She helped me open my mail. We looked at some of the photos on my phone. Dayframe and a Chromecast is a great combination for that. We had a really lovely spell on the couch where we took turns to draw on her Magna Doodle. That was some really sweet time together.

Zoe seemed really eager for her mother to arrive, and kept asking how much longer it was going to be, and going outside our unit's front door to look for her.

Sarah finally arrived, and remarked that Zoe felt hot, and so I checked her temperature, and her fever had returned, so whatever she has she's still fighting off.

I decided to do my Easter egg shopping in preparation for Sunday. A friend suggested this cool idea of leaving rabbit paw tracks all over the house in baby powder, and I found a template online and got that all ready to go.

I had a really great yoga class tonight. Probably one of the best I've had in a while in terms of being able to completely clear my head.

I'm looking forward to an uninterrupted night's sleep tonight.

Categories: FLOSS Project Planets

Drupal core announcements: Drupal core security release window on Wednesday, April 16

Planet Drupal - Mon, 2014-04-14 01:18
Start: 2014-04-16 (All day) America/New_York
Organizers: David_Rothstein

The monthly security release window for Drupal 6 and Drupal 7 core will take place on Wednesday, April 16.

This does not mean that a Drupal core security release will necessarily take place on that date for either the Drupal 6 or Drupal 7 branches, only that you should prepare to look out for one (and be ready to update your Drupal sites in the event that the Drupal security team decides to make a release).

There will be no bug fix release on this date; the next window for a Drupal core bug fix release is Wednesday, May 7.

For more information on Drupal core release windows, see the documentation on release timing and security releases, and the discussion that led to this policy being implemented.

Categories: FLOSS Project Planets

Larry Garfield: The Functional PHP tour

Planet Drupal - Mon, 2014-04-14 00:35

Ever heard of functional programming? Not procedural programming, but actual functional programming. Probably, as some fancy academic thing that no one really uses, right?

Did you know you can do it in PHP, too? It's true. In fact, I'll be speaking about it four times in the next couple of weeks!


Categories: FLOSS Project Planets

Ned Batchelder: PyCon 2014

Planet Python - Sun, 2014-04-13 23:36

PyCon 2014 is over, and as usual, I loved every minute. There are a huge number of people that I know there, and about 5 different sub-communities that I feel an irrationally strong attachment to.

Some highlights:

  • I gave a talk entitled Getting Started Testing, which people seemed to like, though if you are interested, I can point out the five places I messed up...
  • Jenny turned me into a cute illustration, which was a fun surprise.
  • I was super-proud of Michelle Fullwood, who has been working on an Arabic learning tool at Boston Python project nights, and always demurred when I brought up the idea of her talking about it. But Sunday morning, she gave a kick-ass lightning talk about it!
  • I had a great meal with Kenneth Reitz, Cory Benfield, and Ian Cordasco, where we bonded over indentation, packaging, abstract syntax trees, and startup gossip.
  • Had a chat with Guido in which I explained what the word MOOC means, and introduced him to Will and Sarina, who told him about diff-cover, our tool for per-commit coverage and quality measurement.
  • I sat next to John Perry Barlow for a bit before his keynote. He was witty, erudite, inspiring, and good-natured.
  • We held some Open edX open spaces which drew a number of people. I gave a talk about it at the Education Summit, and I was pleasantly surprised at the number of people asking me about it.
  • I had conversations with tons of good people, some I already knew, and some I just met.

My head is still spinning from the high-energy four days I've had; I'm sure I'm leaving out an important high point. I just love every minute!

On the downside, I did not see as much of Montreal as I would have liked, but we'll be back for PyCon 2015, so I have a second chance!

Categories: FLOSS Project Planets

Matthew Garrett: Real-world Secure Boot attacks

Planet Debian - Sun, 2014-04-13 23:22
MITRE gave a presentation on UEFI Secure Boot at SyScan earlier this month. You should read the presentation and paper, because it's really very good.

It describes a couple of attacks. The first is that some platforms store their Secure Boot policy in a run time UEFI variable. UEFI variables are split into two broad categories - boot time and run time. Boot time variables can only be accessed while in boot services - the moment the bootloader or kernel calls ExitBootServices(), they're inaccessible. Some vendors chose to leave the variable containing firmware settings available during run time, presumably because it makes it easier to implement tools for modifying firmware settings at the OS level. Unfortunately, some vendors left bits of Secure Boot policy in this space. The naive approach would be to simply disable Secure Boot entirely, but that means that the OS would be able to detect that the system wasn't in a secure state[1]. A more subtle approach is to modify the policy, such that the firmware chooses not to verify the signatures on files stored on fixed media. Drop in a new bootloader and victory is ensured.

But that's not a beautiful approach. It depends on the firmware vendor having made that mistake. What if you could just rewrite arbitrary variables, even if they're only supposed to be accessible in boot services? Variables are all stored in flash, connected to the chipset's SPI controller. Allowing arbitrary access to that from the OS would make it straightforward to modify the variables, even if they're boot time-only. So, thankfully, the SPI controller has some control mechanisms. The first is that any attempt to enable the write-access bit will cause a System Management Interrupt, at which point the CPU should trap into System Management Mode and (if the write attempt isn't authorised) flip it back. The second is to disable access from the OS entirely - all writes have to take place in System Management Mode.

The MITRE results show that around 0.03% of modern machines enable the second option. That's unfortunate, but the first option should still be sufficient[2]. Except the first option relies on the SMI actually firing. And, conveniently, Intel's chipsets have a bit that allows you to disable all SMI sources[3], and then have another bit to disable further writes to the first bit. Except 40% of the machines MITRE tested didn't bother setting that lock bit. So you can just disable SMI generation, remove the write-protect bit on the SPI controller and then write to arbitrary variables, including the SecureBoot enable one.

This is, uh, obviously a problem. The good news is that this has been communicated to firmware and system vendors and it should be fixed in the future. The bad news is that a significant proportion of existing systems can probably have their Secure Boot implementation circumvented. This is pretty unsurprising - I suggested that the first few generations would be broken back in 2012. Security tends to be an iterative process, and changing a branch of the industry that's historically not had to care into one that forms the root of platform trust is a difficult process. As the MITRE paper says, UEFI Secure Boot will be a genuine improvement in security. It's just going to take us a little while to get to the point where the more obvious flaws have been worked out.

[1] Unless the malware was intelligent enough to hook GetVariable, detect a request for SecureBoot and then give a fake answer, but who would do that?
[2] Impressively, basically everyone enables that.
[3] Great for dealing with bugs caused by YOUR ENTIRE COMPUTER BEING INTERRUPTED BY ARBITRARY VENDOR CODE, except unfortunately it also probably disables chunks of thermal management and stops various other things from working as well.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppBDT 0.2.3

Planet Debian - Sun, 2014-04-13 20:37
A new release of the RcppBDT package is now on CRAN.

Several new modules were added; the package can now work on dates, date durations, "ptime" (aka posix time), and timezones. Most interesting may be the fact that ptime is configured to use 96 bits. This allows a precise representation of dates and times down to nanoseconds, and permits date and time calculations at this level.

The complete NEWS entry is below:

Changes in version 0.2.3 (2014-04-13)
  • New module 'bdtDt' replacing the old 'bdtDate' module in a more transparent style using a local class which is wrapped, just like the three other new classes do

  • New module 'bdtTd' providing date durations which can be added to dates.

  • New module 'bdtTz' providing time zone information such as offset to UTC, amount of DST, abbreviated and full timezone names.

  • New module 'bdtDu' using 'posix_time::duration' for time durations types

  • New module 'bdtPt' using 'posix_time::ptime' for posix time, down to nanosecond granularity (where hardware and OS permit it)

  • Now selects C++11 compilation by setting CXX_STD = CXX11 in src/Makevars* and hence depends on R 3.1.0 or later – this gives us the long long type needed for the nanosecond high-resolution time calculations across all builds and platforms.

Courtesy of CRANberries, there is also a diffstat report for the latest release. As always, feedback is welcome and the rcpp-devel mailing list off the R-Forge page for Rcpp is the best place to start a discussion.

Update: I just learned the hard way that the combination of 32-bit OS, g++ at version 4.7 or newer and a Boost version of 1.53 or 1.54 does not work with this new upload. Some Googling suggests that this ought to be fixed in Boost 1.54; seemingly it isn't as our trusted BH package with Boost headers provides that very version 1.54. However, the Googling also suggested a quick two-line fix which I just committed in the Github repo. A new BH package with the fix may follow in a few days.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets