FLOSS Project Planets

Vincent Sanders: The care of open source creatures

Planet Debian - Thu, 2014-11-13 17:32
A mini Debian conference happened at the weekend in Cambridge at which I was asked to present. Rather than go with an old talk I decided to try something new. I attempted to cover the topic of application life cycle for open source projects.

The presentation abstract tried to explain this:
A software project that is developed by more than a single person starts requiring more than just the source code. From revision control systems through to continuous integration and issue tracking, all these services need deploying and maintaining.

This presentation takes a look at what services a project ought to have, what options exist to fulfil those requirements, and a practical look at an open source project's actual implementation.

I presented on Sunday morning but still got a good audience, and I am told I was not completely dreadful. The talk was recorded and is publicly available along with all the rest of the conference presentations.

Unfortunately due to other issues in my life right now I did not prepare well enough in advance and my slide deck was only completed on Saturday so I was rather less familiar with the material than I would have preferred.

The rest of the conference was excellent and I managed to see many of the presentations on a good variety of topics without an overwhelming attention to Debian issues. My youngest son brought himself along on both days and "helped" with front desk. He was also the only walk out in my presentation, he insists it was just because he "did not understand a single thing I was saying" but perhaps he just knew who the designated driver was.

I would like to thank everyone who organised and sponsored this event for an enjoyable weekend, and I look forward to the next one.
Categories: FLOSS Project Planets

Lightweight Project Management

Planet KDE - Thu, 2014-11-13 16:52

Hi, my name is Aurélien and I have a problem: I start too many side projects. In fact my problem is even worse: I don't plan to stop running them, or creating new ones.

Most of those projects are tools I created to fill a personal need, but a few of them evolved to the point where I believe they can be useful to others. I refrain from talking about them, however, because I don't have the time to turn them into proper projects: creating a home page for them, doing regular releases and so on. This means they only exist as git repositories and end up staying unknown, unless I bump into someone who could benefit from one of them, at which point I mention the git repository.

Running software from git repositories is not always a great experience though: depending on how a project is managed, upgrading to the latest content can be a frustrating game of hit-and-miss if one cannot rely on the "master" branch being stable. I don't want others to experience random regressions. To address this, I decided that starting today, I will now run such potentially-useful-to-someone-else side-projects using what I am going to pompously call my "Lightweight Project Management Policy":

  • The "master" branch is always stable. You are encouraged to run it.

  • There are no "release" branches and no manually created release archives, but there may be release tags if the need arises.

  • All development happens in the "dev" branch or in topic branches which are then merged into "dev".

  • To avoid regressions, code inside the "dev" branch does not get merged into "master" until it has received at least three days of real usage.

  • The project homepage is the README.md file at the root of the git repository.

  • The policy is mentioned in the README.md.

Any project managed with this policy should thus always be usable as long as you stick with the "master" branch, and it should not take me too much time to keep them alive.

In the next weeks, I am going to "migrate" some of my projects to this policy. Once they are ready, I'll blog about them. Who knows, maybe you will find some of them useful?

Categories: FLOSS Project Planets

Commerce Guys: Is your Drupal site protected?

Planet Drupal - Thu, 2014-11-13 14:47

On October 15th a new version of Drupal core was published (see details of this fix), so naturally everyone is wondering: How do I protect my site?

How Updates Work in Drupal

Drupal is open source software managed by a community made up of all kinds of experts and hobbyists. Community members who manage security specialize in the processing and verification of all modules hosted on drupal.org and the core of Drupal itself. This super-smart team has a long history in Drupal and a vast understanding of the core code, its history and its planned future. 

They are in charge of analyzing the existing application to protect it from malicious threats, regardless of their origins. When an issue is detected, they evaluate its impact and urgency in order to determine an appropriate mode of communication that meets the needs of the community. This usually means that in the event of a risk, an update is issued on one of the pre-planned bi-weekly release dates.

The security team works independently and regularly offers updates to the modules and Drupal core. Below are some ways you can follow these updates to keep your site secure and up to date.

The Security Alerts

Most Drupal users have an account on drupal.org. If you don’t have one, you’re missing out and you should get one immediately. From your account, you have access to the "Newsletter" tab. On this page, you are invited to subscribe to the security newsletter and be informed of updates.

Twitter

Like any self-respecting tech community, the security team is on Twitter: @drupalsecurity.

RSS

You can subscribe to two different RSS feeds of security advisories: one for Drupal core and one for contributed modules.

Application maintenance of your site

Whether you developed your site yourself or worked with an agency, once online it must be maintained. The purpose of this maintenance is not to make your site a Rolls Royce, but rather to protect it against errors and vulnerabilities, and to improve it with the new features added to Drupal core and the modules you use. You are encouraged to update early and often.

You can choose the frequency and process for updates, but the operations to be carried out are always the same: update Drupal core, update themes and modules, and test the full operation of your application before you push your updated project live. Prior to deployment, ensure you have a full backup of your codebase, your files directories, and your database in case anything goes wrong.

How do I update my site?

Several technical means are available to you to get the latest version of Drupal core, themes and modules. Whatever method you choose, you will retrieve new files to install on your production site. Here is a general summary of what to do (this protocol is only an example; please refer to your usual deployment procedure).

Starting with a copy of your site on a local environment:

  • Get the new version of files or a patch containing updates.
  • Review the changelog to see what has been changed that may affect existing functionality on your site, including any new dependencies, minor API changes, or other notes requiring manual intervention in the update process.
  • Replace the files or apply the patch. At this point updates are physically available but they are not necessarily applied on your site.
  • You may be asked to apply an "update" to the database, for example.
    • In this case, run the drush updb command or visit the update.php page on your local copy of the site. This operation applies the pending changes to your site's database.
  • To ensure that the updates have all been taken into account, clear your site's cache. Please note this may take some time and will slow down navigation on the site while the caches are rebuilt. For production sites, it is recommended to follow your usual deployment procedure.
  • Once this is done, test your site. Check that everything is working properly.

If you update a Drupal site between two very different versions of core, it is possible that some functionality could be affected. However, when updating from one release directly to the next, you should not experience major functional changes. Once you are confident with this procedure, update your site or sites following your usual process.

How to update Security SA-CORE-2014-005 - Drupal core - SQL injection

If your site has been well-maintained, the security update will be simple and have no effect on the functionality of your project. You can update the core of Drupal as you normally do using this new version: https://www.drupal.org/project/drupal

However, if you have not maintained the core of your application for some time (skipping several versions), or if, even though we do not recommend it, you made manual changes to Drupal core, we recommend that you apply only the patch containing the security fix itself, available here: https://www.drupal.org/files/issues/SA-CORE-2014-005-D7.patch

In both cases, the changes in the new version of Drupal will have no effect on the functionality of your project, because the fix only affects a single file in the database layer.

How to ensure security on my eCommerce site?

Security is a key issue for an eCommerce website, and it is your duty as a merchant to maintain a safe site for your users. To ensure the security of your site, you must first perform regular Drupal core updates, security-related or not, or suffer the consequences.

Then, regularly update the modules you use. In some cases, this may affect the functionality of your site, and must be treated with kid gloves.

In any case, to make these updates, please refer to the standard procedure for updating your site that you have set up with your agency or web host, or take advantage of Platform.sh to easily update your site and test with confidence.

How Commerce Guys ensures the security of your projects

Subscribers of our Drupal Application Support and Commerce Application Support programs have seen first hand how we can help protect your sites. We patched our customers' sites immediately, and 100% of them were protected whether they hosted with us or not.

Our Platform.sh subscribers benefited from the ability to use a “Drush make” driven workflow to manage the codebase for their sites. This workflow has the advantage of managing the versions of Drupal core and contributed themes and modules on your site through a single configuration file that contains a list of elements that make up your site. Platform.sh uses this file to create and deploy your site by downloading modules and the core of Drupal, making updates fast and easy.

By creating a Drush make file, you can retrieve the latest version of Drupal, with the security patch, automatically. You save maintenance time and reduce the potential for errors.

In addition to ensuring the stability of your hosting, Platform.sh blocked incoming HTTP requests for applications that had not applied the patch. Therefore, only stable sites were available on Platform.sh, and owners of unprotected sites immediately knew that action had to be taken.

Read more about this protective block here.

If you want to know more about updating Drupal, the following links will help you learn more:

Categories: FLOSS Project Planets

Joey Hess: on leaving

Planet Debian - Thu, 2014-11-13 13:59

I left Debian. I don't really have a lot to say about why, but I do want to clear one thing up right away. It's not about systemd.

As far as systemd goes, I agree with my friend John Goerzen:

I promise you – 18 years from now, it will not matter what init Debian chose in 2014. It will probably barely matter in 3 years.

read the rest

And with Jonathan Corbet:

However things turn out, if it becomes clear that there is a better solution than systemd available, we will be able to move to it.

read the rest

I have no problem with trying out a piece of Free Software that might have abrasive authors, all kinds of technical warts, a debatable design, scope creep, etc. None of that stopped me from giving Linux a try in 1995, and I'm glad I jumped in with both feet.

It's important to be unafraid to make a decision, try it out, and if it doesn't work, be unafraid to iterate, rethink, or throw a bad choice out. That's how progress happens. Free Software empowers us to do this.

Debian used to be a lot better at that than it is now. This seems to have less to do with the size of the project, and more to do with the project having aged, ossified, and become comfortable with increasing layers of complexity around how it makes decisions. To the point that I no longer feel I can understand the decision-making process at all ... or at least, that I'd rather be spending those scarce brain cycles on understanding something equally hard but more useful, like category theory.

It's been a long time since Debian was my main focus; I feel much more useful when I'm working in a small nimble project, making fast and loose decisions and iterating on them. Recent events brought it to a head, but this is not a new feeling. I've been less and less involved in Debian since 2007, when I dropped maintaining any packages I wasn't the upstream author of, and took a year of mostly ignoring the larger project.

Now I've made the shift from being a Debian developer to being an upstream author of stuff in Debian (and other distros). It seems best to make a clean break rather than hang around and risk being sucked back in.

My mailbox has been amazing over the past week by the way. I've heard from so many friends, and it's been very sad but also beautiful.

Categories: FLOSS Project Planets

Martijn Faassen: Better REST with Morepath 0.8

Planet Python - Thu, 2014-11-13 12:28

Today I released Morepath 0.8 (CHANGES). In this release Morepath has become faster, simpler and more powerful at the same time. I like it when I can do all three in a release!

I'll get faster and simpler out of the way fast, so I can go into the "more powerful", which is what Morepath is all about.

Faster

I run this simple benchmark once every while to make sure Morepath's performance is going in the right direction. The benchmark does almost nothing: it just sends the text "Hello world" back to the browser from a view on a path.

It's still useful to try such a small benchmark, as it can help show how much your web framework is doing to send something that basic back to the browser. In July when I presented Morepath at EuroPython, I measured it. I was about as fast as Django then at this task, and was already significantly faster than Flask.

I'm pleased to report that Morepath 0.8 is 50% faster than in July. At raw performance on this benchmark, we have now comfortably surpassed Django and are leaving Flask somewhere in the distance.

Morepath is not about performance -- it's fast enough anyway, and other work will dominate in most real-world applications -- but it's nice to know.

Performance is relative of course: Pyramid for instance is still racing far ahead on this benchmark, and so is wheezy.web, the web framework whose benchmark I took and hacked up.

Simpler

Morepath 0.8 is running on a new engine: a completely refactored Reg library. Reg was originally inspired by zope.interface (which Pyramid uses), but it has since evolved almost beyond recognition into a powerful generic dispatch system.

In Reg 0.9, the dispatch system has been simplified and generalized to let you dispatch on the value of arguments as well as their classes. Reg 0.9 also lifts the restriction that you have to dispatch on all non-keyword arguments. Reg could already cache lookups to make things go faster, and this now also works for the new non-class-based dispatch.

Much of Morepath's flexibility and power is due to Reg. Morepath 0.8's view lookup system has been rewritten to make use of the new powers of Reg, making it both faster and more powerful.
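To get an intuition for class-based dispatch, here is an analogy using only the standard library (this is not Reg's API; Reg is more general, dispatching on multiple arguments and on argument values, with cached lookups):

```python
from functools import singledispatch

# functools.singledispatch picks an implementation based on the class
# of the first argument -- the same basic idea Reg generalizes.

@singledispatch
def describe(obj):
    return "unknown"

@describe.register(int)
def _(obj):
    return "an integer"

@describe.register(list)
def _(obj):
    return "a list"

print(describe(3))    # an integer
print(describe([1]))  # a list
print(describe(3.5))  # unknown
```

Reg extends this single-argument, class-only scheme into full predicate dispatch, which is what makes the view lookup described below possible.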

Enough abstract talk: let's look at what implementing a REST web service looks like in Morepath 0.8.

The Power of Morepath: REST in Morepath Scenario

Here's the scenario we are going to implement.

Say you're implementing a REST API (also known as a hypermedia API).

You want to support the URL (hostname info omitted):

/customers/{id}

When you access it with a GET request, you get JSON describing the customer with the given id, or if it doesn't exist, 404 Not Found.

There's also the URL:

/customers

This represents a collection of customers. You want to be able to GET it and get some JSON information about the customers back.

Moreover, you want to POST JSON to it that represents a new customer, to add a customer to the collection.

The customer JSON at /customers/{id} looks like this:

{
    "@id": "/customers/0",
    "@type": "Customer",
    "name": "Joe Shopper"
}

What's this @id and @type business? They're just conventions (though I took them from the JSON-LD standard). @id is a link to the customer itself, which also uniquely identifies this customer. @type describes the type of this object.

The customer collection JSON at /customers looks like this:

{
    "@id": "/customers",
    "@type": "CustomerCollection",
    "customers": ["/customers/0", "/customers/1"],
    "add": "/customers"
}

When you POST a new customer, @id is not needed, but it gets added after the POST. The response to a POST should be JSON representing the new customer we just POSTed, but now with the @id added.

Implementing this scenario with Morepath

First we define a class Customer that defines the customer. In a real-world application this is backed by some database, perhaps using an ORM like SQLAlchemy, but we'll keep it simple here:

class Customer(object):
    def __init__(self, name):
        self.id = None  # we will set it after creation
        self.name = name

Customer doesn't know anything about the web at all; it shouldn't have to.

Then there's a CustomerCollection that represents a collection of Customer objects. Again in the real world it would be backed by some database, and implemented in terms of database operations to query and add customers, but here we show a simple in-memory implementation:

class CustomerCollection(object):
    def __init__(self):
        self.customers = {}
        self.id_counter = 0

    def get(self, id):
        return self.customers.get(id)

    def add(self, customer):
        self.customers[self.id_counter] = customer
        # here we set the id
        customer.id = self.id_counter
        self.id_counter += 1
        return customer

customer_collection = CustomerCollection()
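As a quick sanity check (plain Python, no Morepath involved; the classes are repeated here so the snippet is self-contained), ids are assigned on add and get returns None for missing ids:

```python
class Customer(object):
    def __init__(self, name):
        self.id = None  # set by the collection on add
        self.name = name

class CustomerCollection(object):
    def __init__(self):
        self.customers = {}
        self.id_counter = 0

    def get(self, id):
        return self.customers.get(id)

    def add(self, customer):
        self.customers[self.id_counter] = customer
        customer.id = self.id_counter
        self.id_counter += 1
        return customer

collection = CustomerCollection()
joe = collection.add(Customer('Joe Shopper'))
print(joe.id)                    # 0
print(collection.get(0) is joe)  # True
print(collection.get(1000))      # None -- will become a 404 later
```

That None for a missing id is exactly what Morepath turns into 404 Not Found, as we'll see below.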

We register this collection at the path /customers:

@App.path(model=CustomerCollection, path='/customers')
def get_customer_collection():
    return customer_collection

We register Customer at the path /customers/{id}:

@App.path(model=Customer, path='/customers/{id}',
          converters={'id': int})
def get_customer(id):
    return customer_collection.get(id)

See the converters bit we did there? This makes sure that the {id} variable is converted from a string into an integer for you automatically, as internally we use integer ids.
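Conceptually (this is a sketch of the idea, not Morepath's internal code), a converter applies a callable to each raw path variable before your function sees it:

```python
# Path variables arrive as strings; converters turn them into the
# types your code expects. A failed conversion is what lets the
# framework answer 400 Bad Request for /customers/not_an_integer.
converters = {'id': int}
raw_variables = {'id': '42'}

converted = {name: converters[name](value)
             for name, value in raw_variables.items()}
print(converted)  # {'id': 42}

try:
    converters['id']('not_an_integer')
except ValueError:
    print('would become 400 Bad Request')
```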

We now register a dump_json that can transform the Customer object into JSON:

@App.dump_json(model=Customer)
def dump(self, request):
    return {
        '@type': 'Customer',
        '@id': self.id,
        'name': self.name
    }

Now we are ready to implement a GET (the default) view for Customer, so that /customer/{id} works:

@App.json(model=Customer)
def customer_default(self, request):
    return self

That's easy! It can just return self and let dump_json take care of making it be JSON.

Now let's work on the POST of new customers on /customers.

We register a load_json directive that can transform JSON into a Customer instance:

@App.load_json()
def load(json, request):
    if json['@type'] == 'Customer':
        return Customer(name=json['name'])
    return json

We now can register a view that handles the POST of a new Customer to the CustomerCollection:

@App.json(model=CustomerCollection, request_method='POST',
          body_model=Customer)
def customer_collection_post(self, request):
    return self.add(request.body_obj)

This calls the add method we defined on CustomerCollection before. body_obj is a Customer instance, converted from the incoming JSON. It returns the resulting Customer instance which is automatically transformed to JSON.
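Stripped of the Morepath machinery, the load_json/dump_json pair forms a round trip. Here is a self-contained sketch where load and dump are plain-function stand-ins for the registered directives:

```python
class Customer(object):
    def __init__(self, name):
        self.id = None
        self.name = name

def load(json):
    # stand-in for the @App.load_json directive
    if json.get('@type') == 'Customer':
        return Customer(name=json['name'])
    return json

def dump(customer):
    # stand-in for the @App.dump_json directive
    return {'@type': 'Customer', '@id': customer.id,
            'name': customer.name}

incoming = {'@type': 'Customer', 'name': 'Joe Shopper'}
customer = load(incoming)  # what the POST view sees as request.body_obj
customer.id = 0            # what CustomerCollection.add does
print(dump(customer))      # the JSON sent back in the response
```

The response is the incoming JSON plus the @id the collection assigned, which matches the POST behavior described in the scenario.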

For good measure let's also define a way to transform the CustomerCollection into JSON:

@App.dump_json(model=CustomerCollection)
def dump_customer_collection(self, request):
    return {
        '@id': request.link(self),
        '@type': 'CustomerCollection',
        'customers': [request.link(customer)
                      for customer in self.customers.values()],
        'add': request.link(self),
    }

request.link automatically creates the correct links to Customer instances and the CustomerCollection itself.

We now need to add a GET view for CustomerCollection:

@App.json(model=CustomerCollection)
def customer_collection_default(self, request):
    return self

We're done with our implementation. Check out a working example on GitHub. To try it out you could use a command-line tool like wget or curl, or Chrome's Postman extension, for instance.

What about HTTP status codes?

A good REST API sends back the correct HTTP status codes when something goes wrong. There's more to HTTP status codes than just 200 OK and 404 Not Found.

Now with a normal Python web framework, you'd have to go through your implementation and add checks for various error conditions, and then return or raise HTTP errors in lots of places.

Morepath is not a normal Python web framework.

Morepath does the following:

/customers and /customers/1

200 OK (if customer 1 exists)

Well, of course!

/flub

404 Not Found

Yeah, but other web frameworks do this too.

/customers/1000

404 Not Found (if customer 1000 doesn't exist)

Morepath automates this for you if you return None from the ``@App.path`` directive.

/customers/not_an_integer

400 Bad Request

Oh, okay. That's nice!

PUT on /customers/1

405 Method Not Allowed

You know about this status code, but does your web framework?

POST on /customers of JSON that does not have @type Customer
422 Unprocessable Entity

Yes, 422 Unprocessable Entity is a real HTTP status code, and it's used in REST APIs -- the GitHub API uses it, for instance. Other REST APIs use 400 Bad Request for this case. You can make Morepath do this as well.

Under the hood

Here's the part of the Morepath codebase that implements much of this behavior:

@App.predicate(generic.view, name='model', default=None,
               index=ClassIndex)
def model_predicate(obj):
    return obj.__class__

@App.predicate_fallback(generic.view, model_predicate)
def model_not_found(self, request):
    raise HTTPNotFound()

@App.predicate(generic.view, name='name', default='',
               index=KeyIndex, after=model_predicate)
def name_predicate(request):
    return request.view_name

@App.predicate_fallback(generic.view, name_predicate)
def name_not_found(self, request):
    raise HTTPNotFound()

@App.predicate(generic.view, name='request_method', default='GET',
               index=KeyIndex, after=name_predicate)
def request_method_predicate(request):
    return request.method

@App.predicate_fallback(generic.view, request_method_predicate)
def method_not_allowed(self, request):
    raise HTTPMethodNotAllowed()

@App.predicate(generic.view, name='body_model', default=object,
               index=ClassIndex, after=request_method_predicate)
def body_model_predicate(request):
    return request.body_obj.__class__

@App.predicate_fallback(generic.view, body_model_predicate)
def body_model_unprocessable(self, request):
    raise HTTPUnprocessableEntity()

Don't like 422 Unprocessable Entity when body_model doesn't match? Want 400 Bad Request instead? Just override the predicate_fallback for this in your own application:

class MyApp(morepath.App):
    pass

@MyApp.predicate_fallback(generic.view, body_model_predicate)
def body_model_unprocessable_overridden(self, request):
    raise HTTPBadRequest()

Want to have views respond to the HTTP Accept header? Add a new predicate that handles this to your app.
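The predicate/fallback mechanism can be mimicked in a few lines of plain Python. This is a deliberately simplified sketch, not Reg's real implementation: views are registered under (model, request method, body model) keys, and each predicate position has its own fallback error, checked in order:

```python
class HTTPNotFound(Exception): pass
class HTTPMethodNotAllowed(Exception): pass
class HTTPUnprocessableEntity(Exception): pass

views = {}  # (model, request_method, body_model) -> view function

def register_view(model, view, request_method='GET', body_model=object):
    views[(model, request_method, body_model)] = view

def lookup_view(model, request_method, body_model):
    # Check predicates in order; the first one that fails to match
    # determines the error, mirroring the fallbacks above.
    if not any(m == model for m, _, _ in views):
        raise HTTPNotFound()             # model predicate fallback
    if not any(m == model and rm == request_method
               for m, rm, _ in views):
        raise HTTPMethodNotAllowed()     # request_method fallback
    try:
        return views[(model, request_method, body_model)]
    except KeyError:
        raise HTTPUnprocessableEntity()  # body_model fallback

class Customer(object): pass
class CustomerCollection(object): pass

register_view(Customer, lambda self, request: 'customer GET')
register_view(CustomerCollection, lambda self, request: 'added',
              request_method='POST', body_model=Customer)
```

With these registrations, a PUT on a Customer raises HTTPMethodNotAllowed, while a POST with the wrong body model raises HTTPUnprocessableEntity, matching the status-code table above.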

Now what are you waiting for? Try out Morepath!

Categories: FLOSS Project Planets

How designers think about ‘Save as…’

Planet KDE - Thu, 2014-11-13 10:54

In a short survey we analyzed the associations evoked by a couple of newly introduced 'Save as...' icons compared to the classic floppy symbol.

Keep on reading: How designers think about ‘Save as…’

Categories: FLOSS Project Planets

Bits from Debian: DebConf15 welcomes its first nine sponsors!

Planet Debian - Thu, 2014-11-13 08:35

DebConf15 will take place in Heidelberg, Germany in August 2015. We strive to provide an intense working environment and enable good progress for Debian and for Free Software in general. We extend an invitation to everyone to join us and to support this event. As a volunteer-run non-profit conference, we depend on our sponsors.

Nine companies have already committed to sponsor DebConf15! Let's introduce them:

Our first Gold sponsor is credativ, a service-oriented company focusing on open-source software, and also a Debian development partner.

Our second Gold sponsor is sipgate, a Voice over IP service provider based in Germany that also operates in the United Kingdom (sipgate site in English).

Google (the search engine and advertising company), Farsight Security, Inc. (developers of real-time passive DNS solutions), Martin Alfke / Buero 2.0 (Linux & UNIX consultant and trainer, LPIC-2/Puppet Certified Professional) and Ubuntu (the OS supported by Canonical) are our four Silver sponsors.

And last but not least, Logilab, Netways and Hetzner have agreed to support us as Bronze-level sponsors.

Become a sponsor too!

Would you like to become a sponsor? Do you know of or work in a company or organization that may consider sponsorship?

Please have a look at our sponsorship brochure (also available in German), in which we outline all the details and describe the sponsor benefits. For instance, sponsors have the option to reach out to Debian contributors, derivative developers, upstream authors and other community members during a Job Fair and through postings on our job wall, and to showcase their Free Software involvement by staffing a booth on the Open Weekend. In addition, sponsors are able to distribute marketing materials in the attendee bags. And it goes without saying that we honour your sponsorship with visibility of your logo in the conference's videos, on our website, on printed materials, and banners.

The final report of DebConf14 is also available, illustrating the broad spectrum, quality, and enthusiasm of the community at work, and providing detailed information about the different outcomes that last conference brought up (talks, participants, social events, impact in the Debian project and the free software scene, and much more).

For further details, feel free to contact us through sponsors@debconf.org, and visit the DebConf15 website at http://debconf15.debconf.org.

Categories: FLOSS Project Planets

KDEPIM: Any More Guesses?

Planet KDE - Thu, 2014-11-13 08:31

Yesterday I posed a somewhat cryptic question: what happened in KDEPIM in late 2010 that might have caused such a dramatic spike in an unnamed (but real) metric? I want to leave the conversation running a little bit longer. So here is some further food for thought, to keep you guessing… Some things to note: The new metric is developer… Read more →

Categories: FLOSS Project Planets

Tanguy Ortolo: Re: About choice

Planet Debian - Thu, 2014-11-13 06:42

This is a reply to Josselin Mouette's blog article About choice, since his blog does not seem to accept comments¹.

Please note that this is not meant to be systemd-bashing, just a criticism based on a counter-example refutation of Josselin's implication that there is no use case better covered by SysV init: this claim is false, as there is at least one. And yes, there are probably many cases better covered by systemd; I am making no claims about that.

A use case better covered by SysV init: encrypted block devices

So, waiting for a use case better covered by SysV init? Rejoice, you will not die waiting, here is one: encrypted block devices. That case works just fine with SysV init, without any specific configuration, whereas systemd just sucks at it. There exists a way to make it work², but:

  • if systemd requires specific configuration to handle such a case, whereas SysV init does not, that means this case is better covered by SysV init;
  • that workaround does not actually work.

If you know any better, I would be glad to try it. Believe me, I like the basic principles of systemd³ and I would be glad to have it working correctly on my system.

Notes
  1. Well, it does accept comments, but marks them as spam and does not show them, which is roughly equivalent.
  2. Installing an additional piece of software, Plymouth, is supposed to make systemd work correctly with encrypted block devices. Yes, this is additional configuration, as that piece of software does not come when you install systemd, and it is not even suggested, so a regular user cannot guess it.
  3. Though I must say I hate the way it is pushed into the GNU/Linux desktop systems.
Categories: FLOSS Project Planets

Simon Wittber: Python-3 Game Server

Planet Python - Thu, 2014-11-13 05:37
I've been working on something new. I get a lot of projects where I have to add multi-user functionality. Finally I've made the jump to Python 3 and built a generic multi-user game server, so I can stop rewriting this sort of thing for every new project. It's using the most excellent asyncio framework, with aiopg for PostgreSQL support.

The server code is on GitHub, the Unity3D client is still in progress.

Those tedious functions, such as Registration, Authentication, Reset Password, Messaging, Rooms and Object storage, are solved; may I never have to write a "Reset Password" module again!

Categories: FLOSS Project Planets