FLOSS Project Planets
I published the slides of my talk "An introduction to peering in Italy - Interconnections among the Italian networks" that I presented today at the MIX-IT (the Milano internet exchange) technical meeting.
It’s been a while since I wrote a long-form blog post here, but this post on the Swrve Engineering blog is worth a read; it describes how we use SSD caching on our EC2 instances to greatly improve EBS throughput.
This post was originally shared on the Drupal Association blog.
DrupalCon Amsterdam has wrapped up, and now that we're over the jet lag, it's time to look back on one of the most successful DrupalCons to date. DrupalCon Amsterdam was the largest European DrupalCon yet, by far. Just to knock your socks off, here are some numbers:
- More than 2,300 attendees showed up to 120 sessions and nearly 100 BoFs
- 115 attendees showed up to the community summit and the business summit
- 146 training attendees, 400 trivia night attendees, and 400 Friday sprinters made the week a success
- …and through it all, we ate 1,200 stroopwafels.
The fun extended to more than just the conference -- with 211 transit passes and 56 bike rentals, attendees from over 64 countries were able to enjoy all the city of Amsterdam had to offer. What a success!
As with any DrupalCon, DrupalCon Amsterdam wouldn't have been a success without lots and lots of help from our passionate volunteers. We'd like to take a moment to send out a big THANK YOU to all of our track chairs and summit organizers:
- Pedro Cambra - Coding and Development
- Théodore Biadala - Core Conversations
- Steve Parks - Drupal Business
- Lewis Nyman and Ruben Teijeiro - Frontend
- Michael Schmid - Site Building
- Bastian Widmer - DevOps
- Cathy Theys, Ruben Teijeiro, Gábor Hojtsy and the Core Mentors - Sprints
- Adam Hill and Ieva Uzule - Onsite Volunteer Coordinators
- Baris Wanschers - Social Media
- Emma Jane Westby - Business Summit
- Morten Birch and Addison Berry - Community Summit
We also appreciate everything our core mentors did to make DrupalCon Amsterdam a hit — and it’s thanks to lots of hard work from our passionate community members that Drupal 8 is in Beta!
We hope you had as fun and exciting a time in Amsterdam as we did. For those who weren’t able to make it, and even for those who were, you can relive the fun on the Flickr stream, or catch any number of great sessions on the Drupal Association YouTube channel. And remember to mark your calendar for DrupalCon Barcelona in September 2015. See you there! Below is Holly Ross' set of slides from the Amsterdam closing session.
The datafeed API is one of the nice features of the CubicWeb framework. It makes it possible to easily build such things as a news aggregator (or even a semantic news feed reader), a LDAP importer or an application importing data from another web platform. The underlying API is quite flexible and powerful. Yet, the documentation being quite thin, it may be hard to find one's way through. In this article, we'll describe the basics of the datafeed API and provide guiding examples.
The datafeed API is essentially built around two things: a CWSource entity and a parser, which is a kind of AppObject.
The CWSource entity defines a list of URLs from which to fetch data to be imported into the current CubicWeb instance; it is linked to a parser through its __regid__. So something like the following should be enough to create a usable datafeed source:

```python
create_entity('CWSource', name=u'some name', type='datafeed', parser=u'myparser')
```
The parser is usually a subclass of DataFeedParser (from cubicweb.server.sources.datafeed). It should at least implement the two methods process and before_entity_copy. To make this easier, there are specialized parsers such as DataFeedXMLParser that already define process, so that subclasses only have to implement the process_item method.

Overview of the datafeed API
Before going into further details about the actual implementation of a DataFeedParser, it's worth having in mind a few details about the datafeed parsing and import process. This involves various players from the CubicWeb server, namely: a DataFeedSource (from cubicweb.server.sources.datafeed), the Repository and the DataFeedParser.
- Everything starts from the Repository, which loops over its sources and pulls data from each of them (this is done using a looping task which is set up upon repository startup). In the case of datafeed sources, Repository sources are instances of the aforementioned DataFeedSource class.
- The DataFeedSource selects the appropriate parser from the registry and loops over each URI defined in the respective CWSource entity, calling the parser's process method with that URI as argument (methods pull_data and process_urls of DataFeedSource).
- If the result of the parsing step is successful, the DataFeedSource will call the parser's handle_deletion method, with the URI of the previously imported entities.
- Then, the import log is formatted and the transaction committed. The DataFeedSource and DataFeedParser are connected to an import_log which feeds the CubicWeb instance with a CWDataImport per data pull. This usually contains the number of created and updated entities along with any error/warning message logged by the parser. All this is visible in a table from the CWSource primary view.
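The loop described in the steps above can be sketched schematically. This is illustrative Python of mine, not actual CubicWeb source; the class names only mirror the real DataFeedSource and DataFeedParser:

```python
# Illustrative sketch of the datafeed pull loop, NOT actual CubicWeb code.
class SketchParser:
    """Stands in for a DataFeedParser subclass."""
    def process(self, url, stats):
        # A real parser would fetch `url` here and create/update entities.
        stats['created'].add(url)

    def handle_deletion(self, seen_uris):
        # A real parser would delete entities no longer present upstream.
        self.seen = set(seen_uris)

class SketchSource:
    """Stands in for DataFeedSource: one process() call per configured URI."""
    def __init__(self, parser, urls):
        self.parser = parser
        self.urls = urls

    def pull_data(self):
        stats = {'created': set(), 'updated': set()}
        for url in self.urls:
            self.parser.process(url, stats)
        # After a successful parse, handle_deletion is called with the
        # URIs of the previously imported entities.
        self.parser.handle_deletion(stats['created'])
        return stats

source = SketchSource(SketchParser(), ['http://example.com/feed'])
stats = source.pull_data()
print(sorted(stats['created']))  # ['http://example.com/feed']
```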
So now, you might wonder what actually happens during the parser's process method call. This method takes a URL from which to fetch data and then processes each piece of data further (using a process_item method, for instance). For each data item:
- the repository is queried to retrieve or create an entity in the system source: this is done using the extid2entity method;
- this extid2entity method essentially needs two pieces of information:
- a so-called extid, which uniquely identifies an item in the distant source
- any other information needed to create or update the corresponding entity in the system source (this will later be referred to as the sourceparams);
- then, given the (new or existing) entity returned by extid2entity, the parser can perform further postprocessing (for instance, updating any relation on this entity).
In step 1 above, the parser method extid2entity in turn calls the repository method extid2eid with the current source and the extid value. If an entry in the entities table matches the specified extid, the corresponding eid (identifier in the system source) is returned. Otherwise, a new eid is created. It's worth noting that, in case the entity is to be created, it is not yet complete with respect to the data model at this point. For the entity to be completed, the source method before_entity_insertion is called. This is where the aforementioned sourceparams are used. More specifically, on the parser side, the before_entity_copy method is called: it usually just updates (using entity.cw_set() for instance) the fetched entity with any relevant information.

Case study: a news feed parser
Now we'll go through a concrete example to illustrate all those fairly abstract concepts and implement a datafeed parser which can be used to import news feeds. Our parser will create entities of type FeedArticle, whose minimal data model would be:

```python
class FeedArticle(EntityType):
    title = String(fulltextindexed=True)
    uri = String(unique=True)
    author = String(fulltextindexed=True)
    content = RichString(fulltextindexed=True, default_format='text/html')
```
Here we'll reuse the DataFeedXMLParser, not because we have XML data to parse, but because its interface fits our purpose well: it ships item-based processing (a process_item method) and it relies on a parse method to fetch raw data. The underlying parsing of the news feed resources will be handled by feedparser.

```python
class FeedParser(DataFeedXMLParser):
    __regid__ = 'newsaggregator.feed-parser'
```
The parse method is called by process; it should return a list of tuples with item information.

```python
def parse(self, url):
    """Delegate to feedparser to retrieve feed items"""
    data = feedparser.parse(url)
    return zip(data.entries)
```
Then the process_item method takes an individual item (i.e. an entry of the result obtained from feedparser in our case). It essentially defines an extid, here the uri of the feed entry (a good candidate for uniqueness), and calls extid2entity with that extid, the entity type to be created or retrieved, and any additional data useful for entity completion passed as keyword arguments. (The process_feed method call just transforms the results obtained from feedparser into a dict suitable for entity creation, following the data model described above.)

```python
def process_item(self, entry):
    data = self.process_feed(entry)
    extid = data['uri']
    entity = self.extid2entity(extid, 'FeedArticle', feeddata=data)
```
The before_entity_copy method is called before the entity is actually created (or updated) in order to give the parser a chance to complete it with any other attribute that can be set from source data (namely feedparser data in our case).

```python
def before_entity_copy(self, entity, sourceparams):
    feeddata = sourceparams['feeddata']
    entity.cw_edited.update(feeddata)
```
And that is essentially all that's needed for a simple parser. Further details can be found in the news aggregator cube. More sophisticated parsers may use other concepts not described here, such as source mappings.

Testing datafeed parsers
Testing a datafeed parser often involves pulling data from the corresponding datafeed source. Here is a minimal test snippet that illustrates how to retrieve the datafeed source from a CWSource entity and pull data from it.

```python
with self.admin_access.repo_cnx() as cnx:
    # Assuming one knows the URI of a CWSource.
    rset = cnx.execute('CWSource X WHERE X uri %(uri)s', {'uri': uri})
    # Retrieve the datafeed source instance.
    dfsource = self.repo.sources_by_eid[rset[0][0]]
    # Make sure its parser matches the expected one.
    self.assertEqual(dfsource.parser_id, '<my-parser-id>')
    # Pull data using an internal connection.
    with self.repo.internal_cnx() as icnx:
        stats = dfsource.pull_data(icnx, force=True, raise_on_error=True)
        icnx.commit()
```
The resulting stats is a dictionary containing the eids of entities created and updated during the pull. In addition, all created entities should have their cw_source relation set to the corresponding CWSource entity.

Notes
It is possible to add some configuration to the CWSource entity in the form of a string of configuration items (one per line). Noteworthy items are:
- the synchronization-interval;
- use-cwuri-as-url=no, which avoids using external URLs inside the CubicWeb instance (otherwise any link on an imported entity would point to the external source URI);
- delete-entities=[yes,no] which controls if entities not found anymore in the distant source should be deleted from the CubicWeb instance.
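As a concrete illustration, these configuration items go into the config attribute of the CWSource entity as a newline-separated string. The item names come from the list above; the values and the source name in the comment are made up for demonstration:

```python
# Build the configuration string: one item per line (illustrative values).
config = u'\n'.join([
    u'synchronization-interval=30min',
    u'use-cwuri-as-url=no',
    u'delete-entities=no',
])

# With an open connection `cnx`, the source could then be created like this
# (hypothetical names, shown for illustration only):
#
#   cnx.create_entity('CWSource', name=u'my feed', type=u'datafeed',
#                     parser=u'newsaggregator.feed-parser', config=config)

# The string splits back into key/value items:
items = dict(line.split('=', 1) for line in config.splitlines())
print(items['delete-entities'])  # no
```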
This event is also on our meetup page:
As always, it will be the 3rd Wednesday of the month. That's the 15th of October at 6pm.
When: 3rd Wednesday of the month, 6pm
October 15, 2014
Where: Inmar, 635 Vine St,
Room 1130H "Dash"
This will be at the Inmar building in downtown Winston-Salem.
What: Do you have a project you want to show off? Or do you need a second set of eyes on your code? Would you like to hack on an open source project? Perhaps you need help getting a module on a specific platform? Don't have a project but would like to help others or to learn more Python?
Don Jennings will continue our new feature of our monthly project nights: The Newbie Corner
1) Configuring JAAS in Apache CXF
There are a number of different ways to configure your CXF web service to authenticate tokens via JAAS. For all approaches, you must define the System property "java.security.auth.login.config" to point towards your JAAS configuration file.
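For reference, a JAAS configuration file pairs a context name with one or more login modules and their options. The context name, host and options below are made up for demonstration:

```
// jaas.conf, referenced via -Djava.security.auth.login.config=/path/to/jaas.conf
myContext {
    com.sun.security.auth.module.LdapLoginModule required
        userProvider="ldap://localhost:389/ou=people,dc=example,dc=com"
        userFilter="(uid={USERNAME})"
        useSSL=false;
};
```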
CXF provides an interceptor called the JAASLoginInterceptor that can be added either to the "inInterceptor" chain of an endpoint (JAX-WS or JAX-RS) or a CXF bus (so that it applies to all endpoints). The JAASLoginInterceptor typically authenticates a Username/Password credential (such as a WS-Security UsernameToken or HTTP/BA) via JAAS. Note that for WS-Security, you must tell WSS4J not to authenticate the UsernameToken itself, but just to process it and store it for later authentication via the JAASLoginInterceptor. This is done by setting the JAX-WS property "ws-security.validate.token" to "false".
At a minimum it is necessary to set the "contextName" attribute of the JAASLoginInterceptor, which references the JAAS context name to use. It is also possible to define how to retrieve roles as part of the authentication process; by default, CXF assumes that javax.security.acl.Group objects are interpreted as "role" Principals. See the CXF wiki for more information on how to configure the JAASLoginInterceptor. After successful authentication, a CXF SecurityContext object is created with the name and roles of the authenticated principal.
Newer versions of CXF also have a CXF Feature called the JAASAuthenticationFeature. This simply wraps the JAASLoginInterceptor with default configuration for Apache Karaf. If you are deploying a CXF endpoint in Karaf, you can just add this Feature to your endpoint or Bus without any additional information, and CXF will authenticate the received credential to whatever Login Modules have been configured for the "karaf" realm in Apache Karaf.
As stated above, it is possible to validate a WS-Security UsernameToken in CXF via the JAASLoginInterceptor or the JAASAuthenticationFeature by first setting the JAX-WS property "ws-security.validate.token" to "false". This tells WSS4J to avoid validating UsernameTokens itself. However, it is also possible to validate UsernameTokens using JAAS directly in WSS4J via the JAASUsernameTokenValidator. You can configure this validator when using WS-SecurityPolicy via the JAX-WS property "ws-security.ut.validator".
2) Using JAAS LoginModules in Apache CXF
Once you have decided how you are going to configure JAAS in Apache CXF, it is time to pick a JAAS LoginModule that is appropriate for your authentication requirements. Here are some examples of LoginModules you can use.
2.1) Validating a Username + Password to LDAP / Active Directory
For validating a Username + Password to an LDAP / Active Directory backend, use one of the following login modules:
- com.sun.security.auth.module.LdapLoginModule: Example here (context name "sun").
- org.eclipse.jetty.plus.jaas.spi.LdapLoginModule: Example here (context name "jetty"). Available via the org.eclipse.jetty/jetty-plus dependency. This login module is useful as it's easy to retrieve roles associated with the authenticated user.
Kerberos tokens can be validated via:
- com.sun.security.auth.module.Krb5LoginModule: Example here.
Apache Karaf contains some LoginModules that can be used when deploying your application in Karaf:
- org.apache.karaf.jaas.modules.properties.PropertiesLoginModule: Authenticates Username + Passwords and retrieves roles via "etc/users.properties".
- org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule: Authenticates SSH keys and retrieves roles via "etc/keys.properties".
- org.apache.karaf.jaas.modules.osgi.OsgiConfigLoginModule: Authenticates Username + Passwords and retrieves roles via the OSGi Config Admin service.
- org.apache.karaf.jaas.modules.ldap.LDAPLoginModule: Authenticates Username + Passwords and retrieves roles from an LDAP backend.
- org.apache.karaf.jaas.modules.jdbc.JDBCLoginModule: Authenticates Username + Passwords and retrieves roles from a database.
- org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule: Authenticates Username + Passwords and retrieves roles via the Apache Syncope IdM.
To complete my series on multilingual Drupal, here are some mini-lessons I've learnt. Some of them improve the experience for administrators and translators; others cover obscure corners of code. The first few don't require any knowledge of code, but the later ones will be much more technical.
DrupalCon Europe 2014 was all about a new era that began on Wednesday, October 1st with the official beta 1 release: the era of Drupal 8. Compared with DrupalCon Europe 2013, this conference was more about business and development and less about UX-related matters. However, the UX topics and questions raised are definitely worth mentioning.
UX and Dev: We need to have a date!
One of the most frequent UX topics of the last year was a better user experience crafted for the end-users of a website, together with the importance of user research as a business objective. Knowing how and why people use a product gives a better understanding of which areas need immediate improvement.
Last week, voices were raised to make a clear statement: the new and shiny Drupal 8, and the further development of Drupal on both core and contrib levels, needs to get UX research and design involved. Some actions were taken during the last year; however, there is definitely more potential. In her talk "Engaging UX and Design Contributors", Dani Nordin makes this clear. All people who contribute to Drupal share a common goal: to create a better, highly scalable and stable version of our content management system. To do so, UX researchers, designers and developers are bound to work closely together, listen to each other and actually apply the results of user research. To make this happen, we have to take action within the Drupal community. For example, UX researchers and designers should be more involved in the development processes and procedures. Sometimes, designers and UX professionals are treated as if their contribution is not important, which decreases their motivation to spend their spare time on the community. Raising this topic led to a lot of discussions, and the UX sprint produced a list of goals and directions for the community to pursue in 2014 and 2015.
It’s about interaction.
More and more discussions within the community were devoted to interaction design. The word “design” has a rather broad definition and can be understood differently. For the majority of people, “design” only means visual design, the visual representation of an object. UX researchers and designers differentiate between visual and interaction design. The latter is about the workflows, the concrete steps that users perform to achieve a goal. No doubt, visual design is crucial for making people want to start an interaction with an object in the first place. However, the quality of the interactions is the key to a product’s success. The best way to arrive at a user-friendly interaction is to actually attempt to accomplish a task, find the obstacles, redefine the workflow and test again before the implementation even starts. The best way to do so is prototyping. It doesn't matter how you create your prototypes: you can use HTML, as Roy Scholten suggests in his talk “Because it's about the interactions. (Better UX through prototyping)”, or a broad palette of other tools that do not require advanced programming skills. The most crucial point is to prove that the interactions really work, and that they do so for real users.
Don’t prototype in Drupal!
The topic of the importance of prototyping gained further attention in the presentation by Dani Nordin. “Don’t prototype in Drupal” is a strong statement, and it has a point. One of the most popular prototyping tools, Axure, can help with design at different levels of fidelity, both high and low. Creating Axure-based prototypes is faster and cheaper than spending days or weeks on Drupal site building and coding. Prototypes make it possible to validate the direction of a project, to talk with stakeholders about a tangible, interactive product before applying development forces and, again, to receive user feedback. These insights can be applied not only to products built in Drupal, but to Drupal itself. This will improve not only the life of content creators, but can also help in developing more elegant solutions using the better design features of the Drupal core together with contrib modules.
For a long time I’ve harboured the belief that too few in Drupal receive the recognition they deserve. I often wonder how many bright (young) stars fall through the cracks of our vast community, fade away and perhaps leave altogether. When the Drupal project was small, if you did some heavy lifting, people noticed. Now that our numbers have swelled beyond 1 million, it is no wonder worthy effort goes relatively unnoticed.
Few of us are seeking the limelight but being noticed, that’s another matter. It’s not just recognition either. We should be mentoring emerging talent, supporting promising businesses and championing contributions from end users. But with such a vast and growing community, as Dries so eloquently put in his keynote, the status quo is unsustainable.
“Social capital and altruism is what we already do and I don’t think it is very scalable” (DriesNote)
We should remember the material for the Dries keynote was crowd sourced and as he said we should “keep an open mind, don’t draw to conclusions”. Now is the time to debate openly, engage in constructive conversation and help invigorate Drupal.
What struck me in Amsterdam was that when the audience members who had contributed to Drupal core and contrib were invited to stand, surprisingly few remained seated. In many respects, talking to DrupalCon attendees was preaching to the converted. The job for us is to reach the rest.

Understanding how our ecosystem works is a very important first step
I believe the path to a healthier community hinges on how we record contributions, not how we recognise them. Currently we track what individuals contribute, and there is a fixation on code-based contributions. What about documentation, organising camps, mentoring, community affairs, design and so many more valuable activities? Introducing a way to differentiate between efforts made by individuals, agencies and end users, and a move towards “track all types of contributions”, is for me a giant leap forwards. It can't happen soon enough.
How we might use this new data is less important than the ability to "track how our community really works”. Most significantly, it will bring to the surface the things that are not working. Who is funding what? Are we depending too heavily on small groups or certain individuals? Could specific areas do with more help? Achieving the tracking Dries would like could happen within weeks. The Rankomatic? I like to think of it as Drupal Analytics, open for all to query. Expose the data like data.gov.uk and the community will surely hack insightful mashups; then we will have something very empowering.

Is D.O ready for a reboot?
What word is conspicuously missing from the D.O homepage? Contribute. Yup, seriously. How can we expect more people to do so if we don’t clearly signpost how? When Dries talked about “free riders” I reflected on how long (years) it took me to find a niche in Drupal. I existed on the fringes, underemployed. It was only when I was noticed by Isabel Schultz during DrupalCon London that things really flew for me. We need more "talent scouts". Why is Get Involved buried in the footer and not somewhere more obvious? Are many “free riders” just looking for a way to engage?
Rewarding contributions through enhanced visibility and leading by example will certainly help. Encouraging companies who don’t contribute to start doing so, via subtle design changes to user and marketplace profile pages, has merit. Will it invigorate the contributors themselves to even greater heights? That's certainly how I roll.
What Dries suggests is sweeping change. In this respect the cynic in me worries about the glacially slow speed of change on Drupal.org (D.O). 5 months ago I created an issue proposing the community spotlight feature on the homepage of D.O. It had unanimous support but nothing ever happened. Maybe I did something wrong? Perhaps I don't understand the process?
The good news is the Drupal Association has hired staff whose role is to deliver these kinds of enhancements. It also feels like a more measured and strategic approach is being taken. I hear talk of user personas and concept designs. During his TED Talk, Harish stated that the companies that will thrive in the future are those who "play their role in terms of serving the communities who actually sustain them”.
Drupal is a platform with a community that sustains many thousands of businesses. Drupal has strong evidence that Harish’s hypothesis is already true: Drupal shops, and even end users, who contribute to the project tend to be the ones enjoying the best outcomes. The problem is we operate in a bubble, where everyone is hooked on open source. Out in the real world people’s thinking is years behind, and the principles of open source sound quite peculiar, even alien. Those coming to use Drupal need to be introduced to the benefits of taking a benevolent role.
We know who Lullabot, Palantir, Phase2 and Wunderkraut are. The various Drupal shops are contributing, often in bucketloads. What we actually need is more companies like that. Indeed, we need more end users contributing back too, like Oxfam, The White House and Sony, who have been instrumental in developing significant contributions to the contrib space. Again, Dries’s advertising model could help to champion these organisations, communicating why they chose this path and the benefits they have reaped over time. It’s not about penalising “free riders”; it is more about persuading them to engage with Drupal, to start contributing.

“If everybody that used the commons contributed a little bit things would be perfectly fine”
Dries talked about the tragedy of the commons. In Drupal, the overgrazing applies not to the software itself; rather, the "free riders" are depending too much on people’s generosity, their personal time, or indeed time in which they should be running their business. I’ve seen situations where people have neglected their business for the benefit of Drupal. This goes way beyond the well-known plight of Alex Pott, I assure you. I'm sure the few examples I have seen are a small fraction. Here everyone loses.

Time for change
As Dries said, “An imperfect solution is better than no solution”. We’ve tried for some time to encourage those who sit on the sidelines to start contributing. Now is the time for a small revolution. Recording, recognising and rewarding code and non-code contributors on D.O is simply a way of scaling how it has always been. And if that sounds too radical, how about we try it out on beta.drupal.org? We need to take a leap. After all, aren’t we the round pegs in the square holes? We are the ones who think different. It’s way overdue we took some of our own medicine.

Further information:
- Outlandish Josh's Thoughts on the DriesNote: Towards Drupal "Contribution Credits"
- Watch the Dries Keynote
- Photo credit: Steffen Rühlmann
Last week I participated as a panelist in the Continuous Discussions talk hosted by Electric Cloud, and the recording is now available. A bit long but there are some good points in there.
Some excerpts from twitter
@csanchez: “How fast can your tests absorb your debs agility” < and your Ops, and your Infra?
@cobiacomm: @orfjackal says ‘hard to do agile when the customer plan is to release once per year’
@cobiacomm: biggest agile obstacles -> long regression testing cycles, unclear dependencies, and rebuilding the wheel
Andrew Rivers – blog.andrewrivers.co.uk
Carlos Sanchez – @csanchez | http://csanchez.org
Chris Haddad – @cobiacomm
Dave Josephsen – @djosephsen
Eriksen Costa – @eriksencosta | blog.eriksen.com.br
Esko Luontola – @orfjackal | www.orfjackal.net
John Ryding – @strife25 | blog.johnryding.com
Norm MacLennan – @nromdotcom | blog.normmaclennan.com
J. Randall Hunt – @jrhunt | blog.ranman.org
Sriram Narayan – @sriramnarayan | www.sriramnarayan.com
Sunanda Jayanth – @sunandaj17 | http://blog.qruizelabs.com/
While introducing people to Python metaclasses I realized that sometimes the big problem with the most powerful Python features is that programmers do not perceive how those features may simplify their usual tasks. As a result, features like metaclasses are considered a fancy but rather useless addition to a standard OOP language, instead of a real game changer.
This post shows how to use metaclasses and decorators to create a powerful class that can be inherited and customized simply by adding decorated methods.

Metaclasses and decorators: a match made in space
Metaclasses are a complex topic, and most of the time even advanced programmers do not see a wide range of practical uses for them. Chances are that this is the part of Python (or of other languages that support metaclasses, like Smalltalk and Ruby) that least fits the "standard" object-oriented patterns or solutions found in C++ and Java, just to mention two big players.
Indeed, metaclasses usually come into play when programming advanced libraries or frameworks, where a lot of automation must be provided. For example, the Django forms system heavily relies on metaclasses to provide all its magic.
We also have to note, however, that we usually call "magic" or "tricks" those techniques we are not familiar with, and as a result many things in Python are labelled this way, since their implementation is often peculiar compared to other languages.
Time to bring some spice into your programming: let's practice some Python wizardry and exploit the power of the language!
In this post I want to show you an interesting joint use of decorators and metaclasses. I will show you how to use decorators to mark methods so that they can be automatically used by the class when performing a given operation.
In more detail, I will implement a class that can be called on a string to "process" it, and show you how to implement different "filters" through simple decorated methods. What I want to obtain is something like this:
```python
class MyStringProcessor(StringProcessor):

    @stringfilter
    def capitalize(self, str):
        [...]

    @stringfilter
    def remove_double_spaces(self, str):
        [...]

msp = MyStringProcessor()
"A test string" == msp("a test string")
```
The module defines a StringProcessor class that I can inherit and customize by adding methods that have a standard signature (self, str) and are decorated with @stringfilter. This class can later be instantiated, and the instance used to directly process a string and return the result. Internally, the class automatically executes all the decorated methods in succession. I also would like the class to obey the order in which I defined the filters: first defined, first executed.

The Hitchhiker's Guide To Metaclasses
How can metaclasses help to reach this target?
Simply put, metaclasses are classes that are instantiated to get classes. That means that whenever I use a class, for example to instantiate it, first Python builds that class using the metaclass and the class definition we wrote. For example, you know that you can find the class members in the __dict__ attribute: this attribute is created by the standard metaclass, which is type.
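To make this concrete, here is a small generic example (of mine, not tied to the string-processor code) showing that the standard metaclass type can be called directly to build a class:

```python
# `type(name, bases, namespace)` is the standard metaclass at work:
# calling it returns a brand new class object.
Greeter = type('Greeter', (object,), {'greet': lambda self: 'hello'})

g = Greeter()
print(type(Greeter) is type)        # True: the class is an instance of `type`
print(g.greet())                    # hello
print('greet' in Greeter.__dict__)  # True: members end up in __dict__
```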
Given that, a metaclass is a good place for us to insert some code that identifies a subset of the functions defined inside the class. In other words, we want the output of the metaclass (that is, the class) to be built exactly as in the standard case, but with one addition: a separate list of all the methods decorated with @stringfilter.
You know that a class has a namespace, that is a dictionary of what was defined inside the class. So, when the standard type metaclass is used to create a class, the class body is parsed and a dict() object is used to collect the namespace.
We are, however, interested in preserving the order of definition, and a Python dictionary is an unordered structure, so we take advantage of the __prepare__ hook introduced into the class creation process with Python 3. This function, if present in the metaclass, is used to preprocess the class and to return the structure used to host the namespace. So, following the example found in the official documentation, we start defining a metaclass like this:
```python
class FilterClass(type):

    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return collections.OrderedDict()
```
This way, when the class is created, an OrderedDict is used to host the namespace, allowing us to keep the definition order. Please note that the signature __prepare__(metacls, name, bases, **kwds) and the @classmethod decorator are enforced by the language.
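As a quick check of this mechanism, here is a minimal, self-contained sketch (the OrderedMeta and Example names are made up for illustration) showing that the mapping returned by __prepare__ collects the class-body names in definition order:

```python
import collections


class OrderedMeta(type):
    # The mapping returned here is used to collect the class body;
    # the language requires __prepare__ to be a classmethod with
    # this exact signature.
    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(cls, name, bases, namespace, **kwds):
        result = type.__new__(cls, name, bases, dict(namespace))
        # Record the definition order of the class members,
        # skipping the dunder names Python adds automatically.
        result._order = [k for k in namespace if not k.startswith('__')]
        return result


class Example(metaclass=OrderedMeta):
    def first(self): pass
    def second(self): pass
    def third(self): pass


print(Example._order)  # ['first', 'second', 'third']
```

Since the namespace is ordered, the metaclass can observe the methods exactly in the order they were written.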
The second function we want to define in our metaclass is __new__. Just as happens for the instantiation of classes, this method is invoked by Python to get a new instance of the metaclass, and runs before __init__. Its signature has to be __new__(cls, name, bases, namespace, **kwds) and the result shall be an instance of the metaclass. As for its normal class counterpart (after all, a metaclass is a class), __new__() usually wraps the same method of the parent class, type in this case, adding its own customizations.
The customization we need is the creation of a list of methods that are marked in some way (the decorated filters). Say for simplicity's sake that the decorated methods have an attribute _filter.
The full metaclass is then
```python
class FilterClass(type):

    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(cls, name, bases, namespace, **kwds):
        result = type.__new__(cls, name, bases, dict(namespace))
        result._filters = [
            value for value in namespace.values() if hasattr(value, '_filter')]
        return result
```
Now we have to find a way to mark all filter methods with a _filter attribute.

The Anatomy of Purple Decorators
decorate: to add something to an object or place, especially in order to make it more attractive (Cambridge Dictionary)
Decorators are, as the name suggests, the best way to augment functions or methods. Remember that a decorator is basically a callable that accepts another callable, processes it, and returns it.
Used in conjunction with metaclasses, decorators are a very powerful and expressive way to implement advanced behaviours in our code. In this case we may easily use them to add an attribute to decorated methods, one of the most basic tasks for a decorator.
I decided to implement the @stringfilter decorator as a function, even if I usually prefer implementing decorators as classes. The reason is that decorator classes behave differently when used to implement decorators without arguments rather than decorators with arguments. In this case the difference would force us to write some complex code, and an explanation of it would be overkill now. In a future post on decorators you will find all the gory details, but in the meantime you may check the three Bruce Eckel posts listed in the references section.
The decorator is very simple:
```python
def stringfilter(func):
    func._filter = True
    return func
```
As you can see, the decorator just creates an attribute called _filter on the function (remember that functions are objects). The actual value of this attribute is not important in this case, since we are only interested in telling apart class members that have it.

The Dynamics of a Callable Object
We are used to thinking about functions as special language components that may be "called" or executed. In Python, functions are objects, just like everything else, and the feature that allows them to be executed comes from the presence of the __call__() method. Python is polymorphic by design and based on delegation, so (almost) everything that happens in the code relies on some feature of the target object.
The result of this generalization is that every object that contains the __call__() method may be executed like a function, and gains the name of callable object.
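For example, any instance whose class defines __call__() can be used with call syntax; a minimal sketch (the Adder class below is made up for illustration):

```python
class Adder:
    """An object that behaves like a function thanks to __call__()."""

    def __init__(self, amount):
        self.amount = amount

    def __call__(self, value):
        # Invoked when the instance is called like a function.
        return value + self.amount


add_five = Adder(5)
print(add_five(10))        # 15
print(callable(add_five))  # True
```

Unlike a plain function, a callable object can carry state (here, the amount to add) between calls.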
The StringProcessor class shall thus contain this method and perform the string processing there with all the contained filters. The code is
```python
class StringProcessor(metaclass=FilterClass):

    def __call__(self, string):
        _string = string
        for _filter in self._filters:
            _string = _filter(self, _string)
        return _string
```
A quick review of this simple function shows that it accepts the string as an argument, stores it in a local variable and loops over the filters, executing each of them on the local string, that is on the result of the previous filter.
The filter functions are extracted from the self._filters list, that is compiled by the FilterClass metaclass we already discussed.
What we need to do now is to inherit from StringProcessor to get the metaclass machinery and the __call__() method, and to define as many methods as needed, decorating them with the @stringfilter decorator.
Note that, thanks to the decorator and the metaclass, you may have other methods in your class that do not interfere with the string processing as long as they are not decorated with the decorator under consideration.
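To verify this, here is a self-contained sketch that condenses the pieces described so far (the Shouter class and its helper method are made up for illustration); the undecorated helper never enters the filter chain:

```python
import collections


def stringfilter(func):
    # Mark the function so the metaclass can recognise it as a filter.
    func._filter = True
    return func


class FilterClass(type):
    @classmethod
    def __prepare__(metacls, name, bases, **kwds):
        return collections.OrderedDict()

    def __new__(cls, name, bases, namespace, **kwds):
        result = type.__new__(cls, name, bases, dict(namespace))
        # Only members carrying the _filter mark end up in the chain.
        result._filters = [value for value in namespace.values()
                           if hasattr(value, '_filter')]
        return result


class StringProcessor(metaclass=FilterClass):
    def __call__(self, string):
        for _filter in self._filters:
            string = _filter(self, string)
        return string


class Shouter(StringProcessor):
    @stringfilter
    def uppercase(self, string):
        return string.upper()

    # Not decorated: ignored by the metaclass, so it never runs
    # as part of the string processing.
    def helper(self, string):
        return 'should not appear'


print(Shouter()('hello'))  # HELLO
```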
An example derived class may be the following
```python
class MyStringProcessor(StringProcessor):

    @stringfilter
    def capitalize(self, string):
        return string.capitalize()

    @stringfilter
    def remove_double_spaces(self, string):
        return string.replace('  ', ' ')
```
The two capitalize() and remove_double_spaces() methods have been decorated, so they will be applied in order to any string passed when calling an instance of the class. A quick example of this last class is
```python
>>> import strproc
>>> msp = strproc.MyStringProcessor()
>>> input_string = "a test string"
>>> output_string = msp(input_string)
>>> print("INPUT STRING:", input_string)
INPUT STRING: a test string
>>> print("OUTPUT STRING:", output_string)
OUTPUT STRING: A test string
```
That's it!

Final words
There are obviously other ways to accomplish the same task; this post just wanted to give a practical example of what metaclasses are good for, and why I think they should be part of any Python programmer's arsenal.

Book Trivia
Section titles come from the following books: A Match Made in Space - George McFly, The Hitchhiker's Guide To the Galaxy - Various Authors, The Anatomy of Purple Dragons - Unknown, The Dynamics of an Asteroid - James Moriarty.

Source code
The strproc.py file contains the full source code used in this post.

Online resources
The following resources may be useful.

Metaclasses
- Python 3 official documentation: customizing class creation.
- Python 3 OOP Part 5 - Metaclasses on this blog.
- Metaprogramming examples and patterns (still using some Python 2 code but useful).
- Bruce Eckel on decorators (series of three posts, 6 years old but still valid).
- A different approach on explaining decorators.
- Jeff Knupp goes deep inside the concept of function.
- Rafe Kettler provides a very detailed guide on Python "magic" methods.
For the past 21 days, I've been on a sugar detox. Becky Reece, a long-time friend of Trish's, inspired us to do it. Becky is a nutritionist and we've always admired how fit she is. She challenged a bunch of her friends to do it, and Trish signed up. I told Trish I'd do it with her to make things easier from a cooking perspective.
To be honest, we really didn't know what we were getting into when we started it. Trish ordered the book the week before we started and it arrived a couple days before things kicked off. Trish started reading the book the night before we started. That's when we realized we should've prepared more. The book had all kinds of things you were supposed to do the week before you started the detox. Most things involved shopping and cooking, so you were prepared with pre-made snacks and weren't too stressed out.
We started the detox on Monday, September 22, 2014. That's when we first realized there was no alcohol (we both love craft beer). Trish shopped and cooked like a madwoman that first week. I think we spent somewhere around $600 on groceries. Trish wrote about our first week on her blog:

We are on Sunday Day-7 and made it through the first week with two birthday parties and a nice dinner out eating well and staying on track. I'm not weighing myself until the end, but my face looks a little slimmer, my skin feels smoother and my wedding ring is not as tight as it used to be. I feel great and have started to believe this is the last detox, diet or cleanse I will ever need. Cleansing my life of sugar could be a life changer especially when an Avo-Coconana Smoothie with Almond Butter Pad Thai becomes my new favorite meal.
What she didn't mention is what we discovered shortly after. She'd printed out a list of “Yes” and “No” foods from the book. But she printed out the hardest level! We'd been doing level 3 for a week! There are three different levels suggested in the book:

Level 1: ½ cup whole grains, full fat dairy and all the meat and vegetables you want
Level 2: No grains, full fat dairy and all the meat and vegetables you want
Level 3: No grains, no dairy, just meat, nuts and veggies
All levels allow fruit: unlimited limes and lemons, but only one green apple, green-tipped banana or grapefruit per day.
Figuring “we made it this far”, we decided to continue with the hardest level. Unlike Trish, I did weigh myself and was pumped to find I'd lost 5 lbs (2.3 kg) in the first week. I was trying to exercise every day as well, by riding my bike, running, hiking and walking. Nothing strenuous, just something to get the blood pumping.
I did notice during the second week that I'd get really tired when exercising. I'd hit a wall after about 30-40 minutes where I felt like I'd lost all my energy. I'd felt this before when I was out of shape, but I didn't think I was that out of shape when we started.
The second week, our kids were at their Mom's house, so we ate out a bit more and socialized with friends. Getting through happy hours wasn't too hard, as long as we mentioned we were on a sugar detox up front. I don't have much of a sweet tooth, so I never had any chocolate cravings. I also rarely drink soda (except with Whiskey), so I didn't really miss much from the sweet side.
What I did miss was sugar in my coffee. Black coffee still makes my face wrinkle after three weeks and I finally switched to Green Tea during the last week.
During the second week, I noticed my weight loss plateaued. I think this is why they don't want you to weigh yourself - so you don't get discouraged. Even though the pounds stopped dropping, I did notice my pants were a lot looser around the waist.
We watched the movie Fed Up for extra motivation at the end of the second week. I thought it was enlightening to learn that when they take out fat from food, they often add sugar to give it back flavor. The sugar content difference between a diet and regular Coke? None - they both have 3.5 teaspoons.
They mentioned that most kids have their recommended daily allowance of sugar before they leave the house in the morning (in their bowl of cereal and milk). Apparently, adding sugar all started in the late 70s / early 80s when dieting became a fad. The USDA (or someone similar) recommended warning Americans about sugar, but they chose to strike that from the record and warn them about fat instead. In the last couple of years, they've discovered that fat isn't that bad and sugar is likely the cause of our country's obesity problem. They also mentioned that there are a lot of folks who are skinny, but fat on the inside.
The third week was the hardest one. This was mainly because Trish traveled out of town for work and I became a single parent with a mean cooking habit. I was amazed at the amount of dishes I went through, and I was only cooking breakfast and dinner for three. I must've spent almost two hours in the kitchen each day.
This final week is when I realized the sugar detox wasn't a sustainable long-term practice. On Wednesday of that week, I rode from my kids' school to my office in LoDo. It's a 15-mile ride and took me just over an hour. I felt fine while riding, but once I arrived, I felt sick. I worked, drank water and took deep breaths for a couple of hours until I felt better. However, I ended up taking a nap before noon to shake the weakness I felt. That night, it took me a bit longer to ride back, because it was uphill and I stopped a few times to rest. I loaded up on a green-tipped banana before I left, as well as a couple handfuls of almonds. The same sickness hit me after riding and I almost threw up an hour after the ride. After that experience, I decided not to push myself when exercising. Not until I was off the sugar detox, anyway.
The next day I encountered a phenomenon I haven't had in years. I had to roll my pant legs up because my pants kept falling down and dragging on the ground. I also experienced a few headaches this week, something I rarely get.
In fact, as I'm writing this (on day 22), I still have the headache I woke up with this morning. This could be caused by other stressors in my life, for example, looking for a new gig and realizing how much I've spent on my VW Bus project.
As far as feeling good and skin complexion - two things the book said you'd notice - I haven't noticed much change. I didn't feel terrible before, but I was annoyed by how tight my pants were. More energy? Nope, I feel like I had more before. During the detox, I didn't get the 2:30 doldrums, but I don't recall feeling them much before either. I did notice that I was never famished while on the detox. If my stomach growled, it was merely an indicator that I should eat in the next hour or two. But I didn't feel like I needed to eat right away. There were some evenings where I was very hungry, but we often had snacks (nuts and jerky) to make the hunger go away.
I weighed myself on the morning of day 21 and I was down 12 pounds (5.4 kg). I weighed myself this morning (day 22) and I'm down 15 pounds (6.8 kg)! Who knew you could lose 3 pounds in 24 hours?!
I realize the biggest problem with these crash diets is keeping the weight off. However, the detox is also designed to change one's taste buds, and I think it succeeded there. I'm much more aware of sugars in our food supply and I plan to eat a lot less of them. I hope to keep losing weight, but I'm only aiming for a pound a week for the next couple of months.
My plan is to add back in full fat dairy, keep exercising and eat sugar-free meals more often than not. I'm also going to steer clear of any low-fat foods, as they often add sugar to make up for the fat flavor loss. With any luck, I'll be in great shape for this year's ski season!
Many thanks to Becky for inspiring Trish and to Trish for asking me to do it with her. I needed it a lot more than she did.
What makes this so fascinating? Legacy. Computers are steeped in legacy to a remarkable degree, yet they're still so young. What kind of boxes are we putting ourselves in? So I find it fun to explore what could have been different. What we have today, built on so many layers accumulated over the decades, is really nothing short of accidental. These developments could have taken a lot of other paths over the years... couldn't they?
So maybe I'll just keep these private, or maybe I'll publish some of it if I ever polish any bits of it in an interesting format. Have any thoughts? Do you think this sounds fun? Let me know.
About a month ago, the Django Software Foundation (DSF) announced that we were going to run a pilot project for a Django Fellow - a paid contractor (or contractors) engaged to manage some of the administrative and community management tasks of the Django project.
We received almost 30 applications for the Fellowship position. Today, we're proud to announce the two successful applicants: Tim Graham and Berker Peksağ.
Tim is already well known to the Django community. He's been a member of the core team for several years, and for the last 12 months has been a major contributor, reviewing and merging pull requests. Tim has been contracted as a full-time Fellow for the duration of the 3 month pilot.
Berker isn't as well known to the Django community, but he has a lot of experience contributing to open source projects. He's a core developer on CPython, Gunicorn and Hylang. He's also contributed to some Mozilla projects. In the Django community, he has submitted a few patches and organized a Django sprint in Istanbul. Berker will be a part-time Fellow supporting Tim.
Berker won't be given a "free pass" into the core team - he'll have to earn it the same way as a volunteer contributor would. However, since he'll be dedicating a lot of time to Django contributions over the next three months, we hope he'll be nominated for core team membership in the near future.
We've got some legal paperwork to sort out before the Fellowship formally begins, but once it does, we'll have three months to see the impact that some paid community managers can have on the momentum of the Django project.
Congratulations Tim and Berker, and a huge thanks to all the members of the Django community that applied.