Commercial Progression: Michigan Drupal Developers and their Summer Projects (E10)

Planet Drupal - Wed, 2015-07-29 11:44

Commercial Progression presents Hooked on Drupal, “Episode 10: Summer of Drupal with Special Guests Hillary Lewandowski and Michael Zhang".  In this episode of Hooked on Drupal, the usual crew is joined by two new members to the CP team.  Hillary Lewandowski, the latest member to the development team brings her wisdom from a formal education in computer science.  

Additionally, Michael Zhang is one of two summer interns from Northville High School and an active member of Northville DECA, with a focus on marketing.

Hooked on Drupal is available for RSS syndication here at the Commercial Progression site. Additionally, each episode is available to watch online via our YouTube channel, within the iTunes store, on SoundCloud, and now via Stitcher.

If you would like to participate as a guest or contributor, please email us at


Content Links and Related Information

We experienced this year's DrupalCon vicariously through our last Michigan Drupal meetup and our previous podcast with Steve Burge from OSTraining.  This summer proves to be quite busy with new team members, projects, and Drupal 8 investigations.

As the Commercial Progression team size grows, our development team has begun to specialize.  Brad has focused on developing new processes for site architecture and shares his discoveries for preparing a Drupal project for design and development. Other team members share their personal project subject matter.

OOP In Drupal 8

In addition to working with the new WYSIWYG Fields and Conditional Fields,  Hillary shares some of her thoughts and computer science background with Object Oriented Programming and Drupal 8 in her latest blog post.

Personalize Module

Chris and Shane discuss the Acquia contrib Personalize module based on Lift technology for content personalization via URL based campaign parameters, geography, visitor cookies, A/B or Multi-variate testing, and a host of other variable session data.

Paragraphs Module

Inspired by Jeff Eaton and the Battle for the Body Field DrupalCon presentation, Brad dug into the Paragraphs module and put together a popular paragraphs blog post with some best practices for winning the battle for the body field.  When Brad is not fighting the good fight for the supremacy of the Paragraphs module, he has also created an automated competitive marketing intelligence research script… yeah I know, really.


Hooked on Drupal Content Team


CHRIS KELLER - Developer





Tags:  Hooked on Drupal, Drupal 8, OOP, Personalize, Planet Drupal, podcast
Categories: FLOSS Project Planets

David MacIver: Massive performance boosts with judicious applications of laziness

Planet Python - Wed, 2015-07-29 10:37

I noticed, about two hours after releasing it, that there's a massive performance problem with the recursive data implementation I shipped in Hypothesis 1.9.0. Obviously this made me rather sad.

You have to do something slightly weird in order to hit this: Use recursive data as the base case for another recursive data implementation (actually using it in the expansion would probably work too).

The reason for this turns out to be nothing like what I expected and is kinda interesting, and I was really stuck as to how to solve it until I realised that with a little bit of lazy evaluation the problem was literally trivial to solve.

We’ll need to look at the implementation a bit to see what was happening and how to fix it.

Internally, Hypothesis’s recursive data implementation is just one big union of strategies. recursive(A, f) is basically just A | f(A) | f(A | f(A)) | … until you’ve added enough clauses that any subsequent one will basically never get in under the limit.
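As a toy model (hypothetical code, with "strategies" reduced to plain sets of producible values — nothing like Hypothesis's real internals), that expansion can be sketched as:

```python
def recursive_clauses(base, f, n_clauses):
    # Clause 0 is the base; each later clause is f applied to the union
    # of everything before it, mirroring A | f(A) | f(A | f(A)) | ...
    clauses = [base]
    for _ in range(n_clauses - 1):
        union_so_far = set().union(*clauses)
        clauses.append(f(union_so_far))
    return clauses

# Example expansion: f wraps each value in a one-element tuple
wrap = lambda s: {(x,) for x in s}
clauses = recursive_clauses({0}, wrap, 3)
# clauses[0] == {0}
# clauses[1] == {(0,)}
# clauses[2] == {(0,), ((0,),)}
```

The point is just that each new clause re-contains all the previous ones, so the union keeps growing.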

In order to understand why this is a problem, you need to understand a bit about how Hypothesis generates data. It happens in two stages: The first is that we draw a parameter value at random, and the second is that we pass that parameter value in to the strategy again and draw a value of the type we actually want (actually it’s more complicated than that too, but we’ll ignore that). This approach gives us higher quality data and lets us shape the distribution better.

Parameters can be literally any object at all. There are no valid operations on a parameter except to pass it back to the strategy you got it from.

So what does the parameter for a strategy of the form x | y | … look like?

Well, it looks like a weighting amongst the branches plus a parameter for each of the individual values. You pick a branch, then you feed the parameter you have for that branch to the underlying strategy.

Notably, drawing this parameter requires drawing a parameter from each of the underlying strategies. i.e. it’s O(n) in the number of branches.
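A toy model of that O(n) cost (all names hypothetical, not Hypothesis's real classes): a union strategy whose parameter eagerly contains a sub-parameter for every branch has to touch every branch on each draw.

```python
import random

class Leaf:
    """A trivial strategy that counts how often its parameter is drawn."""
    def __init__(self):
        self.draws = 0

    def draw_parameter(self, rnd):
        self.draws += 1
        return rnd.random()

class EagerUnion:
    """Union strategy: its parameter includes one sub-parameter per branch."""
    def __init__(self, branches):
        self.branches = branches

    def draw_parameter(self, rnd):
        # O(len(branches)): draws a parameter from *every* branch up front.
        weights = [rnd.random() for _ in self.branches]
        sub_params = [b.draw_parameter(rnd) for b in self.branches]
        return (weights, sub_params)

leaves = [Leaf() for _ in range(100)]
EagerUnion(leaves).draw_parameter(random.Random(0))
# Every leaf was asked for a parameter, even though only one branch
# will ever be used for a given value draw.
```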

Which means that if you have something like the recursive case above, you’re doing O(n) operations which are each themselves O(n), and you’re accidentally quadratic. Moreover it turns out that the constant factor on this may be really bad.

But there turns out to be an easy fix here: Almost all of those O(n^2) leaf parameters we’re producing are literally never used – you only ever need the parameter for the strategy you’re calling.

Which means we can fix this problem with lazy evaluation. Instead of storing a parameter for each branch, we store a deferred calculation that will produce a parameter on need. Then when we select a branch, we force that calculation to be evaluated (and save the result in case we need it again) and use that. If we never need a particular parameter, we never evaluate it.
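A sketch of that fix, again with hypothetical names, and a uniform branch choice standing in for the real weighted selection:

```python
import random

class Lazy:
    """A deferred calculation: evaluated at most once, then cached."""
    def __init__(self, thunk):
        self.thunk = thunk
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:
            self.value = self.thunk()  # evaluate on first need only
            self.forced = True
        return self.value

class Leaf:
    def __init__(self, label):
        self.label, self.draws = label, 0

    def draw_parameter(self, rnd):
        self.draws += 1
        return rnd.random()

    def draw_value(self, rnd, parameter):
        return self.label

class LazyUnion:
    def __init__(self, branches):
        self.branches = branches

    def draw_parameter(self, rnd):
        # O(1) per branch: store a thunk, don't draw sub-parameters yet.
        params = [Lazy(lambda b=b: b.draw_parameter(rnd))
                  for b in self.branches]
        return params

    def draw_value(self, rnd, parameter):
        i = rnd.randrange(len(self.branches))  # pick a branch (simplified)
        sub_param = parameter[i].force()       # only this one is evaluated
        return self.branches[i].draw_value(rnd, sub_param)

leaves = [Leaf(i) for i in range(100)]
u = LazyUnion(leaves)
rnd = random.Random(0)
p = u.draw_parameter(rnd)
u.draw_value(rnd, p)
# Exactly one leaf parameter was ever computed; the other 99 thunks
# were never forced.
```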

And this means we’re still doing O(n) work when we’re drawing the parameter for a branch, but we’re only doing O(1) work per individual element of the branch until we actually need that value. In the recursion case we’re also saving work when we evaluate it. This greatly reduces the amount of work we have to do because it means that we’re now doing only as much work as we needed to do anyway to draw the template and more or less removes this case as a performance bottleneck. It’s still a little slower than I’d like, but it’s in the category of “Hypothesis is probably less likely to be the bottleneck than typical tests are” again.

In retrospect this is probably obvious – it falls into the category of “the fastest code is the code that doesn’t execute” – but it wasn’t obvious to me up front until I thought of it in the right way, so I thought I’d share in case this helps anyone else.

Categories: FLOSS Project Planets

Drupalize.Me: Release Day: Send Email Using MailChimp with Drupal 7

Planet Drupal - Wed, 2015-07-29 09:00

This week we'll be continuing our Using MailChimp with Drupal 7 series. And like last week, all the tutorials are free. Last week we looked at creating, and collecting contacts for, a MailChimp mailing list. This week we'll look at all the different ways we can send email to our lists.

Categories: FLOSS Project Planets

Realityloop: Community driven development

Planet Drupal - Wed, 2015-07-29 04:05
29 Jul Stuart Clark

Realityloop has a long history with the Melbourne Drupal community; we’ve been heavily involved in the monthly Drupal meetups and began the monthly mentoring meetups. However, the monthly mentoring only came about after a failed experiment in community based web development.

While that experiment may have failed, the idea of community driven Drupal development has been of great interest to me as it truly embraces the spirit of open source.

In the last two weeks I have released two websites for the Drupal Melbourne community: a DrupalMelbourne community portal and a landing page for the upcoming DrupalCampMelbourne2015. In this tutorial I will be demonstrating how anyone can get involved with the development of these sites, or how the process can work for the benefit of any other community based website.


The workflow:
  1. Build the codebase
  2. Setup and install the site
  3. Make your changes
  4. Update features and makefiles
  5. Test the changes
  6. Fork repository / push changes / pull request


Build the codebase

As per usual for myself and Realityloop, these sites are built using a slim line profile / makefile approach with the GIT repository tracking custom code only, which means you will require Drush to build the site codebase.

If you are not familiar with Drush (DRUpal SHell), I highly recommend familiarising yourself as it’s not only incredibly useful in everyday Drupal development, it is also a requirement for this tutorial. Installation instructions can be found at http://www.drush.org/en/master/install/

Assuming you have Drush ready to go, building the codebase is as simple as running the following, relevant command:

  • DrupalMelbourne: drush make --working-copy=1 https://raw.githubusercontent.com/drupalmel/drupalmel/master/stub.make drupalmel-7.x
  • DrupalCampMelbourne2015: drush make --working-copy=1 https://raw.githubusercontent.com/drupalmel/drupalcampmel/2015.x/stub.make dcm-2015.x

The resulting codebase contains the specified Drupal core (currently 7.38) along with the relevant install profile containing all custom and contrib code (modules, themes and libraries).

Note: --working-copy=1 is used to retain the link to the GIT repository.


Setup and install the site

Once your codebase is built you simply need to install a site as you would normally do so, ensuring that you use the relevant installation profile (DrupalMelbourne or DrupalCampMelbourne).

The sites are constructed in such a way that there is absolutely no need to copy down the production database: any content is either aggregated from external sources, or dummy content is created via the Devel generate module for the purposes of development. This means that there are no laborious data sanitization processes required, allowing contributors to get up and running in as short a time as possible.

For more details on how to setup a Drupal site, refer to the Installation Guide or your *AMP stack.


Make your changes

No change is insignificant, and a Community driven site thrives on changes; if you think you can make the site look better, have found a bug that you can fix, or want new functionality on the site, the only thing stopping you is you!


Update features and makefiles

Once you’re happy with the changes you’ve made, be it content, theme or configuration, you need to ensure that it’s deployable, and as we’re not dealing with databases at all, this means that you need to update the codebase; features and makefiles.

If you’re not familiar with Features or Makefiles, much like Drush, I highly recommend them, as again, they are required for this particular approach of Community driven development.

You can find more details on Features, Makefiles and Drush at my DrupalSouth 2012 talk "Ezy-Bake Drupal:Cooking sites with Distributions".



Features allows you to capture most of your configuration in the filesystem, allowing it to be deployed via GIT.

In the case of these sites, there is only one feature which encapsulates all configuration, as these sites have a relatively straight forward purpose. Some sites may warrant more, that is a discussion for another day.

To update the feature, it’s an extremely simple process:

  1. Navigate to the relevant path within your site:
    • DrupalMelbourne: admin/structure/features/drupalmel_core/recreate
    • DrupalCampMelbourne2015: admin/structure/features/drupalcampmel_core/recreate
  2. Add or remove any required components (Page manager, Strongarm, Views, etc).
  3. Expand the Advanced options fieldset and click the Generate feature button.

More information can be found at the Features project page.



Makefiles are recipes of modules, themes and libraries that get downloaded by Drush make, including their versions and patches.

Updating a makefile is relatively straightforward, it’s just a matter of opening the file in your IDE / text editor of choice and updating the entries.

There are two makefiles in these sites, stub.make and drupal-org.make; the stub.make contains Drupal core and the install profile (and any relevant patches) and the drupal-org.make contains all third-party (contrib) code.

Any new or updated modules, themes or libraries (and any relevant patches) need to be added to this file, as no third-party code is tracked in the GIT repo.

The makefiles are organized into three primary sections; Modules, Themes and Libraries. Below are some examples of how an entry should be defined:


Bean module, version 1.9:

projects[bean][version] = 1.9


Reroute Email module, specific GIT revision with patch applied:

  projects[reroute_email][download][revision] = f2e3878
  ; Variable integration - http://drupal.org/node/1964070#comment-7294928
  projects[reroute_email][patch][] = http://drupal.org/files/reroute_email-add-variable-module-integration-1964070-2.patch

Note: It is always important to include a version; if you need a development release, use a GIT revision, as otherwise what you build today may be drastically different from what you build tomorrow.


Bootstrap theme, version 3.1-beta2:

  projects[bootstrap][type] = theme
  projects[bootstrap][version] = 3.1-beta2

Note: As the default projects 'type' is set to module, themes need to specify their type. This is a personal choice in the Drush make file configuration, as it is highly likely you will always have more modules than themes.


Backbone library, version 1.1.2:

  libraries[backbone][download][type] = get
  libraries[backbone][download][url] = https://github.com/jashkenas/backbone/archive/1.1.2.zip

Note: As libraries are not (in general) projects hosted on Drupal.org, you need to specify the URL from which the files can be downloaded, or cloned.

More information can be found on the Drush make manual page.


Other changes / hook_update_N()

Sometimes changes don’t fall under the realm of features or makefiles, either due to a module's lack of integration with Features, or when dealing with content rather than configuration. This still needs to be deployable via the codebase, and can be done with the use of a hook_update_N() function.

A hook_update_N() is a magic function that lives in a module's .install file, where hook is the machine name of the module and N is a numeric value, formatted as a 4 digit number in the form of XY##, where X is the major version of Drupal (7), Y is the major version of the module (1) and ## is a sequential value, from 00 to 99.

Example: drupalmel_core_update_7100() / drupalcampmel_core_update_7100()

The contents of a hook_update_N() can be whatever you wish them to be: any Drupal API function or plain PHP.

An example of one such function is:

  /**
   * Assign 'ticket holder' role to ticket holders.
   */
  function drupalcampmel_core_update_7105() {
    $query = new EntityFieldQuery();
    $results = $query->entityCondition('entity_type', 'entityform')
      ->entityCondition('bundle', 'confirm_order')
      ->execute();

    if (!empty($results['entityform'])) {
      $entityforms = entityform_load_multiple(array_keys($results['entityform']));
      foreach ($entityforms as $entityform) {
        $user = user_load($entityform->uid);
        $user->roles[3] = 'ticket holder';
        user_save($user);
      }
    }
  }

For more details, refer to the hook_update_N() API documentation.


Test the changes

Once you’ve made your changes and prepared your features and makefiles, it’s ideal to ensure that everything is working as expected before you push it up to the GIT repo.

This is a multi-step process, but it’s easy enough, especially given that we don’t have a database that we have to worry about.


  1. Take a database dump of your local (development) site; Safety first.
  2. Re-install the site with the relevant install profile:
    • DrupalMelbourne: drush si drupalmel -y
    • DrupalCampMelbourne2015: drush si drupalcampmel -y
  3. Test to ensure your changes are present and working as expected.



Testing your makefile can be a little bit trickier than testing your features, as when you download a module, theme or library there are various places they can be stored, and it’s easy to get a false positive.

  1. Build a --no-core version of the drupal-org.make file into a temporary directory. A --no-core build is exactly what it sounds like, build the makefile excluding Drupal core.

    cd ~/temp
    drush make --no-core --no-gitinfofile ~/Sites/drupalmel-7.x/profiles/drupalmel/drupal-org.make dm-temp
  2. Run a diff/merge tool over the --no-core build’s sites/all directory and your local (development) site’s relevant profile directory (e.g., profiles/drupalmel).

    I personally use Changes on OS X, but there are different free and paid diff/merge tools for different operating systems.
  3. Ensure that all third-party (contrib) code is identical on both sides of the diff/merge, any discrepancies imply that you may be missing an entry in your makefile, or that your local version of the code is located in an incorrect location.

If your changes aren’t working as expected, or something is missing, simply restore your database dump and go back to the Update features and makefiles step.


Fork repository / push changes / pull request

Now that you have made your changes and everything is good to go, it’s time to push those changes back to the repository.

For the sake of a manageable review process, it’s preferable that all changes should be made in a fork with a pull request back to the master repository.

If you’ve only ever lived in the realm of Drupal.org, then this may be an entirely alien process, but it is again relatively straightforward.

Note: If you don’t have a Github account, you will need one. Signup for free at https://github.com/join

  1. Go to the relevant Github repository:
  2. Click the Fork button (top right of the page) and follow the onscreen instructions.
  3. Click the Copy to clipboard button on the clone URL field (in the right sidebar).
  4. Add a new GIT remote to your local (development) site with the copied URL.

    cd ~/Sites/drupalmel-7.x/profiles/drupalmel
    git remote add fork git@github.com:Decipher/drupalmel.git
  5. Commit and push the changes to your fork.
  6. Create a pull request via your Github fork by clicking the Pull request button, providing as much detail as possible of what your changes are.

If all goes well, someone will review your pull request and merge the changes into the relevant website.


The review process

So this is the not-so-community-friendly part of the process; in a perfect world the community should be able to run itself, but Github isn’t necessarily set up this way, nor is Drupal.org. Someone has to specifically approve a pull request. Currently this is only myself and Peter Lieverdink (cafuego).

I’m absolutely open to suggestion on how to improve this, comment below if you have any thoughts on how this could be improved.


The uncommitables

Not everything should be committed, especially in a public repository. A perfect example of something that shouldn’t be committed is an API key.

The DrupalMelbourne website integrates with the Meetup.com API to pull in all DrupalMelbourne Meetups, but exposing the API key to the codebase would open the DrupalMelbourne meetup group to abuse and spam.

To deal with this, API keys and other sensitive items can be included directly on the server or in the database, and placeholders can be used for local development.


Open source your site?

Exposing your website codebase is definitely not the normal practice, and it's absolutely not for everyone. I couldn't imagine trying to convince a client to go down this road. But for a community site, especially a Drupal based community site, it just makes sense. While I wouldn't expect every visitor with Drupal knowledge to volunteer their time to help with the site's development, any who do are 100% more than you'd get otherwise.

Tags: drupal, drupal planet
Categories: FLOSS Project Planets

Modules Unraveled: 142 Why Drupal 8 is the Most Important Product Release in the History of the WCM Market with Tom Wentworth - Modules Unraveled Podcast

Planet Drupal - Wed, 2015-07-29 01:00
Published: Wed, 07/29/15Download this episode

So, Doug Vann emailed the two of us a while ago and said that I should have you on because

"I'm on the record as a huge fan of Tom's for his well educated and well rounded perspective on proprietary and Open Source software solutions. ... I'd be excited to hear Tom interviewed on the topic of how Drupal 8 will continue to erode into the proprietary market."

I thought that sounded good, so here we are!

Web Content Management
  • When you replied, you said that Drupal 8 is the most important product release in the history of the WCM market. Can you start out by explaining what WCM stands for and what qualifies software as a WCM product?
  • You also mentioned that the 2nd most important release was Day Software’s CQ5. What is that?
  • When I hear about Drupal’s competitors, I generally hear about Wordpress and Joomla. Why aren’t either of those number two?
Drupal’s Place in the WCM Market
  • How has Drupal faired in the WCM market so far?
  • What do you see Drupal 8 bringing to the table that sets it apart from other products?
Questions from Twitter
  • Jacob Redding

    • How does Drupal 8 change the comparison with AEM? Specifically what are the features with Drupal 8 that bring Drupal to a more level playing field with AEM? Is there a single specific feature that Drupal does hands down better than AEM?
  • Doug Vann

      • Drupal promotes an "Ownership Society" where Universities, Media Companies, Governments, etc. hire in ​Drupal talent and build sites inhouse. How does D8 impact that trend? Is D8 more for shops and agencies and less for DIYers or is that just F.U.D. talking?​
    • Any Drupaler would state that Drupal has been "disruptive" insofar as we have allowed highly visible sites to ditch their proprietary CMS in favour of Drupal.
      • To date, has that success been "truly disruptive" by your definition?
      • With the astounding advancements baked into D8, are you looking forward to an even more disruptive presence in the CMS playing field?
    • Shops
      • Is Drupal 8 ushering in a new era which will see a fundamental shift in how Drupal is delivered in the areas of customer procurement, engagement, and delivery?
      • To reword that. Are Adobe CQ5 and Sitecore shops operating significantly different than Drupal shops today AND are we going to see Drupal shops retooling and reshaping to a more enterprise looking organization?
      • In The past 18+ months, it seems that more people are willing to ​admit that Drupal 8 is moving Drupal "Up Market." Agencies are often the vendor of choice in those deep waters. Should we expect some more mergers and acquisitions which will ultimately empower agencies to deliver Drupal services inhouse?​
    • ​The little guys
      • Where are the little guys in the D landscape?
      • Do you still see the $10K and the $45K range websites feeding the smaller end of the Drupal ecosystem?
Episode Links: Tom on drupal.orgTom on TwitterDrupal 8 Info PageAcquia BlogAcquia’s Drupal 8 Info PageAcquia’s Ultimate Guide to Drupal 8 (Exclusive direct link!)Tags: Drupal 8planet-drupal
Categories: FLOSS Project Planets

Busy is fun!

Planet KDE - Tue, 2015-07-28 20:17
Bugs Bugs Bugs

The beginning of the day was spent reading some social media over breakfast, catching up with the times. While going through my Google+ feed I saw a post that I had seen before about a bug with a KRunner plugin. The plugin in question was this one, which Riddell, Dan and I debugged to find some more info about the bug, such as that it affects Kubuntu, Arch and openSUSE, so it is upstream related. Riddell provided some info on the bug page to maybe help resolve it later. All in all I learned some debugging stuff, compiling, grabbing source and some very tiny C++.


Then I was off to VDG again to work more on the High Contrast Color scheme for Plasma 5 with Andrew Lake. The VDG started doing some concepts and design for a new secure login for KDE aka SDDM


For lunch we had the cafe in the University that we used yesterday again. For a starter I had Spanish Potato and Tuna Salad with a plate of Grilled Chicken and french fries. Ovidiu really really disliked his choice of squid.

AppStream/Muon and KDE Neon

After lunch Matthias Klumpp talked about his work with Debian/Fedora for AppStream with metadata to be used in Muon Discover, as well as some redesigns for it to work better and look better for everyone. Up next was the long awaited KDE Neon where people from Kubuntu, Red Hat, and others

After Party

View post on imgur.com

Thanks to the GPUL we had an amazing party with actually good music, free food and beer/wine. There was dancing, swapping name badges and having fun in general.


View post on imgur.com

View post on imgur.com

This is a picture of us walking to the mall for the party:

View post on imgur.com

Categories: FLOSS Project Planets

Justin Mason: Links for 2015-07-28

Planet Apache - Tue, 2015-07-28 19:58
  • Taming Complexity with Reversibility

    This is a great post from Kent Beck, putting a lot of recent deployment/rollout patterns in a clear context — that of supporting “reversibility”:

    • Development servers. Each engineer has their own copy of the entire site. Engineers can make a change, see the consequences, and reverse the change in seconds without affecting anyone else.
    • Code review. Engineers can propose a change, get feedback, and improve or abandon it in minutes or hours, all before affecting any people using Facebook.
    • Internal usage. Engineers can make a change, get feedback from thousands of employees using the change, and roll it back in an hour.
    • Staged rollout. We can begin deploying a change to a billion people and, if the metrics tank, take it back before problems affect most people using Facebook.
    • Dynamic configuration. If an engineer has planned for it in the code, we can turn off an offending feature in production in seconds. Alternatively, we can dial features up and down in tiny increments (i.e. only 0.1% of people see the feature) to discover and avoid non-linear effects.
    • Correlation. Our correlation tools let us easily see the unexpected consequences of features so we know to turn them off even when those consequences aren’t obvious.
    • IRC. We can roll out features potentially affecting our ability to communicate internally via Facebook because we have uncorrelated communication channels like IRC and phones.
    • Right hand side units. We can add a little bit of functionality to the website and turn it on and off in seconds, all without interfering with people’s primary interaction with NewsFeed.
    • Shadow production. We can experiment with new services under real load, from a tiny trickle to the whole flood, without affecting production.
    • Frequent pushes. Reversing some changes requires a code change. On the website we are never more than eight hours from the next scheduled code push (minutes if a fix is urgent and you are willing to compensate Release Engineering). The time frame for code reversibility on the mobile applications is longer, but the downward trend is clear from six weeks to four to (currently) two.
    • Data-informed decisions. Data-informed decisions are inherently reversible (with the exceptions noted below). “We expect this feature to affect this metric. If it doesn’t, it’s gone.”
    • Advance countries. We can roll a feature out to a whole country, generate accurate feedback, and roll it back without affecting most of the people using Facebook.
    • Soft launches. When we roll out a feature or application with a minimum of fanfare it can be pulled back with a minimum of public attention.
    • Double write/bulk migrate/double read. Even as fundamental a decision as storage format is reversible if we follow this format: start writing all new data to the new data store, migrate all the old data, then start reading from the new data store in parallel with the old.

    (Thanks to Dave Cleal.) We do a bunch of these in work, and the rest are on the to-do list. +1 to these!

    (tags: software deployment complexity systems facebook reversibility dark-releases releases ops cd migration)

Categories: FLOSS Project Planets

Jim Birch: Googlebot cannot access CSS and JS on your Drupal site

Planet Drupal - Tue, 2015-07-28 19:27

There was a time when search engine bots would come to your site, index the words on the page, and continue on.  Those days are long past.  Earlier this year, we witnessed Google's ability to determine if our sites were mobile or not.  Now, the evolution of the Googlebot continues.

I would say that it was not uncommon for web developers to receive at least a few emails from Google Search Console today.

To: Webmaster...
Google systems have recently detected an issue with your homepage that affects how well our algorithms render and index your content. Specifically, Googlebot cannot access your JavaScript and/or CSS files because of restrictions in your robots.txt file. These files help Google understand that your website works properly so blocking access to these assets can result in suboptimal rankings.

Well, that's a little bit of information that I never thought about before: Google wanting to understand how my "website works", not just the content and structure of it.  Turns out, Google has been working toward this since October of last year.

Update your robots.txt

To allow Googlebot to access your Javascript and CSS files, add a specific User-agent for Googlebot, repeating the rules you already have, and adding the new "Allow" rules.
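For a stock Drupal 7 site, where the default robots.txt disallows directories like /misc/, /modules/, /profiles/ and /themes/ that hold CSS and JavaScript, the addition might look something like this (a sketch only; repeat your site's existing Googlebot rules alongside it and adjust the paths to match your setup):

```text
User-agent: Googlebot
Allow: /misc/*.css
Allow: /misc/*.js
Allow: /modules/*.css
Allow: /modules/*.js
Allow: /profiles/*.css
Allow: /profiles/*.js
Allow: /themes/*.css
Allow: /themes/*.js
```

You can verify the result with the robots.txt tester in Google Search Console.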

Read more

Categories: FLOSS Project Planets

Jean-Baptiste Onofré: Monitoring and alerting with Apache Karaf Decanter

Planet Apache - Tue, 2015-07-28 17:41

Some months ago, I proposed Decanter on the Apache Karaf Dev mailing list.

Today, Apache Karaf Decanter 1.0.0 first release is now on vote.

It’s a good time to do a presentation

Categories: FLOSS Project Planets

End Point: Python string formatting and UTF-8 problems workaround

Planet Python - Tue, 2015-07-28 16:13
Recently I worked on a program which required me to filter hundreds of lines of blog titles. Throughout the assignment I stumbled upon a few interesting problems, some of which are outlined in the following paragraphs.

Non-Roman characters issue
During the testing session I missed one title and, investigating why it happened, I found that it was simply because the title contained non-Roman characters.

Here is the code's snippet that I was previously using:

for e in results:
    simple_author = e['author'].split('(')[1][:-1].strip()
    if freqs.get(simple_author, 0) < 1:
        print parse(e['published']).strftime("%Y-%m-%d"), "--", simple_author, "--", e['title']
And here is the fixed version

for e in results:
    simple_author = e['author'].split('(')[1][:-1].strip().encode('UTF-8')
    if freqs.get(simple_author, 0) < 1:
        print parse(e['published']).strftime("%Y-%m-%d"), "--", simple_author, "--", e['title'].encode('UTF-8')
To fix the issue I faced, I added .encode('UTF-8') in order to encode the characters with the UTF-8 encoding. Here is an example title that would have been otherwise left out:

2014-11-18 -- Unknown -- Novo website do Liquid Galaxy em Português!
Python 2.7 uses ASCII as its default encoding, but in our case that wasn't sufficient to scrape web content, which often contains UTF-8 characters. To be more precise, this program fetches an RSS feed in XML format and in there it finds UTF-8 characters. So when the initial Python code I wrote met UTF-8 characters while ASCII was set as the default encoding, it was unable to identify them and returned an error.

Here is an example of the parsing error it gave us when fetching non-Roman characters while using ASCII encoding:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xea' in position 40: ordinal not in range(128)
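In Python 3 strings are Unicode by default, so the explicit-encode step mainly matters when you need bytes; a minimal sketch of the approach (Python 3 syntax, using the example title from above):

```python
# -*- coding: UTF-8 -*-
# Sketch: explicitly encoding a title that contains non-Roman characters.
title = u"Novo website do Liquid Galaxy em Português!"

encoded = title.encode('UTF-8')   # bytes, safe to write to any byte stream
assert isinstance(encoded, bytes)
assert encoded.decode('UTF-8') == title   # round-trips losslessly
```

With the encoding made explicit, the title survives instead of triggering a UnicodeEncodeError from the implicit ASCII codec.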
Right and Left text alignment
In addition to the error previously mentioned, I also had the chance to dig into several ways of formatting output.
The following format is the one I used as the initial output format:

print("Name".ljust(30) + "Age".rjust(30))
Name                                                     Age
Using the "ljust" and "rjust" methods
I want to improve the readability of the example above by left-justifying "Name" within 30 characters and right-justifying "Age" within another 30.

Let's try with the '*' fill character. The syntax is str.ljust(width[, fillchar])

print("Name".ljust(30, '*') + "Age".rjust(30))
Name**************************                           Age
And now let's add .rjust:

print("Name".ljust(30, '*') + "Age".rjust(30, '#'))
Name**************************###########################Age
Each call counts out a field of 30 characters from the left, including the word "Name" (four characters), and then another 30 characters including "Age" (three letters), giving us the desired output.

Using "format" method
Alternatively, it is possible to achieve the same alignment with the format string method:

print("{!s:{fill}}{!s:>{fill}}".format("Name", "Age", fill=30))
Name                                                     Age
And with the same progression, it is also possible to do something like:

print("{!s:*<{fill}}{!s:>{fill}}".format("Name", "Age", fill=30))
Name**************************                           Age
print("{!s:*<{fill}}{!s:#>{fill}}".format("Name", "Age", fill=30))
Name**************************###########################Age
"format" also offers a way to center text. To place the desired string in the middle of a trail of fill characters, simply use the ^ (caret) character:
print("{!s:*^{fill}}{!s:#^{fill}}".format("Age", "Name", fill=30))
*************Age**************#############Name#############
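Note that in the examples above the keyword argument named fill is actually substituted as the field width. The format-spec mini-language reads fill character, then alignment (`<`, `>`, `^`), then width, so the same layouts can be written with literal widths; a small sketch with widths chosen to match the examples:

```python
# Left-justify with '*' fill, right-justify with '#' fill, and center, via format specs.
left_right = "{:*<30}{:#>30}".format("Name", "Age")
centered = "{:*^30}{:#^30}".format("Age", "Name")
print(left_right)
print(centered)
```

The `^` alignment splits the fill characters as evenly as possible on either side of the string, placing the extra character on the right when the padding is odd.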
For more detail, refer to Python's documentation on Unicode and on the "format" method.
Categories: FLOSS Project Planets

Steve Holden: How to Get Almost All the Python You Might Need

Planet Python - Tue, 2015-07-28 16:08
I am often asked what is the easiest way to build Python environments. For development purposes it's most convenient to have a "batteries included" environment that can be pared down once development is over. Continuum Analytics have for a while offered an open source project called Anaconda to provide easy access to both Python and the batteries.

If you are a Python professional and you get questions from people who would like to try the language, I would suggest the graphical installer as most appropriate. For anyone familiar with the command line, the command-line installer is not only slightly smaller but also allows the scripted creation of disposable Python environments, which you can activate just by adding their bin subdirectory to the start of your path.

The video should explain this in detail.

In the video I create a disposable environment under /tmp. Anaconda environments are far heavier than a standard virtual environment, but they offer a huge variety of Python libraries without ever having to understand the complexities of package installation.
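The "activate by PATH" idea can be sketched in shell. The environment location and the conda command below are illustrative, not the video's exact commands:

```shell
# Hypothetical: a disposable environment created under /tmp, e.g. with
#   conda create -p /tmp/demo-env python
ENV_DIR=/tmp/demo-env
mkdir -p "$ENV_DIR/bin"

# "Activate" it by putting its bin subdirectory first on PATH
export PATH="$ENV_DIR/bin:$PATH"
echo "$PATH" | cut -d: -f1
```

Dropping the environment is then just deleting the directory and restoring PATH.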

[For a more selective installation utility see How to Get the Bits of Python You Need Where You Want Them]

Categories: FLOSS Project Planets

Midwestern Mac, LLC: Nginx Load Balancer Visualization on a Raspberry Pi Cluster

Planet Drupal - Tue, 2015-07-28 13:03

After some more tinkering with the Raspberry Pi Dramble (a cluster of 6 Raspberry Pis used to demonstrate Drupal 8 deployments using Ansible), I finally was able to get the RGB LEDs to react to Nginx accesses—meaning every time a request is received by Nginx, the LED toggles to red momentarily.

This visualization allows me to see exactly how Nginx is distributing requests among the servers in different load balancer configurations. The default (not only for Nginx, but also for Varnish, HAProxy, and other balancers) is to use round-robin distribution, meaning each request is sent to the next server. This is demonstrated first, in the video below, followed by a demonstration of Nginx's ip_hash method, which pins one person's IP address to one backend server, based on a hash of the person's IP address:
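The two balancing modes differ by a single line in the upstream block; a hedged sketch (the server addresses are placeholders, not the Dramble's real IPs):

```nginx
upstream drupal_backend {
    # Default is round-robin: each request goes to the next server in turn.
    # Uncommenting ip_hash pins each client IP to one backend instead.
    # ip_hash;
    server 10.0.1.61;
    server 10.0.1.62;
    server 10.0.1.63;
}

server {
    listen 80;
    location / {
        proxy_pass http://drupal_backend;
    }
}
```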

Categories: FLOSS Project Planets

Jonathan Dowland: Sound effect pitch-shifting in Doom

Planet Debian - Tue, 2015-07-28 13:02

My previous blog posts about deterministic Doom proved very popular.

The reason I was messing around with Doom's RNG was that I was studying how early versions of Doom performed random pitch-shifting of sound effects, a feature that was removed early in Doom's history. By fixing the random number table and replacing the game's sound effects with a one-second sine wave tuned to middle C, I was able to determine the upper and lower bounds of the pitch shift.

Once I knew that, I was able to write some patches to re-implement pitch shifting in Chocolate Doom, which I'm pleased to say have been accepted. The patches have also made their way into the related projects Crispy Doom and Doom Retro.

I'm pleased with the final result. It's the most significant bit of C code I've ever released publicly, as well as my biggest Doom hack and the first time I've ever done any audio manipulation in code. There were loads of other notes and bits of code that I produced in the process. I've put them together on a page here: More than you ever wanted to know about pitch-shifting.

Categories: FLOSS Project Planets

Mediacurrent: Mediacurrent Dropcast: Episode 8

Planet Drupal - Tue, 2015-07-28 13:01

This episode we welcome Shellie Hutchens, Mediacurrent’s Marketing Director, to talk about upcoming webinars and the fact that Mediacurrent is hiring. Ryan picked Stage File Proxy as the Module of the Now. We discuss our first non-Drupal article, from Four Kitchens, about Saucier (pronunciation TBD). Mark stumbles through some D8 news, and of course we finish off with some great conversation during Ryan’s Final Bell.

Categories: FLOSS Project Planets

Drupal Association News: Drupal Association Board Meeting: July 22, 2015

Planet Drupal - Tue, 2015-07-28 12:37

Here we go again! It's your monthly summary of all things board meeting at the Drupal Association. This month we covered board governance (there's a seat opening up), the D8 Accelerate campaign, and the Association strategic frame. Plus, as a bonus, the board approved the Q2 financials for publication. As always, if you want to catch up on all the details, you can find everything you need to know about the meeting online, including minutes, materials, and a recording. If you're just here for a summary view, read on!

  • Meeting Minutes
  • Related Materials
  • Video Recording

Board governance

Angie Byron's term on the board is going to be up this fall, and she has expressed her desire not to renew that term. We're going to be very sad to see Angie go, but thrilled that she will have one less hat to talk about when explaining which hat she is wearing at any given point during your next meeting with her. Seriously - she's brought so much thoughtfulness and passion to the board. She's not leaving us yet (her term expires 10/31), but our Governance Committee will be working with the Nominations Committee to recruit candidates and help the board make the next selection.

D8 Accelerate

As I write these words there are just 10(!) release blockers standing between us and a release candidate for Drupal 8. Part of the momentum this year has come from Drupal 8 Accelerate. We've made over 40 grants, worth more than $120,000 so far. That's helped us close nearly 100 issues, addressing some really important features, like a beta to beta upgrade, security bugs, and performance. If you're curious about what's getting funded, you can always see the full list. And, we're getting close to reaching our goal - we've raised $223,000. You can help us reach our $250,000 goal by making a donation today!

Drupal Association Strategic Frame

Why are we doing the work we do? Because everyone at the Association wants to have a positive impact for Drupal. The best way for us to have an impact is to pick a few goals that we are going to focus on achieving. The Association board used their January retreat to set some 3-5 year goals for the Association:

  • To develop sufficient professionals to meet global demand for Drupal
  • To lead the community in focused, efficient, effective development of Drupal
  • To ensure the sustainability of the Drupal project and community
  • To increase Drupal adoption in target markets
  • To increase the strength and resilience of the Drupal Association

We've been working since then to select the right strategies and objectives (1 year to 18 month time frame) for our work. You can see the directions we're headed in the presentation we shared. It's important to note that we expect to revisit our strategies and objectives on a quarterly basis to adjust as we go. The world of Drupal moves fast, and we need to as well. So, although we are setting 12 to 18 month objectives, we will be adjusting the frame much more frequently, and won't be sticking with objectives that we find don't really support the work.

2015 Q2 Financials

And in the most exciting news of all, the second quarter financials were approved by the board. You can always find whatever financials have been released in the public financials folder. If you have never taken a look at the financials before, I recommend it. Although I tease about them being boring, I love financial statements! A while back, I wrote up a post about how to read our financial statements. I also like pointing out that each Con has its own tab in our financial statements, so you can see exactly how that money comes in, and where it is spent.

See you next time!

And that's it for this summary. But, if you have questions or ideas, you can always reach out to me!

Flickr photo: Joeri Poesen

Categories: FLOSS Project Planets

Caktus Consulting Group: Using Unsaved Related Models for Sample Data in Django 1.8

Planet Python - Tue, 2015-07-28 11:54

Note: In between the time I originally wrote this post and it getting published, a ticket and pull request were opened in Django to remove allow_unsaved_instance_assignment and move validation to the model save() method, which makes much more sense anyway. It's likely this will even be backported to Django 1.8.4. So, if you're using a version of Django that doesn't require this, hopefully you'll never stumble across this post in the first place! If this is still an issue for you, here's the original post:

In versions of Django prior to 1.8, it was easy to construct "sample" model data by putting together a collection of related model objects, even if none of those objects was saved to the database. Django 1.8 - 1.8.3 adds a restriction that prevents this behavior. Errors such as this are generally a sign that you're encountering this issue:

ValueError: Cannot assign "...": "MyRelatedModel" instance isn't saved in the database.

The justification for this is that, previously, unsaved foreign keys were silently lost if they were not saved to the database. Django 1.8 does provide a backwards compatibility flag to allow working around the issue. The workaround, per the Django documentation, is to create a new ForeignKey field that removes this restriction, like so:

class UnsavedForeignKey(models.ForeignKey):
    # A ForeignKey which can point to an unsaved object
    allow_unsaved_instance_assignment = True

class Book(models.Model):
    author = UnsavedForeignKey(Author)

This may be undesirable, however, because this approach means you lose all protection for all uses of this foreign key, even if you want Django to ensure foreign key values have been saved before being assigned in some cases.

There is a middle ground, not immediately obvious, that involves changing this attribute temporarily during the assignment of an unsaved value and then immediately changing it back. This can be accomplished by writing a context manager to change the attribute, for example:

import contextlib

@contextlib.contextmanager
def allow_unsaved(model, field):
    model_field = model._meta.get_field(field)
    saved = model_field.allow_unsaved_instance_assignment
    model_field.allow_unsaved_instance_assignment = True
    yield
    model_field.allow_unsaved_instance_assignment = saved

To use this context manager, surround any assignment of an unsaved foreign key value with it as follows:

with allow_unsaved(MyModel, 'my_fk_field'):
    my_obj.my_fk_field = unsaved_instance

The specifics of how you access the field to pass into the context manager are important; any other way will likely generate the following error:

RelatedObjectDoesNotExist: MyModel has no instance.

While strictly speaking this approach is not thread safe, it should work for any process-based worker model (such as the default "sync" worker in Gunicorn).
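If the thread-safety caveat matters in your deployment, one option is to serialize the flag flip behind a lock and restore it in a finally block so an exception in the body can't leave the flag stuck on. A sketch; the lock and the helper name are mine, not part of Django, and the `_meta.get_field` call mirrors the context manager above:

```python
import contextlib
import threading

_unsaved_lock = threading.Lock()  # hypothetical module-level lock

@contextlib.contextmanager
def allow_unsaved_locked(model, field):
    # Only one thread at a time may toggle the shared field attribute
    with _unsaved_lock:
        model_field = model._meta.get_field(field)
        saved = model_field.allow_unsaved_instance_assignment
        model_field.allow_unsaved_instance_assignment = True
        try:
            yield
        finally:
            # Restore the original value even if the body raises
            model_field.allow_unsaved_instance_assignment = saved
```

The try/finally also fixes a subtle wart in the simpler version: without it, an exception during assignment would leave the relaxed flag in place for all later uses of the field.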

This took a few iterations to figure out, so hopefully it will (still) prove useful to someone else!

Categories: FLOSS Project Planets

François Dion: Linux, not chmod, groups!

Planet Python - Tue, 2015-07-28 11:41
In English

I was reviewing my logs, and I was surprised to see that every week hundreds of people were visiting this page (even though it was posted well over 2 years ago): http://raspberry-python.blogspot.com/2012/09/pas-chmod-groupes.html
This is great if you can read french, but if not... here's a translation.
Groups

The use of chmod to resolve permission issues appears regularly on forums and in tutorials for Linux.
This is quite a risky proposition, particularly if you are very permissive (chmod 777 !!)
Or another option I've seen is to simply use sudo.
By simply mastering one thing, most of the time you can make this problem go away: using groups.
For example, if we have something like this:
fdion@raspberrypi ~ $ ls -al /dev/fb0
crw-rw---- 1 root video 29, 0 Dec 31 1969 /dev/fb0

Permissions on this file are defined as: c (character device); owner (root): read and write, no execute; group (video): read and write, no execute; everybody else: no access.
If I want my Python script running under my fdion account, or really any program I run, to read and write the framebuffer (any program using SDL would be a candidate), I only need to do:
usermod -a -G video fdion

This adds fdion to the video group.

That is it. While on the topic of SDL, I would recommend doing at a minimum:
sudo usermod -a -G video fdion
sudo usermod -a -G audio fdion
sudo usermod -a -G input fdion

This way you'll be able to use the video, audio, mouse etc.
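Note that group changes only take effect at your next login; you can check which groups your current session actually has with `id` (shown for the current user, rather than a specific account):

```shell
# List the groups of the current session; a freshly added group
# will not appear here until you log out and back in (or use newgrp).
id -nG
```

If the new group isn't listed yet, log out and back in before testing your program.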

Francois Dion
Categories: FLOSS Project Planets

InternetDevels: The ABC of Creating Reliable Backups in Drupal

Planet Drupal - Tue, 2015-07-28 11:28

Find some useful tips on Drupal website data security from our guest blogger Jack Dawson, founder of Big Drop Inc.

In Drupal web development, there are a number of things you can do to ensure a superior user experience and consistency, as well as to save the webmaster time and pain later. First, you’ll need to keep Drupal’s theme structure in mind and plan how you intend to draft your content, in order to take advantage of Drupal’s best aspects and make the site efficient.

Read more
Categories: FLOSS Project Planets

بايثون العربي (Arabic Python): Port Scanning in Python

Planet Python - Tue, 2015-07-28 11:26

In this lesson we will create a simple program that performs a full scan of a computer's ports, without installing any external libraries, relying only on the built-in libraries and modules.

Before we begin, let me go over a few basic definitions from the networking world, so the topic is accessible to beginners and advanced readers alike.

Port: a place through which information is received and sent; each program has its own port so that things stay organized (more information).

Port scanning: the process of sending a series of packets to a given port to find out whether it is open, and what services the host offers, and thus to probe that port by gathering information about, and vulnerabilities of, the program using it.

Port sweeping: scanning a single listening port across many machines at the same time, usually in order to look for a specific service (for example, imagine a malicious SQL worm sweeping many computers in search of a machine listening on port 1433).

TCP/IP: all operations and designs on the Internet are built on the Internet protocol suite, usually called TCP/IP. In this system, all devices and service providers rely on two basic elements: addresses and port numbers, and there are more than 6,000 ports that can be used.

Some scanners scan only the well-known ports, while others scan the ports famous for their vulnerabilities.

I'll stop with the definitions here, since there isn't room for more (if I let myself go, I'd never finish with networking), but you can find out more by searching for "port scanning".

Now back to Python and our program for scanning the machine's ports. I'll post the full code of the program first, and then explain each part.

# -*- coding: UTF-8 -*-
import socket
import sys, os

# Clear the screen
os.system("clear")

max_port = 5000
min_port = 1

host = input("Enter the host to scan: ")
host_ip = socket.gethostbyname(host)

# Print a nice header with some information about the host
print("-" * 60)
print("Please wait, scanning the following address: ", host_ip)
print("-" * 60)

try:
    for port in range(min_port, max_port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        result = sock.connect_ex((host_ip, port))
        if result == 0:
            print("Port %d : \t Open " % (port))
        sock.close()
except KeyboardInterrupt:
    print("You pressed Ctrl+C")
    print("\n\n[*] User Requested An Interrupt.")
    print("[*] Application Shutting Down.")
    sys.exit()
except socket.gaierror:
    print('Hostname could not be resolved. Exiting')
    sys.exit()
except socket.error:
    print("Couldn't connect to server")
    sys.exit()

# Print some more information on screen
print("[*] Have a nice day !!!! ... From http://Pyarab.com ")

That's the program's code: no more than 40 lines, and it could have been shorter, but a few lines were added purely for clarity.

Now let's explain the code line by line, so we understand how the process works.

Note: the line # -*- coding: UTF-8 -*- declares the file's encoding so we don't run into problems, especially since I used some Arabic strings; the program was tested on Ubuntu. You can drop this line if you remove the Arabic strings.

Part one:

import socket
import sys, os

In this part we import all the libraries and modules our program will need.

The first is the socket library; it is essential, and we can't do without it.

The next two are the os and sys modules, respectively; they are not essential and could be dropped, but we use them to tidy up the program.

I could have imported other libraries to polish the program, such as the time module to measure the scan's duration, but these libraries are enough to start with.

Part two:

# Clear the terminal
os.system("clear")

This part is purely cosmetic and certainly not required; its only job is to clear all the previous lines from the terminal and start the program at the top of the screen.

Screenshot of the terminal before running the program

Screenshot of the terminal after running the program

Part three:

max_port = 5000
min_port = 1
host = input("Enter the host to scan: ")
host_ip = socket.gethostbyname(host)

At this stage I define two variables: the first is the highest port number the program can scan, and the second is the lowest port number, at which the scan will start.

Then I define another variable whose value is the name (domain name or IP address) of the host or server we will scan, and a fourth variable whose value is the IP address of the domain chosen in the third variable.

Part four:

# Print a nice header with some information about the host
print("-" * 60)
print("Please wait, scanning the following address: ", host_ip)
print("-" * 60)

This part just draws the program's header (not an essential step).

Header at the top of the program showing the IP address

Part five:

try:
    for port in range(min_port, max_port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        result = sock.connect_ex((host_ip, port))
        if result == 0:
            print("Port %d : \t Open " % (port))
        sock.close()

This is the most important part of the program, since the scan itself happens at this stage.

I use the range function to generate a sequence of numbers from the lowest port to the highest inside a for loop; the resulting sequence is the set of port numbers the program will scan.

On the second line I create the socket object, which is needed to make the connection, since the operation goes over the network (more information about sockets).


Before I discuss the next line, you should know that port scanning can sometimes take a long time, especially over the Internet, for several reasons (load, security measures, etc.).

Since the process can drag on, I use this line to tell the program: try each port for at most half a second (the 0.5 passed to settimeout); if it doesn't respond, consider the port dead and move on to the next one.

Then I create another variable whose job is to connect to the address and port, and store the result of the connection attempt. We then check the variable's value: if it is zero, the value all Linux programs use for a successful execution, the port is open; if there is no connection, the value will be nonzero, and the port is skipped.

Then (when the connection succeeds) the program prints every open port, showing its number with the word "Open" next to it.
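This return-value convention is easy to verify in isolation; a minimal sketch probing one port on the local machine (the port number is an arbitrary example):

```python
import socket

# Probe a single TCP port; connect_ex returns 0 on success,
# otherwise an errno value instead of raising an exception.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(0.5)
result = sock.connect_ex(("127.0.0.1", 65000))
sock.close()
print("open" if result == 0 else "closed (errno %d)" % result)
```

Because connect_ex reports failures as a return code rather than an exception, the scan loop stays simple: no per-port try/except is needed.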

Part six:

except KeyboardInterrupt:
    print("You pressed Ctrl+C")
    print("\n\n[*] User Requested An Interrupt.")
    print("[*] Application Shutting Down.")
    sys.exit()
except socket.gaierror:
    print('Hostname could not be resolved. Exiting')
    sys.exit()
except socket.error:
    print("Couldn't connect to server")
    sys.exit()

This part handles the possible errors, whether from the user or from the network. If the user wants to quit the program with Ctrl+C, a message tells them they pressed those keys, along with another message saying the program is shutting down.
If the server we want to connect to doesn't exist, or is offline, another message says so.

Before wrapping up, I'll leave you with a screenshot from my machine after trying the program against the Google search engine.

The port-scanning program

That brings us to the end of this tour; I hope you enjoyed it and found it useful. If you have any questions, please leave them in the comments.

Categories: FLOSS Project Planets

LevelTen Interactive: A Simple Entity Data API for Module Builders

Planet Drupal - Tue, 2015-07-28 11:26

Entity Data is a handy little API that makes module builders' lives easier. If you need to build a module that adds functionality and data to an entity, no longer will you have to implement your own CRUD and export/import support.

A module builder's dilemma

Fields are a powerful way to add data to Drupal entities. However, sometimes fields can be rather cumbersome, particularly when you want to add something, and thus attach fields, to entities that already exist.... Read more

Categories: FLOSS Project Planets