FLOSS Project Planets

Mass edit your tasks with t_medit

Planet KDE - Fri, 2016-05-27 19:02

If you are a Yokadi user, or if you have used other todo list systems, you have probably encountered the situation where you want to quickly add a set of tasks to a project. Using Yokadi you would repeatedly type t_add <project> <task title>. History and auto-completion on command and project names make entering tasks faster, but it is still slower than the good old TODO file where you just write down one task per line.

t_medit is a command to get the best of both worlds. It takes the name of a project as an argument and starts the default editor with a text file containing a line for each task of the project.

Suppose you have a "birthday" project like this:

yokadi> t_list birthday
birthday
ID|Title               |U |S|Age |Due date
-----------------------------------------------------------------
1 |Buy food (grocery)  |0 |N|2m  |
2 |Buy drinks (grocery)|0 |N|2m  |
3 |Invite Bob (phone)  |0 |N|2m  |
4 |Invite Wendy (phone)|0 |N|2m  |
5 |Bake a yummy cake   |0 |N|2m  |
6 |Decorate living-room|0 |N|2m  |

Running t_medit birthday will start your editor with this content:

1 N @grocery Buy food
2 N @grocery Buy drinks
3 N @phone Invite Bob
4 N @phone Invite Wendy
5 N Bake a yummy cake
6 N Decorate living-room

By editing this file you can do a lot of things:

  • Change task titles, including adding or removing keywords
  • Change task status by changing the character in the second column to S (started) or D (done)
  • Remove tasks by removing their lines
  • Reorder tasks by reordering lines; this changes the task urgencies so that the tasks are listed in the defined order
  • Add new tasks by entering them prefixed with -
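As a rough illustration of the line format described above, here is a small parser sketch. This is a hypothetical illustration, not Yokadi's actual parsing code; the function name and the exact grammar are assumptions based on the examples in this post:

```python
import re

# Hypothetical sketch of the t_medit line format:
#   "<id> <status> <title>" for existing tasks,
#   "- [<status>] <title>" for new tasks (status defaults to N).
LINE_RE = re.compile(r'^(?:(?P<id>\d+)|-)\s+(?:(?P<status>[NSD])\s+)?(?P<title>.+)$')

def parse_medit_line(line):
    match = LINE_RE.match(line.strip())
    if match is None:
        raise ValueError('unparseable line: %r' % line)
    task_id = int(match.group('id')) if match.group('id') else None
    status = match.group('status') or 'N'
    return task_id, status, match.group('title')
```

For example, parse_medit_line('- S Decorate table') would yield a new started task titled "Decorate table".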

Let's say you modify the text like this:

2 N @grocery Buy drinks
1 N @grocery Buy food
3 D @phone Invite Bob
4 N @phone Invite Wendy & David
- @phone Invite Charly
5 N Bake a yummy cake
- S Decorate table
- Decorate walls

Then Yokadi will:

  • Give the "Buy drinks" task a more important urgency because it moved to the first line
  • Mark the "Invite Bob" task as done because its status changed from N to D
  • Change the title of task 4 to "@phone Invite Wendy & David"
  • Add a new task titled: "@phone Invite Charly"
  • Remove task 6 "Decorate living-room"
  • Add a started task titled: "Decorate table" (note the S after -)
  • Add a new task titled: "Decorate walls"

You can even quickly create a project. For example, if you want to plan your holidays you can type t_medit holidays. This creates the "holidays" project and opens an empty editor. Just type new tasks, one per line, prefixed with -. When you save and quit, Yokadi creates the tasks you entered.

One last bonus: if you use Vim, Yokadi ships with a syntax highlighting file for t_medit.

This should be in the upcoming 1.1.0 version, which I plan to release soon. If you want to play with it earlier, you can grab the code from the git repository. Hope you like it!

Categories: FLOSS Project Planets

Drupal Association News: Reorganizing for a changing Drupal

Planet Drupal - Fri, 2016-05-27 18:00
Serving Drupal’s opportunity

The release of Drupal 8 creates many opportunities for organizations worldwide to build something amazing for complex web solutions, mobile, SaaS, the Internet of Things, and so much more. The Drupal Association is excited to work with the community to create these opportunities.

In our mission to support the Drupal Project, the Association unites our global open source community to build and promote Drupal. We do this primarily by using our two main resources: Drupal.org, the center of our community’s interactions, with 2 million unique visitors a month; and DrupalCon, which hosts over 6,000 attendees a year and provides the critical in-person acceleration of ideas.

Both foster the contribution journey that makes amazing software, and the evaluator’s adoption journey that encourages people to use Drupal across industries to create amazing things. As I mentioned in my recent blog post, achieving our mission helps the community thrive into the future and realize their Drupal dream.

With the release of Drupal 8, we have an opportunity to reflect on how the Association leverages these assets to work for Drupal’s current and future opportunities. Working with our board of directors, we determined that the Association needs to:

  • Re-assess the Project’s needs, and find new ways to support and meet those needs
  • Address a structural issue, to be a more sustainable organization

To do this, the Drupal Association board and I made hard choices. Having invested heavily in supporting the Drupal 8 release and exhausting available reserves, we recognize that the Association now must right-size the organization and balance our income with our expenses. The biggest impact is the elimination of seven positions, reducing our staff size from 25 to 17 employees. Also as part of this reduction, we have reorganized staff to better address the Project’s needs now that Drupal 8 is released.

While we do have our eye on a bright future for the Project through these changes, we’re also painfully aware that we’re not just eliminating positions. We’re saying goodbye to seven people who are important to us—whose contributions we value more than we can describe. We’re impacting the lives of people we care about—people who’ve given a lot to the Project and to others in our community.

Making the Drupal Association sustainable

In early 2014, the Association began investing reserves in building an engineering team for two main reasons: to address critical issues that were slowing down the production of Drupal 8, and to modernize Drupal.org. In doing so, we purposefully created a structural deficit, with the hopes that we could grow revenue to meet the cost of this investment before we drew down our reserves.

Because of this investment, we were able to accelerate the release of Drupal 8 through a roadmap of features like semantic versioning, DrupalCI (continuous integration testing for the projects we host), better search and discovery capabilities, numerous issue queue improvements, and issue credits, all of which positively impacted the release of Drupal 8. In addition, the engineering team has addressed years of technical debt and incorporated more modern services in the site that have made it more reliable and faster around the world.

While revenue grew from 2014 to 2015 by 14%, it didn't grow enough. Last year, we acknowledged that we did not meet the revenue goals that would sustain this investment. We addressed it with a retrenchment designed to extend our runway and see if we could increase revenue sufficiently. All told, while we have accomplished both revenue diversification and growth, it wasn’t enough to fully replace the investment. Then in spring 2016, several things happened on the revenue front that created a significant budget gap:

  • Sponsored work: The Association funded Engineering resources by accepting sponsored work to build Composer endpoints for Drupal projects. After that project was completed, we were unable to line up an additional sponsored project to continue underwriting the Engineering team.
  • The Connect Program: This new experimental program designed to connect software companies with service providers for partnership and integration opportunities did not meet its revenue goals.
  • DrupalCon: DrupalCon New Orleans ticket sales did not reflect the increase we were expecting this year, and we have revised our DrupalCon Dublin ticket sales projections accordingly.

"CAGR" means compound annual growth rate.
2016 data is projected revenue and expenses.

Addressing this structural deficit required a reduction of both labor and non-labor expense. Labor is our biggest cost, and we can’t create alignment without cutting roles at the Association. Holly Ross, our Executive Director, Josh Mitchell, CTO, and Matthew Tsugawa, CFO, offered to step down and contribute their salaries to the reduction, as they saw that a smaller organization doesn’t require a full leadership team. Additionally, we are losing three staff members from the Engineering team, one from the Events team, and one from the MarComm team. We are working with these staff members to help them through their transition.

Our second biggest expense is rent. We are working to eliminate the physical office in Portland, Oregon—moving staff to a virtual, distributed team—but those efforts will likely not introduce savings until 2017. We already work with distributed staff and community members around the world, so we have the know-how and tools like Slack and Zoom in place to support this change when it happens.

While these staff reductions are painful today, they correct the structural problem, bringing expenses in line with income. We have conservatively reforecasted revenue to reflect any impact this staffing reduction may have. We can see with our forecasts that the layoffs result in the Association being on healthy financial ground in 2017.

What happens next?

Leading up to now, we invested in tooling to help the community release Drupal 8. Now that Drupal 8 has shipped, the Project has new needs, which are:

  • Promote Drupal 8 to grow adoption
  • Sustain Drupal.org so the community can continue to build and release software

Drupal.org is our strongest channel for promoting Drupal, given that it’s the heart of the community and organically attracts hundreds of thousands of technical decision makers. It provides the biggest opportunity to guide evaluators through an adoption journey and amplify Drupal’s strength in creating new business opportunities through solutions like “DevOps and Drupal” or “Drupal for Higher Education.” These new services on Drupal.org will help evaluators, create value for our partners, and increase revenue for the Drupal Association.

We can also use Drupal.org to better promote DrupalCon. It’ll help grow ticket sales and attract more community members to that special week of in-person interaction, accelerating their adoption and contribution journeys.

Additionally, we’ll expand our efforts to attract more evaluators to DrupalCon. We can accelerate their adoption journey through peer networking and programming that helps them understand how Drupal is the right solution for their organization. We do this today with our vertical-specific Summits (like the Higher Education Summit) and we can do more through relevant sessions and other special programming. And while the Drupal evaluators are there, we’ll connect them with Drupal agencies who can help them realize their Drupal vision.

One thing about our work won’t change: our commitment to the tools you use to build Drupal every day. Though the Engineering team is smaller after today, they will make sure the tools and services you need to build and release the software are supported. That includes things like the issue queues, testing, security updates, and packaging.

Right now, we’re focused on the team as we go through this transition. Once the transition is complete, we’ll be looking at the Project needs and making sure we align our work accordingly. When we make changes, we’ll be sure to keep the community updated so you know what our primary focus is and how we are working towards our vision of Drupal 8 adoption across many sectors.

In the meantime, I invite you to tell me your thoughts on this new focus and how the Drupal Association can best help you.

Categories: FLOSS Project Planets

Jeff Geerling's Blog: Ensuring Drush commands run properly using Drush 8.x via Acquia Cloud Hooks

Planet Drupal - Fri, 2016-05-27 16:56

Any time there are major new versions of software, some of the tooling surrounding the software requires tweaks before everything works like it used to, or as it's documented. Since Drupal 8 and Drush 8 are both relatively young, I expect some growing pains here and there.

One problem I ran into lately was quite a head-scratcher: On Acquia Cloud, I had a cloud hook set up that was supposed to do the following after code deployments:

# (Assumption: Acquia Cloud passes the site name and target environment
# as the first two arguments to post-code-deploy hook scripts.)
site="$1"
target_env="$2"

# Build a Drush alias (e.g. [subscription].[environment]).
drush_alias="${site}.${target_env}"

# Run database updates.
drush @${drush_alias} updb -y

# Import configuration from code.
drush @${drush_alias} cim vcs

This code (well, with fra -y instead of cim) works fine for some Drupal 7 sites I work on in Acquia Cloud, but here the database updates were detected but never run, and configuration changes were detected but never made. It took a little time to see what was happening, but I eventually figured it out.

The tl;dr fix?

Categories: FLOSS Project Planets

Kubuntu Party 4 – The Gathering of Halflings

Planet KDE - Fri, 2016-05-27 16:09

Come and join us for a most excellent Gathering of Halflings at Kubuntu Party 4, Friday 17th June 19:00 UTC.

The party theme is all about digging out those half-finished projects we’ve all got lying around our geekdoms and bringing them along for a Show ‘n’ Tell. As ever, there will be party fun and games and an opportunity to kick back from all the contributing that we do, so join us and enjoy good company and laughter.

Our last party, Kubuntu Party 3, proved to be another success, with further improvements and refinements on the previous Kubuntu Party.

New to the Kubuntu Party scene? Fear not, my intrepid guests: new friendships are but a few clicks away. Check out our previous story trail.

The lessons learned from party 2 had been implemented in party 3. Our main focus is on our guests and their topics of conversation. We didn’t try to incorporate too many things, but simply let things flow and develop un-conference style. We kept to our plan of closing the party at 22:00 UTC, with a 30-minute over-run to allow people to finish up. This worked really well, and the feedback from the guests was really positive. For the next party we will tighten this over-run further, to 15 minutes.

We had fun discussing many aspects of computing, including of course lots about Kubuntu. As the party progressed we got into a keyboard geek war, with various gaming keyboards, Bluetooth devices and some amazing backlighting. However, there was simply nothing to compete with the Bluetooth laser-projected keyboard and mouse that Jim produced; it was awesome!

We also had great fun playing with an IRC-controlled Sphero robot, a project that Rick Timmis has been working on. The party folks got the chance to issue various motion and lighting commands to the Sphero spherical robot, and party goers were able to watch the robot respond via Rick’s webcam in Big Blue Button.

Rick said

“It was also awesome seeing that brightly coloured little ball dashing back and forth at the behest of the party revelers.”

It all got rather surreal when Marius broke out his VR Headset, a sophisticated version of the Google Cardboard. The headset enabled Marius to place one of his many (and I mean bags full) of mobile devices in the headset aperture, and vanish into an immersive 3D world.

What are you waiting for? Book the party in your diary now.

Friday 17th June 19:00 UTC.

Details of our conference server will be posted to #kubuntu-podcast on irc.freenode.net at 18:30 UTC. Or you can follow us on Google+ Kubuntu Podcast and check in on the events page.


Categories: FLOSS Project Planets

NEWMEDIA: Build robust forms in Drupal 8

Planet Drupal - Fri, 2016-05-27 14:00
Tanner J. Ferguson - Fri, 05/27/2016 - 18:00

Over the last few Drupal releases, the Webform module has been the standard for creating robust forms and surveys. While this venerable module has served the community’s needs quite well, major releases of Drupal often afford the opportunity to take a fresh look at how common problems are solved, leveraging new technologies and concepts introduced in the release.

Baked into Core

Since Drupal 4.6, Drupal Core has shipped with a basic contact form module that had limited functionality. Finally, the contact module got some much-needed attention in Drupal 8. Contact forms are now fieldable entities, allowing us to build forms with the same fields we build content types, taxonomies, and other entities with.

Building Out the Form

Forms are created and managed by navigating to Structure->Contact Forms in the Admin menu. From here, choose “Add contact form.”


This takes us to a form for setting the name of the form, email addresses for submissions to be sent to, and optionally an auto-reply message to the submitter. Once saved, we are taken back to the Contact Forms admin page. 

This gives us a basic form with Sender Name and Email, a Subject field, and a basic text area for a Message. To add fields to our new form, we need to select the “Manage Fields” option in the Operations dropdown. From here, we can add any of the field types available on the site.

Form Display

To customize how the form is displayed, we want to select the “Manage Form Display” option in the Operations dropdown. This will allow us to change the order of the fields for the form, change configurations for each field, and allow us to disable any fields that are provided by default that we don’t want to use.

Manage Display

Similar to Form Display, if we want to change the order and display of fields in the submission emails, the “Manage Display” option will allow us to reorder or hide fields in the submission email.

Submission Storage and Export

Everything we’ve covered so far is great if we want to build out a form and start getting submissions by email. However, if we want to save and view submissions in the site or want to export the submissions in bulk, we need to look to some contributed modules to fill in the gaps.

Contact Storage Module

As its name implies, the Contact Storage module addresses the need for a central location from which content editors can review and manage form submissions on the site. The module also provides Views integration as well as some additional customization options for our forms. The default configuration provides these features for us, so we can install the module and start benefitting from it immediately.

Submission Exports

We now have robust forms and a place to centrally store their submissions, with Views giving us the ability to build out lists of submissions. What we’re still missing at this point is a way to download the submissions in bulk, and it’s fairly common to want such an export in a format like CSV that can be loaded into a spreadsheet application. To achieve that, we can put our Views integration to use, along with Drupal 8’s REST Module, and the CSV Serialization module. 

Once these modules are installed, create a new view of Contact Messages and check the “Create a REST Export” option, providing the path we will visit to trigger the export. Then hit Save and Edit to continue configuring the view.

In the format section of the view configuration page, we see the format is set to “Serializer”. Here, we want to configure the settings for Serializer, and select the “csv” format.

At this point we have a working view that will export all submissions as a CSV. We can leave the view set to show content as “Entity,” which will export all fields for the submission, or we can switch the display to “Fields,” which will allow us to specify the fields we want in the export, and how they are formatted.

With exports now provided by Views, we can create custom exports for specific forms, or we can utilize Exposed Filters and Contextual Filters to provide an export that works for all forms, allowing users to choose how they want the export filtered.

More Form Solutions in Contrib

If we need to provide robust survey forms now, the approach covered here is currently the most stable and ready to implement. If this solution doesn’t meet your use case, it might be worth taking a look at eForm, the Drupal 8 version of the Entityform module introduced in Drupal 7. There is also still some discussion of a Drupal 8 port of the Webform module, so it’s possible with enough interest we could have a few different solutions for providing front-facing forms to end users.

While building forms will be a bit different in Drupal 8 compared to previous versions, the experience is more in line with what we’ve come to expect from building Content Types and other fieldable entities. This provides the opportunity for more flexibility and functionality when building front-facing forms, and the Views integration provides the opportunity to present and export the submitted form data just the way we need.

Categories: FLOSS Project Planets

Continuum Analytics News: Taking the Wheel: How Open Source is Driving Data Science

Planet Python - Fri, 2016-05-27 10:25
Company Blog - Posted Friday, May 27, 2016
Travis Oliphant, Chief Executive Officer & Co-Founder

The world is a big, exciting place—and thanks to cutting-edge technology, we now have amazing ways to explore its many facets. Today, self-driving cars, bullet trains and even private rocket ships allow humans to travel anywhere faster, more safely and more efficiently than ever before. 

But technology's impact on our exploratory abilities isn't just limited to transportation: it's also revolutionizing how we navigate the Data Science landscape. More companies are moving toward Open Data Science and the open source technology that underlies it. As a result, we now have an amazing new fleet of vehicles for our data-related excursions. 

We're no longer constrained to the single railroad track or state highway of a proprietary analytics product. We can use hundreds of freely available open source libraries for any need: web scraping, ingesting and cleaning data, visualization, predictive analytics, report generation, online integration and more. With these tools, any corner of the Data Science map—astrophysics, financial services, public policy, you name it—can be reached nimbly and efficiently. 

But even in this climate of innovation, nobody can afford to completely abandon previous solutions, and traditional approaches remain viable. Fortunately, graceful interoperability is one of the hallmarks of Open Data Science. In appropriate scenarios, it accommodates the blending of legacy code or proprietary products with open source solutions. After all, sometimes taking the train is necessary and even preferable.

Regardless of which technology teams use, the open nature of Open Data Science allows you to travel across the data terrain in a way that is transparent and accessible for all participants.

Data Science in Overdrive

Let's take a look at six specific ways Open Data Science is propelling analytics for small and large teams.

1. Community. Open Data Science prioritizes inclusivity; community involvement is a big reason that open source software has boomed in recent years. Communities can test out new software faster and more thoroughly than any one vendor, accelerating innovation and remediation of any bugs.

Today, the open source software repository GitHub is home to more than 5 million open source projects and thousands of distinct communities. One such community is conda-forge, a group of developers who build infrastructure and packages for conda, a general cross-platform and cross-language package manager with a large and growing number of data science packages available. Considering that Python is the most popular language in computer science classrooms at U.S. universities, open source communities will only continue to grow.

2. Innovation. The Open Data Science movement recognizes that no one software vendor has all the answers. Instead, it embraces the large—and growing—community of bright minds that are constantly working to build new solutions to age-old challenges.

Because of its adherence to free or low-cost technologies, non-restrictive licensing and shareable code, Open Data Science offers developers unparalleled flexibility to experiment and create innovative software.

One example of the innovation that is possible with Open Data Science is taxcalc, an Open Source Policy Modeling Center project publicly available via TaxBrain. Using open source software, the project brought developers from around the globe together to create a new kind of tax policy analysis. This software has the computational power to process the equivalent of more than 120 million tax returns, yet is easy to use and accessible to private citizens, policy professionals and journalists alike.

3. Inclusiveness. The Open Data Science movement unites dozens of different technologies and languages under a single umbrella. Data science is a team sport and the Open Data Science movement recognizes that complex projects require a multitude of tools and approaches.

This is why Open Data Science brings together leading open source data science tools under a single roof. It welcomes languages ranging from Python and R to FORTRAN and it provides a common base for data scientists, business analysts and domain experts like economists or biologists.

What's more, it can integrate legacy code or enterprise projects with newly developed code, allowing teams to take the most expedient path to solve their challenges. For example, with the conda package management system, developers can create conda packages from legacy code, allowing integration into a custom analytics platform with newer open source code. In fact, libraries like SciPy already leverage highly optimized legacy FORTRAN code. 

4. Visualizations. Visualization has come a long way in the last decade, but many visualization technologies have been focused on reporting and static dashboards. Open Data Science, however, has unveiled intelligent web apps that offer rich, browser-based interactive visualizations, such as those produced with Bokeh. Visualizations empower data scientists and business executives to explore their data, revealing subtle nuances and hidden patterns.

One visualization solution, Anaconda's Datashader library, is a Big Data visualizer that plays to the strengths of the human visual system. The Datashader library—alongside the Bokeh visualization library—offers a clever solution to the problem of plotting an enormous number of points in a relatively limited number of pixels. 

Another choice for data scientists is the D3 Javascript library, which exploded the number of visual tools for data. With wrappers for Python and other languages, D3 has prompted a real renaissance in data visualization.

5. Deep Learning. One of the hottest branches of data science is deep learning, a sub-segment of machine learning based on algorithms that work to model data abstractions using a multitude of processing layers. Open source technology, such as that embraced by Open Data Science, is critical to its expansion and improvement.

Some of the new entrants to the field, all of which are now open source, are Google's TensorFlow project, Berkeley's Caffe deep learning framework, Microsoft's Computational Network Toolkit (CNTK), Amazon's Deep Scalable Sparse Tensor Network Engine (DSSTNE), Facebook's Torch framework and Nervana's Neon. These products enter a field with established participants like Theano, whose Lasagne extension allows easy construction of deep learning models.

These are only some of the most interesting frameworks. There are many others, which is a testament to the lively and burgeoning Open Data Science community and its commitment to sharing ideas, paving the way for further innovation.

6. Interoperability. Traditional, proprietary data science tools typically integrate well only with their own suite. They’re either closed to outside tools or provide inferior, slow methods of integration. Open Data Science, by contrast, rejects these restrictions, allowing diverse tools to cooperate and interact in ever more closely connected ways.

Anaconda, for example, includes open source distributions of the Python and R languages, which interoperate very well together, enabling data scientists to use the technologies that make sense for them. A business analyst might start with Excel, then work with predictive models in R and later fire up Tableau for data visualizations. Interoperable tools speed analysis, eliminate the need to switch between multiple toolsets and improve collaboration.

It's clear that open source tools will lead the charge towards innovation in Data Science and many of the top technology companies are moving in this direction. IBM, Microsoft, Google, Facebook, Amazon and others are all joining the Open Data Science revolution, making their technology available with APIs and open source code. This benefits technology companies and individual developers, as it empowers a motivated user base to improve code, create new software and use existing technologies in new contexts.

That's the power of open source software and inclusive Open Data Science platforms like Anaconda. Thankfully, today's user-friendly languages—like Python—make joining this new future easier than ever.

If you're considering open source for your next data project, now’s the time to grab the wheel. Join the Open Data Science movement and shift your analyses into overdrive.

Categories: FLOSS Project Planets

Mike Driscoll: Python 201: An Intro to importlib

Planet Python - Fri, 2016-05-27 10:00

Python provides the importlib package as part of its standard library of modules. Its purpose is to provide the implementation to Python’s import statement (and the __import__() function). In addition, importlib gives the programmer the ability to create their own custom objects (AKA an importer) that can be used in the import process.

What about imp?

There is another module called imp that provides an interface to the mechanisms behind Python’s import statement. This module was deprecated in Python 3.4. It is intended that importlib should be used in its place.

This module is pretty complicated, so we’ll be limiting the scope of this article to the following topics:

  • Dynamic imports
  • Checking if a module can be imported
  • Importing from the source file itself

Let’s get started by looking at dynamic imports!

Dynamic imports

The importlib module supports the ability to import a module that is passed to it as a string. So let’s create a couple of simple modules that we can work with. We will give both modules the same interface, but have them print their names so we can tell the difference between the two. Create two modules with different names such as foo.py and bar.py and the following code in each of them:

def main():
    print(__name__)

Now we just need to use importlib to import them. Let’s look at some code to do just that. Make sure that you put this code in the same folder as the two modules you created above.

# importer.py

import importlib


def dynamic_import(module):
    return importlib.import_module(module)


if __name__ == '__main__':
    module = dynamic_import('foo')
    module.main()

    module_two = dynamic_import('bar')
    module_two.main()

Here we import the handy importlib module and create a really simple function called dynamic_import. All this function does is call importlib’s import_module function with the module string that we passed in and returns the result of that call. Then in our conditional statement at the bottom, we call each module’s main method, which will dutifully print out the name of the module.

You probably won’t be doing this a lot in your own code, but occasionally you’ll find yourself wanting to import a module when you only have the module as a string. The importlib module gives us the ability to do just that.
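One detail worth knowing: import_module also accepts dotted paths, so the same technique works for submodules. A small sketch:

```python
import importlib

# Import a standard library submodule by its dotted path.
os_path = importlib.import_module('os.path')

print(os_path.join('tmp', 'example.txt'))
```

The returned object is the submodule itself, equivalent here to importing os.path directly.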

Module Import Checking

Python has a coding style that is known as EAFP: Easier to ask for forgiveness than permission. What this means is that it’s often easier to just assume that something exists (like a key in a dict) and catch an exception if we’re wrong. You saw this in our previous chapter where we would attempt to import a module and we caught the ImportError if it didn’t exist. What if we wanted to check and see if a module could be imported rather than just guessing? You can do that with importlib! Let’s take a look:
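First, as a point of contrast, a bare EAFP import check might look like this minimal sketch (the module name here is made up and assumed not to be installed):

```python
# EAFP: just try the import and handle the failure, rather than
# checking beforehand whether the module exists.
try:
    import some_fake_module
except ImportError:
    some_fake_module = None

# some_fake_module is None when the import failed.
print(some_fake_module is None)
```

The importlib-based check below performs the same test without raising (or catching) an exception.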

import importlib.util


def check_module(module_name):
    """
    Checks if module can be imported without actually importing it
    """
    module_spec = importlib.util.find_spec(module_name)
    if module_spec is None:
        print('Module: {} not found'.format(module_name))
        return None
    else:
        print('Module: {} can be imported!'.format(module_name))
        return module_spec


def import_module_from_spec(module_spec):
    """
    Import the module via the passed in module specification
    Returns the newly imported module
    """
    module = importlib.util.module_from_spec(module_spec)
    module_spec.loader.exec_module(module)
    return module


if __name__ == '__main__':
    module_spec = check_module('fake_module')
    module_spec = check_module('collections')
    if module_spec:
        module = import_module_from_spec(module_spec)
        print(dir(module))

Here we import a submodule of importlib called util. The check_module code has the first piece of magic that we want to look at. In it we call the find_spec function with the module string that we passed in. First we pass in a fake name and then we pass in the real name of a Python module. If you run this code, you will see that when you pass in a module name that is not installed, the find_spec function returns None and our code prints out that the module was not found. If the module is found, we return its module specification.

We can take that module specification and use it to actually import the module. Or you could just pass the string to the import_module function that we learned about in the previous section. But we already covered that, so let’s learn how to use the module specification. Take a look at the import_module_from_spec function in the code above. It accepts the module specification that was returned by check_module. We then pass that into importlib’s module_from_spec function, which returns the imported module. Python’s documentation recommends executing the module after importing it, so that’s what we do next with the exec_module function. Finally we return the module and run Python’s dir against it to make sure it’s the module we expect.

Import From Source File

The util submodule of importlib has another neat trick that I want to cover. You can use util to import a module using just its name and file path. The following is a rather contrived example, but I think it will get the point across:

import importlib.util


def import_source(module_name):
    module_file_path = module_name.__file__
    module_name = module_name.__name__

    module_spec = importlib.util.spec_from_file_location(
        module_name, module_file_path)
    module = importlib.util.module_from_spec(module_spec)
    module_spec.loader.exec_module(module)
    print(dir(module))

    msg = 'The {module_name} module has the following methods:' \
          ' {methods}'
    print(msg.format(module_name=module_name, methods=dir(module)))


if __name__ == '__main__':
    import logging
    import_source(logging)

In the code above, we actually import the logging module and pass it to our import_source function. Once there, we grab the module’s actual path and its name. Then we pass those pieces of information into util’s spec_from_file_location function, which will return the module’s specification. Once we have that, we can use the same importlib mechanisms that we used in the previous section to actually import the module.


At this point, you should have an idea of how you might use importlib and import hooks in your own code. There is a lot more to this module than what is covered in this article, so if you have a need to write a custom importer or loader then you’ll want to spend some time reading the documentation and the source code.

Related Readings

Categories: FLOSS Project Planets

Third & Grove: The One and Only entity_metadata_wrapper!

Planet Drupal - Fri, 2016-05-27 10:00
The One and Only entity_metadata_wrapper! miro Fri, 05/27/2016 - 10:00
Categories: FLOSS Project Planets

Wiki, what’s going on? (Part 2)

Planet KDE - Fri, 2016-05-27 09:48


Hello everybody,

I’m here to give you some updates on our work with the WikiToLearn community and, since I like this idea, I was thinking that “Wiki, what’s going on?” could become a regular section of our blog where we give updates on our work and things like that.


So, let’s start: recently the promo team had a (mini) sprint where some important features were discussed. Mainly we focused on participation: both on how to involve new users and on how to strengthen the community structure. In my opinion, this is an extremely important task to accomplish and I appreciate all the efforts we are making to come up with an effective organization. We came up with cool ideas, such as local groups, “wiki-thons” and others, but now it’s time to test them. The editors group is working hard as well: reviewing contents, defining an internal structure and writing new contents are our daily work. Actually I’m not part of the tech group, so I cannot give you any updates beyond the certainty that our guys work day after day to improve the user experience, to build a better and ever more reliable infrastructure, and to solve other problems that I lack the expertise to talk about.


That’s everything for today. I’d like to conclude this post with something I am pretty sure about: if you ask “Wiki, what’s going on?”, “working hard to improve” can be a good answer!




The article Wiki, what’s going on? (Part 2) appeared first on Blogs from WikiToLearn.

Categories: FLOSS Project Planets

Mediacurrent: Friday 5: 5 Tips for Improving Your Site&#039;s SEO

Planet Drupal - Fri, 2016-05-27 09:22

We made it to the finish line of another busy work week!

Thanks for joining us for Episode 9 of The Mediacurrent Friday 5. This week, Community Lead Damien McKenna discusses 5 Tips for Improving Your Site's SEO with the one and only Mark Casias.

Categories: FLOSS Project Planets

Mike Gabriel: MATE 1.14 landing in Debian unstable...

Planet Debian - Fri, 2016-05-27 09:11

I just did a bundle upload of all MATE 1.14 related packages to Debian unstable. Packages are currently building for the 23 architectures supported by Debian; the build status can be viewed on the DDPO page of the Debian MATE Packaging Team [1].


Again a big thanks to the packaging team. Martin Wimpress again did a fabulous job in bumping all packages towards the 1.14 release series during the last weeks. During last week, I reviewed his work and uploaded all binary packages to a staging repository.

Also a big thanks to Vangelis Mouhtsis, who recently added more hardening support to all those MATE packages that do some sort of C compilation at build time.

After testing all MATE 1.14 packages on a Debian unstable system, I decided to do a bundle upload today. Packages should be falling out of the build daemons within the next couple of hours/days (depending on the architecture being built for).

GTK2 -> GTK3

The greatest change for this release of MATE to Debian is the switch over from GTK2 to GTK3.

People using the MATE desktop environment on Debian systems are invited to test the new MATE 1.14 packages and give feedback via the Debian bug tracker, esp. on the user experience regarding the switch over to GTK3.

Thanks to all who help getting MATE 1.14 in Debian better every day!!!

Known issues when running in NXv3 sessions

The new GTK3 build of MATE works fine locally (against local X.org server). However, it causes some trouble (i.e. graphical glitches) when running in an NXv3 based remote desktop session. Those issues have to be addressed by me (while wearing my NXv3 upstream hat), I guess (sigh...).


[1] https://qa.debian.org/developer.php?login=pkg-mate-team@lists.alioth.deb...

Categories: FLOSS Project Planets

Ned Batchelder: Coverage.py 4.1

Planet Python - Fri, 2016-05-27 08:50

Coverage.py 4.1 is out!

I'm not sure what else to say about it that I haven't said a few times in the last six months: branch coverage is completely rewritten, so it should be more accurate and more understandable.

The beta wasn't getting much response, so I shielded my eyes and released the final version a few days ago. No explosions, so it seems to be OK!

Categories: FLOSS Project Planets

KDE Partition Manager 2.2.0

Planet KDE - Fri, 2016-05-27 08:16

KDE Partition Manager and KPMcore 2.2.0 are now released with proper LUKS support! This is a fairly big feature release but it also got tested more than usual, so a lot of bugs were fixed (including some crashes). Unfortunately there is still one reproducible crash (bug 363294) on exit when file open/save dialogs are used (and very similar crashes actually exist in some other KDE programs, e.g. kdebugdialog or Marble). If anybody has any idea how to fix it I would be grateful.

Changes in this release:
  • Much improved LUKS support. We used to just detect LUKS container.
    • Now KDE Partition Manager can create LUKS volumes and format the inner file system. Since default options are used (except for the key size, which was increased) we recommend cryptsetup 1.6 or later. At the moment we restrict the choice of new inner file systems to ext234, Btrfs, swap, ReiserFS, Reiser4, XFS, JFS, ZFS and LVM physical volumes when formatting new encrypted partitions, but if you create other file systems manually using command line tools they will still work in KDE Partition Manager. (Other than detection support for LVM PV, support for LVM is not implemented, but this might change soon as a result of a GSoC project; there is already some LVM PV resize support in git master.) If you think it makes sense to whitelist other file systems and there is a valid use case, please leave a comment.
    • LUKS volumes can be opened/closed.
    • Resize support for filesystems encrypted with LUKS (obviously you can’t do this while LUKS volume is closed, you have to decrypt it first). To the best of my knowledge, no other partition manager can do this.
    • To prevent data loss, you can only move LUKS partitions that are closed. A few bugs were fixed in KDE Partition Manager to properly support unmovable but resizeable partitions (i.e. LUKS when it is closed).
    • Filesystems inside LUKS can be checked for errors, mounted, labels can be set, etc. All other stuff like free space reporting also works (and space taken up by LUKS metadata is taken into the account).
    • Opened LUKS partitions now cannot be removed, you have to close them first.
    • Copying LUKS partition works but only when they are closed.
  • More widespread use of C++11 features.
  • Fixed a couple of bugs introduced during the KF5 port. Also the new Qt5 signal/slot syntax is used, so moc is needed much less often.
  • Clobbering (deleting file system signature) deleted partitions was fixed.
  • Some other bugs were fixed, e.g. NILFS2 resizing support was fixed.
  • ntfslabel from ntfs-3g is now used for setting NTFS labels.
  • A crash when partitions were deleted (mainly extended but not only) was fixed.
  • Compilation with Clang was fixed.

There is also a slightly older (e.g. now we use KPasswordDialog to unlock LUKS partitions) video demonstrating LUKS support.

Note for packagers: Calamares 2.2.2 will most likely work with KPMcore 2.2.0 after recompilation but Calamares 2.3 will be recommended as soon as it is released. Older versions of KDE Partition Manager are not compatible with KPMcore 2.2.0, so you need to update KPMcore and KDE Partition Manager at the same time. Qt 5.6.1 also fixes one minor NTFS bug in KPMcore but unfortunately it is not released yet.

Download links:

KPMcore 2.2.0

KDE Partition Manager 2.2.0

There are already packages for Arch and Gentoo, hopefully other distros will package it too.

Categories: FLOSS Project Planets

Valuebound: Drupal 8 Commerce is on the Way! DrupalCon New Orleans 2016.

Planet Drupal - Fri, 2016-05-27 07:13

A lot of thanks to the Commerce Guys for contributing the Drupal Commerce module to the Drupal community, which took Drupal to a different level in the CMS world. It’s very exciting that Commerce 2.x, the Drupal 8 version of Drupal Commerce, is on the way. Like any other Drupal developer / architect, I am also excited about Commerce 2.x.

Thank God, I was one of the fortunate ones to attend the Commerce Guys’ session at DrupalCon New Orleans 2016, the very first session after the release of the ‘8.x-2.0-alpha4’ version of Drupal Commerce. It was an amazing session, which made a lot of things clearer; a lot of unanswered questions were answered…

Categories: FLOSS Project Planets

Mahmoud Hashemi: Running from software

Planet Python - Fri, 2016-05-27 07:11

So while PyCon 2016 starts in less than 48 hours, some kind of anticipation compelled me to polish off the last of the talks from last year. For some reason I went for a keynote. I'm not typically a keynote attendee, and this time I'd missed something big.1

Jacob Kaplan-Moss, the herald of Django, really laid something out. I'll give you the short version, but here's a video in case you want a look:

To summarize, Jacob sets out to explain why mediocrity is acceptable. Bell curves rule everything around us. He holds up his record as a middling ultramarathon runner as proof. He surmises that lack of passion for work is leading people to feel untalented. This, combined with "brilliant asshole" programmers, is shaming people out of the industry. He wraps up with a message of inclusivity, especially toward women. Now, you can probably make sense of any other details with the slides.

Above all, Jacob and I are in complete agreement with his opening and closing. If you consider yourself an average programmer, that is fine and probably better than the alternatives. Also, as a field, software must continue reaching out to and integrating more underrepresented groups, especially women.

That said, I'm not sure how one could have put more missteps between those two points.2

The 10x Programmer

If Jacob makes one thing clear from the keynote, it's that years of being called a 10x programmer has made him very uncomfortable. He rejects the concept, as many have. Now I, too, have at various points been called a rockstar, ninja, and 10xer, and even though I also don't identify with those labels, I will tell you that the 10x programmer is very real.3

Every 10x programmer I know spends most days as a 1x something else. Most 10x code is the result of observing and accumulating 10x more domain knowledge, then being in the right place at the right time. You do what ten developers off the street could never. I've been there, and I have the commits to prove it. And when other aspects of my life take priority, I'm an average programmer, focusing on my job and its share of 1x work.

10x programming is a matter of insight and inspiration, confidence and autonomy. This is a circumstance so unique that it creates an obligation to teach software to the world. You never know when the right 1x programmer is going to be in the right place to transform their surroundings with a 10x moment. Many of the most creative people I know understand very little about programming, and one can't help but wonder what programming skills or insight might bring to their process.

The great thing about Python is that you can teach so much programming with so little overhead. You give those highly creative people even a taste of programming and it opens up vast opportunities. Even just the shared vocabulary is a huge boost to cross-pollination of ideas between disciplines.

Look at Python use among biologists, neuroscientists, and other academics and analysts. Their amazing results speak volumes. Yet by strict accounts their programming level wilts next to experienced Python systems engineers working at YouTube, PayPal, Dropbox, Continuum Analytics, etc.

It's inexcusable to put such a diverse group on this single bell curve when their goals and disciplines are so different. Our language is the same and our cultures are mutually beneficial. Seeing people measured along this single dimension keeps me up at night.

Putting it all in terms of employment is harmful. Maximizing employee utilization only creates more 1x programming. Software is more than the industry of churning out code. A programmer is more than someone who is paid to write software. A person is more than their profession.

The Privilege

It's said that the surest sign of privilege is ignorance. Jacob drives this all the way home, but not for lack of trying.

From the beginning of the talk, he considers the immediate situation. He disclaims most of his reputation, describes his origins as unremarkable, and points out that his biggest contributions weren't actually his. Later on in the talk, while showcasing the face of the privileged programmer, the 10x archetype, the person most likely to be able to ride on their identity, he shares a chuckle at his own resemblance.

Moving into Jacob's running-programming analogy, the anecdote got off to a false start, but just kept going. Nobody stopped him to point out that by virtue of simply being an ultra-runner, he is the top tier. If you're in the 68th percentile of ultrarunners, then you're in the top 1% of people who run, period. Even finishing a normal marathon faster than the median time demonstrates talent and tremendous physical gifts.

Jacob trimmed the y-axis, measured himself among the top tier, and found himself only slightly better than mediocre. The sort of guilt-inducing behavior that he claims leads people to leave the field, unfolding right on stage.

The Corporatism

Throughout the talk, Jacob cites some statistics. The one that stuck with me was about an impending employment deficit. The U.S. government projects 1.5 million unfilled programming jobs in the year 2020. This becomes a central motivation for Jacob encouraging people to go into software4. Programming is immediately linked to coding for money.

Jacob says software is a skill, like any other. Programming is like running marathons. Individuals are responsible for their own training. But Jacob bears a message of hope: bosses will pay you to run, even if you're not the fastest.

Too many managers are like Jacob, subtly redirecting the creative potential of software into commodity labor. "We" need as many people as possible to learn and teach programming because a small portion of society has decided to gamble money on software eating everything in a very particular way.

On the contrary, people need exposure to programming for its fundamental concepts. Software offers new ways of decomposing problems and creating solutions, new approaches that are necessary to understand an increasingly fast-paced and connected world. That is totally irrespective of employment. Software design is a new way of thinking, for all people, employed as programmers or not.

In short

Jacob is a much better runner than he gives himself credit for, but programming is not running.

Software is much more than an industry. You don't need a programming job to be a good programmer.

Which brings me back to rephrasing the good parts we agree on, one doesn't need to be a good programmer to make a difference with software. Recognizing this, it follows that it's best for all of us to accept and support programmers of all walks and skill levels.

  1. Suffice it to say, I'm already signed up for PyCon 2016 

  2. Dear Jacob, if you are reading this, I just wanted to say no harsh feelings. It was a moving talk and I'm sure that most people got the good messages that bookended the talk. I hope you don't mind the criticism and still find it as interesting as you mentioned on stage. Hope it helps with future keynotes, and I'll be right here if you have any followups. 

  3. This also came up in Episode #54 of Talk Python to Me, while discussing my course, Enterprise Software with Python

  4. "The US Bureau of Labor Statistics estimates that by 2020 there will be a 1.5 million programming job gap, which means there will be that many jobs unfilled. That's in five years. The EU has published similar numbers, 1.2 million in 2018—three years. That means we need to be doing something to get more people into our industry." 

Categories: FLOSS Project Planets

PyCharm: Meet the PyCharm Team at PyCon 2016

Planet Python - Fri, 2016-05-27 05:56

May 28th – June 5th, the JetBrains PyCharm Team will be in Portland, Oregon for PyCon 2016. As usual, we sponsor the event and will have a JetBrains PyCharm booth in the Expo Hall during the main conference days.

The show represents a great opportunity to meet a large part of the PyCharm Team, learn about current developments, watch a live demo or just say hi. We invite you to stop by our booth with your questions and chat about your experiences with PyCharm and other JetBrains tools. We will be raffling PyCharm licenses so be sure to register and grab some of our cool giveaways.

PyCharm team members are going to attend two very important satellite events: Python Language Summit (May 28th) and Python Education Summit (May 29th). Andrey Vlasovskikh will give a talk on PEP 484 and Type Hinting adoption. Dmitry Filippov and Anna Morozova will be at Education Summit to talk about PyCharm Edu and current JetBrains educational initiatives. If you’ll be there, feel free to find and chat with us about latest trends in the Python world.

Also, some of our developers will join the Development Sprints (June 2nd – 5th). Would you like to join us or invite us to your own sprint? Come to our booth to discuss things!

We’re looking forward to meeting you at PyCon!

-PyCharm team


Categories: FLOSS Project Planets

Pronovix: Brightcove Video Connect for Drupal 8 - Part 3: Video & Playlist Management

Planet Drupal - Fri, 2016-05-27 05:23

Part 3 of this 4-part blog series illustrates the management of Videos & Playlists and discusses some of the changes compared to the Drupal 7 version of the module - especially the new video listing interface (with a similar layout to Brightcove’s own Studio interface) and the autocompleting tags feature.

Categories: FLOSS Project Planets

OpenLucius: 12 Cool Drupal modules for site builders | May 2016

Planet Drupal - Fri, 2016-05-27 05:22

Without further ado, here is what struck me about Drupal modules in the past month:

1. Hide submit button
Categories: FLOSS Project Planets

Patrick Matthäi: Packages updates from may

Planet Debian - Fri, 2016-05-27 05:18

There are some news on my packaging work from may:

  • OTRS
    • I have updated it to version 5.0.10
    • Also I have updated the jessie backports version from 5.0.8 to 5.0.10
    • I have to test the new issue #825291 (database update with Postgres fails UTF-8 Perl error), maybe someone has got an idea?
  • needrestart
    • Thanks to Thomas Liske (upstream author) for addressing almost all open bugs and wishes from the Debian BTS and GitHub. Version 2.8 fixes 6 Debian bugs
    • Already available in jessie-backports :)
  • geoip-database
    • As usual package updated and uploaded to jessie-backports and wheezy-backports-sloppy
  • geoip
    • Someone here interested in fixing #811767 with GCC 6? I was not able to fix it
    • .. and if it compiles, the result segfaults :(
  • fglrx-driver
    • I have removed the fglrx-driver from the Debian sid/stretch repository
    • This means that fglrx in Debian is dead
    • You should use the amdgpu driver instead :)
  • icinga2
    • After some more new upstream releases I have updated the jessie-backports version to 2.4.10 and it works like a charm :)
Categories: FLOSS Project Planets

EuroPython: EuroPython 2016 Keynote: Naomi Ceder

Planet Python - Fri, 2016-05-27 05:13

We are pleased to announce our fourth keynote speaker for EuroPython 2016: Naomi Ceder.

About Naomi Ceder

Naomi Ceder has been learning, teaching, using, and talking about Python since 2001. She is the author of the Quick Python Book, 2nd edition and has served the Python community in various ways, including as an organizer for PyCon US and a member of the PSF Board of Directors. Naomi is also the co-founder of Trans*Code, a UK based hack day focusing on trans issues.

She speaks about her own experiences of marginalization with the hope of making the communities she loves more diverse and welcoming for everyone. In her spare time she enjoys knitting and deep philosophical conversations with her dogs.

The Keynote: Come for the Language, Stay for the Community

While Python the language is wonderful, the Python community and the personal, social, and professional benefits that flow from involvement in a community like ours are often more compelling.

“Learn about the goals of the Python Software Foundation and how everyone can take part to help build even better Python communities locally, regionally, and globally.  I will also discuss some of our strengths as a community, and also look at some of the challenges we face going forward.”

With gravitational regards,

EuroPython 2016 Team

Categories: FLOSS Project Planets
Syndicate content