FLOSS Project Planets

A. Jesse Jiryu Davis: Announcing PyMongo 2.8 and Motor 0.4

Planet Python - Thu, 2015-01-29 19:26

I'm delighted to announce that Bernie Hackett released PyMongo 2.8 yesterday, and I released Motor 0.4 today. They support features in the upcoming MongoDB release, which will be named MongoDB 3.0.

More information about PyMongo 2.8 is in my post about the release candidate from November.

Categories: FLOSS Project Planets

Stéphane Wirtel: Python & PostgreSQL, A Wonderful Wedding at PythonFOSDEM 2015

Planet Python - Thu, 2015-01-29 19:00

Dear Python Lovers,

I will present "Python & PostgreSQL, a Wonderful Wedding" to the PythonFOSDEM 2015.

You can find my slides in French on SpeakerDeck but, of course, I will translate them into English asap.


Python and PostgreSQL, two tools we like to use for our projects, but do you know everything about them? The talk will give an overview of psycopg2, SQLAlchemy, Alembic, and PL/Python, libraries that can be used with PostgreSQL to improve the life of the Python developer.

Full Description

Python and PostgreSQL, two tools we like to use for our projects but do you know everything about them?

The talk will give an overview of psycopg2, SQLAlchemy, Alembic, and PL/Python, libraries that can be used with PostgreSQL.

  • psycopg2, the well-known connector: this basic component is really useful, well documented, battle-tested, and used by the most famous toolkits of the Python ecosystem.
  • SQLAlchemy, a Python SQL toolkit and Object Relational Mapper: you can use this library to create your models and interact with them.
  • Alembic, a lightweight database migration tool for use with SQLAlchemy, allows you to create migration scripts for your project.
  • PL/Python, a procedural language for PostgreSQL, allows you to write functions in the Python language.
  • Multicorn, a framework for writing Foreign Data Wrappers in Python
Categories: FLOSS Project Planets

Chris Hostetter: Hey, You Got Your Facets in My Stats! You Got Your Stats In My Facets!!

Planet Apache - Thu, 2015-01-29 18:14

Solr has supported basic “Field Facets” for a very long time. Solr has also supported “Field Stats” over numeric fields for (almost) as long. But starting with Solr 5.0 (building off of the great work done to support Distributed Pivot Faceting in Solr) it will now be possible to compute Field Stats for each Constraint of a Pivot Facet. Today I’d like to explain what the heck that means, and how it might be useful to you.


“Field Faceting” is hopefully a fairly straightforward concept to most Solr users. For any query, you can also ask Solr’s FacetComponent to compute the top “terms” from a field of your choice, and return those terms along with the cardinality of the subset of documents that match that term.

To consider a trivial little example: if you have a bunch of documents representing “Books” and you do a query for books about “Crime”, you can then tell Solr to facet on the author field, and Solr might tell you that it found 1024 books matching the query q=Crime, and that of those books the most commonly found author is “Kaiser Soze”, who has written 42 of them. If you subsequently filter your results with fq=author:"Kaiser Soze" you should only get 42 results.

http://localhost:8983/solr/books/select?q=Crime&facet=true&facet.field=author

  ...
  "facet_counts":{
    "facet_queries":{},
    "facet_fields":{
      "author":[
        "Kaiser Soze",42,
        "James Moriarty",37,
        "Carmine Falcone",25,
  ...

Stats
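Solr returns each facet field as a flat list alternating term and count, which is easy to mishandle when consuming the response. A minimal sketch (pure Python, using a made-up response dict that mirrors the output above) of pairing terms with their counts:

```python
# Solr's facet_fields format is a flat [term, count, term, count, ...] list;
# pair adjacent elements to recover (term, count) tuples in ranked order.
response = {  # trimmed, hypothetical response for q=Crime
    "facet_counts": {
        "facet_queries": {},
        "facet_fields": {
            "author": [
                "Kaiser Soze", 42,
                "James Moriarty", 37,
                "Carmine Falcone", 25,
            ],
        },
    },
}

flat = response["facet_counts"]["facet_fields"]["author"]
facets = list(zip(flat[::2], flat[1::2]))
# facets -> [("Kaiser Soze", 42), ("James Moriarty", 37), ("Carmine Falcone", 25)]
```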

“Field Stats” is a feature of Solr many users may not be very familiar with. It’s a way to instruct Solr to use the StatsComponent to compute some aggregate statistics against a numeric field for all documents matching a query. The set of statistics supported are:

  • min
  • mean
  • max
  • sum
  • count (number of unique values found in the field for these docs)
  • missing (number of documents in the result set that have no value in this field)
  • stddev (standard deviation)
  • sumOfSquares (Intermediate result used to compute stddev, not useful for most users)

So to continue our previous example: When doing your search for q=Crime you can tell Solr you want to compute stats over the price field and look at the min, mean, max, and stddev values to get an idea of how expensive books about Crime are.

http://localhost:8983/solr/books/select?q=Crime&stats=true&stats.field=price

  ...
  "stats":{
    "stats_fields":{
      "price":{
        "min":12.34,
        "max":57.65,
        "mean":34.56,
  ...

You Got Your Facets In My Stats!
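For intuition about what the StatsComponent computes, the same aggregates can be reproduced client-side. A minimal sketch (plain Python, with made-up prices; key names mirror Solr's stats response):

```python
import math

def field_stats(values, num_docs):
    """Compute StatsComponent-style aggregates over a numeric field.

    `values` are the field values found across the matching documents;
    `num_docs` is the total number of matching documents (used for
    `missing`). Key names mirror Solr's stats response.
    """
    count = len(values)
    total = sum(values)
    sum_of_squares = sum(v * v for v in values)
    # stddev can be derived from sum, sumOfSquares, and count alone,
    # which is why Solr tracks sumOfSquares as an intermediate result.
    stddev = math.sqrt((sum_of_squares - total * total / count) / (count - 1))
    return {
        "min": min(values),
        "max": max(values),
        "sum": total,
        "count": count,
        "missing": num_docs - count,
        "mean": total / count,
        "sumOfSquares": sum_of_squares,
        "stddev": stddev,
    }

stats = field_stats([12.34, 19.95, 57.65], num_docs=4)
```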

From the very beginning of its existence, the StatsComponent has offered some rudimentary support for generating “sub-facets” over a field using the stats.facet param. This generated a simplistic list of facet terms and computed the stats over each subset. To continue our earlier example, the results might look something like this….

http://localhost:8983/solr/books/select?q=Crime&stats=true&stats.field=price&stats.facet=author

  ...
  "stats":{
    "stats_fields":{
      "price":{
        "min":12.34,
        "max":57.65,
        "mean":34.56,
        ...
        "facets":{
          "author":{
            "Carmine Falcone":{
              "min":22.50,
              "max":37.50,
              ... },
            ...
            "James Moriarty":{
              "min":19.95,
              "max":39.95,
  ...

But this stats.facet approach has always been plagued with problems:

  • Completely different code from FacetComponent that was hard to maintain and doesn’t support distributed search (see EDIT#1 below)
  • Always returns every term from the stats.facet field, w/o any support for facet.limit, facet.sort, etc…
  • Lots of problems with multivalued facet fields and/or non string facet fields.
You Got Your Stats In My Facets!

One of the new features available in Solr 5.0 will be the ability to “link” a stats.field to a facet.pivot param — this inverts the relationship that stats.facet used to offer (nesting the stats under the facets so to speak, instead of putting the facets under the stats) so that the FacetComponent does all the heavy lifting of determining the facet constraints, and delegates to the StatsComponent only as needed to compute stats over the subset of documents for each constraint. (Having the Peanut-Butter on the inside of the Chocolate is much less messy than the alternative.)

With our previous example, this means that you could get results like so…

http://localhost:8983/solr/techproducts/select?q=crime&facet=true&stats=true&stats.field={!tag=t1}price&facet.pivot={!stats=t1}author

  ...
  "facet_pivot":{
    "author":[{
        "field":"author",
        "value":"Kaiser Soze",
        "count":42,
        "stats":{
          "stats_fields":{
            "price":{
              "min":12.95,
              "max":29.95,
              ...}}}},
      {
        "field":"author",
        "value":"James Moriarty",
        "count":37,
        "stats":{
          "stats_fields":{
            "price":{
              "min":19.95,
              "max":39.95,
  ...

The linkage mechanism is via a tag Local Param specified on the stats.field. This allows multiple facet.pivot params to refer to the same stats.field, or a single facet.pivot to refer to multiple different stats.field params over different fields/functions that all use the same tag, etc. And because this functionality is built on top of Pivot Facets, multiple levels of Pivots can be computed, and the stats will be computed at each level. See the Solr Reference Guide for more details.
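The tagged request is built from ordinary Solr URL parameters. A sketch using only Python's standard library (hypothetical localhost URL; field names taken from the example above):

```python
from urllib.parse import urlencode

# Tag the stats.field with t1, then point the pivot at that tag; the
# local-param syntax is percent-encoded like any other parameter value.
params = [
    ("q", "crime"),
    ("facet", "true"),
    ("stats", "true"),
    ("stats.field", "{!tag=t1}price"),
    ("facet.pivot", "{!stats=t1}author"),
]
url = "http://localhost:8983/solr/books/select?" + urlencode(params)
```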

Putting the Pieces Together: CitiBike

The examples I’ve mentioned so far have been fairly simple and contrived, but if you are interested in checking out some very cool applications of the new pivot+stats functionality, you should take a look at the “Solr For DataScience” repo Grant Ingersoll put together for a recent presentation using the NYC CitiBike usage data.

With the small sample data subset (bike Usage from July-Oct 2013) indexed in the citi_py collection (See ./index-py.sh), you can use the following queries to find the answers to some non-trivial questions. For example…

Most Popular Trips for Subscribers, with Average Duration

For all trips made by Subscribers, find the top 5 start stations with the most common end station for each, as well as the top 5 end stations and the most common start station for each. Compute stats on the trip duration (in seconds) for each of these pairs of stations.

101202 Total Trips Taken by Subscribers

Top 5 Starting Stations, With Most Popular Destination For Each:

  Station                 Trips  Mean Duration     Top Destination      Trips  Mean Duration
  Pershing Square N        1075  13.5 Minutes   ⇒  Broadway & W 32 St      32  8.4 Minutes
  Lafayette St & E 8 St     984  13.2 Minutes   ⇒  E 17 St & Broadway      35  5.5 Minutes
  E 17 St & Broadway        971  12.2 Minutes   ⇒  W 21 St & 6 Ave         16  6.1 Minutes
  W 20 St & 11 Ave          957  13.3 Minutes   ⇒  W 17 St & 8 Ave         25  5.3 Minutes
  8 Ave & W 31 St           930  12.8 Minutes   ⇒  8 Ave & W 52 St         24  10.0 Minutes

Top 5 Destination Stations, With Most Popular Start For Each:

  Top Start               Trips  Mean Duration     Station              Trips  Mean Duration
  Lafayette St & E 8 St ⇒    35  5.5 Minutes       E 17 St & Broadway    1103  11.7 Minutes
  W 17 St & 8 Ave       ⇒    30  6.0 Minutes       8 Ave & W 31 St        973  13.2 Minutes
  W 17 St & 8 Ave       ⇒    24  7.3 Minutes       W 20 St & 11 Ave       960  12.2 Minutes
  E 10 St & Avenue A    ⇒    23  6.5 Minutes       Lafayette St & E 8 St  930  10.8 Minutes
  E 30 St & Park Ave S  ⇒    21  6.0 Minutes       Pershing Square N      840  14.3 Minutes
Top Destinations For Male & Female Subscribers departing from NYU, with Average Age of Rider

For all trips made by subscribers originating at one of the 5 stations adjacent to NYU, find the top 5 destination stations for each gender, as well as the average age of the rider to each of these stations.

2189 Total Trips Leaving NYU, Top 5 Destination Stations by Gender:

  Male (1658 trips)                                Female (531 trips)
  Trips  Destination              Mean Age         Trips  Destination              Mean Age
     54  University Pl & E 14 St  35 Years            27  University Pl & E 14 St  34 Years
     39  E 12 St & 3 Ave          29 Years            11  Broadway & E 14 St       39 Years
     31  Lafayette St & E 8 St    39 Years             9  E 10 St & Avenue A       37 Years
     29  Mercer St & Bleecker St  37 Years             9  E 17 St & Broadway       35 Years
     28  LaGuardia Pl & W 3 St    39 Years             9  Washington Square E      41 Years
Where We Go From Here

There are still a lot of cool improvements in the pipeline for linking Field Stats With Faceting (eg: combining Stats with Range Faceting, combining Range Faceting with Pivot Faceting, etc.) as well as plans to support more options for the statistics (eg: Limiting the stats computed, generating Percentile histograms, etc.). All of this work is being tracked in SOLR-6348 and the associated Sub-Tasks, So please watch those issues in Jira to keep track of future development — we can always use more folks testing out patches!

EDIT#1: A previous version of this post said that stats.facet did not support distributed search — that was incorrect. The problem I was thinking of is that the way the stats component works and deals with distributed requests depends on all of the data from each shard being returned in a single pass — which relates to the second bullet (“Always returns every term from the stats.facet field…”). Fixing stats.facet to support those params, or to delegate to the existing facet code (which uses refinement requests to get accurate counts), was/is virtually impossible in a way that would still support accurate stats.facet counts in distributed search.

The post Hey, You Got Your Facets in My Stats! You Got Your Stats In My Facets!! appeared first on Lucidworks.

Categories: FLOSS Project Planets

FSF News: Libreboot X200 laptop now FSF-certified to respect your freedom

GNU Planet! - Thu, 2015-01-29 17:25

This is the second Libreboot laptop from Gluglug (a project of Minifree, Ltd.) to achieve RYF certification, the first being the Libreboot X60 in December 2013. The Libreboot X200 offers many improvements over the Libreboot X60, including a faster CPU, faster graphics, 64-bit GNU/Linux support (on all models), support for more RAM, higher screen resolution, and more. The Libreboot X200 can be purchased from Gluglug at http://shop.gluglug.org.uk/product/libreboot-x200/.

The Libreboot X200 is a refurbished and updated laptop based on the Lenovo ThinkPad X200. In order to produce a laptop that achieved the Free Software Foundation's certification guidelines, the developers at Gluglug had to replace the low-level firmware as well as the operating system. Microsoft Windows was replaced with the FSF-endorsed Trisquel GNU/Linux operating system, which includes the GNOME 3 desktop environment. The free software boot system of Libreboot and the GNU GRUB 2 bootloader were adapted to replace the stock proprietary firmware, which included a BIOS, Intel's Management Engine system, and Intel's Active Management Technology (AMT) firmware.

The FSF has previously written about Intel's ME and AMT, calling attention to how this proprietary software introduces a fundamental security flaw -- a back door -- into a person's machine that allows a perpetrator to remotely access the computer over a network. It enables powering the computer on and off, configuring and upgrading the BIOS, wiping the hard drives, reinstalling the operating system, and more. While there is a BIOS option to ostensibly disable AMT, because the BIOS itself is proprietary, the user has no means to verify whether this is sufficient. The functionality provided by the ME/AMT could be a very useful security and recovery measure, but only if the user has control over the software and the ability to install modified versions of it.

"The ME and its extension, AMT, are serious security issues on modern Intel hardware and one of the main obstacles preventing most Intel based systems from being liberated by users. On most systems, it is extremely difficult to remove, and nearly impossible to replace. Libreboot X200 is the first system where it has actually been removed, permanently," said Gluglug Founder and CEO, Francis Rowe.

"This is a huge accomplishment, but unfortunately, it is not known if the work they have done to remove the ME and AMT from this device will be applicable to newer Intel-based laptops. It is incredibly frustrating to think that free software developers may have to invest even more time and energy into figuring out how to simply remove proprietary firmware without rendering the hardware nonfunctional. On top of that, the firmware in question poses a serious security threat to its users -- and the organizations who employ them. We call on Intel to work with us to enable removal of ME and AMT for users who don't want it on their machines," said FSF's executive director, John Sullivan.

In order to remove the ME, AMT, and other proprietary firmware from the laptop, the Libreboot developers had to first reverse engineer Intel's firmware. They then created a small software utility to produce a free firmware image that conforms to Intel's specifications. Finally, to install their firmware on the device, they used special hardware (an SPI flasher) that they directly connected to a small chip on the motherboard itself. After many months of work, the Libreboot developers managed to completely overwrite the proprietary firmware with Libreboot and GNU GRUB 2. Those who purchase a Libreboot X200 from Gluglug will receive a laptop that has had all of this work already done to it and will be able to update or install new firmware to their device without needing to make use of any special hardware or complicated procedures.

To learn more about the Respects Your Freedom hardware certification, including details on the certification of the Libreboot X200, visit http://www.fsf.org/ryf. Hardware sellers interested in applying for certification can consult http://www.fsf.org/resources/hw/endorsement/criteria.

Subscribers to the FSF's Free Software Supporter newsletter will receive announcements about future Respects Your Freedom products.

About the Free Software Foundation

The Free Software Foundation, founded in 1985, is dedicated to promoting computer users' right to use, study, copy, modify, and redistribute computer programs. The FSF promotes the development and use of free (as in freedom) software -- particularly the GNU operating system and its GNU/Linux variants -- and free documentation for free software. The FSF also helps to spread awareness of the ethical and political issues of freedom in the use of software, and its Web sites, located at fsf.org and gnu.org, are an important source of information about GNU/Linux. Donations to support the FSF's work can be made at https://donate.fsf.org. Its headquarters are in Boston, MA, USA.

More information about the FSF, as well as important information for journalists and publishers, is at https://www.fsf.org/press.

About Gluglug and Minifree, Ltd

Francis Rowe is the Founder and CEO of Minifree Ltd in the UK, which owns and operates Gluglug, a project to promote adoption of free software globally. To purchase products sold by Gluglug, visit http://shop.gluglug.org.uk.

Media Contacts

Joshua Gay
Licensing & Compliance Manager
Free Software Foundation
+1 (617) 542 5942

Francis Rowe
Founder & CEO

Categories: FLOSS Project Planets

Palantir: Explaining Panels: Why I use Panels

Planet Drupal - Thu, 2015-01-29 17:00

In my last blog post I explained what the Panels Suite is and does. I explained how Panels itself is a User Interface on top of hook_theme() and theme(). That technical explanation of Panels underlines what I think is its main conceptual virtue:

Panels encourages a mental model of pulling data into a specific design component

At Palantir we're working with Design Components that are created in static prototypes. Design Components are the reusable pieces of front-end code that compose a design system. Design Components should not know about Drupal's internal implementation details. We're not alone in this thinking. (Inside the Drupal community, and outside of it).

The task of "theming a node" is now "print this node so that it renders as this design component." Unfortunately Drupal core does not have <code>hook_design_component()</code>. It has <code>hook_theme()</code>. Some of the entries in <code>hook_theme()</code> from core are essentially design components.

Entries like <code>‘item_list'</code> and <code>'table'</code> are design components. They are conceptually based around their HTML rendering. They make sense independent of Drupal. To use them as a Drupal Developer you need to get your data organized before you call <code>theme()</code> (directly or otherwise).

On the other hand, much of the Drupal core usage of <code>hook_theme()</code> is not at all design component minded. <code>'node'</code>, <code>'user'</code>, and <code>'comment'</code> all have entries in <code>hook_theme()</code>. For these elements you don't have to organize your data before calling <code>theme()</code>. You can give <code>theme()</code> a node object, and after that <code>template_preprocess_node()</code> has to do a ton of work before hitting the template.

It's no coincidence that the design component-ish <code>hook_theme()</code> entries have minimal preprocessing or no preprocessing whatsoever. The design component-ish entries like <code>'item_list'</code> know what the HTML will look like but have no idea what your data is, other than that you were able to get it into a list. The non-design component entries like <code>'node'</code> know exactly what the Drupal-meaning of the data is but know very little about the markup they will produce on most production sites.

Panels unites the two mindsets. It knows what the incoming data is (a node context, a user context, etc.) and it knows what design component it will print as (the layout plugins). If you put a debug statement inside of <code>panels_theme()</code> you will see the names of layouts and style plugins. These <code>hook_theme()</code> entries are more like the design component-ish <code>hook_theme()</code> entries. They know what their markup will be. And the part of Panels most people pay attention to, the drag-and-drop interface, is where you control how the data of a node is going to prepare itself for the design component.

And here is where the admin UI of Panels might set up a confusing mental model.

How it looks in the Panels admin UI

But at execution time it is

Or the way I think of it

Drupal Data → transforming Drupal data into printable variables → design components for those variables to print in

The next time I get into a discussion about Panels at a meetup, DrupalCamp, or DrupalCon, I think I'll first ask, "Does Panels let you think about building websites the way you want to think about building websites?" I like to think about passing variables into encapsulated configuration associated with a specific design component. I prefer that mental model to the "show and hide based on globals" mental model of Core's Blocks or the "just call theme() on a node and figure out the overrides later" mental model encouraged by node--[content-type].tpl.php. As the Drupal community asks itself again how it wants to do rendering, let's also ask "how do we want to think about rendering?"

The rise of design component thinking in the wider Web development world is not turning back. Web Components and modern front-end MVC frameworks encapsulate design components. They do not care about every single implementation detail and layer of a node object. They care about getting variables ready for printing and updating. The Panels module may fall out of the picture once Web Components fully mature. Until then, Panels allows us to think in ways we will need to think for Web Components to work with Drupal.

Categories: FLOSS Project Planets

Getting ready to leave for FOSDEM 2015

Planet KDE - Thu, 2015-01-29 16:08

It’s almost time to leave and I can’t wait to see my friends from the OpenSource world again and for my first experience at FOSDEM.

I’ve gotten my luggage ready and I’m (almost) ready to leave.

The KDE T-shirts packed and ready for the road.

Route to FOSDEM.

And I’ve prepared my best and most representative T-shirts.


FOSDEM, Here I come!


Categories: FLOSS Project Planets

Drupal Watchdog: Touring Drupal

Planet Drupal - Thu, 2015-01-29 13:51

Drupal 8 has been all about pushing the boundaries, so why should help content be any different?

With the release of Drupal 8, we will also ship with tools to complement hook_help() entries: if you, the developer, are providing a documentation page for your module, why not also provide an interactive step-by-step guide to how your module works?

The idea of Tour isn’t a new one; it has been maturing over the past two years. It all began after the release of Drupal 7 when we decided to move the help passage from the front page to the help page. This meant that users new to Drupal would not see this text, and would have to struggle through with no guidance.

In light of that issue, the below was suggested;

How about creating a “Welcome” message that pops up in an overlay with that same information that continues to appear until either the user checks a box on the overlay saying to dismiss it or the user creates a piece of content on the site?
- Vegantriathlete, August 10, 2011

With tour.module committed to Drupal 8 core, we now have context-sensitive guided tours for Drupal’s complex interfaces, and developers have a new way to communicate with the user. It doesn’t stop at core either: contrib modules can ship with tours to describe how users can take full advantage of their modules. Distributions can also ship with tours on how to get started. Imagine a tour in the Commerce distribution that took the user through setting up products: That would be amazing! It would enable users to discover the pages they are looking for and take the guesswork out of finding pages.

Categories: FLOSS Project Planets

OpenLucius: Why the Bootstrap HTML framework in Drupal & OpenLucius

Planet Drupal - Thu, 2015-01-29 12:58

The Bootstrap HTML framework in Drupal: we love it. That's why we built the front-end of the Drupal distribution OpenLucius with it. So we love it, but why is that?

There are alternatives that can be integrated into Drupal websites. Below we give a few reasons why we currently prefer the Bootstrap framework.

Why an HTML framework

First of all, why should you use an HTML framework? These possibilities also exist:

Categories: FLOSS Project Planets

Holger Levsen: 20150129-reproducible-fosdem

Planet Debian - Thu, 2015-01-29 12:16
Bit by bit identical binaries coming to a FOSDEM near you

Tomorrow I'll be going to FOSDEM because of the rather great variety of talks, contributors and beers - and because even after 10 years I still love to see the Grand Place at night!

On Saturday afternoon I'll be giving a talk titled "Stretching out for trustworthy reproducible builds - the status of reproducing byte-for-byte identical binary packages from a given source". I'm pretty thankful to the FOSDEM organisers for accepting this talk despite me submitting it rather (too) late, which was mostly due to the rapid developments in recent times. These are exciting times indeed: it'll be an opportunity to present what we did, how we did it, the progress we have made so far, our findings, and our plans for the future. The interview by the FOSDEM organizers might give you some preliminary insights, but you should come if you can!

So, please spread the word: this talk will not at all only be about Debian. We hope to have many upstream software developers and maintainers from other distributions attending as we will explain how reproducible builds are about reliability in software development and deployment in general. We hope one day reproducibility will be the norm everywhere and thus we want to reach out to upstreams and other distros now.

And it's getting even better: I got a very pleasant surprise yesterday: I won't be giving this talk alone but rather together with Lunar! I would have gladly given this talk alone as planned, but teamwork is so much more fun - and more productive too, as this very project is showing every day!

Categories: FLOSS Project Planets

FSF Events: Richard Stallman - "El movimiento del software libre" (Zaragoza, Spain)

GNU Planet! - Thu, 2015-01-29 09:50

This talk by Richard Stallman will be nontechnical and open to the public; everyone is invited to attend. The talk will be part of the « X Jornadas sobre educación y exclusión social ».

Please fill out this form so that we can contact you about future events in the Zaragoza region.

Categories: FLOSS Project Planets

Sebastien Goasguen: O'Reilly Docker cookbook

Planet Apache - Thu, 2015-01-29 09:41
The last two months have been busy, as I am writing the O'Reilly Docker cookbook at night and on weekends. CloudStack during the day, Docker at night :) You can read the very "drafty" preface on Safari, and you will get a sense of why I started writing the book.

Docker is amazing; it brings a terrific user experience to packaging applications and deploying them easily. It is also software that is moving very fast, with over 5,500 pull requests closed so far. The community is huge and folks are very excited about it; just check those 18,000+ stars on GitHub.

Writing a book on Docker means reading all the documentation, reading countless blogs that are flying through Twitter, and then, because it's a cookbook, you need to get your hands dirty and actually try everything, test everything, over and over again. A cookbook is made of recipes in a very set format: Problem, Solution, Discussion. It is meant to be picked up at any time, opened at any page, to read a recipe that is independent of all the others. The book is now on pre-release; it means that you can buy it and get the very drafty version of the book as I write it, mistakes, typos and bad grammar included. As I keep writing, you get the updates, and once I am done you of course get the final proofread, corrected, and reviewed version.

As I started writing, I thought I would share some of the snippets of code I am writing to do the recipes. The code is available on GitHub at the how2dock account. How2dock should become a small company for Docker training and consulting as soon as I find spare time :).

What you will find there is not really code, but rather a repository of scripts and Vagrantfiles that I use in the book to showcase a particular feature or command of Docker. The repository is organized the same way as the book. You can pick a chapter and then a particular recipe, then go through the README.

For instance if you are curious about Docker swarm:

$ git clone https://github.com/how2dock/docbook.git
$ cd ch07/swarm
$ vagrant up
This will bring up four virtual machines via Vagrant and do the necessary bootstrapping to get the cluster set up with Swarm.

If you want to run a WordPress blog with a MySQL database, check out the fig recipe:

$ cd ch07/fig
$ vagrant up
$ vagrant ssh
$ cd /vagrant
$ fig up -d
And enjoy WordPress :)

I put a lot more in there. You will find an example of using the Ansible Docker module, a libcloud script to start an Ubuntu Snappy instance on EC2, a Dockerfile to help you create TLS certificates (really a convenience container for testing TLS in Docker), a Docker Machine setup, and a recipe on using Supervisor.

As I keep writing, I will keep putting all the snippets in this How2dock repo. Expect frequent changes, typos, errors... and corrections :)

And FWIW, it is much scarier to put a book out in pre-release unedited than to put some scripts up on GitHub.

Suggestions, comments, reviews all welcome ! Happy Docking !
Categories: FLOSS Project Planets

InternetDevels: Drupal tourists in Ternopil

Planet Drupal - Thu, 2015-01-29 06:31

Nothing keeps Drupal tourists from spreading the word! We are passionate about Drupal and IT, so we enjoy meeting like-minded people very much! Despite the cold winter weather, Ternopil welcomed us with warmth and friendliness. How was it? Our blog post will tell.

We had been getting ready for the trip for almost a month. Our branded Drupal van wanted to make a nice impression too, which is why the journey kicked off at the car wash :)

Read more
Categories: FLOSS Project Planets

The Initiation

Planet KDE - Thu, 2015-01-29 06:30

Hello guys,

It has been almost two months since I last wrote about my experiences. The past two months have been quite hectic, as I was trying my best to learn and implement the various languages suggested by my KDE mentors. Today, I would like to write about the new concepts and languages I have learnt.

Qt and Qml

Qt and Qml are languages for designing user interface–centric applications, such as games. Qml is mainly used to design the front-end of an application, whereas Qt is required for the back-end part.

My mentors at KDE introduced me to these exciting concepts. Qt and Qml were the main things required to complete my SoK project in Kanagram.

When I got to reading about these languages, the two best sources were relevant videos on YouTube (the VoidRealms channel) and the Qt Project website. It was pretty obvious from the beginning that I was going to face some serious problems, like not being able to implement the things I was learning, but my mentors at KDE came to my rescue and guided me through the process.

At this point it is imperative that I write a few things about my mentors, Jeremy and Debjit, and the KDE community.

The mentors

Jeremy has been contributing to the community for the past few years. He is one of the major contributors to and maintainers of Kanagram, among many other projects. Although I was very curious about what he does in real life, I couldn't ask that question because that would be a severe violation of the "code of conduct" (whatever that is!). He provided me with useful resources like books and links to helpful websites. He guided me through the entire process, and it would have been truly impossible to do what I did without his help. I am sure he must be a very busy man, but the way he selflessly taught me from scratch was quite amazing. Thanks Jeremy! :)

I was also helped by the KDE community, which harbors some of the most talented people I have ever seen. One of these members is Debjit Mondol, the guy who was appointed as my mentor for SoK 2014. He is a major contributor to Kanagram. Incidentally, Debjit is a student of Jeremy's, and since Jeremy was already mentoring a few other people, he transferred some of his responsibilities to Debjit, and I ended up being mentored by him.

My contributions

  • I converted the anagram letters into clickable objects, for a better user interface.
  • I also made the answer field clickable, so that the user need not type anything.

The future of Kanagram: Kanagram 15.04

After my SoK work is complete, Kanagram will be transformed into a more interactive application that gives users more flexibility: they will be able to click on the letters rather than typing and erasing again and again.

Final thoughts

I have thoroughly enjoyed the SoK program and it has opened up new avenues for me. It introduced me to things which were previously unknown to me and helped me to enhance my knowledge. This is just the beginning of something which I hope would prove fruitful in the near future. I have a lot more to learn and I am looking forward to working with these amazing people and contributing more and more to this community. 
Categories: FLOSS Project Planets

eGenix.com: Python Meeting Düsseldorf - New Videos Online

Planet Python - Thu, 2015-01-29 04:00

The following announcement covers videos from a regional Python user group meeting in Düsseldorf, Germany.

What is the Python Meeting Düsseldorf?

The Python Meeting Düsseldorf is an event that takes place every three months in Düsseldorf and is aimed at Python enthusiasts from the region.

At each meeting, talks are given and then explored further in discussions. The meetings usually last about two hours and conclude with a restaurant session.

Attendees come from all over North Rhine-Westphalia, though mainly from the immediate vicinity.

New Videos

To make the talks accessible to other Python enthusiasts as well, we record them, produce videos from the recordings, and upload these to our PyDDF YouTube channel.

Over the past few days we have processed the videos of the most recent meetings. In total, 34 new videos have been added. Enjoy:

  • Python Meeting Düsseldorf 2015-01-20
  • Python Meeting Düsseldorf 2014-09-30
  • Python Meeting Düsseldorf Sprint 2014 (2014-09-27/28)
  • Python Meeting Düsseldorf 2014-07-02
  • Python Meeting Düsseldorf 2014-04-29
  • Python Meeting Düsseldorf 2014-01-21
  • Python Meeting Düsseldorf 2013-11-19

The complete list of all more than 70 Python Meeting videos is available via our video list.
More Information

Further information and dates for the Python Meeting Düsseldorf can be found on our website:


Enjoy!

Marc-Andre Lemburg, eGenix.com

Categories: FLOSS Project Planets

HTML text to PDF with Beautiful Soup and xtopdf

LinuxPlanet - Tue, 2015-01-27 16:42
By Vasudev Ram

Recently, I thought of getting the text from HTML documents and putting that text to PDF. So I did it :)

Here's how:
"""
A demo program to show how to convert the text extracted from HTML
content to PDF. It uses the Beautiful Soup library, v4, to
parse the HTML, and the xtopdf library to generate the PDF output.
Beautiful Soup is at: http://www.crummy.com/software/BeautifulSoup/
xtopdf is at: https://bitbucket.org/vasudevram/xtopdf
Guide to using and installing xtopdf: http://jugad2.blogspot.in/2012/07/guide-to-installing-and-using-xtopdf.html
Author: Vasudev Ram - http://www.dancingbison.com
Copyright 2015 Vasudev Ram
"""

import sys
from bs4 import BeautifulSoup
from PDFWriter import PDFWriter

def usage():
    sys.stderr.write("Usage: python " + sys.argv[0] + " html_file pdf_file\n")
    sys.stderr.write("which will extract only the text from html_file and\n")
    sys.stderr.write("write it to pdf_file\n")

def main():

    # Create some HTML for testing conversion of its text to PDF.
    # (The tags below are reconstructed around the original text
    # content, whose markup was lost.)
    html_doc = """
<html>
<head>
<title>
Test file for HTMLTextToPDF
</title>
</head>
<body>
This is text within the body element but outside any paragraph.
<p>
This is a paragraph of text. Hey there, how do you do?
The quick red fox jumped over the slow blue cow.
</p>
<p>
This is another paragraph of text.
Don't mind what it contains.
What is mind? Not matter.
What is matter? Never mind.
</p>
This is also text within the body element but not within any paragraph.
</body>
</html>
"""

    pw = PDFWriter("HTMLTextTo.pdf")
    pw.setFont("Courier", 10)
    pw.setHeader("Conversion of HTML text to PDF")
    pw.setFooter("Generated by xtopdf: http://slid.es/vasudevram/xtopdf")

    # Use method chaining this time.
    for line in BeautifulSoup(html_doc).get_text().split("\n"):
        pw.writeLine(line)

    pw.savePage()
    pw.close()

if __name__ == '__main__':
    main()
The program uses the Beautiful Soup library for parsing and extracting information from HTML, and xtopdf, my Python library for PDF generation.
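The core of what the program asks Beautiful Soup to do is "keep only the character data, drop the tags". For readers without bs4 installed, the same idea can be sketched with Python's standard-library html.parser; the TextExtractor class below is an illustrative stand-in for BeautifulSoup's get_text(), not part of the original program:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only character data, dropping all markup tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called for each run of text between tags.
        self.chunks.append(data)

    def get_text(self):
        return "".join(self.chunks)

parser = TextExtractor()
parser.feed("<html><body><p>Hello, world.</p></body></html>")
print(parser.get_text())  # -> Hello, world.
```

Beautiful Soup does much more (tolerant parsing of broken HTML, tree navigation), which is why the program uses it, but the sketch shows the shape of the extraction step.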
Run it with:
python HTMLTextToPDF.py
and the output will be in the file HTMLTextTo.pdf.
Screenshot below:

- Vasudev Ram - Python training and programming - Dancing Bison Enterprises

Read more of my posts about Python or read posts about xtopdf (the latter is a subset of the former).
Sign up to hear about my new software products or services.

Contact Page

Categories: FLOSS Project Planets

Android Version Stats for LQ Mobile (2015)

LinuxPlanet - Mon, 2015-01-26 14:24

With the recent news that Google will not patch the WebView vulnerability in versions of Android <= 4.3, I thought it would be a good time to look at the Android version stats for LQ Mobile. You can see stats from seven months ago here.

Platform Version        Share
Android 4.4             33.14%
Android 4.1             16.82%
Android 4.2             11.18%
Android 4.0.3 - 4.0.4   10.11%
Android 2.3.3 - 2.3.7    9.69%
Android 5.0              9.44%
Android 4.3              6.96%
Android 2.2              1.82%

So, how has the Android version landscape changed since the last post, and what are the implications of the WebView vulnerability in that context? Android 4.4 is still the most common version, with over a third of the market. Versions 4.2 and 4.3 remain common, but less so than previously. Versions 4.0.3/4.0.4 and 2.3.x are both very old yet still fairly popular, at roughly 10% each. That's disappointing. Lollipop adoption among LQ Mobile users is significantly higher than what Google is seeing generally (still less than 0.1%), which isn't surprising given the technical nature of LQ members. Even with that advantage, however, roughly half of LQ Mobile users are running a version of Android that's vulnerable. Given that data, it's easy to understand why Google has broken quite a bit of functionality/code out into Google Play Services, which they can update independently of handset manufacturers and carriers.
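As a quick sanity check on the "roughly half" figure, the shares from the table can be split by WebView patch status (everything at Android 4.3 or below is unpatched) and summed:

```python
# Shares (in %) from the table above, grouped by WebView patch status.
patched = {"4.4": 33.14, "5.0": 9.44}
vulnerable = {"4.1": 16.82, "4.2": 11.18, "4.0.3-4.0.4": 10.11,
              "2.3.3-2.3.7": 9.69, "4.3": 6.96, "2.2": 1.82}

print(round(sum(vulnerable.values()), 2))  # -> 56.58
print(round(sum(patched.values()), 2))     # -> 42.58
```

So about 56.6% of LQ Mobile users are on a vulnerable version (the two groups total 99.16% rather than 100% due to rounding in the source data).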


Categories: FLOSS Project Planets