FLOSS Project Planets

Cocomore: We Got It: The Drupal 8 Developer Certificate

Planet Drupal - Mon, 2016-09-19 18:00

For software developers, continuing education is essential. That's why our Drupal programmers in Seville didn't hesitate to be among the first to take part in Acquia's Drupal 8 Certification Program. With success! For everyone thinking about also getting certified for the latest version of the CMS, we have some useful tips.

Categories: FLOSS Project Planets

Palantir: Noble Network of Charter Schools

Planet Drupal - Mon, 2016-09-19 17:38
Supporting Students For a More Promising Future

Support for a better managed codebase across a platform of Drupal sites.

Highlights
  • Provided valuable insight into state of the site with an in-depth code audit

  • Transitioned seamlessly from previous hosting platform to Acquia

  • Streamlined theme to reduce code duplication

Do you need a site audit?

Let's Chat.

Noble Network of Charter Schools (Noble) is a Chicago-based nonprofit organization that manages a network of 16 public high schools and one middle school located throughout Chicago. The schools of the Noble Network serve over 10,000 mainly low-income, underserved students, and work toward making a college degree a reality for each of them. 

Noble needed a firm with a deep bench of Drupal talent to optimize its current sites and maintain and extend the underlying platform as Noble improved and advanced its online presence. Noble’s previous support provider no longer had the Drupal staff to support their needs, and they also needed assistance in transitioning their sites to a new hosting platform.

As we do with all of our support clients, we began with an in-depth code audit that allowed Noble to see the state of their sites behind the scenes. This audit provided details on what we could implement to increase the stability and speed of the Noble sites. One of the most significant updates we made was to streamline the theme to reduce code duplication. The existing code base did not leverage Drupal’s theme inheritance properly.

Across the sites in the network, about 95% of the CSS is identical. Only colors and logo images are changed. However, when the site was initially built by another firm, the code and CSS for each site was duplicated, which meant that if a change was made in one place, it needed to be changed in 17 different places. Using a base theme properly, our developers updated the codebase so that if something is fixed on the core level, then it is fixed on all of the sites.

In addition, we implemented bonus features like a homepage slideshow, form and calendar integrations, and modified Google Translate functionality. The form and calendar integrations allow Noble students and families to more efficiently see upcoming events at their campuses and provide a quick way to contact the school by email with general questions and comments. The Google Translate functionality is very important, as many students' families speak Spanish at home. Noble sends all official documents home with Spanish translations, and it was important that the website be just as accessible.

Working with Acquia, we helped Noble transition to a new hosting provider that provides Drupal support for the network. Because we jumped right in with a great project manager and the right resources to get started, Noble's sites experienced no downtime, which mitigated any negative user experience. Moving to Acquia gave us better tools for managing code and deployments across the sites, and as our partnership continues and more features are added, the result is a better managed and overall more reliable platform of sites.
 

"Going with Palantir to support your Drupal website is a no brainer." Donnell P. Layne, Director of Information Technology

We want to make your project a success.

Let's Chat. noblenetwork.org
Categories: FLOSS Project Planets

Drupal Aid: Enhancing CKEditor in Drupal 8

Planet Drupal - Mon, 2016-09-19 15:00

It was a happy day for me when CKEditor was incorporated into Drupal 8 core. Out of the box, CKEditor is great. But there are a lot of things you can do to make it better and more user friendly for your clients. In this post, I'll show you some easy additions you can make to enhance your end-user's content editing experience.

  • How to easily make Links to other content
  • How to easily add Files and Images (and manage them too)
  • Adding Custom Styles
  • Adding Templates
  • Other helpful CKEditor Add-ons that are available.

Read more

Categories: FLOSS Project Planets

Acquia Developer Center Blog: Personalization Happens - Acquia at dmexco 2016

Planet Drupal - Mon, 2016-09-19 12:12

Conversations about delivering business needs with digital tools, or "How to get Drupal into the conversation without talking technology."

Acquia and several partners had a successful presence at the 2016 dmexco trade show for digital marketing and advertising. By "successful," I mean we spoke with hundreds and hundreds of people about how we can help them do better business and I think many of them will end up being happy users, consumers, and contributors to Drupal and our community.

In this podcast (audio and video), I give a quick intro to the dmexco trade fair and speak with the following people about digital transformation, selling Drupal without selling Drupal, the state of Drupal in Germany in 2016, and more:

Bonus! Acquia made it into the official dmexco wrap-up video. Great to see us representing Drupal alongside so many big names.

Bonus 2! Check out my Buzzword Bingo video from the dmexco floor to get a feel for the magnitude of the show and its ecosystem and the sometimes confusing world of contextual cloud targeting, media data reach optimisation, customer brand implementation, storytelling growth and even more that I didn't make up!

Skill Level: Beginner, Intermediate, Advanced
Categories: FLOSS Project Planets

KDevelop 5.0.1 released

Planet KDE - Mon, 2016-09-19 11:30

KDevelop 5.0.1 released

One month after the release of KDevelop 5.0.0, we are happy to release KDevelop 5.0.1 today, fixing a list of issues discovered with 5.0.0. The list of changes below is not exhaustive, but just mentions the most important improvements; for a detailed list, please see our git history.

An update to version 5.0.1 is highly recommended for everyone using 5.0.0.

Issues fixed in 5.0.1
  • Fix a deadlock in the background parser, which especially occurred on projects containing both C++ and Python/JS/QML code and caused either parsing or the whole application to freeze randomly. [BR: 355100]
  • Do not display the "project is already open in a different session" dialog on starting up a session under some circumstances.
  • Fix a crash which sometimes happened when switching git branches on command line.
  • Fix a crash when starting debugger from command-line. [BR: 367837]
  • Mouseover highlight now uses the "Search highlight" color from the configuration dialog, instead of a hard-coded bright yellow. [BR: 368458]
  • Fix a crash in the PHP plugin when editing text in the line after a "TODO". [BR: 368257]
  • Fix working directory of Custom Makefile plugin [BR: 239004]
  • Fix a possible crash on triggering an assistant popup action [BR: 368270]
  • Fix a freeze under some circumstances when the welcome page is displayed. [BR: 368138]
  • Fix some translation issues.
  • Fix imports sometimes not being found in kdev-python without pressing F5 by hand [BR: 368556]
Issues fixed in the Linux AppImage
  • Ship the subversion plugin.
  • Fix QtHelp not working.
  • Ship various X11 libraries, which reportedly makes the binary run on relatively old systems now (SLES 11 and similar)
  • Disable the welcome page for now.
Download

The source code for 5.0.1 is available here: http://download.kde.org/stable/kdevelop/5.0.1/src/
Source archives are signed with the GPG key of Sven Brauch, key fingerprint 4A62 9799 32BB BCE5 E395 6ACF 68CA 8E38 C4BB 3F4B.

The AppImage pre-built binaries for Linux can be downloaded from here: http://download.kde.org/stable/kdevelop/5.0.1/bin/linux/

Comments

When I tried to download the AppImage using Chrome, it was blocked! It pointed me to this url https://support.google.com/chrome/answer/6261569


In reply to by Chris Hills (not verified)

Hmm. I tried with chrome and chromium, works fine for me. Which URL did you request exactly?


Speaking of highlights and colors, I know it is possible to change the "Global colorization intensity" but what about the actual colors of cout, string etc?

I have to admit I tried to modify each and every color and checkbox for the color scheme of sources/c++, but I haven't found how to change this reddish cout or cin color.


In reply to by Petros (not verified)

The colors for the semantic highlighting are currently not configurable, sorry. They can just be changed in their intensity and they adjust to the background brightness.

Categories: FLOSS Project Planets

ActiveLAMP: Who is Drupal Right For: DrupalCamp LA 2016 Table Talk - pt. 2/5

Planet Drupal - Mon, 2016-09-19 11:01

Part two of our table talk! This week the agency owners of Achieve Internet, Stauffer, ActiveLAMP and Facet Interactive discuss who Drupal is right for.

Read more...
Categories: FLOSS Project Planets

Curtis Miller: An Introduction to Stock Market Data Analysis with Python (Part 1)

Planet Python - Mon, 2016-09-19 11:00
This post is the first in a two-part series on stock data analysis using Python, based on a lecture I gave on the subject for MATH 3900 (Data Science) at the University of Utah. In these posts, I will discuss basics such as obtaining the data from Yahoo! Finance using pandas, visualizing stock data, moving… Read more
Categories: FLOSS Project Planets

Dcycle: Using Docker to evaluate, patch or develop Drupal modules

Planet Drupal - Mon, 2016-09-19 10:51

Docker is now available natively on Mac OS in addition to Linux. Docker is also included with CoreOS which you can run on remote Virtual Machines, or locally through Vagrant.

Once you have installed Docker and Git, locally or remotely, you don't need to install anything else.

In these examples we will leverage the official Drupal and MySQL Docker images. We will use the MySQL image as is, and we will add Drush to our Drupal image.

Docker is efficient with caching: these scripts will be slow the first time you run them, but very fast thereafter.

Here are a few scripts I often use to set up quick Drupal 7 or 8 environments for module evaluation and development.

Keep in mind that using Docker for deployment to production is another topic entirely and is not covered here; also, these scripts are meant to be quick and dirty; docker-compose might be useful for more advanced usage.

Port mapping

In all cases, using -p 80, I map port 80 of Drupal to any port that happens to be available on my host, and in these examples I am using Docker for Mac OS, so my sites are available on localhost.

I use DRUPALPORT=$(docker ps|grep drupal7-container|sed 's/.*0.0.0.0://g'|sed 's/->.*//g') to figure out the current port of my running containers. When your containers are running, you can also just docker ps to see port mapping:

$ docker ps
CONTAINER ID   IMAGE           COMMAND                CREATED          STATUS          PORTS                   NAMES
f1bf6e7e51c9   drupal8-image   "apache2-foreground"   15 seconds ago   Up 11 seconds   0.0.0.0:32771->80/tcp   drupal8-container
...

In the above example, port 32771 was chosen on the host, so http://localhost:32771 will show your Drupal 8 site.

Using Docker to evaluate, patch or develop Drupal 7 modules

I can set up a quick environment to evaluate one or more Drupal 7 modules. In this example I'll evaluate Views.

mkdir ~/drupal7-modules-to-evaluate
cd ~/drupal7-modules-to-evaluate
git clone --branch 7.x-3.x https://git.drupal.org/project/views.git
# add any other modules for evaluation here.
echo 'FROM drupal:7' > Dockerfile
echo 'RUN curl -sS https://getcomposer.org/installer | php' >> Dockerfile
echo 'RUN mv composer.phar /usr/local/bin/composer' >> Dockerfile
echo 'RUN composer global require drush/drush:8' >> Dockerfile
echo 'RUN ln -s /root/.composer/vendor/drush/drush/drush /bin/drush' >> Dockerfile
echo 'RUN apt-get update && apt-get upgrade -y' >> Dockerfile
echo 'RUN apt-get install -y mysql-client' >> Dockerfile
echo 'EXPOSE 80' >> Dockerfile
docker build -t drupal7-image .
docker run --name d7-mysql-container -e MYSQL_ROOT_PASSWORD=root -d mysql
docker run -v $(pwd):/var/www/html/sites/all/modules --name drupal7-container -p 80 --link d7-mysql-container:mysql -d drupal7-image
DRUPALPORT=$(docker ps|grep drupal7-container|sed 's/.*0.0.0.0://g'|sed 's/->.*//g')
# wait for mysql to fire up. There's probably a better way of doing this...
# See stackoverflow.com/questions/21183088
# See https://github.com/docker/compose/issues/374
sleep 6
docker exec drupal7-container /bin/bash -c "echo 'create database drupal'|mysql -uroot -proot -hmysql"
docker exec drupal7-container /bin/bash -c "cd /var/www/html && drush si -y --db-url=mysql://root:root@mysql/drupal"
docker exec drupal7-container /bin/bash -c "cd /var/www/html && drush en views_ui -y"
# enable any other modules here. Dependencies will be downloaded
# automatically
echo -e "Your site is ready, you can log in with the link below"
docker exec drupal7-container /bin/bash -c "cd /var/www/html && drush uli -l http://localhost:$DRUPALPORT"

Note that we are mounting sites/all/modules as a volume (rather than adding it at image build time), so any change we make to our local copy of views will quasi-immediately be reflected on the container, making this a good technique to develop modules or write patches to existing modules.
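
For Drupal 7, a cache clear is often all it takes for an edit to show up; this extra step is not from the original scripts, just a hypothetical convenience that reuses the same container name:

# After editing the module code on the host, the change is already inside the
# container thanks to the volume mount; clear Drupal 7 caches to see it.
docker exec drupal7-container /bin/bash -c "cd /var/www/html && drush cc all"

# Or open an interactive shell in the container for ad-hoc drush commands:
docker exec -it drupal7-container /bin/bash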

When you are finished you can destroy your containers, noting that all data will be lost:

docker kill drupal7-container d7-mysql-container
docker rm drupal7-container d7-mysql-container

Using Docker to evaluate, patch or develop Drupal 8 modules

Our script for Drupal 8 modules is slightly different:

  • ./modules is used on the container instead of ./sites/all/modules;
  • Our Dockerfile is based on drupal:8, not drupal:7;
  • Unlike with Drupal 7, your database is not required to exist prior to installing Drupal with Drush;
  • In my tests I need to chown /var/www/html/sites/default/files to www-data:www-data to enable Drupal to write files.

Here is an example where we are evaluating the Token module for Drupal 8:

mkdir ~/drupal8-modules-to-evaluate
cd ~/drupal8-modules-to-evaluate
git clone --branch 8.x-1.x https://git.drupal.org/project/token.git
# add any other modules for evaluation here.
echo 'FROM drupal:8' > Dockerfile
echo 'RUN curl -sS https://getcomposer.org/installer | php' >> Dockerfile
echo 'RUN mv composer.phar /usr/local/bin/composer' >> Dockerfile
echo 'RUN composer global require drush/drush:8' >> Dockerfile
echo 'RUN ln -s /root/.composer/vendor/drush/drush/drush /bin/drush' >> Dockerfile
echo 'RUN apt-get update && apt-get upgrade -y' >> Dockerfile
echo 'RUN apt-get install -y mysql-client' >> Dockerfile
echo 'EXPOSE 80' >> Dockerfile
docker build -t drupal8-image .
docker run --name d8-mysql-container -e MYSQL_ROOT_PASSWORD=root -d mysql
docker run -v $(pwd):/var/www/html/modules --name drupal8-container -p 80 --link d8-mysql-container:mysql -d drupal8-image
DRUPALPORT=$(docker ps|grep drupal8-container|sed 's/.*0.0.0.0://g'|sed 's/->.*//g')
# wait for mysql to fire up. There's probably a better way of doing this...
# See stackoverflow.com/questions/21183088
# See https://github.com/docker/compose/issues/374
sleep 6
docker exec drupal8-container /bin/bash -c "cd /var/www/html && drush si -y --db-url=mysql://root:root@mysql/drupal"
docker exec drupal8-container /bin/bash -c "chown -R www-data:www-data /var/www/html/sites/default/files"
docker exec drupal8-container /bin/bash -c "cd /var/www/html && drush en token -y"
# enable any other modules here.
echo -e "Your site is ready, you can log in with the link below"
docker exec drupal8-container /bin/bash -c "cd /var/www/html && drush uli -l http://localhost:$DRUPALPORT"

Again, when you are finished you can destroy your containers, noting that all data will be lost:

docker kill drupal8-container d8-mysql-container
docker rm drupal8-container d8-mysql-container

Tags: blog, planet
Categories: FLOSS Project Planets

Andre Roberge: Backward incompatible change in handling permalinks with Reeborg coming soon

Planet Python - Mon, 2016-09-19 10:17
About two years ago, I implemented a permalink scheme which was intended to facilitate sharing various programming tasks in Reeborg's World. As I added new capabilities, the number of possible items to include grew tremendously. In fact, for rich enough worlds, the permalink can be too long for the browser to handle. To deal with such situations, I had to implement a clumsy way to import and
Categories: FLOSS Project Planets

Nuvole: Pimp your Behat Drupal Extension and rule the world

Planet Drupal - Mon, 2016-09-19 10:00
Make the most out of your Behat tests by using custom contexts, dependency injection and much more.

This post is an excerpt from the topics covered by our DrupalCon Dublin training: Drupal 8 Development - Workflows and Tools.

At Nuvole we consider writing good tests as a fundamental part of development and, when it comes to testing a complex site, there is nothing better than extensive behavioral tests using Behat. The benefits of such a choice are quite obvious:

  • Tests are very easy to write.
  • Behat scenarios serve as a solid communication means between business and developers.

As a site grows in complexity, however, the default step definitions provided by the excellent Behat Drupal Extension might not be specific enough, and you will quickly find yourself adding custom steps to your FeatureContext or creating custom Behat contexts, as advocated by the official documentation.

This is all fine except that your boilerplate test code might soon start to grow into a non-reusable, non-tested bunch of code.

Enter Nuvole's Behat Drupal Extension.

Nuvole's Behat Drupal Extension

Nuvole's Behat Drupal Extension is built on the shoulders of the popular Behat Drupal Extension and it focuses on step re-usability and testability by allowing developers to:

  • Organize their code in services by providing a YAML service description file, pretty much like we are all used to doing nowadays with Drupal 8.
  • Override default Drupal Behat Extension services with their own.
  • Benefit from many ready-to-use contexts that are provided by the extension out of the box.
Installation and setup

Install Nuvole's Behat Drupal Extension with Composer by running:

$ composer require nuvoleweb/drupal-behat

Set up the extension by following the Quick start section available on the original Behat Drupal Extension page; just use NuvoleWeb\Drupal\DrupalExtension instead of the native Drupal\DrupalExtension in your behat.yml as shown below:

default:
  suites:
    default:
      contexts:
        - Drupal\DrupalExtension\Context\DrupalContext
        - NuvoleWeb\Drupal\DrupalExtension\Context\DrupalContext
        ...
  extensions:
    Behat\MinkExtension:
      goutte: ~
      ...
    # Use "NuvoleWeb\Drupal\DrupalExtension" instead of "Drupal\DrupalExtension".
    NuvoleWeb\Drupal\DrupalExtension:
      api_driver: "drupal"
      ...
      services: "tests/my_services.yml"
      text:
        node_submit_label: "Save and publish"

"Service container"-aware Contexts

All contexts extending \NuvoleWeb\Drupal\DrupalExtension\Context\RawDrupalContext and \NuvoleWeb\Drupal\DrupalExtension\Context\RawMinkContext are provided with direct access to the current Behat service container. Developers can also define their own services by adding a YAML description file to their project and setting the services: parameter to point to its current location (as shown above).

The service description file can describe both custom services and override already defined services. For example, given a tests/my_services.yml containing:

services:
  your.own.namespace.hello_world:
    class: Your\Own\Namespace\HelloWorldService

Then all contexts extending \NW\D\DE\C\RawDrupalContext or \NW\D\DE\C\RawMinkContext will be able to access that service by just calling:

<?php
class TestContext extends RawDrupalContext {

  /**
   * Assert service.
   *
   * @Then I say hello
   */
  public function assertHelloWorld() {
    $this->getContainer()->get('your.own.namespace.hello_world')->sayHello();
  }

}
?>

The your.own.namespace.hello_world service class itself can be easily tested using PHPUnit. Also, since Behat uses Symfony's Service Container, you can list the services your service depends on as arguments, so as to remove any hardcoded dependencies, following Dependency Injection best practices.

Override existing services

Say that, while working on your Drupal 7 project, you have defined a step that publishes a node given its content type and title and you want to use the same exact step on your Drupal 8 project, something like:

Given I publish the node of type "page" and title "My page title"

The problem here is that the actual API calls to load and save a node differ between Drupal 7 and Drupal 8.

The solution is to override the default Drupal core services specifying your own classes in your tests/my_services.yml:

parameters:
  # Overrides Nuvole's Drupal Extension Drupal 7 core class.
  drupal.driver.cores.7.class: Your\Own\Namespace\Driver\Cores\Drupal7
  # Overrides Nuvole's Drupal Extension Drupal 8 core class.
  drupal.driver.cores.8.class: Your\Own\Namespace\Driver\Cores\Drupal8

services:
  your.own.namespace.hello_world:
    class: Your\Own\Namespace\HelloWorldService

You'll then delegate the core-specific business logic to the new core classes allowing your custom step to be transparently run on both Drupal 7 and Drupal 8. Such a step would look like:

<?php
class TestContext extends RawDrupalContext {

  /**
   * @Given I publish the node of type :type and title :title
   */
  public function iPublishTheNodeOfTypeAndTitle($type, $title) {
    $this->getCore()->publishNode($type, $title);
  }

...
?>
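
To make the idea above concrete, here is a minimal, hypothetical sketch of what the Drupal 8 core class pointed to by drupal.driver.cores.8.class might contain; the namespace and the fact that it stands alone (rather than extending the extension's own Drupal 8 core driver) are assumptions for illustration only:

<?php
namespace Your\Own\Namespace\Driver\Cores;

/**
 * Hypothetical Drupal 8 core class; in a real project it would extend or
 * decorate the extension's Drupal 8 core driver class.
 */
class Drupal8 {

  /**
   * Publishes the first node matching the given type and title.
   */
  public function publishNode($type, $title) {
    $storage = \Drupal::entityTypeManager()->getStorage('node');
    $nodes = $storage->loadByProperties(['type' => $type, 'title' => $title]);
    $node = reset($nodes);
    if ($node) {
      $node->setPublished(TRUE);
      $node->save();
    }
  }

}
?>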

Ready to use contexts

The extension also provides some utility contexts that you can use right away in your tests. Below is a quick overview of what's currently available:

  • NuvoleWeb\Drupal\DrupalExtension\Context\DrupalContext: Standard Drupal context. You want to use this one next to (and not instead of) Drupal\DrupalExtension\Context\DrupalContext.

  • NuvoleWeb\Drupal\DrupalExtension\Context\ContentContext: Perform operations on Content.

  • NuvoleWeb\Drupal\DrupalExtension\Context\CKEditorContext: Allows you to interact with CKEditor components on your page.

  • NuvoleWeb\Drupal\DrupalExtension\Context\ResponsiveContext: Resize the browser according to the specified devices, useful for testing responsive behaviors. It is configured with a device list, for example:

    NuvoleWeb\Drupal\DrupalExtension\Context\ResponsiveContext:
      devices:
        mobile_portrait: 360x640
        mobile_landscape: 640x360
        tablet_portrait: 768x1024
        tablet_landscape: 1024x768
        laptop: 1280x800
        desktop: 2560x1440

  • NuvoleWeb\Drupal\DrupalExtension\Context\PositionContext: Check position of elements on the page.

  • NuvoleWeb\Drupal\DrupalExtension\Context\ChosenFieldContext: Interact with Chosen elements on the page.

We will share more steps in the future, enriching the current contexts as well as providing new ones, so keep an eye on the project repository!

Disclaimer

At the moment only Drupal 8 is supported but we will add Drupal 7 support ASAP (yes, it's as easy as providing missing Drupal 7 driver core methods and adding tests).

Tags: Drupal Planet, Behat, Test Driven Development, Training, DrupalCon
Categories: FLOSS Project Planets

Doug Hellmann: dbm — Unix Key-Value Databases — PyMOTW 3

Planet Python - Mon, 2016-09-19 09:00
dbm is a front-end for DBM-style databases that use simple string values as keys to access records containing strings. It uses whichdb() to identify databases, then opens them with the appropriate module. It is used as a back-end for shelve, which stores objects in a DBM database using pickle. Read more…
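
As a quick taste of the module for readers who have not used it before, here is a minimal sketch (not from the article; the file name is arbitrary and dbm picks the backend for us):

import dbm

# Open (and create, via the 'c' flag) a database file; keys and values are stored as bytes.
with dbm.open('example.db', 'c') as db:
    db['key'] = 'value'
    print(db[b'key'])          # b'value'

# whichdb() reports which backend module was used for a given file.
print(dbm.whichdb('example.db'))
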
Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Benedikt Eggers

Planet Python - Mon, 2016-09-19 08:30

This week we welcome Benedikt Eggers (@be_eggers) as our PyDev of the Week. Benedikt is one of the core developers working on the IronPython project. IronPython is the version of Python that is integrated with Microsoft's .NET framework, much like Jython is integrated with Java. If you're interested in seeing what Benedikt has been up to lately, you might want to check out his Github profile. Let's take a few minutes to get to know our fellow Pythoneer!

Could you tell us a little about yourself (hobbies, education, etc):

My name is Benedikt Eggers and I was born and live in Germany (23 years old). I've been working as a software developer and engineer and studied business informatics. In my little spare time I do sports and work on open source projects, like IronPython.

Why did you start using Python?

To be honest, I’ve started using Python by searching for a script engine for .net. That way I came to IronPython and established it in our company. There we are using it to extend our software and writing and using Python modules in both worlds. After a while I got more into Python and thought that’s a great concept of a dynamic language. So it’s a good contrast to C#. It is perfect for scripting and other nice and quick stuff.

What other programming languages do you know and which is your favorite?

The language I’m most familiar with is C#. To be honest, this is also my “partly” favorite language to write larger application and complex products. But I also like Python/IronPython very much, cause it allows me to achieve my goals very quickly with less and readable code. So a favorite language is hard to pick, cause I like to use the best technology in its specific environment (Same could be said about relational and document based database, …)

What projects are you working on now?

Mostly I’m working on my projects at work. We (http://simplic-systems.com/) are continuously working on creating more and more open source projects and also contribute to other open source projects. So I spend a lot of time there. But I also can use a lot of this time to work on IronPython. So I’m able to mix this up and work a few projects parallel. But spending time working on IronPython is something I really like, so I’m doing it, cause I enjoy it.

Which Python libraries are your favorite (core or 3rd party)?

I really like requests and all the packages to easily work with web-services and other modern technologies. On the other side, I use a lot of Python Modules in our continuous integration environment, to automate our build process. So there I also use the core libraries to move, rename files by reading JSON configurations and so on. So there are a lot of libraries I like. Because they make my life much easier every day.

Is there anything else you’d like to say?

Yes – I’d love to see how fast we are growing and that we found people who are willing to contribute to IronPython. I think we are on a good way and hope that we can achieve all of our goals. I hope that IronPython 3 and all other releases are coming soon. Furthermore I’d like to thank Jeff Hardy a lot, who has contributed to the project in that past years and is always very helpful. Finally also a thanks goes to Alex Earl who has working on this project too in the last years and now wants to bring it back together with the community. I think we will work great together!

Thanks so much for doing the interview!

Categories: FLOSS Project Planets

Wesley Chun: Accessing Gmail from Python (plus BONUS)

Planet Python - Mon, 2016-09-19 08:04
NOTE: The code covered in this blogpost is also available in a video walkthrough here.

UPDATE (Aug 2016): The code has been modernized to use oauth2client.tools.run_flow() instead of the deprecated oauth2client.tools.run(). You can read more about that change here.

Introduction

The last several posts have illustrated how to connect to public/simple and authorized Google APIs. Today, we're going to demonstrate accessing the Gmail (another authorized) API. Yes, you read that correctly... "API." In the old days, you accessed mail services with standard Internet protocols such as IMAP/POP and SMTP. However, while they are standards, they haven't kept up with modern-day email usage and the developers' needs that go along with it. In comes the Gmail API, which provides CRUD access to email threads and drafts along with messages, search queries, management of labels (like folders), and domain administration features that are an extra concern for enterprise developers.

Earlier posts demonstrate the structure and the "how-to" of using Google APIs in general, so the most recent posts, including this one, focus on solutions and apps, and the use of specific APIs. Once you review the earlier material, you're ready to start with Gmail scopes and then see how to use the API itself.
    Gmail API Scopes

    Below are the Gmail API scopes of authorization. We're listing them in most-to-least restrictive order because that's the order you should consider using them in — use the most restrictive scope you possibly can while still allowing your app to do its work. This makes your app more secure and may prevent inadvertently going over any quotas, or accessing, destroying, or corrupting data. Also, users are less hesitant to install your app if it asks only for more restricted access to their inboxes.
    • 'https://www.googleapis.com/auth/gmail.readonly' — Read-only access to all resources + metadata
    • 'https://www.googleapis.com/auth/gmail.send' — Send messages only (no inbox read nor modify)
    • 'https://www.googleapis.com/auth/gmail.labels' — Create, read, update, and delete labels only
    • 'https://www.googleapis.com/auth/gmail.insert' — Insert and import messages only
    • 'https://www.googleapis.com/auth/gmail.compose' — Create, read, update, delete, and send email drafts and messages
    • 'https://www.googleapis.com/auth/gmail.modify' — All read/write operations except for immediate & permanent deletion of threads & messages
    • 'https://mail.google.com/' — All read/write operations (use with caution)
    Using the Gmail API

    We're going to create a sample Python script that goes through your Gmail threads and looks for those which have more than 2 messages, for example, if you're seeking particularly chatty threads on mailing lists you're subscribed to. Since we're only peeking at inbox content, the only scope we'll request is 'gmail.readonly', the most restrictive scope. The API string is 'gmail' which is currently on version 1, so here's the call to apiclient.discovery.build() you'll use:

    GMAIL = discovery.build('gmail', 'v1', http=creds.authorize(Http()))

    Note that all of the code above this line is predominantly boilerplate (which was explained in earlier posts). Anyway, once you have an established service endpoint with build(), you can use the list() method of the threads service to request the thread data. The one required parameter is the user's Gmail address. A special value of 'me' has been set aside for the currently authenticated user.
    threads = GMAIL.users().threads().list(userId='me').execute().get('threads', [])

    If all goes well, the (JSON) response payload will (not be empty or missing and) contain a sequence of threads that we can loop over. For each thread, we need to fetch more info, so we issue a second API call for that. Specifically, we care about the number of messages in a thread:
    for thread in threads:
        tdata = GMAIL.users().threads().get(userId='me', id=thread['id']).execute()
        nmsgs = len(tdata['messages'])
    We're seeking only threads with more than 2 (that means at least 3) messages, discarding the rest. If a thread meets that criterion, scan the first message and cycle through the email headers looking for the "Subject" line to display to users, skipping the remaining headers as soon as we find one:
    if nmsgs > 2:
        msg = tdata['messages'][0]['payload']
        subject = ''
        for header in msg['headers']:
            if header['name'] == 'Subject':
                subject = header['value']
                break
        if subject:
            print('%s (%d msgs)' % (subject, nmsgs))
    If you're on many mailing lists, this may give you more messages than desired, so feel free to up the threshold from 2 to 50, 100, or whatever makes sense for you. (In that case, you should use a variable.) Regardless, that's pretty much the entire script save for the OAuth2 code that we're so familiar with from previous posts. The script is posted below in its entirety, and if you run it, you'll see an interesting collection of threads... YMMV depending on what messages are in your inbox:
    $ python3 gmail_threads.py
    [Tutor] About Python Module to Process Bytes (3 msgs)
    Core Python book review update (30 msgs)
    [Tutor] scratching my head (16 msgs)
    [Tutor] for loop for long numbers (10 msgs)
    [Tutor] How to show the listbox from sqlite and make it searchable? (4 msgs)
    [Tutor] find pickle and retrieve saved data (3 msgs)
    BONUS: Python 3!

    As of Mar 2015 (formally in Apr 2015 when the docs were updated), support for Python 3 was added to the Google APIs Client Library (3.3+)! This update was a long time coming (relevant GitHub thread), and allows Python 3 developers to write code that accesses Google APIs. If you're already running 3.x, you can use its pip command (pip3) to install the Client Library:

    $ pip3 install -U google-api-python-client

    Because of this, unlike previous blogposts, we're deliberately going to avoid use of the print statement and switch to the print() function instead. If you're still running Python 2, be sure to add the following import so that the code will also run in your 2.x interpreter:

    from __future__ import print_function

    Conclusion

    To find out more about the input parameters as well as all the fields that are in the response, take a look at the docs for threads().list(). For more information on what other operations you can execute with the Gmail API, take a look at the reference docs and check out the companion video for this code sample. That's it!

    Below is the entire script for your convenience which runs on both Python 2 and Python 3 (unmodified!):
    from __future__ import print_function

    from apiclient import discovery
    from httplib2 import Http
    from oauth2client import file, client, tools

    SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
    store = file.Storage('storage.json')
    creds = store.get()
    if not creds or creds.invalid:
        flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
        creds = tools.run_flow(flow, store)
    GMAIL = discovery.build('gmail', 'v1', http=creds.authorize(Http()))

    threads = GMAIL.users().threads().list(userId='me').execute().get('threads', [])
    for thread in threads:
        tdata = GMAIL.users().threads().get(userId='me', id=thread['id']).execute()
        nmsgs = len(tdata['messages'])

        if nmsgs > 2:
            msg = tdata['messages'][0]['payload']
            subject = ''
            for header in msg['headers']:
                if header['name'] == 'Subject':
                    subject = header['value']
                    break
            if subject:
                print('%s (%d msgs)' % (subject, nmsgs))
    You can now customize this code for your own needs, for a mobile frontend, a server-side backend, or to access other Google APIs. If you want to see another example of using the Gmail API (displaying all your inbox labels), check out the Python Quickstart example in the official docs or its equivalent in Java (server-side, Android), iOS (Objective-C, Swift), C#/.NET, PHP, Ruby, JavaScript (client-side, Node.js), or Go. That's it... hope you find these code samples useful in helping you get started with the Gmail API!

    EXTRA CREDIT: To test your skills and challenge yourself, try writing code that allows users to perform a search across their email, or perhaps creating an email draft, adding attachments, then sending them! Note that to prevent spam, there are strict Program Policies that you must abide by... any abuse could rate limit your account or get it shut down. Check out those rules plus other Gmail terms of use here.
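
    To get started on the search idea, here is a minimal, untested sketch that reuses the GMAIL service object built in the script above; the query string is only an illustrative example (any Gmail search query works), and printing the snippet is just one convenient thing to do with each hit:

    QUERY = 'is:unread from:python.org'    # hypothetical example query
    resp = GMAIL.users().messages().list(userId='me', q=QUERY).execute()
    for meta in resp.get('messages', []):
        # fetch each matching message and display its snippet
        msg = GMAIL.users().messages().get(userId='me', id=meta['id']).execute()
        print(msg.get('snippet', ''))
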
    Categories: FLOSS Project Planets

    Mike Gabriel: Rocrail changed License to some dodgy non-free non-License

    Planet Debian - Mon, 2016-09-19 05:51
    The Background Story

    A year ago, or so, I took some time to search the internet for Free Software that can be used for controlling model railways via a computer. I was happy to find Rocrail [1] being one of only a few applications available on the market. And even more, I was very happy when I saw that it had been licensed under a Free Software license: GPL-3(+).

    A month ago, or so, I collected my old Märklin (Digital) stuff from my parents' place and started looking into it again after 15+ years, together with my little son.

    Some weeks ago, I remembered Rocrail and thought... Hey, this software was GPLed code and absolutely suitable for uploading to Debian and/or Ubuntu. I searched for the Rocrail source code and figured out that it got hidden from the web some time in 2015 and that the license obviously has been changed to some non-free license (I could not figure out what license, though).

    This made me very sad! I thought I had found a piece of software that might be interesting for testing with my model railway. Whenever I stumble over some nice piece of Free Software that I plan to use (or even only play with), I upload it to Debian as one of the first steps. However, I try hard to stay away from non-free software, so Rocrail became a non-option for me back in 2015.

    I should have moved on from here on...

    Instead...

    Proactively, I signed up with the Rocrail forum and asked the author(s) whether they saw any chance of re-licensing the Rocrail code under the GPL (or any other FLOSS license) again [2]. When I encounter situations like this, I normally offer my expertise and help with such licensing stuff for free. My impression by this point already was that something strange must have happened in the past, so that the software developers chose the GPL and later on stepped back from that decision and from then on have been hiding the source code from the web entirely.

    Going deeper...

    The Rocrail project's wiki states that anyone can request GitBlit access via the forum and obtain the source code via Git for local build purposes only. Nice! So, I asked for access to the project's Git repository, which I had been granted. Thanks for that.

    Trivial Source Code Investigation...

    So far so good. I investigated the source code (well, only the license meta stuff shipped with the source code...) and found that the main COPYING files (found at various locations in the source tree, containing a full version of the GPL-3 license) had been replaced by this text:

    Copyright (c) 2002 Robert Jan Versluis, Rocrail.net All rights reserved. Commercial usage needs permission.

    The replacement happened with these Git commits:

    commit cfee35f3ae5973e97a3d4b178f20eb69a916203e
    Author: Rob Versluis <r.j.versluis@rocrail.net>
    Date:   Fri Jul 17 16:09:45 2015 +0200

        update copyrights

    commit df399d9d4be05799d4ae27984746c8b600adb20b
    Author: Rob Versluis <r.j.versluis@rocrail.net>
    Date:   Wed Jul 8 14:49:12 2015 +0200

        update licence

    commit 0daffa4b8d3dc13df95ef47e0bdd52e1c2c58443
    Author: Rob Versluis <r.j.versluis@rocrail.net>
    Date:   Wed Jul 8 10:17:13 2015 +0200

        update

    Getting in touch again, still being really interested and wanting to help...

    As I consider such a non-license as really dangerous when distributing any sort of software, be it Free or non-free Software, I posted the below text on the Rocrail forum:

    Hi Rob,

    I just stumbled over this post [3] (link reference adapted for this blog post), which probably is the one you have referred to above. It seems that Rocrail contains features that require a key or such for permanent activation. Basically, this is allowed and possible even with the GPL-3+ (although Free Software activists will not appreciate that). As the GPL states that people can share the source code, programmers can easily deactivate license key checks (and such) in the code and re-distribute that patchset as they like.

    Furthermore, the current COPYING file is really non-protective at all. It does not really protect you as copyright holder of the code. Meaning, if people crash their trains with your software, you could actually be legally prosecuted for that. In theory. Or in the U.S. ( ;-) ). The main reason for having a long long license text is to protect you as the author in case your software causes trouble to other people. You do not have any warranty disclaimer in your COPYING file or elsewhere. Really not a good idea.

    In that referenced post above, someone also writes about the nuisance of license discussions in this forum. I have seen various cases where people produced software and did not really care for licensing. Some ended with a letter from a lawyer, some with some BIG company using their code under their copyright holdership and their own commercial licensing scheme. This is not paranoia, this is what happens in the Free Software world from time to time.

    A model that might be much more appropriate (and more protective to you as the author), maybe, is a dual release scheme for the code. A possible approach could be to split Rocrail into two editions: Community Edition and Professional/Commercial Edition. The Community Edition must be licensed in a way that allows re-using the code in a closed-source, non-free version of Rocrail (e.g. MIT/Expat License or Apache-2.0 License). Thus, the code base belonging to the Community Edition would be licensed, say..., as Apache-2.0, and for the extra features in the Commercial Edition, you may use any non-free license you want (but please not that COPYING file you have now, it really does not protect your copyright holdership).

    The reason for releasing (a reduced set of features of a) software as Free Software is to extend the user base. The honey jar effect, as practised by many huge FLOSS projects (e.g. Owncloud, GitLab, etc.). If people could install Rocrail from the Debian / Ubuntu archives directly, I am sure that the user base of Rocrail will increase. There may also be developers popping up showing an interest in Rocrail (e.g. like me). However, I know many FLOSS developers (e.g. like me) that won't waste their free time on working for a non-free piece of software (without being paid). If you follow (or want to follow) a business model with Rocrail, then keep some interesting features in the Commercial Edition and don't ship that source code. People with deep interest may opt for that. Furthermore, another option could be dual licensing the code. As the copyright holder of Rocrail you are free to juggle with licenses and apply any license to a release you want. For example, this can be interesting for a free-again Rocrail being shipped via Apple's iStore.

    Last but not least, as you ship the complete source code with all previous changes as a Git project to those who request GitBlit access, it is possible to obtain all earlier versions of Rocrail. In the mail I received with my GitBlit credentials, there was some text that prohibits publishing the code. Fine. But: (in theory) it is not forbidden to share the code with a friend, for local usage. This friend finds the COPYING file, frowns and rewinds back to 2015 where the license was still GPL-3+. GPL-3+ code can be shared with anyone and also published, so this friend could upload the 2015-version of Rocrail to Github or such and start to work on a free fork. You also may not want this.

    Thanks for working on this piece of software! It is highly interesting, and I am still sad, that it does not come with a free license anymore. I won't continue this discussion and move on, unless you are interested in any of the above information and ask for more expertise. Ping me here or directly via mail, if needed. If the expertise leads to parts of Rocrail becoming Free Software again, the expertise is offered free of charge ;-).

    light+love
    Mike

    Wow, the first time I got moderated somewhere... What an experience!

    This experience now was really new. My post got immediately removed from the forum by the main author of Rocrail (with the forum moderator's hat on). The new experience was: I got really angry when I discovered having been moderated. Wow! Really a powerful emotion. No harassment in my words, no secrets disclosed, and still... my free speech got suppressed by someone. That feels intense! And it only occurred in the virtual realm, not face to face. Wow!!! I did not expect such intensity...

    The reason for wiping my post without any other communication was given as below, and it is quite a statement to frown upon (this post has also been "moderately" removed from the forum thread [2] a bit later today):

    Mike, I think its not a good idea to point out a way to get the sources back to the GPL periode. Therefore I deleted your posting.

    (The phpBB forum software also allows moderators to edit posts, so the critical passage could have been removed instead, but immediately wiping the full message, well...). Also, just wiping my post, with no reply and no apology for suppressing my words, really is a no-go. And as for the reason given for wiping the rest of the text... any Git user can easily figure out how to get a FLOSS version of Rocrail and continue to work on that from then on. Really.

    Now the political part of this blog post...

    Fortunately, I still live in an area of the world where the right of free speech is still present. I found out: I really don't like being moderated!!! Esp. if what I share / propose is really noooo secret at all. Anyone who knows how to use Git can come to the same conclusion as I have come to this morning.

    [Off-topic, not at all related to Rocrail: The last votes here in Germany indicate that some really stupid folks here yearn for another–this time highly idiotic–wind of change, where free speech may end up as a precious good.]

    To other (Debian) Package Maintainers and Railroad Enthusiasts...

    With this blog post I probably close the last option for Rocrail going FLOSS again. Personally, I think that gate was already closed before I got in touch.

    Now really moving on...

    Probably the best approach for my new train conductor hobby (as already recommended by the woman at my side some weeks back) is to leave the laptop lid closed when switching on the train control units. I should have listened to her much earlier.

    I have finally removed the Rocrail source code from my computer again without building and testing the application. I have not shared the source code with anyone, nor have I shared the Git URL with anyone. I really think that FLOSS enthusiasts should stay away from this software for now. For my part, I have lost my interest in this completely...

    References

    light+love,
    Mike

    Categories: FLOSS Project Planets

    Qt Graphics with Multiple Displays on Embedded Linux

    Planet KDE - Mon, 2016-09-19 03:46

    Creating devices with multiple screens is not new to Qt. Those using Qt for Embedded in the Qt 4 times may remember configuration steps like this. The story got significantly more complicated with Qt 5’s focus on hardware accelerated rendering, so now it is time to take a look at where we are today with the upcoming Qt 5.8.

    Windowing System Options on Embedded

    The most common ways to run Qt applications on an embedded board with accelerated graphics (typically EGL + OpenGL ES) are the following:

    • eglfs on top of fbdev or a proprietary compositor API or Kernel Modesetting + the Direct Rendering Manager
    • Wayland: Weston or a compositor implemented with the Qt Wayland Compositor framework + one or more Qt client applications
    • X11: Qt applications here run with the same xcb platform plugin that is used in a typical desktop Linux setup

    We are now going to take a look at the status of eglfs because this is the most common option, and because some of the other approaches rely on it as well.

    Eglfs Backends and Support Levels

    eglfs has a number of backends for various devices and stacks. For each of these the level of support for multiple screens falls into one of the three following categories:

    • [1] Output management is available.
    • [2] Qt applications can choose at launch time which single screen to output to, but apart from this static setting no other configuration option is provided.
    • [3] No output-related configuration is provided.

    Note that some of these, in particular [2], may require additional kernel configuration via a video argument or similar. This is out of Qt’s domain.

    Now let’s look at the available backends and the level of multi-display support for each:

    • KMS/DRM with GBM buffers (Mesa (e.g. Intel) or modern PowerVR and some other systems) [1]
    • KMS/DRM with EGLDevice/EGLOutput/EGLStream (NVIDIA) [1]
    • Vivante fbdev (NXP i.MX6) [2]
    • Broadcom Dispmanx (Raspberry Pi) [2]
    • Mali fbdev (ODROID and others) [3]
    • (X11 fullscreen window – targeted mainly for testing and development) [3]

    Unsurprisingly, it is the backends using the DRM framework that come out best. This is as expected, since there we have a proper connector, encoder and CRTC enumeration API, whereas others have to resort to vendor-specific solutions that are often a lot more limited.

    We will now focus on the two DRM-based backends.

    Short History of KMS/DRM in Qt Qt 5.0 – 5.4

    Qt 5 featured a kms platform plugin right from the beginning. This was fairly usable, but limited in features and was seen more as a proof of concept. Therefore, with the improvements in eglfs, it became clear that a more unified approach was necessary. Hence the introduction of the eglfs_kms backend for eglfs in Qt 5.5.

    Qt 5.5

    While originally developed for a PowerVR-based embedded system, the new backend proved immensely useful for all Linux systems running with Mesa, the open-source stack, in particular on Intel hardware. It also featured a plane-based mouse cursor, with basic support for multiple screens added soon afterwards.

    Qt 5.6

    With the rise of NVIDIA’s somewhat different approach to buffer management – see this presentation for an introduction – an additional backend had to be introduced. This is called eglfs_kms_egldevice and allows running on the automotive-oriented Jetson Pro, DRIVE CX and DRIVE PX systems.

    The initial version of the plugin was standalone and independent from the existing DRM code. This led to certain deficiencies, most notably the lack of multi-display support.

    Qt 5.7

    Fortunately, these problems got addressed pretty soon. Qt 5.7 features proper code sharing between the backends, making most of the multi-display support and its JSON-based configuration system available to the EGLStream-based backend as well.

    Meanwhile the GBM-based backend got a number of fixes, in particular related to the hardware mouse cursor and the virtual desktop.

    Qt 5.8

    The upcoming release features two important improvements: it closes the gaps between the GBM and EGLStream backends and introduces support for advanced configurability. The former covers mainly the handling of the virtual desktop and the default, non-plane-based OpenGL mouse cursor which was unable to “move” between screens in previous releases.

    The documentation is already browsable at the doc snapshots page.

    Besides the ability to specify the virtual desktop layout, the introduction of the touchDevice property is particularly important when building systems where one or more of the screens is made interactive via a touchscreen. Let’s take a quick look at this.

    Touch Input

    Let’s say you are creating digital instrument clusters with Qt, with multiple touch-enabled displays involved. Given that the touchscreens report absolute coordinates in their events, how can Qt tell which screen’s virtual geometry the event should be translated to? Well, on its own it cannot.

    From Qt 5.8 it will be possible to help out the framework. By setting QT_LOGGING_RULES=qt.qpa.*=true we enable logging which lets us figure out the touchscreen’s device node.  We can then create a little JSON configuration file on the device:

    {
      "device": "drm-nvdc",
      "outputs": [
        { "name": "HDMI1", "touchDevice": "/dev/input/event5" }
      ]
    }

    This will come handy in any case since configuration of screen resolution, virtual desktop layout, etc. all happens in the same file.

    Now, when a Qt application is launched with the QT_QPA_EGLFS_KMS_CONFIG environment variable pointing to our file, Qt will know that the display connected to the first HDMI port has a touchscreen as well that shows up at /dev/input/event5. Hence any touch event from that device will get correctly associated with the screen in question.

    Qt on the DRIVE CX

    Let’s see something in action. In the following example we will use an NVIDIA DRIVE CX board, with two monitors connected via HDMI and DisplayPort. The software stack is the default Vibrante Linux image, with Qt 5.8 deployed on top. Qt applications run with the eglfs platform plugin and its eglfs_kms_egldevice backend.

    Our little test environment looks like this:

    This already looks impressive, and not just because we found such good use for the Windows 95, MFC, ActiveX and COM books hanging around in the office from previous decades. The two monitors on the sides are showing a Qt Quick application that apparently picks up both screens automatically and can drive both at the same time. Excellent.

    The application we are using is available here. It follows the standard multi-display application model for embedded (eglfs): creating a dedicated QQuickWindow (or QQuickView) on each of the available screens. For an example of this, check the code in the github repository, or take a look at the documentation pages that also have example code snippets.
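
    As a rough illustration of that model (this is not code from the linked repository; the QML file name and resource path are assumptions), a per-screen setup could look like this minimal sketch:

    #include <QGuiApplication>
    #include <QQuickView>
    #include <QScreen>
    #include <QList>
    #include <QUrl>

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);

        // One full-screen QQuickView per connected screen; with eglfs each
        // window stays on the QScreen it was assigned before being shown.
        QList<QQuickView *> views;
        for (QScreen *screen : app.screens()) {
            QQuickView *view = new QQuickView;
            view->setScreen(screen);
            view->setResizeMode(QQuickView::SizeRootObjectToView);
            view->setSource(QUrl(QStringLiteral("qrc:/main.qml")));
            view->showFullScreen();
            views.append(view);
        }

        const int result = app.exec();
        qDeleteAll(views);
        return result;
    }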

    A closer look reveals our desktop configuration:

    The gray MouseArea is used to test mouse and touch input handling. Hooking up a USB touch-enabled display immediately reveals the problems of pre-5.8 Qt versions: touching that area would only deliver events to it when the screen happened to be the first one. In Qt 5.8 this can now be handled as described above.

    It is important to understand the screen geometry concepts in QScreen; a short snippet after the list below shows how to query them. When the screens form a virtual desktop (which is the default for eglfs), the interpretation is the following:

    • geometry() – the screen’s position and size in the virtual desktop
    • availableGeometry() – without a windowing system this is the same as geometry()
    • virtualGeometry() – the geometry of the entire virtual desktop to which the screen belongs
    • availableVirtualGeometry() – same as virtualGeometry()
    • virtualSiblings() – the list of all screens belonging to the same virtual desktop
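
    For completeness, here is a small sketch (not from the original post) that prints these values for every screen, which can be handy when verifying how a configuration was applied:

    #include <QGuiApplication>
    #include <QScreen>
    #include <QDebug>

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);
        // Dump how the platform plugin laid out the virtual desktop.
        for (const QScreen *screen : app.screens()) {
            qDebug() << screen->name()
                     << "geometry" << screen->geometry()
                     << "virtual desktop" << screen->virtualGeometry()
                     << "siblings" << screen->virtualSiblings().count();
        }
        return 0;
    }
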
    Configuration

    How does the virtual desktop get formed? It may seem fairly random by default. In fact it simply follows the order DRM connectors are reported in. This is often not ideal. Fortunately, it is configurable starting with Qt 5.8. For instance, to ensure that the monitor on the first HDMI port gets a top-left position of (0, 0), we could add something like the following to the configuration file specified in QT_QPA_EGLFS_KMS_CONFIG:

    {
      "device": "drm-nvdc",
      "outputs": [
        { "name": "HDMI1", "virtualIndex": 0 },
        { "name": "DP1", "virtualIndex": 1 }
      ]
    }

    If we wanted to create a vertical layout instead of horizontal (think an instrument cluster demo with three or more screens stacked under each other), we could have added:

    {
      "device": "drm-nvdc",
      "virtualDesktopLayout": "vertical",
      ...
    }

    More complex layouts, for example a T-shaped setup with 4 screens, are also possible via the virtualPos property:

    {
      ...
      "outputs": [
        { "name": "HDMI1", "virtualIndex": 0 },
        { "name": "HDMI2", "virtualIndex": 1 },
        { "name": "DP1", "virtualIndex": 2 },
        { "name": "DP2", "virtualPos": "1920, 1080" }
      ]
    }

    Here the fourth screen’s virtual position is specified explicitly.

    In addition to virtualIndex and virtualPos, the other commonly used properties are mode, physicalWidth and physicalHeight. mode sets the desired mode for the screen and is typically a resolution, e.g. "1920x1080", but can also be set to "off", "current", or "preferred" (which is the default).

    For example:

    { "device": "drm-nvdc", "outputs": [ { "name": "HDMI1", "mode": "1024x768" }, { "name": "DP1", "mode": "off" } ] }

    The physical sizes of the displays become quite important when working with text and components from Qt Quick Controls, because these base their size calculations on the logical DPI, which is in turn derived from the physical width and height. In desktop environments queries for these sizes usually just work, so no further action is needed. On embedded, however, it has often been necessary to provide the sizes in millimeters via the environment variables QT_QPA_EGLFS_PHYSICAL_WIDTH and QT_QPA_EGLFS_PHYSICAL_HEIGHT. This is not suitable in a multi-display environment, and therefore Qt 5.8 introduces an alternative: the physicalWidth and physicalHeight properties (values are in millimeters) in the JSON configuration file. As witnessed in the second screenshot above, the physical sizes did not get reported correctly in our demo setup. This can be corrected, as was done for the monitor in the first screenshot, with something like:

    { "device": "drm-nvdc", "outputs": [ { "name": "HDMI1", "physicalWidth": 531, "physicalHeight": 298 }, ... ] }

    As always, enabling logging can be a tremendous help when troubleshooting. There are a number of logging categories for eglfs, its backends and input, so the easiest approach is often to enable everything under qt.qpa by doing export QT_LOGGING_RULES=qt.qpa.*=true before starting a Qt application.

    What About Wayland?

    What about systems using multiple GUI processes and compositing them via a Qt-based Wayland compositor? Given that the compositor application still needs a platform plugin to run with, and that is typically eglfs, everything described above applies to most Wayland-based systems as well.

    Once the displays are configured correctly, the compositor can create multiple QQuickWindow instances (QML scenes) targeting each of the connected screens. These can then be assigned to the corresponding WaylandOutput items. Check the multi output example for a simple compositor with multiple outputs.

    The rest, meaning how the client applications' windows are placed, whether the scenes on the different displays are treated as one big virtual scene, how client "windows" are moved between screens, and so on, is all in QtWayland's domain.

    What’s Missing and Future Plans

    The QML side of screen management could benefit from some minor improvements: unlike C++, where QScreen, QWindow and QWindow::setScreen() are first-class citizens, Qt Quick currently has no simple way to associate a Window with a QScreen, mainly because QScreen instances are only partially exposed to the QML world. While this is not fatal and can, as usual, be worked around with some C++ code, the story here will have to be enhanced a bit.
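    To give an idea of what such a workaround might look like (ScreenHelper and moveToScreen are made-up names for this sketch, not Qt API), a small helper object can be exposed to QML so that the QScreen association happens on the C++ side:

    #include <QGuiApplication>
    #include <QObject>
    #include <QQuickWindow>
    #include <QScreen>

    // Hypothetical helper: QML hands over a Window and a screen index,
    // C++ performs the association. With eglfs it is safest to do this
    // before the window is shown for the first time.
    class ScreenHelper : public QObject
    {
        Q_OBJECT
    public:
        using QObject::QObject;

        Q_INVOKABLE void moveToScreen(QQuickWindow *window, int screenIndex) const
        {
            const QList<QScreen *> screens = QGuiApplication::screens();
            if (window && screenIndex >= 0 && screenIndex < screens.count())
                window->setScreen(screens.at(screenIndex));
        }
    };

    An instance could then be made available to QML with, say, engine.rootContext()->setContextProperty("screenHelper", &helper) and invoked once the Window object exists.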

    Another missing feature is the ability to connect and disconnect screens at runtime. Currently such hotplugging is not supported by any of the backends. It is worth noting that with embedded systems the urgency is probably a lot lower than with ordinary desktop PCs or laptops, since the need to change screens in such a manner is less common. Nevertheless this is something that is on the roadmap for future releases.

    That’s it for now. As we know, more screens are better than one, so why not just let Qt power them all?

    The post Qt Graphics with Multiple Displays on Embedded Linux appeared first on Qt Blog.

    Categories: FLOSS Project Planets

    Aurelien Navarre: How to return the path to an enabled Drupal module or theme?

    Planet Drupal - Mon, 2016-09-19 03:33

    In Drupal 7, it was fairly easy to retrieve the filesystem path for, say, enabled modules.

    mysql> SELECT filename, name FROM system WHERE status = 1 AND name = "xmlsitemap";
    +--------------------------------------------------------+------------+
    | filename                                                | name       |
    +--------------------------------------------------------+------------+
    | sites/all/modules/contrib/xmlsitemap/xmlsitemap.module | xmlsitemap |
    +--------------------------------------------------------+------------+
    1 row in set (0.00 sec)

    Why would you do that? Simply because you can sometimes run into issues caused by duplicate .info files in the filesystem. A common case is deleting a duplicate module or moving it to another directory while the module is still registered in the database under its original location. This will make Drupal sad when it bootstraps.

    When this happens, you need to know where a particular module is being loaded from. You can compare the results of the above MySQL query with a simple Linux command that finds all occurrences of those filenames in your Drupal docroot, and narrow down the issue from there (e.g. is the filename being loaded still present on the filesystem?). The command could take the following form:

    $ find . -type f -name "*.info" | grep -oe "[^/]*\.info" | sort | uniq -d
    property_validation.info
    xmlsitemap.info

    Let's say xmlsitemap.info is our culprit. We can refine the Linux find command accordingly:

    $ find . -type f -name "xmlsitemap.info"
    ./sites/all/modules/xmlsitemap/xmlsitemap.info
    ./sites/all/modules/contrib/xmlsitemap/xmlsitemap.info

    This gives the full path to each copy of the duplicate .info file.

    What matters most here is that we don't need to bootstrap Drupal, which can be a lifesaver when the site is down.

    Going forward with Drupal 8

    In Drupal 8 we still have drupal_get_path() to help if we can bootstrap Drupal.

    Psy Shell v0.7.2 (PHP 5.6.24 — cli) by Justin Hileman
    >>> drupal_get_path('module', 'xmlsitemap');
    => "modules/xmlsitemap"

    However, we can no longer query the {system} table. One workaround I found is to decode the corresponding {key_value} entry. E.g.:

    $ drush sqlq "SELECT CONVERT(value USING utf8) FROM key_value WHERE collection = 'state' AND value LIKE '%xmlsitemap.info.yml%'" | grep --color=auto 'xmlsitemap.info.yml'

    This will return a huge array, so colored grep output is helpful for spotting the path to the loaded .info.yml file.

    This gets the job done, but not as cleanly as I would like. Do you know of any better way to achieve this?

    Categories: FLOSS Project Planets

    Jeff Knupp: Writing Idiomatic Python Video Four Is Out!

    Planet Python - Mon, 2016-09-19 02:07

    After an unplanned two-year hiatus, the fourth video in the Writing Idiomatic Python Video Series is out! This was long overdue, and for that I sincerely apologize. All I can do now is continue to produce the rest at a steady clip and get them out as quickly as possible. I hope you find the video useful! Part 5 will be out soon...

    Categories: FLOSS Project Planets

    KDE Timeline translated into Spanish

    Planet KDE - Sun, 2016-09-18 21:02

    Hi all,

    Thanks to the collaboration of Victorhck and Eloy Cuadra, the KDE 20 Years Timeline is now also available in Spanish.

    If you would also like to help us increase the reach of the timeline by translating it into another language, you can simply clone our repository here. For security reasons, write access to the KDE websites is restricted, but you can clone the repository, add your translation to the local XML file, and then send us the translated file.

    If you have any suggestions or adjustments, let us know.


    Categories: FLOSS Project Planets

    Danny Englander: Drupal 8 Architecture: Video Tour for Designing Structured Modular Content Using Entity Construction Kit (ECK) & Inline Entity Form (IEF)

    Planet Drupal - Sun, 2016-09-18 19:54

    A few months back, I read an interesting blog post by Chapter Three about something they call the "Slice Template." I was really inspired after reading it; it struck me as a whole new paradigm for content creation, that of "structured modular content." At the same time, I was working on a new Drupal 8 theme and build where my objective was to create something that would give content creators lots of flexibility.

    When I've had discussions with content creators in the past, more often than not, the one word that kept coming up was "flexibility." In turn, on site builds, this led to doing some really wacky things all in one WYSIWYG field.

    In the meantime, I had been playing around with the Paragraphs and Field Collection modules for Drupal 8, but after reading Chapter Three's post, I decided to go in a different direction, that being the Entity Construction Kit ("ECK").

    One way of building with ECK is to have "slices", which are entities that contain bundles and can also reference other entities that have their own bundles. On the content creation side, you can leverage the Inline Entity Form and Inline Entity Form Preview modules to create a minimalistic interface for content creators. It took me a long time, and lots of trial and error, to wrap my head around all this.

    Now that I feel like I have a good handle on this, I decided to record a video tour of what I have been building. It's still a work in progress but I think it's well enough along to give a little demo.

    Tags 
    • Drupal 8
    • Video
    • Tutorial
    • Drupal Planet
    • Theming
    • Architecture
    Categories: FLOSS Project Planets

    Omaha Python Users Group: September 21 Meeting

    Planet Python - Sun, 2016-09-18 19:37

    Lightning Talks, discussion, and topic selection for this seasons meetings.

    Event Details:

    • Where: DoSpace @ 7205 Dodge Street / Meeting Room #2
    • When: September 21, 2016 @ 6:30pm – 8:00pm
    • Who: People interested in programming with Python
    Categories: FLOSS Project Planets