Feeds

Lullabot: How to Embed Just About Anything in Drupal 8 WYSIWYG with Entity Embed and URL Embed

Planet Drupal - Wed, 2017-06-21 10:31


Embedding media assets (images, videos, related content, etc) in a WYSIWYG text area is a long-standing need that has been challenging in Drupal sites for a while. In Drupal 7, numerous solutions addressed the problem with different (and sometimes competing) approaches. 


Drupal 8 core comes with basic support for embedding images out-of-the-box, but we all know that “ambitious digital experiences” nowadays need to go far beyond that and provide editors with a powerful tool capable of handling any type of media asset, including remote ones such as a relevant Tweet or Facebook post.
 

This is where the powerful combination of Entity Embed + URL Embed comes into play. Predicated on, and extending the foundation established by Drupal core, these two modules were designed to be a standardized solution for embedding any entity or remote URL content in a WYSIWYG editor in Drupal 8.

In this article, you will find:
  • What Drupal 8 core offers out-of-the-box for embedding images inside rich text
  • How to create a WYSIWYG embed button with the Entity Embed module (with some common pitfalls you may encounter along the way)
  • How to embed remote media assets with the URL Embed module (again, with some common pitfalls!)
  • Additional resources on how to extend and integrate your embedding solutions with other tools from the Media ecosystem.
Drupal 8 core embedding

Drupal 8 made a big step forward from Drupal 7 by including the CKEditor WYSIWYG library in core. It also comes with an easy-to-use tool to embed local images in your text.


But what if the content you want to embed is not an image?

I am sure you have come across something like this before:


or this:


These are all examples of non-image media assets that needed to be embedded as well. So how can we extend Drupal 8 core’s ability to embed these pieces of content in a rich-text area? 

Enter the Entity Embed module. The Entity Embed module allows any Drupal entity to be embedded within a text area using a WYSIWYG editor.

 The module doesn't care what you embed, as long as that piece of content is stored in Drupal as a standard entity. In other words, you can embed nodes, taxonomy terms, comments, users, profiles, forms, media entities, etc. all in the same way and using a standard workflow.

In order to limit the length of this article, we will be working only with entities provided by Drupal core, but you should definitely try the Media Entity module, which can take your embedding capabilities to a whole new level (more on this at the end of this article).

Basic installation and configuration

Create an embed button

After downloading and installing the Entity Embed module and its dependencies, navigate to 

Configuration -> Content Authoring -> Text editor embed buttons

or go directly to /admin/config/content/embed and click “Add embed button”


You will see that there is already a button on this page called “Node,” which was automatically created upon installation. For the purposes of the example, we will create another button to embed nodes, but you can obviously use and modify the provided one on your site if you wish.

As an example, we’ll create a button to embed a node into another node, simulating a “Related article” scenario in a news article.


The config options you will find on this page are:

  • Label: Give your button a human-readable name and adjust its machine name if needed
  • Embed type: If you only have the Entity Embed module enabled, “Entity” is your only option here. When you install other modules that also extend the Embed framework, other options will be available.
  • Entity type: Choose here the type of the entity you will be embedding with this button. In our case, we choose “Content” in order to be able to embed nodes.
  • Content type: We can optionally filter the entity bundles (aka “Content types” for nodes) that the user will be allowed to choose from. In our case we only want “Articles”.
  • Allowed Entity Embed Display plugins: Choose here the display plugins that will be available when embedding the entity. This means that when an editor chooses an entity to embed, they will be asked how this entity should be displayed, and the options the user sees will be restricted to the selections you make here. In our case, we only want the editor to be able to choose between a simple “Label” pointing to the content, or a “Teaser” visualization of the node (which uses the teaser view mode). More on this later.
  • Entity browser: In our example, we won’t be using any, but you should definitely try the integration of Entity Embed with Entity Browser, making the selection of the embedded entities much easier!
  • Button icon: You can optionally upload a custom icon to be shown on the button. If left empty, the letter “E” will be used.

Once we’ve created our button, it’s time to add it to the WYSIWYG toolbar.

Entity Embed: WYSIWYG configuration

Navigate to:

Configuration -> Content authoring -> Text formats and editors

or go directly to /admin/config/content/formats and click “configure” on the format you want to add the button to. In our example, we are going to add it to the “Basic HTML” text format.

Step 1: Place the button in the active toolbar


Step 2: Mark the checkbox “Display embedded entities”


Step 3: Make sure the special tags are whitelisted

If you are also using the filter “Limit allowed HTML tags and correct faulty HTML” (which you should), it’s important to make sure that the tags used by this module are allowed by this filter. Scroll down a little bit and verify that the tags:

<drupal-entity data-entity-type data-entity-uuid data-entity-embed-display data-entity-embed-display-settings data-align data-caption data-embed-button>

are listed in the “Allowed HTML tags” text area.


If you are using a recent version of the module, this should be automatically populated as soon as you click on “Display embedded entities”, but it doesn’t hurt to verify it was done correctly.

Common pitfall

If you are embedding images and you want to use “Alt” and “Title” attributes, you probably want to fine-tune the allowed tags to be something like:
<drupal-entity data-entity-type data-entity-uuid data-entity-embed-display data-entity-embed-display-settings data-align data-caption data-embed-button alt title>

instead of the default value provided.

Common pitfall

If you have the filter “Restrict images to this site” active (which comes activated by default in Drupal core), you will probably want to reorder the filters so that “Display embedded entities” comes after the filter “Restrict images to this site”. If you don’t do this, when you embed some image entities you may end up with something like:

All set, ready to embed some entities!

Your editors can now navigate to any content form that uses the “Basic HTML” text format and start using the button!


Common pitfall

After this issue got in, the module slightly changed the way it manages the display plugins for viewing the embedded entity. As a result, the entity_reference formatter (“Rendered entity”) that many people are used to from other contexts is not available right away; instead, all non-default view modes of the entity type are shown directly as display options.


However, for an entity without custom view modes, you won’t have the “Full” (or “default”) view mode available anymore. There is still some discussion happening about how to ensure the best user experience for this issue; if you want to know more about it or jump in and help, you can find more information here and here.

Quick and easy solution for remote embedding with URL Embed


A sister module of Entity Embed, the URL Embed module allows content editors to quickly embed any piece of remote content that implements the oEmbed protocol.


Nice tech terms, but in practice what does that mean? It means that any content from one of the sites below (but not limited to these) can be embedded:

  • Deviantart
  • Facebook
  • Flickr
  • Hulu
  • IFTTT
  • Instagram
  • National Film Board of Canada
  • Noembed
  • Podbean
  • Revision3
  • Scribd
  • SlideShare
  • SmugMug
  • SoundCloud
  • Spotify
  • TED
  • Twitter
  • Ustream
  • Viddler
  • Vimeo
  • YouTube

Under the hood, the module leverages the awesome Embed open-source library to fetch the content from the remote sites, and uses the same base framework as Entity Embed to create embed buttons for this type of content. As a result, the site builder has a very similar workflow when configuring the WYSIWYG tools, which can then be used by the content editor in a very intuitive way. Let’s see how all that works.

Install the module and create a URL Embed button

As usual, the first step is to enable the module itself, along with all its dependencies.

Common pitfall

This module depends on the external Embed library. If you are using Composer to download the Drupal module (which you should), Composer will do all the necessary work to fetch the dependencies for you. You can also install only the library itself using Composer if you want. To do that just run "composer require embed/embed". Failing to correctly install the library will prevent the module from being enabled, with an error message such as:


Once successfully enabled, the “Embed button” concept here is exactly the same as in the previous example, so all we need to do is go back to /admin/config/content/embed.


The module already creates a new button for us, intended for embedding remote content. URL buttons have no specific configuration except for name, type and icon, so we’ll skip the button configuration part.

URL Embed: WYSIWYG Configuration

Similarly to what we did with the Entity button, we need to follow the same steps here to enable the URL Embed button on a given text format:

  • Step 1: Place the button in the active toolbar
  • Step 2: Mark the checkbox “Display embedded URLs”
  • Step 3: Make sure the special tags are whitelisted. (Note that here the tags won’t be automatically populated, you need to manually introduce <drupal-url data-*> to the “Allowed HTML tags” text area.)
  • Step 4: (Optional) If you want, you can also mark the checkbox “Convert URLs to URL embeds”, which will automatically convert URLs from the text into embedded objects. If you do this though, make sure you reorder the filters so that “Display embedded URLs” is placed after the filter “Convert URLs to URL embeds”.
All done, go embed some remote content!

Common pitfall

Please bear in mind that while the Embed library can deal with many remote websites, it won’t do magic with providers that don’t implement the oEmbed protocol in a consistent way! Currently the module will only render the content of providers that return something inside the code property. If you are trying to embed some remote content that is not appearing correctly on your site, you can troubleshoot it by going to the URL: https://oscarotero.com/embed3/demo and trying the same URL there. If there is no content inside the code property, this URL unfortunately can’t be embedded in Drupal using URL Embed. Once this issue is complete, this will be less of an issue because the validation will prevent the user from entering an invalid URL.
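
If you prefer to script that check, the oEmbed protocol itself is easy to query. Here is a minimal sketch in Python, assuming the provider exposes a standard oEmbed JSON endpoint (YouTube’s is used here; other providers publish theirs on oembed.com, and the video URL is an illustrative placeholder). The “html” field plays the same role as the Embed library’s code property:

# Check whether a provider returns embeddable HTML for a given URL.
import json
import urllib.parse
import urllib.request

def oembed_html(url, endpoint="https://www.youtube.com/oembed"):
    query = urllib.parse.urlencode({"url": url, "format": "json"})
    with urllib.request.urlopen(endpoint + "?" + query) as response:
        data = json.loads(response.read().decode("utf-8"))
    # A missing or empty "html" field means the URL cannot be embedded this way.
    return data.get("html")

print(oembed_html("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))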

Get even more by integrating with other Media solutions

There are at least two important ways you can extend the solutions indicated here, to achieve an even more powerful or easy-to-use editorial experience. Those approaches will be discussed in detail in upcoming articles, but you can get a brief idea of them below, in case you want to start learning about them right away.

Allow content to be selected through Entity Browsers


In the previous example, the content being embedded (in this case a referenced node) was selected by the editor using an autocomplete field. This is a very basic solution, available out-of-the-box for any referenced entity in Drupal core, but it does not provide the ultimate user experience we would expect from a modern CMS. The good news: it’s an easy fix. Plug any Entity Browser you may have on your site into any embed button you have created.

Going over the configuration of Entity Browsers is beyond the scope of this article, but you can read more at the official documentation, or directly by giving it a try, using one of the pre-packaged modules like File Entity Browser, Content Browser or Media Entity Browser.

Common pitfall

If you are using an Entity Browser to select the content to be embedded, make sure your browser is configured to have a display plugin of type “iFrame.” The Embed options already appear in a modal window, so selecting an Entity Browser that is configured to be shown in a modal window won’t work.

Use the Media Entity module to deal (also) with remote media assets

The URL Embed module is a handy solution to allow your site editors to embed remote media assets in a quick-and-easy way, but what if:

  • you want to standardize your workflow for managing local and remote media assets?
  • you want to have a way to re-use content that was already embedded in other pieces of content?
  • etc.

A possible alternative that would solve all those needs is to standardize how Drupal sees all your media assets. The Media Entity approach means that you can “wrap” both local and remote media assets in a special “media” entity type; as a result, Entity Embed can be used with any type of asset, since all of them are Drupal entities regardless of where they are actually stored.

In Conclusion

Hopefully you found this a useful tutorial. I can strongly recommend the “Media Entity” approach, because it is partly being moved into Drupal core, which will result in an even stronger foundation for how Drupal sites will handle media assets in the near future.

Acknowledgements

Thanks to Seth Brown, David Burns and Dave Reid for their help with this article.

Other articles in this series

Image credit: Daian Gan

Categories: FLOSS Project Planets

DataCamp: New Course: Deep Learning in Python (first Keras 2.0 online course!)

Planet Python - Wed, 2017-06-21 10:10

Hello there! We have a special course released today: Deep Learning in Python by Dan Becker. This happens to be one of the first online interactive courses providing instruction in Keras 2.0, which now supports TensorFlow integration and whose new API will remain consistent in the coming years. So you've come to the right deep learning course.

About the course:

Artificial neural networks (ANN) are a biologically-inspired set of models that facilitate computers learning from observed data. Deep learning is a set of algorithms that use especially powerful neural networks. It is one of the hottest fields in data science, and most state-of-the-art results in robotics, image recognition and artificial intelligence (including the famous AlphaGo) use deep learning. In this course, you'll gain hands-on, practical knowledge of how to use neural networks and deep learning with Keras 2.0, the latest version of a cutting edge library for deep learning in Python.

Take me to Chapter 1!

Deep Learning in Python features interactive exercises that combine high-quality video, in-browser coding, and gamification for an engaging learning experience that will make you a master in deep learning in Python!

What you'll learn:

In the first chapter, you'll become familiar with the fundamental concepts and terminology used in deep learning, and understand why deep learning techniques are so powerful today. You'll build simple neural networks yourself and generate predictions with them. You can take this chapter here for free.

In chapter 2, you'll learn how to optimize the predictions generated by your neural networks. You'll do this using a method called backward propagation, which is one of the most important techniques in deep learning. Understanding how it works will give you a strong foundation to build from in the second half of the course.

In the third chapter, you'll use the keras library to build deep learning models for both regression and classification! You'll learn about the Specify-Compile-Fit workflow that you can use to make predictions, and by the end of this chapter, you'll have all the tools necessary to build deep neural networks!
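
To give you a taste of that workflow, here is a minimal sketch of Specify-Compile-Fit on a toy regression problem (the synthetic data and layer sizes are made up for illustration, not taken from the course):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy regression data: 100 samples with 4 features each.
X = np.random.rand(100, 4)
y = np.random.rand(100)

# Specify: a small fully-connected network.
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(4,)))
model.add(Dense(1))

# Compile: choose the optimizer and the loss function.
model.compile(optimizer='adam', loss='mean_squared_error')

# Fit: train the model on the data.
model.fit(X, y, epochs=10)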

Finally, you'll learn how to optimize your deep learning models in keras. You'll learn how to validate your models, understand the concept of model capacity, and experiment with wider and deeper networks. Enjoy!

Dive into Deep Learning Today

Categories: FLOSS Project Planets

EuroPython: EuroPython 2017: Conference App available

Planet Python - Wed, 2017-06-21 07:56

We are pleased to announce our very own mobile app for the EuroPython 2017 conference:

EuroPython 2017 Conference App

Engage with the conference and its attendees

The mobile app gives you access to the conference schedule (even offline), helps you in planning your conference experience (create your personal schedule) and provides a rich social engagement platform for all attendees.

You can create a profile within the app (or link this to your existing social accounts), share messages and photos, and easily reach out to other fellow attendees - all from within the app.

Vital for all EuroPython attendees

We will again use the conference app to keep you updated by sending schedule updates and informing you of important announcements via push notifications, so please consider downloading it.

Many useful features

Please see our EuroPython 2017 Conference App page for more details on features and guides on how to use them.


Don’t forget to get your EuroPython ticket

If you want to join the EuroPython fun, be sure to get your tickets as soon as possible, since ticket sales have picked up quite a bit after we announced the schedule.

Enjoy,

EuroPython 2017 Team
EuroPython Society
EuroPython 2017 Conference

Categories: FLOSS Project Planets

Kushal Das: Updates on my Python community work: 16-17

Planet Python - Wed, 2017-06-21 06:56

Thank you, everyone, for re-electing me to the Python Software Foundation board 2017. The results of the vote came out on June 12th. This is my third term on the board; 2014 and 2016 were the last two terms. In 2015 I was out, as the random module decided to choose someone else :)

Things I worked on last year

I was planning to write this in April, but somehow my flow of writing blog posts was broken, and I never managed to do so. But, better late than never.

As I had written in the wiki page for candidates, one of my major goals last year was building communities outside the USA region. Given the warm welcome I have received in every upstream online community (and also at physical conferences), we should make sure that others are able to have the same experience.

As part of this work, I worked on three things:

  • Started PyCon Pune, goal of the conference being upstream first
  • Lead the Python track at FOSSASIA in Singapore
  • Helping in the local PyLadies group (they are in the early stage)

You can read about our experience at PyCon Pune here. I think we were successful in spreading awareness of the bigger community that stands out there on the Internet throughout the world. All of the speakers pointed out how welcoming the community is, and how Python, the programming language, binds us all, be it scientific computing or small embedded devices. We also managed to have a proper dev sprint for all the attendees, where people made their first-ever upstream contributions.

At FOSSASIA, we had many professionals attending the talks, and the kids were having their own workshops. There were various other Python talks in different tracks as well.

Our local PyLadies Pune group still has more beginner Python programmers than working members. Though many of them work with Python in their jobs, they had never worked with the community before. So, my primary work there was not only about providing technical guidance but also about making sure that the group itself gets better visibility in the local companies. Anwesha writes about the group in much more detail than me, so you should go to her blog to know about the group.

I am also the co-chair of the grants working group. As part of this group, we review the grants proposals PSF receives. As the group members are distributed, generally we manage to get good input about these proposals. The number of grant proposals from every region has increased over the years, and I am sure we will see more events happening in the future.

Along with Lorena Mesa, I also helped as the communication officer for the board. She took charge of the board blog posts, and I was working on the emails. I was finding it difficult to calculate the amounts, so I wrote a small Python3 script which helps me to get the total numbers for every month's update. This also reminds me that I managed to attend all the board meetings (they are generally between 9:30 PM and 6:30 AM for me in India) except the last one, just a week before PyCon. Even though I was in Portland during that time, I was confused about the actual time of the event, and jet lag did not help either.

I also helped our amazing GSoC org-admin team; Terri is putting in countless hours to make sure that the Python community gets a great experience in this program. I am hoping to find good candidates in Outreachy too. Last year, the PSF had funds for the same but did not manage to find a good candidate.

There were other conferences where I participated in different ways. Among them, Science Hack Day India was very special: working with so many kids and learning Python together in the MicroPython environment was a memorable moment. I am eagerly waiting for this year's event.

I will write about my goals in the 2017-18 term in a future blog post.

Categories: FLOSS Project Planets

Tameesh Biswas | Blog: GSoC17 : Client Side File Crypto : Week 3

Planet Drupal - Wed, 2017-06-21 06:38

This post summarises my third week of coding with Drupal for Google Summer of Code 2017.

Reshaping the REST API

Categories: FLOSS Project Planets

GSoC’17 : First Blog

Planet KDE - Wed, 2017-06-21 06:30

Hello readers

I’m glad to share that I have been selected for the second time for a Google Summer of Code project under KDE. It’s my second consecutive year working with the DigiKam team.

DigiKam is an advanced digital photo management application which enables users to view, manage, edit, organise, tag and share photographs under Linux systems. DigiKam has a feature to search items by similarity. This requires computing image fingerprints, which are stored in the main database. These data can take up space on disk, especially with huge collections; they bloat the main database a lot and increase the complexity of backing up the main database, which includes all the main information for each registered item, such as tags, labels, comments, etc.

The goal of this proposal is to store the similarity fingerprints in a dedicated database. This would be a big relief for end users, as image fingerprints are a few KB of raw data for each image, and storing all of them takes huge disk space and increases latency for huge collections.

Thus, to overcome all the above issues, a new DB interface would be created. (This work has already been done for thumbnails and face fingerprints.) Also, from a backup point of view, it’s easier to have separate files to optimise.

I’ll keep you updated on my work in upcoming posts.

Till then, I encourage you to use the software. It’s easy to install and use. (You can find a cheat sheet to build DigiKam in my previous post!)

Happy DigiKaming!

Categories: FLOSS Project Planets

Codementor: Building an Hello World Application with Python/Django

Planet Python - Wed, 2017-06-21 06:11
Read this beginner-friendly post to get started with using Django to build web projects!
Categories: FLOSS Project Planets

digiKam 5.6.0 is released

Planet KDE - Wed, 2017-06-21 06:06

Following the 5th release 5.5.0 published in March 2017, the digiKam team is proud to announce the new release 5.6.0 of the digiKam Software Collection. With this version, the HTML gallery and the video slideshow tools are back, database shrinking (e.g. purging stale thumbnails) is also supported on MySQL, the grouping items feature has been improved, support for custom sidecar MIME types has been added, the geolocation bookmarks introduce fixes to be fully functional with bundles, and of course lots of bugs have been fixed.

HTML Gallery Tool

The HTML gallery is accessible through the tools menu in the main bar of both digiKam and showFoto. It allows you to create a web gallery with a selection of photos or a set of albums, which you can open in any web browser. There are many themes to select from, and you can create your own as well. JavaScript support is also available.

Video Slideshow Tool

The Video Slideshow is also accessible through the tools menu in the main bar of both digiKam and showFoto. It allows you to create a video slideshow with a selection of photos or albums. The generated video file can be viewed in any media player, such as phones, tablets, Blu-ray readers, etc. There are many settings to customize the format, the codec, the resolution, and the transitions (such as the famous Ken Burns effect).

Database Integrity Tool

Already in the 5.5.0 release, the tool dedicated to testing for database integrity and obsolete information was improved. Besides obvious data safety improvements, this can free up quite a lot of space in the digiKam databases. For technical reasons, only SQLite databases were shrunk to this smaller size in the 5.5.0 release. With 5.6.0, this is now also possible for MySQL databases.

Items Grouping Features

Earlier changes to the grouping behaviour proved that digiKam users have quite diverse workflows - so with the current change we try to represent that diversity.

Originally grouped items were basically hidden away. Due to requests to include grouped items in certain operations, this was changed entirely to include grouped items in (almost) all operations. Needless to say, this wasn’t such a good idea either. So now you can choose which operations should be performed on all images in a group or just the first one.

The corresponding settings live in the configuration wizard under Miscellaneous in the Grouping tab. By default all operations are set to Ask, which will open a dialog whenever you perform this operation and grouped items are involved.

Extra Sidecars Support

Another new capability is to recognise additional sidecars. Under the new Sidecars tab in the Metadata part of the configuration wizard you can specify any additional extension that you want digiKam to recognise as a sidecar. These files will neither be read from nor written to, but they will be moved/renamed/deleted/… together with the item that they belong to.

Geolocation Bookmarks

Another important change done for this new version is to restore the geolocation bookmarks feature, which did not work with bundle versions of digiKam (AppImage, MacOS, and Windows). The new bookmarker has been fully re-written and is still compatible with previous geolocation bookmarks settings. It is now able to display the bookmark GPS information over a map for better usability while editing your collection.

Google Summer of Code 2017 Students

This summer the team is proud to assist 4 students to work on separate projects:

Swati Lodha is back in the team. As in 2016, she will work to improve the database interface. After having fixed and improved the MySQL support in digiKam, she has the task this year of isolating all the database contents dedicated to managing the similarity fingerprints matrix. As for thumbnails and face recognition, these elements will be stored in a new dedicated database. The goal is to reduce the core database size, simplify maintenance and decrease core database latencies.

Yingjie Liu is a Chinese student, mainly specializing in math and algorithms, who will add a new efficient face recognition algorithm and try to introduce an AI solution to simplify the face tagging workflow.

Ahmed Fathi is an Egyptian student who will work to restore and improve the DLNA support in digiKam, to be able to stream collection contents over the network to compatible UPnP devices such as smart TVs, tablets or phones.

Shaza Ismail is another Egyptian student, who will work on an ambitious project to create an image editor tool for healing image stains by painting one part of the image over another. It will mainly be tested on dust spots, but it can be used to hide other unwanted particles as well.

Final Words

The next main digiKam version 6.0.0 is planned for the end of this year, when all Google Summer of Code projects will be ready to be backported for a beta release. In September, we will release a maintenance version 5.7.0 with a set of bugfixes as usual.

For further information about 5.6.0, take a look at the list of more than 81 issues closed in Bugzilla.

digiKam 5.6.0 Software collection source code tarball, Linux 32/64 bits AppImage bundles, MacOS package, and Windows 32/64 bits installers can be downloaded from this repository.

Happy digiKaming this summer!

Categories: FLOSS Project Planets

Flocon de toile | Freelance Drupal: Do I have to wait for Drupal 9 for my web project?

Planet Drupal - Wed, 2017-06-21 06:00

In a previous post, we saw Drupal 8's new policies for versioning, support and maintenance of its minor and major versions. This policy has evolved somewhat since the last DrupalCon Baltimore conference in April 2017. And this evolution of Drupal's strategy deserves a little attention, because it can bring new light to those who hesitate to migrate their site to Drupal 8. Or those who are wondering about the relevance of launching their web project on Drupal 8.

Categories: FLOSS Project Planets

Dropsolid: Drupal 8 and React Native

Planet Drupal - Wed, 2017-06-21 05:57

One day you might wake up with the next big idea that will shake the world in the most ungentle way. You decide to build an app, because you’ll have full access to all features of the device that you want your solution to work on. But then it dawns on you: you will actually need to build multiple apps in completely different languages while finding a way for them to serve the same content...

Then you start to realise that you won’t be able to step into the shoes of the greats, because web technology is holding you back. Fortunately, Drupal 8 and React Native are here to save your day - and your dream!

In this blog post you'll read how you can leverage Drupal 8 to serve as the back-end for your React Native app. 

First, however, a quick definition of what these technologies are:

  • Drupal is an open source content management system based on PHP.
  • React Native is a framework to build native apps using JavaScript and React.

If you want to read more about Drupal 8 or React Native, you're invited to check the sources at the bottom of this article.
 

Why React Native?

There are a myriad of front-end technologies available to you these days. The most popular ones are Angular and React. Both technologies allow you to build apps, but there is a big difference in how the apps will be built.

The advantage of employing React Native is that it lets you build an app using JavaScript, while converting the JavaScript into native code. In contrast, Angular or Ionic allow you to create a hybrid app, which basically is a website that gets embedded in a web view, though it still gives you access to the native features of a device.

In this case, we prefer React Native, because we want to build iOS and Android applications that run natively.
 

Headless Drupal

One of the big buzzwords that's been doing the rounds in the Drupal community lately is 'Headless'. A headless Drupal is actually a Drupal application where the front-end is not served by Drupal, but by a different technology.

You still get the benefits of a top notch and extremely flexible content management system, but you also get the benefits of your chosen front-end technology.

In this example, you'll discover how to set up a native iOS and Android application that gets its data from a Drupal website. To access the information, users will have to log in to the app, which allows the app to serve content tailored to the preferences of the user. Crucial in the current individualized digital world.

 

This already brings us to our first hurdle. Because we are using a native application, authenticating users through cookies or sessions is not possible, so we are going to show you how to prepare your React Native application and your Drupal site to accept authenticated requests.
 

The architecture

The architecture consists of a vanilla Drupal 8 version and a React Native project with Redux.

The implemented flow is as follows:

  1. A user gets the login screen presented on the app.
  2. The user fills in his credentials in the form
  3. The app posts the credentials to the endpoint in Drupal
  4. Drupal validates the credentials and logs the user in
  5. Drupal responds with a token based on the current user
  6. The app stores the token for future use
  7. The app now uses the token for all other requests the app makes to the Drupal REST API.
     
Creating an endpoint in Drupal

First we had to choose our authentication method. In this example, we opted to authenticate using a JWT or JSON web token, because there already is a great contributed module available for it on Drupal.org (https://www.drupal.org/project/jwt).

This module provides an authentication service that you can use with the REST module that is now in Drupal 8 core. This authentication service will read the token that is passed in the headers of the request and will determine the current user from it. All subsequent functionality in Drupal will then use that user to determine if it has permission to access the requested resources. This authentication service works for all subsequent requests, but not for the original request to get the JWT.

The original endpoint the JWT module provides already expects the user to be logged in before it can serve the token. You could use the readily available basic authentication service, but we preferred to build our own as an example.
 

Authentication with JSON post

Instead of passing along the username and password in the headers of the request like the basic authentication service expects, we will send the username and password in the body of our request formatted as JSON.

Our authentication class implements the AuthenticationProviderInterface and is announced in json_web_token.services.yml as follows:

services:
  authentication.json_web_token:
    class: Drupal\json_web_token\Authentication\Provider\JsonAuthenticationProvider
    arguments: ['@config.factory', '@user.auth', '@flood', '@entity.manager']
    tags:
      - { name: authentication_provider, provider_id: 'json_authentication_provider', priority: 100 }

The interface states that we have to implement two methods, applies and authenticate:

public function applies(Request $request) {
  $content = json_decode($request->getContent());
  return isset($content->username, $content->password)
    && !empty($content->username)
    && !empty($content->password);
}

Here we define when the authenticator should be applied. So our requirement is that the JSON that is posted contains a username and password. In all other cases this authenticator can be skipped. Every authenticator service you define will always be called by Drupal. Therefore, it is very important that you define your conditions for applying the authentication service.

public function authenticate(Request $request) {
  $flood_config = $this->configFactory->get('user.flood');
  $content = json_decode($request->getContent());
  $username = $content->username;
  $password = $content->password;
  // Flood protection: this is very similar to the user login form code.
  // @see \Drupal\user\Form\UserLoginForm::validateAuthentication()
  // Do not allow any login from the current user's IP if the limit has been
  // reached. Default is 50 failed attempts allowed in one hour. This is
  // independent of the per-user limit to catch attempts from one IP to log
  // in to many different user accounts. We have a reasonably high limit
  // since there may be only one apparent IP for all users at an institution.
  if ($this->flood->isAllowed('json_authentication_provider.failed_login_ip', $flood_config->get('ip_limit'), $flood_config->get('ip_window'))) {
    $accounts = $this->entityManager->getStorage('user')
      ->loadByProperties(array('name' => $username, 'status' => 1));
    $account = reset($accounts);
    if ($account) {
      if ($flood_config->get('uid_only')) {
        // Register flood events based on the uid only, so they apply for any
        // IP address. This is the most secure option.
        $identifier = $account->id();
      }
      else {
        // The default identifier is a combination of uid and IP address. This
        // is less secure but more resistant to denial-of-service attacks that
        // could lock out all users with public user names.
        $identifier = $account->id() . '-' . $request->getClientIP();
      }
      // Don't allow login if the limit for this user has been reached.
      // Default is to allow 5 failed attempts every 6 hours.
      if ($this->flood->isAllowed('json_authentication_provider.failed_login_user', $flood_config->get('user_limit'), $flood_config->get('user_window'), $identifier)) {
        $uid = $this->userAuth->authenticate($username, $password);
        if ($uid) {
          $this->flood->clear('json_authentication_provider.failed_login_user', $identifier);
          return $this->entityManager->getStorage('user')->load($uid);
        }
        else {
          // Register a per-user failed login event.
          $this->flood->register('json_authentication_provider.failed_login_user', $flood_config->get('user_window'), $identifier);
        }
      }
    }
  }
  // Always register an IP-based failed login event.
  $this->flood->register('json_authentication_provider.failed_login_ip', $flood_config->get('ip_window'));
  return [];
}

Here we mostly reimplemented the authentication functionality of the basic authorization service, with the difference that we read the data from a JSON format. This code logs the user into the Drupal application. All the extra code is flood protection.

Getting the JWT token

To get the JWT token we leveraged the REST module and created a new REST resource plugin. We could have used the endpoint the module already provides, but we prefer to create all our endpoints with a version in them. We defined the plugin with the following annotation:

/**
 * Provides a resource to get a JWT token.
 *
 * @RestResource(
 *   id = "token_rest_resource",
 *   label = @Translation("Token rest resource"),
 *   uri_paths = {
 *     "canonical" = "/api/v1/token",
 *     "https://www.drupal.org/link-relations/create" = "/api/v1/token"
 *   }
 * )
 */

The uri_paths are the most important part of this annotation. By setting both the canonical and the weird looking Drupal.org keys, we are able to set a fully custom path for our endpoint. That allows us to set the version of our API in the URI like this: /api/v1/token. This way we can easily roll out new versions of our API and clearly communicate about deprecating older versions.

Our class extends the ResourceBase class provided by the REST module. We only implemented a post method in our class, as we only want this endpoint to handle posts.

public function post() {
  if ($this->currentUser->isAnonymous()) {
    $data['message'] = $this->t("Login failed. If you don't have an account register. If you forgot your credentials please reset your password.");
  }
  else {
    $data['message'] = $this->t('Login succeeded');
    $data['token'] = $this->generateToken();
  }
  return new ResourceResponse($data);
}

/**
 * Generates a new JWT.
 */
protected function generateToken() {
  $token = new JsonWebToken();
  $event = new JwtAuthIssuerEvent($token);
  $this->eventDispatcher->dispatch(JwtAuthIssuerEvents::GENERATE, $event);
  $jwt = $event->getToken();
  return $this->transcoder->encode($jwt, array());
}

The generateToken method is a custom method where we leverage the JWT module to get us a token that we can return. 
 
We do not return a JSON object directly. We return a response in the form of an array. This is a very handy feature of the REST module, because you can choose the formats of your endpoint using the interface in Drupal. So you could easily return any other supported format like xml, JSON or hal_json. For this example, we chose hal_json. 

Drupal has some built-in security measures for non-safe methods. The only safe methods are HEAD, GET, OPTIONS and TRACE. We are implementing a non-safe method, so we have to take into account the following things:

  • When the app does a POST, it also needs to send an X-CSRF-Token header to avoid cross-site request forgery. This token can be obtained from the /session/token endpoint.
  • In case of a POST we also need to set the Content-Type request header to “application/hal+json” on top of the query parameter “_format=hal_json”. Both are used in the client-side sketch after this list.
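
To make those requirements concrete, here is a minimal client-side sketch of the whole flow in Python with the requests library. The host name and credentials are placeholders, the endpoint paths are the ones defined earlier in this article, and the Authorization: Bearer header is an assumption about how the JWT module expects the token, so verify it against your setup:

import requests

BASE = "https://example.com"  # placeholder host

# Fetch the X-CSRF-Token needed for non-safe (POST) requests.
csrf_token = requests.get(BASE + "/session/token").text

# Post the credentials as JSON to obtain the JWT (steps 2-6 of the flow).
response = requests.post(
    BASE + "/api/v1/token",
    params={"_format": "hal_json"},
    json={"username": "editor", "password": "secret"},  # placeholders
    headers={
        "Content-Type": "application/hal+json",
        "X-CSRF-Token": csrf_token,
    },
)
jwt = response.json()["token"]

# Use the token on all subsequent calls to the REST API (step 7).
article = requests.get(
    BASE + "/node/1",
    params={"_format": "hal_json"},
    headers={"Authorization": "Bearer " + jwt},  # assumed header format
)
print(article.status_code)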
Putting things together

The only thing left is to enable our endpoint through the interface that the REST module provides at /admin/config/services/rest.

As you can see, we’ve configured our token endpoint with our custom json_authentication_provider service and it is available in hal_json and json formats.

Calling the endpoint in our React Native application

The login component

Our login component contains two input fields and a button.

<TextInput
  onChangeText={(username) => this.setState({username})}
  placeholderTextColor="#FFF"
  style={styles.input}
/>
<TextInput
  onChangeText={(password) => this.setState({password})}
  style={styles.input}
/>
<TouchableOpacity
  onPress={() => this.login({
    username: this.state.username,
    password: this.state.password
  })}
>
  <Text>Get Started</Text>
</TouchableOpacity>

When we click the login button we trigger the login action that is defined in our bindActions function.

function bindActions(dispatch) {
  return {
    login: (username, password) => dispatch(login(username, password)),
  };
}

The login action is defined in our auth.js:

import type { Action } from './types';
import axios from 'react-native-axios';

export const LOGIN = 'LOGIN';

export function login(username, password):Action {
  var jwt = '';
  var endpoint = "https://example.com/api/v1/token?_format=hal_json";
  return {
    type: LOGIN,
    payload: axios({
      method: 'post',
      url: endpoint,
      data: {
        username: username,
        password: password,
        jwt: jwt,
      },
      headers: {
        'Content-Type': 'application/hal+json',
        'X-CSRF-Token': 'V5GBdzli7IvPCuRjMqvlEC4CeSeXgufl4Jx3hngZYRw'
      }
    })
  }
}

In this example, we hard-coded the X-CSRF-Token to keep it simple; normally you would fetch it first. We’ve also used the react-native-axios package to handle our POST. This action will return a promise. If you use the promise and thunk middleware in your Redux store, you can set up your reducer in the following way.

import type { Action } from '../actions/types';
import { LOGIN_PENDING, LOGOUT } from '../actions/auth';
import { REHYDRATE } from 'redux-persist/constants';

export type State = {
  fetching: boolean,
  isLoggedIn: boolean,
  username: string,
  password: string,
  jwt: string,
  error: boolean,
}

const initialState = {
  fetching: false,
  username: '',
  password: '',
  error: null,
}

export default function (state:State = initialState, action:Action): State {
  switch (action.type) {
    case "LOGIN_PENDING":
      return {...state, fetching: true}
    case "LOGIN_REJECTED":
      return {...state, fetching: false, error: action.payload}
    case "LOGIN_FULFILLED":
      return {...state, fetching: false, isLoggedIn: true, jwt: action.payload.data.token}
    case "REHYDRATE":
      var incoming = action.payload.myReducer
      if (incoming) return {...state, ...incoming, specialKey: processSpecial(incoming.specialKey)}
      return state
    default:
      return state;
  }
}

The reducer will be able to act on the different action types of the promise:

  • LOGIN_PENDING: Allows you to change the state of your component so you could implement a loader while it is trying to get the token.
  • LOGIN_REJECTED: When the attempt fails you could give a notification why it failed.
  • LOGIN_FULFILLED: When the attempt succeeds you have the token and set the state to logged in.

So once we had implemented all of this, we had an iOS and Android app that actually used a Drupal 8 site as its main content store.

Following this example, you should be all set up to deliver tailored content to your users on whichever platform they may be.

The purpose of this article was to demonstrate how effective Drupal 8 can be as a source for your upcoming iOS or Android application.
 

Useful resources:

 

More articles by our Dropsolid Technical Leads, strategist and marketeers? Check them out here.

Categories: FLOSS Project Planets

Vincent Bernat: IPv4 route lookup on Linux

Planet Debian - Wed, 2017-06-21 05:19

TL;DR: With its implementation of IPv4 routing tables using LPC-tries, Linux offers good lookup performance (50 ns for a full view) and low memory usage (64 MiB for a full view).

During the lifetime of an IPv4 datagram inside the Linux kernel, one important step is the route lookup for the destination address through the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, …), this function should quickly provide a decision. Some possible options are:

  • local delivery (RTN_LOCAL),
  • forwarding to a supplied next hop (RTN_UNICAST),
  • silent discard (RTN_BLACKHOLE).

Since 2.6.39, Linux stores routes in a compressed prefix tree (commit 3630b7c050d9). In the past, a route cache was maintained but it has been removed[1] in Linux 3.6.

Route lookup in a trie

Looking up a route in a routing table is to find the most specific prefix matching the requested destination. Let’s assume the following routing table:

$ ip route show scope global table 100
default via 203.0.113.5 dev out2
192.0.2.0/25
        nexthop via 203.0.113.7 dev out3 weight 1
        nexthop via 203.0.113.9 dev out4 weight 1
192.0.2.47 via 203.0.113.3 dev out1
192.0.2.48 via 203.0.113.3 dev out1
192.0.2.49 via 203.0.113.3 dev out1
192.0.2.50 via 203.0.113.3 dev out1

Here are some examples of lookups and the associated results:

Destination IP   Next hop
192.0.2.49       203.0.113.3 via out1
192.0.2.50       203.0.113.3 via out1
192.0.2.51       203.0.113.7 via out3 or 203.0.113.9 via out4 (ECMP)
192.0.2.200      203.0.113.5 via out2

A common structure for route lookup is the trie, a tree structure where each node has its parent as prefix.

Lookup with a simple trie

The following trie encodes the previous routing table:

For each node, the prefix is known by its path from the root node and the prefix length is the current depth.

A lookup in such a trie is quite simple: at each step, fetch the nth bit of the IP address, where n is the current depth. If it is 0, continue with the first child. Otherwise, continue with the second. If a child is missing, backtrack until a routing entry is found. For example, when looking for 192.0.2.50, we will find the result in the corresponding leaf (at depth 32). However for 192.0.2.51, we will reach 192.0.2.50/31 but there is no second child. Therefore, we backtrack until the 192.0.2.0/25 routing entry.
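
To make the algorithm concrete, here is a toy sketch of such a lookup in Python. It is an illustration of the algorithm above, not the kernel implementation; like Linux, it keeps the nearest candidate in a variable instead of physically walking back up the tree:

# Toy longest-prefix match in a simple binary trie.
class Node:
    def __init__(self):
        self.children = [None, None]  # child for bit 0, child for bit 1
        self.entry = None             # routing entry, if any

def insert(root, prefix, plen, entry):
    node = root
    for depth in range(plen):
        bit = (prefix >> (31 - depth)) & 1
        if node.children[bit] is None:
            node.children[bit] = Node()
        node = node.children[bit]
    node.entry = entry

def lookup(root, addr):
    node, best = root, None
    for depth in range(32):
        if node.entry is not None:
            best = node.entry          # remember the nearest candidate
        bit = (addr >> (31 - depth)) & 1
        if node.children[bit] is None:
            return best                # dead end: "backtrack" to candidate
        node = node.children[bit]
    return node.entry if node.entry is not None else best

root = Node()
insert(root, 0xC0000200, 25, "203.0.113.7 via out3")  # 192.0.2.0/25
insert(root, 0xC0000232, 32, "203.0.113.3 via out1")  # 192.0.2.50
print(lookup(root, 0xC0000233))  # 192.0.2.51 falls back to the /25 entry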

Adding and removing routes is quite easy. From a performance point of view, the lookup is done in constant time relative to the number of routes (due to maximum depth being capped to 32).

Quagga is an example of routing software still using this simple approach.

Lookup with a path-compressed trie

In the previous example, most nodes only have one child. This leads to a lot of unneeded bitwise comparisons and memory is also wasted on many nodes. To overcome this problem, we can use path compression: each node with only one child is removed (except if it also contains a routing entry). Each remaining node gets a new property telling how many input bits should be skipped. Such a trie is also known as a Patricia trie or a radix tree. Here is the path-compressed version of the previous trie:

Since some bits have been ignored, on a match, a final check is executed to ensure all bits from the found entry are matching the input IP address. If not, we must act as if the entry wasn’t found (and backtrack to find a matching prefix). The following figure shows two IP addresses matching the same leaf:

The reduction in the average depth of the tree compensates for the necessity to handle those false positives. The insertion and deletion of a routing entry is still easy enough.

Many routing systems are using Patricia trees:

Lookup with a level-compressed trie

In addition to path compression, level compression[2] detects parts of the trie that are densely populated and replaces them with a single node and an associated vector of 2^k children. This node will handle k input bits instead of just one. For example, here is a level-compressed version of our previous trie:

Such a trie is called LC-trie or LPC-trie and offers higher lookup performances compared to a radix tree.

A heuristic is used to decide how many bits a node should handle. On Linux, if the ratio of non-empty children to all children would be above 50% when the node handles an additional bit, the node gets this additional bit. On the other hand, if the current ratio is below 25%, the node loses the responsibility of one bit. Those values are not tunable.
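
Here is a toy sketch of that decision rule. It is a deliberate simplification: it approximates the post-inflation occupancy with the current number of non-empty children, while the real logic in net/ipv4/fib_trie.c recomputes it exactly and has more conditions:

def resize(bits, used_children):
    # bits: bits currently handled (the node has 2**bits child slots).
    # used_children: how many of those slots are non-empty.
    # Inflating would double the child vector to 2**(bits + 1) slots.
    if used_children / 2 ** (bits + 1) > 0.50:
        return bits + 1   # densely populated: handle one more bit
    if bits > 1 and used_children / 2 ** bits < 0.25:
        return bits - 1   # sparse: hand one bit back
    return bits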

Insertion and deletion becomes more complex but lookup times are also improved.

Implementation in Linux

The implementation for IPv4 in Linux exists since 2.6.13 (commit 19baf839ff4a) and is enabled by default since 2.6.39 (commit 3630b7c050d9).

Here is the representation of our example routing table in memory[3]:

There are several structures involved:

The trie can be retrieved through /proc/net/fib_trie:

$ cat /proc/net/fib_trie
Id 100:
  +-- 0.0.0.0/0 2 0 2
     |-- 0.0.0.0
        /0 universe UNICAST
     +-- 192.0.2.0/26 2 0 1
        |-- 192.0.2.0
           /25 universe UNICAST
        |-- 192.0.2.47
           /32 universe UNICAST
        +-- 192.0.2.48/30 2 0 1
           |-- 192.0.2.48
              /32 universe UNICAST
           |-- 192.0.2.49
              /32 universe UNICAST
           |-- 192.0.2.50
              /32 universe UNICAST
[...]

For internal nodes, the numbers after the prefix are:

  1. the number of bits handled by the node,
  2. the number of full children (they only handle one bit),
  3. the number of empty children.

Moreover, if the kernel was compiled with CONFIG_IP_FIB_TRIE_STATS, some interesting statistics are available in /proc/net/fib_triestat[4]:

$ cat /proc/net/fib_triestat
Basic info: size of leaf: 48 bytes, size of tnode: 40 bytes.
Id 100:
        Aver depth:     2.33
        Max depth:      3
        Leaves:         6
        Prefixes:       6
        Internal nodes: 3
          2: 3
        Pointers: 12
Null ptrs: 4
Total size: 1  kB
[...]

When a routing table is very dense, a node can handle many bits. For example, a densely populated routing table with 1 million entries packed in a /12 can have one internal node handling 20 bits. In this case, route lookup is essentially reduced to a lookup in a vector.

The following graph shows the number of internal nodes used relative to the number of routes for different scenarios (routes extracted from an Internet full view, /32 routes spread over 4 different subnets with various densities). When routes are densely packed, the number of internal nodes is quite limited.

Performance

So how performant is a route lookup? The maximum depth stays low (about 6 for a full view), so a lookup should be quite fast. With the help of a small kernel module, we can accurately benchmark[5] the fib_lookup() function:

The lookup time is loosely tied to the maximum depth. When the routing table is densely populated, the maximum depth is low and the lookup times are fast.

When forwarding at 10 Gbps, the time budget for a packet would be about 50 ns. Since this is also the time needed for the route lookup alone in some cases, we wouldn’t be able to forward at line rate with only one core. Nonetheless, the results are pretty good and they are expected to scale linearly with the number of cores.

Another interesting figure is the time it takes to insert all those routes into the kernel. Linux is also quite efficient in this area since you can insert 2 million routes in less than 10 seconds:

Memory usage

The memory usage is available directly in /proc/net/fib_triestat. The statistic provided doesn’t account for the fib_info structures, but you should only have a handful of them (one for each possible next-hop). As you can see on the graph below, the memory use is linear with the number of routes inserted, whatever the shape of the routes is.

The results are quite good. With only 256 MiB, about 2 million routes can be stored!

Routing rules

Unless configured without CONFIG_IP_MULTIPLE_TABLES, Linux supports several routing tables and has a system of configurable rules to select the table to use. These rules can be configured with ip rule. By default, there are three of them:

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

Linux will first look up a match in the local table. If it doesn’t find one, it will look in the main table and, as a last resort, the default table.

Builtin tables

The local table contains routes for local delivery:

$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.117.0 dev eno1 proto kernel scope link src 192.168.117.55
local 192.168.117.55 dev eno1 proto kernel scope host src 192.168.117.55
broadcast 192.168.117.63 dev eno1 proto kernel scope link src 192.168.117.55

This table is populated automatically by the kernel when addresses are configured. Let’s look at the three last lines. When the IP address 192.168.117.55 was configured on the eno1 interface, the kernel automatically added the appropriate routes:

  • a route for 192.168.117.55 for local unicast delivery to the IP address,
  • a route for 192.168.117.63 for broadcast delivery to the broadcast address,
  • a route for 192.168.117.0 for broadcast delivery to the network address.

When 127.0.0.1 was configured on the loopback interface, the same kind of routes were added to the local table. However, a loopback address receives a special treatment and the kernel also adds the whole subnet to the local table. As a result, you can ping any IP in 127.0.0.0/8:

$ ping -c1 127.42.42.42
PING 127.42.42.42 (127.42.42.42) 56(84) bytes of data.
64 bytes from 127.42.42.42: icmp_seq=1 ttl=64 time=0.039 ms

--- 127.42.42.42 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms

The main table usually contains all the other routes:

$ ip route show table main
default via 192.168.117.1 dev eno1 proto static metric 100
192.168.117.0/26 dev eno1 proto kernel scope link src 192.168.117.55 metric 100

The default route has been configured by some DHCP daemon. The connected route (scope link) has been automatically added by the kernel (proto kernel) when configuring an IP address on the eno1 interface.

The default table is empty and has little use. It has been kept when the current incarnation of advanced routing was introduced in Linux 2.1.68, after a first attempt using “classes” in Linux 2.1.15[6].

Performance

Since Linux 4.1 (commit 0ddcf43d5d4a), when the set of rules is left unmodified, the main and local tables are merged and the lookup is done with this single table (and the default table if not empty). Without specific rules, there is no performance hit when enabling the support for multiple routing tables. However, as soon as you add new rules, some CPU cycles will be spent for each datagram to evaluate them. Here are a couple of graphs demonstrating the impact of routing rules on lookup times:

For some reason, the relation is linear when the number of rules is between 1 and 100 but the slope increases noticeably past this threshold. The second graph highlights the negative impact of the first rule (about 30 ns).

A common use of rules is to create virtual routers: interfaces are segregated into domains and when a datagram enters through an interface from domain A, it should use routing table A:

# ip rule add iif vlan457 table 10
# ip rule add iif vlan457 blackhole
# ip rule add iif vlan458 table 20
# ip rule add iif vlan458 blackhole

The blackhole rules may be removed if you are sure there is a default route in each routing table. For example, we can instead add a blackhole default route with a high metric, so that it does not override a regular default route:

# ip route add blackhole default metric 9999 table 10
# ip route add blackhole default metric 9999 table 20
# ip rule add iif vlan457 table 10
# ip rule add iif vlan458 table 20
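Each table can then be inspected to confirm the catch-all is in place; a regular default route added later with a lower metric will take precedence. The output below is a sketch and may vary with the iproute2 version:

$ ip route show table 10
blackhole default metric 9999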

To reduce the impact on performance when many interface-specific rules are used, interfaces can be attached to VRF instances and a single rule can be used to select the appropriate table:

# ip link add vrf-A type vrf table 10
# ip link set dev vrf-A up
# ip link add vrf-B type vrf table 20
# ip link set dev vrf-B up
# ip link set dev vlan457 master vrf-A
# ip link set dev vlan458 master vrf-B
# ip rule show
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32766:  from all lookup main
32767:  from all lookup default

The special l3mdev-table rule was automatically added when the first VRF interface was configured. This rule selects the routing table associated with the VRF owning the input (or output) interface.

VRF was introduced in Linux 4.3 (commit 193125dbd8eb); its performance was greatly improved in Linux 4.8 (commit 7889681f4a6c), which also introduced the special routing rule (commit 96c63fa7393d, commit 1aa6c4f6b8cd). You can find more details about it in the kernel documentation.
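Assuming the VRF setup above and an iproute2 recent enough to understand the vrf keyword, the per-domain routing view can be inspected and exercised directly; the addresses below are illustrative:

# Display the routes used by vrf-A, i.e. the contents of table 10
$ ip route show vrf vrf-A
# Simulate the lookup for a packet entering through vlan457
$ ip route get 203.0.113.1 from 192.0.2.1 iif vlan457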

Conclusion

The takeaways from this article are:

  • route lookup times hardly increase with the number of routes,
  • densely packed /32 routes lead to amazingly fast route lookups,
  • memory use is low (128 MiB per million routes),
  • no optimization is done on routing rules.
  1. The routing cache was subject to reasonably easy-to-launch denial-of-service attacks. It was also believed to be inefficient for high-volume sites like Google, but I have first-hand experience that this was not the case for moderately high-volume sites. 

  2. “IP-address lookup using LC-tries”, IEEE Journal on Selected Areas in Communications, 17(6):1083-1092, June 1999. 

  3. For internal nodes, the key_vector structure is embedded into a tnode structure. This structure contains information rarely used during lookup, notably the reference to the parent that is usually not needed for backtracking as Linux keeps the nearest candidate in a variable. 

  4. One leaf can contain several routes (struct fib_alias is a list). The number of “prefixes” can therefore be greater than the number of leaves. The system also keeps statistics about the distribution of the internal nodes relative to the number of bits they handle. In our example, all three internal nodes handle 2 bits. 

  5. The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K running at 3.7 GHz during the experiment (CPU governor was set to performance). The kernel is Linux 4.11. The benchmark is single-threaded. It runs a warm-up phase, then executes about 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC. 

  6. Fun fact: the documentation of this first attempt at more flexible routing is still available in today’s kernel tree and explains the usage of the “default class”. 

Categories: FLOSS Project Planets

Python Bytes: #31 You should have a change log

Planet Python - Wed, 2017-06-21 04:00
Brian #1: TinyMongo — https://github.com/schapman1974/tinymongo

  • Like MongoDB, but built on top of TinyDB.
  • Even runs on a Raspberry Pi, according to Stephen.

Michael #2: validus, a dead simple Python data validation library — https://github.com/shopnilsazal/validus

  • validus.isemail('someone@example.com')
  • Validation functions include: isrgbcolor(), isphone(), isisbn(), isipv4(), isint(), isfloat(), isslug(), isuuid()
  • Requires Python 3.3+

Brian #3: PuDB — https://documen.tician.de/pudb/index.html

  • In episode 29 (https://pythonbytes.fm/29), I talked about launching pdb from pytest failures.
  • @kidpixo pointed out that PuDB is a better debugger and can also be launched from pytest failures.
  • Starting PuDB from failed pytest tests (from the docs at https://documen.tician.de/pudb/starting.html#usage-with-pytest): pytest --pdbcls pudb.debugger:Debugger --pdb --capture=no
  • Using the pytest-pudb plugin (https://pypi.python.org/pypi/pytest-pudb) to do the same: pytest --pudb

Michael #4: Analyzing Django requirement files on GitHub — https://pyup.io/posts/analyzing-django-requirement-files-on-github/

  • From the pyup.io guys.
  • Django is the most popular Python web framework. It is now almost 12 years old and is used on all kinds of different projects.
  • Django developers pin their requirements (64%): pinned or frozen requirements (Django==1.8.12) make builds predictable and deterministic.
  • Django 1.8 is the most popular major release (24%). A bit worrisome are the 1.9 (14%), 1.7 (13%) and 1.6 (13%) releases in second, third and fourth place; none of them receives security updates any longer, and 1.7 and 1.6 went EOL over 2 years ago.
  • Yikes: only 2% of all Django projects are on a secure release. More than 60% use a Django release with one or more known security vulnerabilities, and for the remaining 30-plus percent it's unclear what exactly will be installed, because the Django release is either unpinned or specified as a range.

Brian #5: Changelogs

  • http://keepachangelog.com
  • https://github.com/hawkowl/towncrier

Michael #6: Understanding Asynchronous Programming in Python — https://dbader.org/blog/understanding-asynchronous-programming-in-python

  • By Doug Farrell via Dan Bader’s site.
  • A synchronous program is what most of us started out writing, and can be thought of as performing one execution step at a time, one after another.
  • Example: a web server. It could be synchronous and fully optimized, but at best you’re still waiting on network I/O back to all the web clients.
  • The real world is asynchronous: kids are a long-running task with high priority, superseding any other task we might be doing, like the checkbook or laundry.
  • Example 1: Synchronous Programming (using queuing)
  • Example 2: Simple Cooperative Concurrency (using generators)
  • Example 3: Cooperative Concurrency With Blocking Calls (same, but with slow operations)
  • Example 4: Cooperative Concurrency With Non-Blocking Calls (gevent)
  • Example 5: Synchronous (Blocking) HTTP Downloads
  • Example 6: Asynchronous (Non-Blocking) HTTP Downloads With gevent
  • Example 7: Asynchronous (Non-Blocking) HTTP Downloads With Twisted
  • Example 8: Asynchronous (Non-Blocking) HTTP Downloads With Twisted Callbacks

Errata/Giving Credit:

  • Also in episode 29 (https://pythonbytes.fm/29), I talked about pipcache as an alias for pip download. I think I said the author of a blog post contacted me. It wasn’t him, it was @kidpixo. Sorry kidpixo, keep the ideas coming.

For fun: Python Private Methods — http://turnoff.us/geek/python-private-methods/

Our news:

  • Beta 3 of Python Testing with pytest (https://pragprog.com/book/bopytest/python-testing-with-pytest) should come out this week with Chapter 7: Using pytest with other tools, which covers using it with pdb, coverage.py, mock, tox, and Jenkins. The next beta will be the appendices, including a cleanup and rewrite of the pip and venv appendices, plus a plugin sampler pack and a tutorial on packaging. Thanks to everyone who has submitted errata.
  • Finished recording RESTful and HTTP Services in Pyramid AND MongoDB for Python Developers. Add your email address at https://training.talkpython.fm to get notified upon release of each.
Categories: FLOSS Project Planets

Talk Python to Me: #117 Functional Python with Coconut

Planet Python - Wed, 2017-06-21 04:00
One of the nice things about the Python language is that it’s at least three programming paradigms in one: there’s the procedural style, the object-oriented style, and the functional style.

This week you’ll meet Evan Hubinger, who is taking Python’s functional programming style and turning it up to 11. We’re talking about Coconut, a full functional programming language that is a proper superset of Python itself.

Show note: sorry for the lower audio quality of my voice on this one. It looks like my primary mic had trouble and the fallback wasn’t as good as it should be. Plus, I had mostly lost my voice from PyCon (PyCon!!! And other loud speaking).

Links from the show:

  • Evan on Twitter: https://twitter.com/EvanHub
  • Evan on LinkedIn: https://www.linkedin.com/in/ehubinger/
  • Evan on GitHub: https://github.com/evhub
  • Coconut: http://coconut-lang.org/
  • Coconut on GitHub: https://github.com/evhub/coconut
  • Coconut Tutorial: https://coconut.readthedocs.io/en/master/HELP.html
  • Coconut Docs: https://coconut.readthedocs.io/en/master/DOCS.html
  • Coconut FAQ: https://coconut.readthedocs.io/en/master/FAQ.html
  • Gitter channel: https://gitter.im/evhub/coconut
Categories: FLOSS Project Planets

Growing The Community

Open Source Initiative - Tue, 2017-06-20 23:01


How can you grow an open source community? Two blog posts from The Document Foundation (TDF) illustrate a proven double-ended strategy to sustain an existing community.

Since it was established in 2010, the LibreOffice project has steadily grown under the guidance of The Document Foundation (TDF) where I’ve been a volunteer — most lately as a member of its Board. Starting from a complex political situation with a legacy codebase suffering extensive technical debt, TDF has been able to cultivate both individual contributors and company-sponsored contributors and move beyond the issues to stability and effectiveness.

What made the project heal and grow? I’d say the most important factor was keeping the roles of individual and company contributions and interests in balance, as these two posts illustrate.

Individual Contributors

The first priority for growth in any open source project is to grow the individual contributor base. TDF has used a variety of techniques to make this possible, many of them embodied in the first of the two illustrative posts, which announced another Contributor Month for May 2017 and asked people to:

  • Help to confirm bug reports
  • Contribute code
  • Translate the user interface
  • Write documentation
  • Answer questions from users at ask.libreoffice.org
  • Spread the word about LibreOffice on Twitter

As part of the event, the coordinator promised to send specially-designed stickers to all contributors, providing positive reinforcement to encourage people to join in and stay around.

Equally importantly, TDF provides introductions to many contribution types. There is a well-established getting started guide for developer contributors, including the “Easy Hacks” list of newbie-friendly things that need doing. There’s also a good introductory portal for translators/localizers.

All of this has been undergirded by sound technical choices that make joining such a complex project feasible:

  • Reimplementing the build system so you don’t have to be a genius to use it
  • Extensive use of automated testing to virtually eliminate uncalled code, pointer errors and other mechanically-identifiable defects
  • An automated build system that stores binaries to allow easy regression testing
  • Translation of comments into English, the project’s language-of-choice (being the most common second language in the technical community)
  • Refactoring key code to use modern UI environments

Commercial Opportunity

Secondly, TDF has operated in a way that encourages the emergence of a commercial ecosystem around the code. That in turn has led to the continued employment of a substantial core developer force, even in the face of corporate restructuring.

LibreOffice has a very large user base — in the double-digit millions range worldwide — and many of those visiting the download page make financial donations to help the project. That’s a wonderful thing for a project to have, but it turns out that having money is not necessarily an easy blessing. It needs spending, but that has to be in a way that does not poison the contributor community.

Rather than hiring staff to work on the software, the project identifies areas where unpaid volunteers are not likely to show up and its Engineering Steering Committee proposes a specification. The Board of Directors then creates a competitive tender so that individuals or companies can earn a living (and gain skills) making LibreOffice better while still retaining the collaborative spirit of the project. Doing this builds commercial capability and thus grows the overall community.

So the second post is a sample tender, this one for some unglamorous refactoring work to update the SVG handling. TDF is likely to receive a number of costed proposals, and the Board will decide how to award the work based on the greatest community benefit: a combination of optimum cost, involvement of community members, the nature of the bidder and more. The result is a community contributor fairly compensated without creating a permanent TDF staff role.

Balance Yielding Benefit

As a result of this and other carefully-designed policies, the LibreOffice project now has a developer force in the hundreds, companies whose benefits from LibreOffice lead to them making significant contributions, a reworked and effective developer environment with automated testing to complement the test team, and one of the most reliable release schedules in the industry.  This means new features get implemented, bugs get fixed and most importantly security issues are rapidly resolved when identified.

This is what OpenOffice.org became when it left the commercial world. The early controversies have passed and the project has a rhythm that’s mature and effective. I think it’s actually better now than it was at Sun.

(This article originally appeared on "Meshed Networks" and was made possible by Patreon patrons.)
Image credit: Simon Phipps, 2017. CC-BY-ND.

Categories: FLOSS Research

Justin Mason: Links for 2017-06-20

Planet Apache - Tue, 2017-06-20 19:58
Categories: FLOSS Project Planets

Steve McIntyre: So, Stretch happened...

Planet Debian - Tue, 2017-06-20 18:21

Things mostly went very well, and we've released Debian 9 this weekend past. Many many people worked together to make this possible, and I'd like to extend my own thanks to all of them.

As a project, we decided to dedicate Stretch to our late founder Ian Murdock. He did much of the early work to get Debian going, and inspired many more to help him. I had the good fortune to meet up with Ian years ago at a meetup attached to a Usenix conference, and I remember clearly he was a genuinely nice guy with good ideas. We'll miss him.

For my part in the release process, again I was responsible for producing our official installation and live images. Release day itself went OK, but as is typical the process ran late into Saturday night / early Sunday morning. We made and tested lots of different images, although numbers were down from previous releases as we've stopped making the full CD sets now.

Sunday was the day for the release party in Cambridge. As is traditional, a group of us met up at a local hostelry for some revelry! We hid inside the pub to escape from the ridiculously hot weather we're having at the moment.

Due to a combination of the lack of sleep and the heat, I nearly forgot to even take any photos - apologies to the extra folks who'd been around earlier whom I missed with the camera... :-(

Categories: FLOSS Project Planets

Andreas Bombe: New Blog

Planet Debian - Tue, 2017-06-20 18:09

So I finally got myself a blog to write about my software and hardware projects, my work in Debian and, I guess, stuff. Readers of planet.debian.org, hi! If you can see this I got the configuration right.

For the curious, I’m using a static site generator for this blog — Hugo to be specific — like all the cool kids do these days.

Categories: FLOSS Project Planets

Bryan Pendleton: All Over the Place: a very short review

Planet Apache - Tue, 2017-06-20 16:52

Is it possible that I am the first person to tell you about Geraldine DeRuiter's new book: All Over the Place: Adventures in Travel, True Love, and Petty Theft?

If I am, then yay!

For this is a simply wonderful book, and I hope everybody finds out about it.

As anyone who has read the book (or has read her blog) knows, DeRuiter can be just screamingly funny. More than once I found myself causing a distraction on the bus, as my fellow riders wondered what was causing me to laugh out loud. Many books are described as "laugh out loud funny," but DeRuiter's book truly is.

Much better, though, are the parts of her book that aren't the funny parts, for it is here where her writing skills truly shine.

DeRuiter is sensitive, perceptive, honest, and caring. Best of all, however, is that she is able to write about affairs of the heart in a way that is warm and generous, never cloying or cringe-worthy.

So yes: you'll laugh, you'll cry, you'll look at the world with fresh new eyes. What more could you want from a book?

All Over the Place is all of that.

Categories: FLOSS Project Planets

Drupal.org blog: What’s new on Drupal.org? - May 2017

Planet Drupal - Tue, 2017-06-20 15:37

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

After returning from DrupalCon Baltimore at the end of April, we spent May regrouping and focusing on spring cleaning tasks. It's important for any technical team to spend time on stability and maintenance, and we used May to find improvements in these areas and look for some other efficiencies.

Drupal.org updates
Categories: FLOSS Project Planets

Elevated Third: Recorded Webinar: A Decoupled Drupal Story

Planet Drupal - Tue, 2017-06-20 15:30
Tue, 06/20/2017 - 13:30

We teamed up with Acquia to present “A Decoupled Drupal Story: Powdr Gives Developers Ultimate Flexibility To Build Best CX Possible.” The webinar aired in June but you can view the recording here anytime.

As the internet and web-connected devices continue to evolve, so do the ways we develop and render content. Given today’s rate of change, organizations are using decoupled architecture to build applications, giving them the flexibility to accommodate any device or experience.

In this session, we’ll cover Powdr, a ski resort holding company. To give its developers the freedom to use the right front-end tools needed for any given use case, Powdr built its 17 ski resort websites on one decoupled Drupal platform. Join Elevated Third, Hoorooh Digital and Acquia to learn:

  • How a custom Drupal 8 solution cut implementation time in half vs Drupal 7
  • The ease with which Drupal 8’s internal REST API can be extended to fit a client's needs
  • The details of handling non-Drupal application routing on Acquia's servers

If you are considering a decoupled Drupal implementation, let’s talk.

Categories: FLOSS Project Planets