FLOSS Project Planets

Mediacurrent: Dropcast: Episode 31 - DRUPALCON

Planet Drupal - Thu, 2017-04-20 12:58

Recorded April 12th, 2017

Categories: FLOSS Project Planets

The Accidental Coder: The State-of-Drupal Poll

Planet Drupal - Thu, 2017-04-20 11:25

Speak out about your feelings on several topics that are swirling in the Drupalsphere. The results of the poll will be published here during Drupalcon Baltimore. 

Take the Poll!

Tags: Drupal Planet
Categories: FLOSS Project Planets

Continuum Analytics News: Two Peas in a Pod: Anaconda + IBM Cognitive Systems

Planet Python - Thu, 2017-04-20 11:05
Company Blog | Thursday, April 20, 2017
Travis Oliphant, President, Chief Data Scientist & Co-Founder, Continuum Analytics


There is no question that deep learning has come out to play across a wide range of sectors—finance, marketing, pharma, legal...the list goes on. What’s more, from now until 2022, the deep learning market is expected to grow more than 65 percent. Clearly, companies are increasingly looking deeply at this popular machine learning approach to help fulfill business needs. Deep learning makes it possible to process giant datasets with billions of elements and extract useful predictive models. Deep learning is transforming the businesses of leading consumer Web and mobile app companies and is also being adopted by more traditional business enterprises. 

That’s why this week we are pleased to announce the availability of Anaconda on IBM’s Cognitive Systems, the company’s high performance deep learning platform, highlighting the fact that Anaconda is regarded as an important capability for developers building cognitive solutions. The platform empowers these developers and data scientists to build and deploy deep learning applications that are ready to scale. Anaconda is also integrating with the IBM PowerAI software distribution that makes it simpler for companies to take advantage of Power performance and GPU optimization for data intensive cognitive workloads. 

At Anaconda, we’re helping leading businesses across the world, like IBM, solve the world’s most challenging problems—from improving medical treatments to discovering planets to predicting effects of public policy—by handing them tools to identify patterns in data, uncover key insights and transform basic data into a goldmine of intelligence. This news reiterates the importance of Open Data Science in all factors of business. 

Want to learn more about this news? Read the press release here

Categories: FLOSS Project Planets

Colm O hEigeartaigh: Securing Apache Hadoop Distributed File System (HDFS) - part II

Planet Apache - Thu, 2017-04-20 10:23
This is the second in a series of posts on securing HDFS. The first post described how to install Apache Hadoop, and how to use POSIX permissions and ACLs to restrict access to data stored in HDFS. In this post we will look at how to use Apache Ranger to authorize access to data stored in HDFS. The Apache Ranger Admin console allows you to create policies which are retrieved and enforced by an HDFS authorization plugin. Apache Ranger allows us to create centralized authorization policies for HDFS, as well as an authorization audit trail stored in Solr or HDFS.

1) Install the Apache Ranger HDFS plugin

First we will install the Apache Ranger HDFS plugin. Follow the steps in the previous tutorial to setup Apache Hadoop, if you have not done this already. Then download Apache Ranger and verify that the signature is valid and that the message digests match. Due to some bugs that were fixed for the installation process, I am using version 1.0.0-SNAPSHOT in this post. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-1.0.0-SNAPSHOT-hdfs-plugin.tar.gz
  • mv ranger-1.0.0-SNAPSHOT-hdfs-plugin ${ranger.hdfs.home}
Now go to ${ranger.hdfs.home} and edit "install.properties". You need to specify the following properties:
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "HDFSTest".
  • COMPONENT_INSTALL_DIR_NAME: The location of your Apache Hadoop installation
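After editing, the relevant part of "install.properties" might look like the fragment below. The Hadoop installation path is purely illustrative; substitute your own:

```properties
# Ranger Admin endpoint and service name, from the steps above
POLICY_MGR_URL=http://localhost:6080
REPOSITORY_NAME=HDFSTest
# Adjust to wherever your Apache Hadoop distribution lives (example path)
COMPONENT_INSTALL_DIR_NAME=/opt/hadoop-2.7.3
```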
Save "install.properties" and install the plugin as root via "sudo ./enable-hdfs-plugin.sh". The Apache Ranger HDFS plugin should now be successfully installed. Start HDFS with:
  • sbin/start-dfs.sh
2) Create authorization policies in the Apache Ranger Admin console

Next we will use the Apache Ranger admin console to create authorization policies for our data in HDFS. Follow the steps in this tutorial to install the Apache Ranger admin service. Start the Apache Ranger admin service with "sudo ranger-admin start", open a browser, go to "http://localhost:6080/" and log on with "admin/admin". Add a new HDFS service with the following configuration values:
  • Service Name: HDFSTest
  • Username: admin
  • Password: admin
  • Namenode URL: hdfs://localhost:9000
Click on "Test Connection" to verify that we can connect successfully to HDFS, then save the new service. Now click on the "HDFSTest" service that we have created. Add a new policy for the "/data" resource path for the user "alice" (create this user if you have not done so already under "Settings, Users/Groups"), with permissions of "read" and "execute".

3) Testing authorization in HDFS

Now let's see the Ranger authorization policy we created above in action. Note that by default the HDFS authorization plugin first checks for a Ranger authorization policy that grants access; if none is found, it falls back to the default POSIX permissions. The Ranger authorization plugin pulls policies from the Admin service every 30 seconds by default. For the "HDFSTest" example above, they are stored in "/etc/ranger/HDFSTest/policycache/" by default. Make sure that the user you are running Hadoop as can access this directory.

Now let's test to see if I can read the data file as follows:
  • bin/hadoop fs -cat /data/LICENSE* (this should work via the underlying POSIX permissions)
  • sudo -u alice bin/hadoop fs -cat /data/LICENSE* (this should work via the Ranger authorization policy)
  • sudo -u bob bin/hadoop fs -cat /data/LICENSE* (this should fail as we don't have an authorization policy for "bob").

Categories: FLOSS Project Planets

BeDjango: Understanding Django signals

Planet Python - Thu, 2017-04-20 09:43

In many cases, when a model's instance is modified we need to execute some action. Django provides an elegant way to handle these situations: signals. Signals are utilities that allow us to associate events with actions; we write a function that runs whenever a signal calls it.

Some of the most commonly used model signals are the following:

  • pre_save/post_save: This signal is thrown before/after the save() method.

  • pre_delete/post_delete: This signal is thrown before/after deleting a model's instance (the delete() method).

  • pre_init/post_init: This signal is thrown before/after instantiating a model (the __init__() method).

In this post we will explain Django signals with examples of model signals, but signals are not limited to models. You can check the complete list of signals in this link.

Connecting signals

You can associate your functions to signals in two different ways:

Decorator: With the @receiver decorator, we can link a signal with a function:

from django.db.models.signals import post_save
from django.dispatch import receiver
from someapp.models import MyModel

@receiver(post_save, sender=MyModel)
def my_function_post_save(sender, **kwargs):
    # do the action…

Every time a MyModel instance finishes running its save() method, my_function_post_save will be executed.

The other way is to connect the signal with the function:

post_save.connect(my_function_post_save, sender=MyModel)

How can we use a signal function with several senders?

Let’s suppose we have a base class model and we want to associate a signal with a function for all subclasses of our base class. We can do this using both previous ways:


  1. We can use a list as the sender argument for @receiver:

     @receiver(post_save, sender=MyBaseClassModel.__subclasses__())

  2. With the other way, we can use a for loop:

     for sub_class in MyBaseClassModel.__subclasses__():
         post_save.connect(my_function_post_save, sender=sub_class)
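Conceptually, connect() just registers the function in a per-signal registry keyed by sender. A toy dispatcher makes the mechanics of one receiver serving several senders clear. This is only an illustration of the pattern, not Django's actual implementation:

```python
# Toy illustration of the signal/receiver pattern -- NOT Django's code.
class Signal:
    def __init__(self):
        self._receivers = []  # list of (receiver, sender) pairs

    def connect(self, receiver, sender=None):
        self._receivers.append((receiver, sender))

    def send(self, sender, **kwargs):
        for receiver, wanted in self._receivers:
            # None means "any sender", mirroring Django's behaviour
            if wanted is None or wanted is sender:
                receiver(sender=sender, **kwargs)

post_save = Signal()

class MyBaseClassModel: pass
class Child1(MyBaseClassModel): pass
class Child2(MyBaseClassModel): pass

saved = []
def my_function_post_save(sender, **kwargs):
    saved.append(sender.__name__)

# One receiver, several senders -- the for-loop approach from the post:
for sub_class in MyBaseClassModel.__subclasses__():
    post_save.connect(my_function_post_save, sender=sub_class)

post_save.send(Child1)
post_save.send(Child2)
print(saved)  # ['Child1', 'Child2']
```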
Where should I write my signals?

Since Django 1.7, the official documentation suggests writing our signal functions in a separate file (e.g. signals.py) and connecting them to the signals in the apps.py file of our app. Here we have an example:

# signals.py
def my_signal_function(sender, **kwargs):
    print("printing a message...")

# apps.py
from django.apps import AppConfig
from django.db.models.signals import post_save
from django.utils.translation import ugettext_lazy as _

from .signals import my_signal_function

class MyAPPConfig(AppConfig):
    name = 'myapp'
    verbose_name = _('My App')

    def ready(self):
        myccl = self.get_model('MyCustomClass')
        post_save.connect(my_signal_function, sender=myccl, dispatch_uid="my_unique_identifier")

Sometimes a signal can end up connected more than once, causing its function to fire multiple times. For that reason, we pass a unique string as the dispatch_uid argument to the connect function.

What is it used for?

Well, we already know how to configure our signals and signal functions, but when should we use them? Here we show you some powerful examples of Django signals:

  • We can have a model that stores a file on disk, e.g. DrivingLicenseModel. This model has 3 attributes: name, due_date and photography. The photography is an image that we store on disk.
    We can use a pre_delete signal to remove the image from disk before deleting our DrivingLicenseModel's instance.

  • We have a list of movies per user in our website (ListMoviesModel), which has an attribute for the last time a user modified it. A user can add a movie (ListMoviesEntryModel) to that list.
    If we establish a post_save signal for the ListMoviesEntryModel model, every time a user adds or modifies a movie we can update the last_modification attribute of that user's ListMoviesModel instance.
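The first example could be sketched as below. The field name photography_path and the stand-in class are inventions so the sketch runs on its own; in a real project the function would be decorated with @receiver(pre_delete, sender=DrivingLicenseModel) and would read the path from the model's image field:

```python
import os
import tempfile

# Sketch of a pre_delete receiver that removes the stored image from disk.
# "photography_path" is a hypothetical attribute standing in for the real
# image field's file path.
def delete_photography(sender, instance, **kwargs):
    path = getattr(instance, "photography_path", None)
    if path and os.path.exists(path):
        os.remove(path)

# Demo with a stand-in for the model instance:
class FakeLicense:
    pass

fd, photo = tempfile.mkstemp(suffix=".jpg")
os.close(fd)
lic = FakeLicense()
lic.photography_path = photo

delete_photography(sender=FakeLicense, instance=lic)
print(os.path.exists(photo))  # False -- the file is gone
```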

You can learn more about django signals in the official documentation.

Categories: FLOSS Project Planets

Steve Loughran: Fear of Dependencies

Planet Apache - Thu, 2017-04-20 09:39
There are some things to be scared of; some things to view as a challenge and embrace anyway.

Here, Hardknott Pass falls into the challenge category —at least in summertime. You know you'll get up, the only question is "cycling" or "walking".

Hardknott in Winter is a different game, it's a "should I be trying to get up here at all" kind of issue. Where, for reference, the answer is usually: no. Find another way around.

Upgrading Hadoop's dependencies jitters between the two, depending on what upgrade is being proposed.

And, as the nominal assignee of HADOOP-9991, "upgrade dependencies", I get to see this.

We regularly get people submitting one-line patches "upgrade your dependency so you can work with my project" —and they are such tiny diffs people think "what a simple patch, it's easy to apply".

The problem is they are one line patches that can lead to the HBase, Hive or Spark people cornering you and saying things like "why do you make my life so hard?"

Before making a leap to Java 9, we're trapped whatever we do. Upgrade: things downstream break. Don't upgrade: things downstream break when they update something else, or pull in a dependency which has itself updated.

While Hadoop has been fairly good at keeping its own services stable, where it causes problems is in applications that pull in the Hadoop classpath for their own purposes: HBase, Hive, Accumulo, Spark, Flink, ...

Here's my personal view on the risk factor of various updates.

Critical :

We know things will be trouble —and upgrades are full cross-project epics

  • protobuf. This will probably never be updated during the lifespan of Hadoop 2, given how google broke its ability to link to previously generated code.
  • Guava. Google cut things. Hadoop ships with Guava 11 but has moved off all deleted classes so runs happily against Guava 16+. I think it should be time just to move up, on the basis of Java 8 compatibility alone.
  • Jackson. The last time we updated, everything worked in Hadoop, but broke HBase. This makes everyone very sad.
  • In Hive and Spark: Kryo. Hadoop core avoids that problem; I did suggest adding it purely for the pain it would cause the Hive team (HADOOP-12281) —they knew it wasn't serious but as you can see, others got a bit worried. I suspect it was experience with my other POM patches that made them worry.
I think a Jackson update is probably due, but will need conversations with the core downstream projects. And perhaps bump up Guava, given how old it is.

High Risk

Failures are traumatic enough we're just scared of upgrading unless there's a good reason.
  • jetty/servlets. Jetty has been painful (threads in the Datanodes to perform liveness monitoring of Jetty is an example of the workarounds), but it was a known and managed problem. Plan is to move off jetty entirely and -> jersey + grizzly.
  • Servlet API.
  • jersey. HADOOP-9613 shows how hard that's been
  • Tomcat. Part of the big webapp set
  • Netty —again, a long standing sore point (HADOOP-12928, HADOOP-12927)
  • httpclient. There's a plan to move off Httpclient completely, stalled on hadoop-openstack. I'd estimate 2-3 days there, more testing than anything else. Removing a dependency entirely frees downstream projects from having to worry about the version Hadoop comes with.
  • Anything which has JNI bindings. Examples: leveldb, the codecs
  • Java. Areas of trauma: Kerberos, java.net, SASL,

With the move of trunk to Java 8, those servlet/webapp versions all need to be rolled.

Medium Risk

These are things where we have to be very cautious about upgrading, either because of a history of brittleness, or because failures would be traumatic
  • Jets3t. Every upgrade of jets3t moved the bugs. It's effectively frozen as "trouble, but a stable trouble", with S3a being the future
  • Curator 2.x ( see HADOOP-11612 ; HADOOP-11102) I had to do a test rebuild of curator 2.7 with guava downgraded to Hadoop's version to be confident that there were no codepaths that would fail. That doesn't mean I'm excited by Curator 3, as it's an unknown.
  • Maven itself
  • Zookeeper -for its use of guava.
Here I'm for leaving Jets3t alone; and, once that Guava is updated, curator and ZK should be aligned.

Low risk:

Generally happy to upgrade these as later versions come out.
  • SLF4J yes, repeatedly
  • log4j 1.x (2.x is out as it doesn't handle log4j.properties files)
  • avro as long as you don't propose picking up a pre-release.
    (No: Avro 1.7 to 1.8 update is incompatible with generated compiled classes, same as protobuf.)
  • apache commons-lang,(minor -yes, major -no)
  • Junit

I don't know which category the AWS SDK and azure SDKs fall into. Their jackson SDK dependency flags them as a transitive troublespot.

Life would be much easier if (a) the guava team stopped taking things away and (b) either jackson stopped breaking things or someone else produced a good JSON library. I don't know of any; I have encountered worse.

2016-05-31 Update: ZK doesn't use Guava. That's curator I'm thinking of. Correction by Chris Nauroth.
Categories: FLOSS Project Planets

Experienced Django: Django, Bootstrap, and you!

Planet Python - Thu, 2017-04-20 09:38

Last post I talked about how to get bootstrap integrated into your Django app.  This week I want to dive a little deeper into how to use bootstrap themes and customize them easily.

While the default bootstrap theme that Django-bootstrap uses is much nicer than I could design, there are many, many options on the web for different bootstrap themes. In this post we're going to pull in a different theme, and then figure out how to modify it with minimal effort.

Loading Different Themes

There are many sites containing free (and paid!) bootstrap themes which you can try out, download and modify until you get the look you want.  A quick search this evening produced several hits.  I’ll be using bootswatch in this example, but other sites should work in similar manners.
The mechanics of this are actually quite simple.  There's a dictionary that django-bootstrap3 uses from your settings.py file called, wait for it… BOOTSTRAP3.
The Django-bootstrap documentation lists all of the various things you can set here, but the one we're concerned with here is the theme_url setting.
So, say you went to bootswatch and thought you'd give the superhero theme a try.  The "Download" button on the site lists a bunch of options; only the first two are of interest to us today: bootstrap.min.css and bootstrap.css.

The two files should be the same except that the .min. file has all of the whitespace removed from the CSS portion. (Usually the header comments still have whitespace.)  This is done to minimize the amount of data that needs to be served on high-volume sites.  If you're trying to read or edit a theme file, you'll want the regular version.  There are several websites that will help you minify your CSS when you're ready to deploy.
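As a rough illustration of what minification does (real minifiers do much more, such as shortening color values), collapsing whitespace alone already shrinks a rule considerably. The sample rule and its color value below are invented for the demo:

```python
import re

def naive_minify(css):
    """Very naive CSS whitespace stripper -- for illustration only."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # drop comments
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)      # tighten punctuation
    return re.sub(r"\s+", " ", css).strip()

css = """
.navbar {
    background-color: #2b3e50;   /* hypothetical theme color */
    border-radius: 0;
}
"""
print(naive_minify(css))  # .navbar{background-color:#2b3e50;border-radius:0;}
```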

Try downloading both of the css files and looking at them.

When you download these files, you’ll want to store them in the static files directory of your Django app.  For my app the app directory is tasks, so the full path of where the min file is stored is tasks/static/bootstrap/css/bootstrap.min.css.  (The non-min version is stored in the same directory).

Once you’ve downloaded them, you tell the Django-Bootstrap3 module to use that with the following setting:

BOOTSTRAP3 = {
    'theme_url': '/static/bootstrap/css/bootstrap.min.css',
}

Don't be a Jerk

While you’re playing around with different themes, you can simply use the URL they have for the download and put it into your settings like this:

BOOTSTRAP3 = {
    'theme_url': 'https://maxcdn.bootstrapcdn.com/bootswatch/3.3.7/superhero/bootstrap.min.css',
}

But don’t be a jerk.  If you’re actually going to use the theme, download it and serve it statically from your site.  The people providing these themes shouldn’t have to pay to serve content for your web app.  I think it’s fine to use it for trying out a bunch of themes, though.

Customizing your Theme

When you decide to customize the theme you’ve downloaded, I’ll pass on advice I read somewhere doing research for this post.  Rather than just editing the downloaded theme’s css file, you can take advantage of the “cascading” part of CSS and create a file which just has your changes in it.  This keeps your changes separate from the theme which makes it much easier to change themes or roll to a new version of the same theme if and when it becomes available.

I’ve found two ways to do this, and I’m frankly not sure which is “better” or more pythonic.

Method One

The method I figured out first uses two different ways of including a CSS file in your app.  The BOOTSTRAP3 dictionary, shown above, gives you the first.  In this method you use it to point to the unmodified theme file.  Then, in the base.html template (which is now included in all other templates) you manually add the link to your changes file.  This makes the bootstrap section of my base.html file look like:

{% load bootstrap3 %}
{% bootstrap_css %}
{% bootstrap_javascript %}
{% bootstrap_messages %}
<link rel="stylesheet" type="text/css" href="{% static 'bootstrap/css/mybootstrap.css' %}" />

While this spreads the information about which css files you’re using around the system a bit more (which I dislike), it keeps your modifications completely independent of the base theme file (which I like).

Method Two

This method actually came to me while I was writing up this post.  CSS allows you to include other CSS files.  It turns out that if the included file is in the same directory, the syntax is pretty straightforward.  Simply add this line to the top of your CSS file (it must be the first line, I believe):

@import url("superhero.min.css");

You’ll then need to have your changes file in the BOOTSTRAP dictionary in settings.py:

BOOTSTRAP3 = {
    'theme_url': '/static/bootstrap/css/mychanges.css',
}

and, if superhero.min.css is in the same directory as your mychanges.css, it will just work!

Looking at it more, it feels like this solution is a little “cleaner”, but I’m interested to hear if you have an opinion.

Categories: FLOSS Project Planets

Chromatic: Replacing hook_boot and hook_init Functionality in Drupal 8

Planet Drupal - Thu, 2017-04-20 09:15

Adam uncovers methods of firing code on every page in Drupal 8, the right way.

Categories: FLOSS Project Planets

Zivtech: Empowering Drupal 8 Content Editors with EVA: Attach All the Displays!

Planet Drupal - Thu, 2017-04-20 09:00

Entity Views Attachment, or EVA, is a Drupal module that allows you to attach view displays to entities of your choosing. We used it recently on a project and loved it. You know it’s good because it has a no-nonsense name and an even better acronym. (Plus, the maintainers have taken full advantage of the acronym and placed a spaceman on the project page. Nice!)

Since the now-ubiquitous Paragraphs module provides the “paragraph” entity type, I figured these two will make good dancing partners.

Getting them to tango is simple enough. You create a paragraph bundle, target that bundle in the settings on an EVA view display, then arrange the view in the paragraph’s display settings. Voila – your view display shows up wherever you add this paragraph!

By attaching a view display to a paragraph entity and enabling that paragraph on a node’s paragraph reference field, you give your content editors the ability to place a view wherever they want within their page content. Better still, they can contextualize what they are doing since this all happens in the edit form where the rest of the node content lives. As far as I can tell, no other approach in the Drupal ecosystem (I’m looking at you Blocks and Panels) makes adding views to content this easy for your editors.

Case Study

The concept is pretty straightforward, but with a little cleverness it allows you to build some complex page elements. Let’s walk through an example. Consider the following design:

This mockup represents Section nodes and lists of Subpage nodes that reference them. In addition, the button links should point to the parent Section node. With a little elbow grease, we can build a system to output this with our friends EVA and Paragraphs.

Here’s how I’m breaking this down conceptually:

We have three things to build:

  1. A container paragraph bundle

  2. A child paragraph bundle with a Section entity reference field

  3. An EVA of subpages to attach to the child paragraph bundle

Building the Subpage EVA

As I mentioned before, Subpage nodes will reference Section nodes. With this in mind, we can build the EVA that lists subpages and expects a section node ID to contextually filter to subpages that reference that node.

Building the Section paragraph type

Next, we’ll create the Section paragraph type that will handle each grouping of a section node with its related subpages. The Section paragraph will have one field, an entity reference field limited to Section nodes, that gives us all the data we need from our section.

We’ll attach our EVA to this paragraph type and configure it to pass the referenced node’s ID as the contextual filter using tokens in the EVA settings. You will need to install the Token module to do this. Go to /admin/help/token to see all available tokens once installed. You need to grab the node ID through your entity reference field, so your token argument should look something like this:


We pass that token to our contextual filter, and we can tell our view to use that argument to create a link to the section node for our “View All Subpages” link. To do this, we’ll add a global text area to the view footer and check the “Use replacement tokens from the first row” checkbox. Then we’ll write some HTML to create a link. It’ll look something like this:

<a href="/node/{{ raw_arguments.nid }}">View all Subpages</a>

Building the Section List paragraph type

Lastly, we’ll create the Section List paragraph type. This only really needs a paragraph reference field that only allows the user to add Section paragraphs, but I also added a title field that will act as a header for the whole list.

Tip: Install Fences module to control your field’s wrapper markup. I used this here to wrap the title in <h2> tags.

We’re finished!

Now that everything is built, we can allow users to select the Section List paragraph type in a paragraph reference field of our choosing. A user adds a Section List, then adds Sections via the entity reference. It looks like this in the node edit form:

Do you have any cool ways you use the EVA module in your builds? Let us know in the comments section below.

Categories: FLOSS Project Planets

Evolving Web: Migrate translations from CSV, JSON or XML to Drupal 8

Planet Drupal - Thu, 2017-04-20 08:38

In my last post, I showed you how to migrate translated content from Drupal 6 to Drupal 8. But clients often don't start with their data in Drupal 6. Instead there's some other source of data that may include translations, like a CSV spreadsheet. In this article, I'll show you how to migrate multilingual content from such sources to Drupal 8.

This article would not have been possible without the help of my colleague Dave. Gracias Dave!

The problem

We have two CSV files containing some data about chemical elements in two languages. One file contains data in English and the other file, in Spanish. Our goal is to migrate these records into a Drupal 8 website, preserving the translations.

Before we start
  • Since this is an advanced migration topic, it is assumed you already know the basics of migration.
  • To execute the migrations in this example, you can download the migrate example i18n. The module should work without any trouble for a standard Drupal 8 install. See quick-start for more information.
Migrating JSON, XML and other formats

Though this example shows how to work with a CSV data source, one can easily work with other data sources. Here are some quick pointers:

  • Find and install the relevant migrate source module. If you do not have a standard source module available, you can:
    • try converting your data to a supported format first.
    • write your own migration source plugin, if you're feeling adventurous.
  • Modify the migration definitions to include custom parameters for the data source.
  • Some useful source formats are supported by these projects:
The module

To write the migrations, we create a module—in our case, it is named migrate_example_i18n. There's nothing special about the module declaration except for the dependencies:

How to migrate translations

    Before we start writing migrations, it is important to mention how Drupal 8 translations work. In a nutshell:

    • First, we create content in its base language, say in English. For example, we could create a brand new node for the element Hydrogen, which might have a unique node ID 4.
    • Now that the base node is in place, we can translate the node, say to Spanish. Unlike some previous versions of Drupal, this won't become a new node with its own node ID. Instead, the translation is saved against the same node generated above, and so will have the same node ID—just a different language setting.

    Hence, the migration definition for this example includes the following:

    • We migrate the base data in English using the example_element_en migration.
    • We migrate the Spanish translations using the example_element_es migration, and link each translation to the original English version.
    • We group the two migrations in the example_element migration group to keep things clean and organized.

    Thus, we can execute the migrations of this example with the command drush migrate-import --group=example_element.


    Note that this plan only works because every single node we are importing has at least an English translation! If some nodes only existed in Spanish, we would not be able to link them to the (non-existent) original English version. If you encounter data like this, you'll need to handle it in a different way.

    Step 1: Element base migration (English)

    To migrate the English translations, we define the example_element_en migration. Here is a quick look at some important parameters used in the migration definition.

    Source

    source:
      plugin: csv
      path: 'element.data.en.csv'
      header_row_count: 1
      keys:
        - Symbol
      fields:
        Name: 'Name'
        Symbol: 'Symbol'
        'Atomic Number': 'Atomic number'
        'Discovered By': 'Name of people who discovered the element'
      constants:
        lang_en: en
        node_element: 'element'
    • plugin: Since we want to import data from a CSV file, we need to use the csv plugin provided by the migrate_source_csv module.
    • path: Path to the CSV data source so that the source plugin can read the file. Our source files for this example actually live within our module, so we modify this path at runtime using hook_migration_plugins_alter() in migrate_example_i18n.module.
    • header_row_count: Number of initial rows in the CSV file which do not contain actual data. This helps ignore column headings.
    • keys: The column(s) in the CSV file which uniquely identify each record. In our example, the chemical symbol in the column Symbol is unique to each row, so we can use that as the key.
    • fields: A description for every column present in the CSV file. This is used for displaying source details in the UI.
    • constants: Some static values for use during the migration.
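The behavior of this source configuration can be mimicked in plain Python to see what the plugin effectively yields: the header row is consumed, and each row becomes a record keyed by the Symbol column. The sample rows below are invented for illustration:

```python
import csv
import io

# Roughly what the CSV source plugin does: skip the header row, then
# yield one record per row, keyed by the 'Symbol' column.
data = io.StringIO(
    "Name,Symbol,Atomic Number,Discovered By\n"
    "Hydrogen,H,1,Henry Cavendish\n"
    "Oxygen,O,8,\"Carl Wilhelm Scheele, Joseph Priestley\"\n"
)

reader = csv.DictReader(data)  # consumes the single header row
records = {row["Symbol"]: row for row in reader}

print(sorted(records))       # ['H', 'O']
print(records["H"]["Name"])  # Hydrogen
```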
    Destination

    destination:
      plugin: 'entity:node'
    • plugin: Nothing fancy here. We aim to create node entities, so we set the plugin as entity:node.
    • translations: Since we are importing the content in base language, we do not specify the translations parameter. This will make Drupal create new nodes for every record.
    Process

    process:
      type: constants/node_element
      title: Name
      langcode: constants/lang_en
      field_element_symbol: Symbol
      field_element_discoverer:
        plugin: explode
        delimiter: ', '
        source: Discovered By

    This is where we map the columns of the CSV file to properties of our target nodes. Here are some mappings which require a special mention and explication:

    • type: We hard-code the content type for the nodes we wish to create, to type element.
    • langcode: Since all source records are in English, we tell Drupal to save the destination nodes in English as well. We do this by explicitly specifying langcode as en.
    • field_element_discoverer: This field is a bit tricky. Looking at the source data, we realize that every element has one or more discoverers. Multiple discoverer names are separated by commas. Thus, we use plugin: explode and delimiter: ', ' to split multiple records into arrays. With the values split into arrays, Drupal understands and saves the data in this column as multiple values.
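The explode step amounts to a simple split on the delimiter; in Python terms (the sample value is illustrative):

```python
# What the 'explode' process plugin does to a Discovered By value:
value = "Carl Wilhelm Scheele, Joseph Priestley"
discoverers = value.split(", ")
print(discoverers)  # ['Carl Wilhelm Scheele', 'Joseph Priestley']
```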

    When we run this migration with drush migrate-import example_element_en, we import all the nodes in the base language (English).

    Step 2: Element translation migration (Spanish)

    With the base nodes in place, we define a migration similar to the previous one with the ID example_element_es.

    source:
      plugin: csv
      path: 'element.data.es.csv'
      header_row_count: 1
      keys:
        - 'Simbolo'
      constants:
        lang_en: en
        # ...
    destination:
      plugin: 'entity:node'
      translations: true
    process:
      nid:
        plugin: migration
        source: Simbolo
        migration: example_element_en
      langcode: constants/lang_es
      content_translation_source: constants/lang_en
      # ...
    migration_dependencies:
      required:
        - example_element_en

    Let us look at some major differences between the example_element_es migration and the example_element_en migration:

    • source:
      • path: Since the Spanish node data is in another file, we change the path accordingly.
      • keys: The Spanish word for Symbol is Símbolo, and that column contains the unique ID of each record. Hence, we define it as the source data key. Unfortunately, Drupal migrate does not support keys with non-ASCII characters such as í (with its accent). So, as a workaround, I had to remove all such accented characters from the column headings and write the key parameter as Simbolo, without the special í.
      • fields: The field definitions had to be changed to match the Spanish column names used in the CSV.
    • destination:
      • translations: Since we want Drupal to create translations for English language nodes created during the example_element_en migration, we specify translations: true.
    • process:
      • nid: We use the migration plugin to make Drupal look up the nodes which were created during the English element migration and use their IDs as the nid. This results in the Spanish translations being attached to the original nodes created in English.
      • langcode: Since all records in element.data.es.csv are in Spanish, we hard-code the langcode to es for each record of this migration. This tells Drupal that these are Spanish translations.
      • content_translation_source: Each translation of a Drupal node comes from a previous translation—for example, you might take the Spanish translation, and translate it into French. In this case, we'd say that Spanish was the source language of the French translation. By adding this process step, we tell Drupal that all our Spanish translations are coming from English.
    • migration_dependencies: This ensures that the base data is migrated before the translations. So to run this migration, one must run the example_element_en migration first.
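The accent-stripping workaround mentioned for the Símbolo key can be automated when preparing the CSV headers. Here is a small Python sketch (a helper of my own, not part of the migrate module):

```python
import unicodedata

def strip_accents(text):
    # Decompose accented characters (NFD), then drop the combining marks.
    decomposed = unicodedata.normalize('NFD', text)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

strip_accents('Símbolo')  # -> 'Simbolo'
```

Running every column heading through such a helper before the import guarantees the keys contain only ASCII characters.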

    Voilà! Run the Spanish migration (drush migrate-import example_element_es) and you have the Spanish translations for the elements! We can also run both the English and Spanish migrations at once using the migration group we created. Here's how the output should look on the command line:

    $ drush migrate-import --group=example_element
    Processed 111 items (111 created, 0 updated, 0 failed, 0 ignored) - done with 'example_element_en'
    Processed 105 items (105 created, 0 updated, 0 failed, 0 ignored) - done with 'example_element_es'

    If we had another file containing French translations, we would create another migration like we did for Spanish, and import the French data in a similar way. I couldn't find a CSV file with element data in French, so I didn't include it in this example—but go try it out on your own, and leave a comment to tell me how it went!

    Next steps + more awesome articles by Evolving Web
    Categories: FLOSS Project Planets

    Dropsolid: Making a difference, One Drupal security patch at a time

    Planet Drupal - Thu, 2017-04-20 07:52
    20 Apr — by Nick — Advisory by the Drupal security team

    Recently, the References module started receiving some attention (read here, here and here). The reason for this is that the Drupal security team posted an advisory to migrate away from the References module for Drupal 7 and move to the entity_reference module. At the time of writing (20 April), 121,091 sites are actively reporting to Drupal.org that they are using this module. That makes for a lot of unhappy developers.

    Things kicked off after a security vulnerability was discovered in the References module. The security team tried to contact the existing maintainers of that module, but there was no response. The security team had no choice but to mark the module as abandoned and send out an advisory explaining that the details would be made public in a month and that everyone should migrate away, as there was no fix available.

    Migrate efficiently

    At Dropsolid, we noticed that many of our older Drupal 7 installs were still using this module extensively. Migrating all of the affected sites would have been a very lengthy undertaking, so I was curious to find a way to spend less time and effort while still fixing the problem. We immediately contacted one of the people who reported the security issue and tried to get more information beyond what was publicly available. That person stayed true to the rules and did not disclose any information about the issue.

    We didn’t give up, but made an official request to the security team, offering to help and requesting access to the security vulnerability issue. The Drupal security team reviewed the request and granted me access. In the Drupal security issue queue there was some historical information about this vulnerability, some answers and a proposed patch. The patch had not been tested, but this is where Dropsolid chimed in. After extensively testing the patch in all the different scenarios on an actual site that was vulnerable, we marked the issue as Reviewed and Tested by the Community (RTBC) and stepped up to maintain the References module for future security issues.

    It pays off to step in

    I’d like to thank Niels Aers, one of my colleagues, as his involvement was critical in this journey and he is now the maintainer of this module. He jumped straight in without hesitation. In the end, we spent less time fixing the actual issue than the potential effort of changing all our sites to use a different module. So remember: you can make a similar impact on the Drupal community by stepping up when something like this happens. Do not freak out, but think how you can help your clients, company and career by fixing something for more than just you or your company.

    Categories: FLOSS Project Planets

    PyCharm: PyCharm 2017.1.2 Release Candidate

    Planet Python - Thu, 2017-04-20 07:42

    Today we announce the PyCharm 2017.1.2 Release Candidate build #171.4249. The list of bug fixes and improvements for this build can be found in the release notes.

    Some highlights of the PyCharm 2017.1.2 RC are:

    • A number of important fixes for the debugger
    • Fixes for the test runner
    • Fixes for Django and Docker support
    • and much more

    Please give PyCharm 2017.1.2 RC a try before its official release and report any bugs and feature requests to our issue tracker.

    Categories: FLOSS Project Planets

    Catalin George Festila: The twilio python module and cloud communications platform

    Planet Python - Thu, 2017-04-20 05:13
    Twilio lets you build apps that communicate with everyone in the world: Voice & Video, Messaging, and Authentication APIs for every application.
    First, let's try to install it under the Windows 10 operating system:
    C:\>cd Python27
    C:\Python27>cd Scripts
    C:\Python27\Scripts>pip install twilio
    Collecting twilio
      Downloading twilio-5.6.0.tar.gz (194kB)
        100% |################################| 194kB 588kB/s
    Collecting httplib2>=0.7 (from twilio)
      Downloading httplib2-0.9.2.zip (210kB)
        100% |################################| 215kB 519kB/s
    Requirement already satisfied: six in c:\python27\lib\site-packages (from twilio)
    Requirement already satisfied: pytz in c:\python27\lib\site-packages (from twilio)
    Installing collected packages: httplib2, twilio
      Running setup.py install for httplib2 ... done
      Running setup.py install for twilio ... done
    Successfully installed httplib2-0.9.2 twilio-5.6.0

    Let's try an example:
    Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import twilio
    >>> from twilio import *
    >>> dir(twilio)
    ['TwilioException', 'TwilioRestException', 'TwimlException', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', '__version_info__', 'compat', 'exceptions', 'rest', 'sys', 'u', 'version']
    >>> dir(twilio.rest)
    ['TwilioIpMessagingClient', 'TwilioLookupsClient', 'TwilioPricingClient', 'TwilioRestClient', 'TwilioTaskRouterClient', 'TwilioTrunkingClient', '__builtins__', '__doc__', '__file__', '__name__', '__package__', '__path__', '_hush_pyflakes', 'base', 'client', 'exceptions', 'ip_messaging', 'lookups', 'pricing', 'resources', 'set_twilio_proxy', 'task_router', 'trunking']

    Under Fedora 25, you can use this command to install this API:
    [root@localhost mythcat]# pip2.7 install twilio
    Collecting twilio
      Downloading twilio-5.7.0.tar.gz (168kB)
        100% |████████████████████████████████| 174kB 1.8MB/s
    Requirement already satisfied: httplib2>=0.7 in /usr/lib/python2.7/site-packages (from twilio)
    Requirement already satisfied: six in /usr/lib/python2.7/site-packages (from twilio)
    Requirement already satisfied: pytz in /usr/lib/python2.7/site-packages (from twilio)
    Installing collected packages: twilio
      Running setup.py install for twilio ... done
    Successfully installed twilio-5.7.0
    Make an account for Twilio here.
    Programmable phone numbers are a core part of Twilio’s platform, enabling you to receive SMS, MMS, and phone calls.
    Note that SMS sending may be limited by country availability.
    And one last example:
    And one last example:
    #!/usr/bin/env python
    # Download the twilio-python library from http://twilio.com/docs/libraries
    # Note: with the twilio 5.x releases installed above, the client class is
    # TwilioRestClient (the newer 6.x releases rename it to Client).
    from twilio.rest import TwilioRestClient

    # Find these values at https://twilio.com/user/account
    account_sid = "AC61b32be301f49f78f0ab3d69c4d335f6"
    auth_token = "c8f37b65755900faa4fe7bbe1f948adb"
    client = TwilioRestClient(account_sid, auth_token)

    # from_ must be one of your Twilio phone numbers
    message = client.messages.create(to="+country_allow_SMS",
                                     from_="+your_twilio_number",
                                     body="Hello python this is a twilio sms test")
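Before calling the API, it can help to sanity-check the destination number. Here is a small hypothetical helper (the name and regex are my own, not part of the twilio library) that verifies a number looks like the E.164 format ("+" plus country code and digits) the REST API expects:

```python
import re

# E.164: a "+", a non-zero first digit, then up to 14 more digits.
E164_PATTERN = re.compile(r'^\+[1-9]\d{1,14}$')

def looks_like_e164(number):
    return bool(E164_PATTERN.match(number))

looks_like_e164('+15005550006')  # -> True
looks_like_e164('0712345678')    # -> False (missing the "+" and country code)
```

Checking numbers up front avoids a round trip to the API just to get a validation error back.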
    Categories: FLOSS Project Planets

    Pronovix: Documenting web APIs with the Swagger / OpenAPI specification in Drupal

    Planet Drupal - Thu, 2017-04-20 04:19

    As part of our work to make Drupal 8 the leading CMS for developer portals, we are implementing a mechanism to import the OpenAPI (formerly known as Swagger) specification format. This is a crucial feature not only for dedicated developer portals, but also for all Drupal sites that are exposing an API. Now that it has become much easier to create a RESTful API service in Drupal 8, the next step is to make it more straightforward to create its API reference documentation. That is why we think our work will be useful for site builders, and not just for technical writers and API product owners.

    Categories: FLOSS Project Planets

    PyBites: Simple Flask app to compare the weather of 2 cities

    Planet Python - Thu, 2017-04-20 03:40

    In this post I show you how to build a simple Flask app to compare the weather of 2 cities using the OpenWeatherMap API. Maybe this aids you in solving this week's challenge.

    Categories: FLOSS Project Planets

    marvil07.net: Re-activating Vote Up/Down

    Planet Drupal - Wed, 2017-04-19 22:27

    Vote Up/Down is a Drupal module that uses Voting API to provide a way to vote.
    These notes cover part of the history of the module and some recent news about it, including a couple of releases!

    A long time ago...

    The project itself is really ancient: it was started in 2006 by frjo, in Drupal 4.7, and the same code has evolved up to Drupal 7.
    I took co-maintainership of the project around 2009-2010, when I met lut4rp, at the time the sole maintainer of the project, who rewrote it to modularize it at the start of 6.x-2.x.
    Back then we were still using CVS officially (and some of us git locally), and we were thrilled to receive and integrate a patch from merlinofchaos that extended the module a lot and made it more maintainable.
    As time passed, I became the only active maintainer of the module.

    At the start I was pretty active as a maintainer; but over the years I have not been responsive enough, especially around the D7 port.
    During that time the community provided several patches, and finally amitaibu created a sandbox that I ended up integrating into the project.
    I also managed to write another submodule, vud_field, in that process.
    For me it was clear: I advocated removing vud_node, vud_term, and vud_comment from the project in favour of vud_field.
    From my perspective this was more beneficial: (a) vud_field provided mostly the same functionality on nodes, taxonomy terms, and comments; (b) it provided voting on any entity, embracing D7's new APIs; and (c) it made things more maintainable.
    Sadly, the removal did not happen at that time, and that was one of the reasons why the D7 version never made it out of alpha status.

    Recent news

    After quite some time of inactivity in vote_up_down, this January, I started to port the module to D8, but I only started: only 4 porting commits got into the new 8.x-1.x branch.

    Then, I decided to add a GSoC project suggestion for students to port Vote Up/Down to D8 this year.

    In preparation, this week I have branched D7 out into two different versions, 7.x-1.x and 7.x-2.x, adding respective releases to make things clearer:

    • 7.x-1.x (with a 7.x-1.0-beta1 release): It still keeps all submodules, but it is not planned to be maintained much longer. I applied there all pending code related to vud_node, vud_comment, and vud_term.
    • 7.x-2.x (with a 7.x-2.0 release): It only contains vud and vud_field, and it is planned to be maintained as the stable branch. Sadly there is no complete upgrade path from either 6.x-2.x or 7.x-1.x, but I added some starting code for that in the related issue #1363928, and maybe someone would like to continue it.

    Hopefully one of the students proposing the port of Vote Up/Down to D8 gets accepted.
    It will be great to see the module active again!

    Categories: FLOSS Project Planets

    Dirk Eddelbuettel: RcppQuantuccia 0.0.1

    Planet Debian - Wed, 2017-04-19 21:38

    New package! And, as it happens, effectively a subset or variant of one of my oldest packages, RQuantLib.

    Fairly recently, Peter Caspers started to put together a header-only subset of QuantLib. He called this Quantuccia, and, upon me asking, said that it stands for "little sister" of QuantLib. Very nice.

    One design goal is to keep Quantuccia header-only. This makes distribution and deployment much easier. In the fifteen years that we have worked with QuantLib by providing the R bindings via RQuantLib, it has always been a concern to provide current QuantLib libraries on all required operating systems. Many people have helped over the years, but it is still an issue; right now, for example, we have no Windows package as there is no library to build it against.

    Enter RcppQuantuccia. It only depends on R, Rcpp (for seamless R and C++ integrations) and BH bringing Boost headers. This will make it much easier to have Windows and macOS binaries.

    So what can it do right now? We started with calendaring: you can compute dates pertaining to different (ISDA and other) business day conventions, and compute holiday schedules. Here is one example computing inter alia under the NYSE holiday schedule common for US equity and futures markets:

    R> library(RcppQuantuccia)
    R> fromD <- as.Date("2017-01-01")
    R> toD <- as.Date("2017-12-31")
    R> getHolidays(fromD, toD)          # default calendar ie TARGET
    [1] "2017-04-14" "2017-04-17" "2017-05-01" "2017-12-25" "2017-12-26"
    R> setCalendar("UnitedStates")
    R> getHolidays(fromD, toD)          # US aka US::Settlement
     [1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-05-29" "2017-07-04" "2017-09-04"
     [7] "2017-10-09" "2017-11-10" "2017-11-23" "2017-12-25"
    R> setCalendar("UnitedStates::NYSE")
    R> getHolidays(fromD, toD)          # US New York Stock Exchange
    [1] "2017-01-02" "2017-01-16" "2017-02-20" "2017-04-14" "2017-05-29" "2017-07-04"
    [7] "2017-09-04" "2017-11-23" "2017-12-25"
    R>
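For comparison, filtering a holiday schedule to a date range — what getHolidays does for the chosen calendar — can be sketched in a few lines of plain Python, given the holiday dates as data (a toy sketch: RcppQuantuccia derives these dates from the calendar rules rather than from a hard-coded list):

```python
from datetime import date

# 2017 TARGET holidays, taken from the getHolidays() output above.
TARGET_2017 = [date(2017, 4, 14), date(2017, 4, 17), date(2017, 5, 1),
               date(2017, 12, 25), date(2017, 12, 26)]

def holidays_between(from_d, to_d, calendar):
    """Return the calendar's holidays falling within [from_d, to_d], sorted."""
    return sorted(d for d in calendar if from_d <= d <= to_d)

holidays_between(date(2017, 1, 1), date(2017, 6, 30), TARGET_2017)
# -> [date(2017, 4, 14), date(2017, 4, 17), date(2017, 5, 1)]
```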

    The GitHub repo already has a few more calendars, and more are expected. Help is of course welcome, both for this and for porting over actual quantitative finance calculations.

    More information is on the RcppQuantuccia page. Issues and bug reports should go to the GitHub issue tracker.

    This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

    Categories: FLOSS Project Planets

    Norbert Preining: TeX Live 2017 pretest started

    Planet Debian - Wed, 2017-04-19 20:51

    Preparations for the release of TeX Live 2017 started a few days ago with the freeze of updates in TeX Live 2016 and the announcement of the official start of the pretest period. That means we invite people to test the new release and help fix bugs.

    Notable changes are listed on the pretest page; here I only want to report on the changes in the core infrastructure: the changes in the user/sys mode of fmtutil and updmap, and the introduction of the tlmgr shell.

    User/sys mode of fmtutil and updmap

    We (both at TeX Live and Debian) regularly got error reports about fonts not being found or formats not being updated, etc. The reason for all of this was unmistakably the same: the user had called updmap or fmtutil without the -sys option, thus creating a copy of the set of configuration files under their home directory, shadowing all later updates on the system side.

    The reason for this behavior is the widespread misinformation (outdated information) on the internet suggesting that users call updmap directly.

    To counteract this, we have changed the behavior so that both updmap and fmtutil now accept a new argument -user (in addition to the already present -sys), and refuse to run when called without either of them, printing a warning and linking to an explanation page. This page provides more detailed documentation and best-practice examples.

    tlmgr shell

    The TeX Live Manager got a new `shell' mode, invoked by tlmgr shell. Details still need to be fleshed out, but in principle it is possible to use get and set to query and set some of the options normally passed via the command line, and to use all the actions defined in the documentation. The advantage of this is that it is not necessary to load the tlpdb for each invocation. Here is a short example:

    [~] tlmgr shell
    protocol 1
    tlmgr> load local
    OK
    tlmgr> load remote
    tlmgr: package repository /home/norbert/public_html/tlpretest (verified)
    OK
    tlmgr> update --list
    tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
    update: bchart [147k]: local: 27496, source: 43928
    ...
    update: xindy [535k]: local: 43873, source: 43934
    OK
    tlmgr> update --all
    tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
    [ 1/22, ??:??/??:??] update: bchart [147k] (27496 -> 43928) ... done
    [ 2/22, 00:00/00:00] update: biber [1041k] (43873 -> 43910) ... done
    ...
    [22/22, 00:50/00:50] update: xindy [535k] (43873 -> 43934) ... done
    running mktexlsr ...
    done running mktexlsr.
    ...
    OK
    tlmgr> quit
    tlmgr: package log updated: /home/norbert/tl/2017/texmf-var/web2c/tlmgr.log
    [~]

    Please test and report bugs to our mailing list.


    Categories: FLOSS Project Planets

    Bryan Pendleton: A break in the rain

    Planet Apache - Wed, 2017-04-19 20:50

    It was a beautiful day in the city, so I wandered over to the border between Chinatown and North Beach and hooked up with some old friends for a wonderful lunch.

    Thanks, all!

    Categories: FLOSS Project Planets
    Syndicate content