Feeds

Continuum Analytics News: Continuum Analytics to Share Insights at JupyterCon 2017

Planet Python - Wed, 2017-08-16 11:12
News Thursday, August 17, 2017

Presentation topics include Jupyter and Anaconda in the enterprise; open innovation in a data-centric world; building an Excel-Python bridge; encapsulating data science using Anaconda Project and JupyterLab; deploying Jupyter dashboards for datapoints; JupyterLab

NEW YORK, August 17, 2017—Continuum Analytics, the creator and driving force behind Anaconda, the leading Python data science platform, today announced that the team will present one keynote, three talks and two tutorials at JupyterCon, taking place August 23–25 in New York, NY. The event is designed for the data science and business analyst community and offers in-depth trainings, insightful keynotes, networking events and talks exploring the Project Jupyter platform.

Peter Wang, co-founder and CTO of Continuum Analytics, will present two sessions on August 24. The first is a keynote at 9:15 am, titled “Jupyter & Anaconda: Shaking Up the Enterprise.” Peter will discuss the co-evolution of these two major players in the new open source data science ecosystem and next steps to a sustainable future. The other is a talk, “Fueling Open Innovation in a Data-Centric World,” at 11:55 am, offering Peter’s perspectives on the unique challenges of building a company that is fundamentally centered around sustainable open source innovation.

The second talk, “Leveraging Jupyter to Build an Excel-Python Bridge,” features Christine Doig, senior data scientist and product manager, and Fabio Pliger, software engineer, both of Continuum Analytics. It will take place on August 24 at 11:05 am, and Christine and Fabio will share how they created a native Microsoft Excel plug-in that provides a point-and-click interface to Python functions, enabling Excel analysts to use machine learning models, advanced interactive visualizations and distributed compute frameworks without needing to write any code. Christine will also be giving a talk on August 25 at 11:55 am, “Data Science Encapsulation and Deployment with Anaconda Project & JupyterLab,” in which she will share how Anaconda Project and JupyterLab encapsulate data science and how to deploy self-service notebooks, interactive applications, dashboards and machine learning.

James Bednar, senior solutions architect, and Philipp Rudiger, software developer, of Continuum Analytics, will give a tutorial on August 23 at 1:30 pm titled, “Deploying Interactive Jupyter Dashboards for Visualizing Hundreds of Millions of Datapoints.” This tutorial will explore an overall workflow for building interactive dashboards, visualizing billions of data points interactively in a Jupyter notebook, with graphical widgets allowing control over data selection, filtering and display options, all using only a few dozen lines of code.

The second tutorial, “JupyterLab,” will be hosted by Steven Silvester, software engineer at Continuum Analytics and Jason Grout, software developer at Bloomberg, on August 23 at 1:30 pm. They will walk through JupyterLab as a user and as an extension author, exploring its capabilities and offering a demonstration on how to create a simple extension to the environment.

Keynote:
WHO: Peter Wang, co-founder and CTO, Anaconda Powered by Continuum Analytics
WHAT: Jupyter & Anaconda: Shaking Up the Enterprise
WHEN: August 24, 9:15am-9:25am ET
WHERE: Grand Ballroom

Talk #1:
WHO: Peter Wang, co-founder and CTO, Anaconda Powered by Continuum Analytics
WHAT: Fueling Open Innovation in a Data-Centric World
WHEN: August 24, 11:55am–12:35pm ET
WHERE: Regent Parlor

Talk #2:
WHO: 

  • Christine Doig, senior data scientist, product manager, Anaconda Powered by Continuum Analytics
  • Fabio Pliger, software engineer, Anaconda Powered by Continuum Analytics

WHAT: Leveraging Jupyter to Build an Excel-Python Bridge
WHEN: August 24, 11:05am–11:45am ET
WHERE: Murray Hill

Talk #3:
WHO: Christine Doig, senior data scientist, product manager, Anaconda Powered by Continuum Analytics
WHAT: Data Science Encapsulation and Deployment with Anaconda Project & JupyterLab
WHEN: August 25, 11:55am–12:35pm ET
WHERE: Regent Parlor

Tutorial #1:
WHO: 

  • James Bednar, senior solutions architect, Anaconda Powered By Continuum Analytics 
  • Philipp Rudiger, software developer, Anaconda Powered By Continuum Analytics 

WHAT: Deploying Interactive Jupyter Dashboards for Visualizing Hundreds of Millions of Datapoints
WHEN: August 23, 1:30pm–5:00pm ET
WHERE: Concourse E

Tutorial #2:
WHO: 

  • Steven Silvester, software engineer, Anaconda Powered By Continuum Analytics 
  • Jason Grout, software developer at Bloomberg

WHAT: JupyterLab Tutorial
WHEN: August 23, 1:30pm–5:00pm ET
WHERE: Concourse A

###

About Anaconda Powered by Continuum Analytics
Anaconda is the leading data science platform powered by Python, the fastest growing data science language with more than 30 million downloads to date. Continuum Analytics is the creator and driving force behind Anaconda, empowering leading businesses across industries worldwide with solutions to identify patterns in data, uncover key insights and transform data into a goldmine of intelligence to solve the world’s most challenging problems. Anaconda puts superpowers into the hands of people who are changing the world. Learn more at continuum.io

###

Media Contact:
Jill Rosenthal
InkHouse
anaconda@inkhouse.com

 

Categories: FLOSS Project Planets

Lullabot: Indexing content from Drupal 8 using Elasticsearch

Planet Drupal - Wed, 2017-08-16 11:04

Last week, a client asked me to investigate the state of Elasticsearch support in Drupal 8. They're using a decoupled architecture and wanted to know how—using only core and contrib modules—Drupal data could be exposed to Elasticsearch. Elasticsearch would then index that data and make it available to the site's presentation layer via the Elasticsearch search API.

During my research, I was impressed by the results. Thanks to the Typed Data API plus a couple of contributed modules, an administrator can browse the structure of the content in Drupal and select what should be indexed by Elasticsearch and how. All of this can be done using Drupal's admin interface.

In this article, we will take a vanilla Drupal 8 installation and configure it so that Elasticsearch receives any content changes. Let’s get started!

Downloading and starting Elasticsearch

We will begin by downloading and starting Elasticsearch 5, which is the latest stable release. Open https://www.elastic.co/downloads/elasticsearch and follow the installation instructions. Once you start the process, open your browser and enter http://127.0.0.1:9200. You should see something like the following screenshot:


Now let’s setup our Drupal site so it can talk to Elasticsearch.

Setting up Search API

High five to Thomas Seidl for the Search API module and Nikolay Ignatov for the Elasticsearch Connector module. Thanks to them, pushing content to Elasticsearch is a matter of a few clicks.

At the time of this writing there is no available release for Elasticsearch Connector, so you will have to clone the repository and checkout the 8.x-5.x branch and follow the installation instructions. As for Search API, just download and install the latest stable version.

Connecting Drupal to Elasticsearch

Next, let’s connect Drupal to the Elasticsearch server that we configured in the previous section. Navigate to Configuration > Search and Metadata > Elasticsearch Connector and then fill out the form to add a cluster:


Click 'Save' and check that the connection to the server was successful:


That’s it for Elasticsearch Connector. The rest of the configuration will be done using the Search API module.

Configuring a search index

Search API provides an abstraction layer that allows Drupal to push content changes to different servers, whether that's Elasticsearch, Apache Solr, or any other provider that has a Search API compatible module. Within each server, Search API can create indexes, which are like buckets where you can push data that can be searched in different ways. Here is a drawing to illustrate the setup:


Now navigate to Configuration > Search and Metadata > Search API and click on Add server:


Fill out the form to let Search API manage the Elasticsearch server:


Click Save, then check that the connection was successful:


Next, we will create an index in the Elasticsearch server where we will specify that we want to push all of the content in Drupal. Go back to Configuration > Search and Metadata > Search API and click on Add index:


Fill out the form to create an index where content will be pushed by Drupal:


Click Save and verify that the index creation was successful:


Verify the index creation at the Elasticsearch server by opening http://127.0.0.1:9200/_cat/indices?v in a new browser tab:


That’s it! We will now test whether Drupal properly updates Elasticsearch when content changes.

Indexing content

Create a node and then run cron. Verify that the node has been pushed to Elasticsearch by opening the URL http://127.0.0.1:9200/elasticsearch_index_draco_elastic_index/_search, where elasticsearch_index_draco_elastic_index is the index name listed by Elasticsearch in the previous step.
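If you prefer to verify this from a script rather than the browser, a small Python snippet can query the same endpoint. This is a minimal sketch using the requests library; the index name below is the one from this walkthrough and will differ on your installation:

import requests

# Index name from this walkthrough - replace it with the one your own server lists.
index = "elasticsearch_index_draco_elastic_index"
response = requests.get("http://127.0.0.1:9200/{}/_search".format(index))
response.raise_for_status()

hits = response.json()["hits"]
print("Documents indexed:", hits["total"])
for hit in hits["hits"]:
    print(hit["_id"], hit["_source"])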


Success! The node has been pushed, but only its identifier is there. We need to select which fields we want to push to Elasticsearch via the Search API interface at Configuration > Search and Metadata > Search API > Our Elasticsearch index > Fields:


Click on Add fields and select the fields that you want to push to Elasticsearch:


Add the fields and click Save. This time we will use Drush to reset the index and index the content again:


After reloading http://127.0.0.1:9200/elasticsearch_index_draco_elastic_index/_search, we can see the added fields:

Processing the data prior to indexing it

Here is an extra bonus: Search API provides a list of processors that will alter the data before it is indexed in Elasticsearch. Things like transliteration, filtering out unpublished content, or case-insensitive searching are available via the web interface. Here is the list, which you can find by clicking Processors when you are viewing the server in Search API:

When you need more, extend from the APIs

Now that you have an Elasticsearch engine, it’s time to start hooking it up with your front-end applications. We have seen that the web interface of the Search API module saves a ton of development time, but if you ever need to go the extra mile, there are hooks, events, and plugins that you can use in order to fit your requirements. A good place to start is the Search API’s project homepage. Happy searching!

Acknowledgements

Thanks to:

Categories: FLOSS Project Planets

Acquia Developer Center Blog: Decoupled Drupal Technologies and Techniques

Planet Drupal - Wed, 2017-08-16 10:24

We've got a new installment in the decoupled Drupal project we're working on with Elevated Third and Hoorooh.

The project we're documenting was one we worked on for Powdr Resorts, one of the largest ski operators in North America.

The first installment in the series was A Deep Dive into a Decoupled Drupal 8 Project.

Tags: acquia drupal planet
Categories: FLOSS Project Planets

Amazee Labs: Join us for Tour de Drupal Vienna

Planet Drupal - Wed, 2017-08-16 09:54
Join us for Tour de Drupal Vienna

Cycling is a great way to travel, experience new things and meet like-minded people. Join us for Tour de Drupal Vienna and let’s cycle together to DrupalCon!

Josef Dabernig Wed, 08/16/2017 - 15:54

On Sunday, 24 September we plan to start at 8am from Krems and travel to Tulln. At 11am we’ll arrive in Tulln and meet at the Weshapers office for some drinks & BBQ.

In the afternoon at 2pm, we plan to leave Tulln and cycle the remaining 40 km to finally arrive in Vienna.

To sum-up, the meeting points are:


The arrival is planned for Sunday, 24 September at 5pm in front of the big wheel in Vienna at Kaiserwiese.

How to get there?

There are many cycling routes that lead to Vienna. We created a map that currently highlights roads from east and west along the Danube. Also, check out the EuroVelo routes; bessone summarized the interesting ones for Vienna.

If you just want to join for the last day, it’s a 30-minute train ride from Vienna to Tulln, or 1:10 from Vienna to Krems, and you can bring your bike on the train. Check ÖBB to book your train ticket.

Convinced? Tell us you are coming!

Categories: FLOSS Project Planets

Valuebound: How to push clean code by following coding standards effectively using git pre-commit hook?

Planet Drupal - Wed, 2017-08-16 09:26

Pushing clean code is not everyone's cup of tea; it needs extensive knowledge and practice. Before a website goes live, it needs to pass certain standards and checks in order to deliver a quality experience. Certainly, a clean website is a demand of almost every client, and it should be.

In this blog post, you will learn why we need to implement a git pre-commit hook and how it works. We will also walk through working examples in order to build a better understanding.

Why we need to implement a git pre-commit hook

Any website going live should pass certain standards and checks. If the site is built on a framework, then these checks are mandatory. How do you ensure all developers are committing clean code? One way is to do code review,…
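The other way, which this post is about, is a git pre-commit hook: an executable script that inspects the staged files before a commit is accepted. As a rough, hypothetical sketch of the idea in Python (assuming PHP CodeSniffer with the Drupal coding standard is installed; the actual hook in the full post may differ):

#!/usr/bin/env python
# .git/hooks/pre-commit - hypothetical sketch, not the post's actual script.
import subprocess
import sys

# Collect the files staged for this commit.
staged = subprocess.check_output(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"]
).decode().splitlines()

# Only check PHP-like files (Drupal code lives in several file extensions).
targets = [f for f in staged if f.endswith((".php", ".module", ".inc", ".install"))]
if not targets:
    sys.exit(0)

# Run PHP CodeSniffer with the Drupal standard; a non-zero exit blocks the commit.
sys.exit(subprocess.call(["phpcs", "--standard=Drupal"] + targets))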

Categories: FLOSS Project Planets

Minuet – a guitar adventure

Planet KDE - Wed, 2017-08-16 09:09

Hello again and welcome to my blog! In this post I am going to cover what happened since the first GSoC evaluation and give you some overview of the status of my work.

Since the last post I’ve been working on the implementation of the guitar plugin, along with adjusting the existing piano plugin to better suit to the new framework.

As you remember from my last post, Minuet currently supports multiple plugins to display its exercises. To change from one plugin to another, all you have to do is press on the desired instrument name: for now, only “Guitar” and “Piano” are available.

 

In the past couple of weeks, I’ve been deciphering the guitar note representation and also the guitar chords. I don’t want to discourage anyone from learning how to play the guitar, but man... it was so hard and tiresome. Nevertheless, my previous piano experience helped me better understand the guitar specifics and get up to speed with the theory needed to complete my project.

 

Then I talked with my mentor on Hangouts and, using http://chordfind.com as a base (which is indeed a great start for beginners who want to learn guitar, piano and many other 4-string instrument chords), we agreed on two specific representations for each chord: Major, Minor, Augmented, Diminished, etc. for chords with the root note in the C-E range or in the F-B range.

Then I started working on the core of the guitar plugin: to keep the piano keyboard functional, I had to implement the exact same methods used by the piano plugin. I won’t go into too much coding detail (the code is available on GitHub on my fork of Minuet and on the official GSoC branch when completed), but with a little tweak to the current ExerciseView component, I managed to create a guitar plugin that runs Minuet’s chord exercises.

It looks like this:

  • minor and major chords

  • diminished and augmented chords

  • minor7 and dominant7 chords

  • minor9 and major9 chords

 


Categories: FLOSS Project Planets

Blair Wadman: Create a modal in Drupal 8 in a custom module

Planet Drupal - Wed, 2017-08-16 08:54

Modal dialogs are incredibly useful on websites as they allow the user to do something without having to leave the web page they are on. Drupal 8 now has a Dialog API in core, which greatly reduces the amount of code you need to write to create a modal dialog. Dialogs in Drupal 8 leverage jQuery UI.

In the second part of this series on modal dialogs in Drupal 8, we are going to go a step further from last week by creating the modal in a custom module.

Categories: FLOSS Project Planets

Eli Bendersky: Right and left folds, primitive recursion patterns in Python and Haskell

Planet Python - Wed, 2017-08-16 08:48

A "fold" is a fundamental primitive in defining operations on data structures; it's particularly important in functional languages where recursion is the default tool to express repetition. In this article I'll present how left and right folds work and how they map to some fundamental recursive patterns.

The article starts with Python, which should be (or at least look) familiar to most programmers. It then switches to Haskell for a discussion of more advanced topics like the connection between folding and laziness, as well as monoids.

Extracting a fundamental recursive pattern

Let's begin by defining a couple of straightforward functions in a recursive manner, in Python. First, computing the product of all the numbers in a given list:

def product(seq):
    if not seq:
        return 1
    else:
        return seq[0] * product(seq[1:])

Needless to say, we wouldn't really write this function recursively in Python; but if we were, this is probably how we'd write it.

Now another, slightly different, function. How do we double (multiply by 2) every element in a list, recursively?

def double(seq):
    if not seq:
        return []
    else:
        return [seq[0] * 2] + double(seq[1:])

Again, ignoring the fact that Python has much better ways to do this (list comprehensions, for example), this is a straightforward recursive pattern that experienced programmers can produce in their sleep.

In fact, there's a lot in common between these two implementations. Let's try to find the commonalities.

As this diagram shows, the functions product and double are really only different in three places:

  1. The initial value produced when the input sequence is empty.
  2. The mapping applied to every sequence value processed.
  3. The combination of the mapped sequence value with the rest of the sequence.

For product:

  1. The initial value is 1.
  2. The mapping is identity (each sequence element just keeps its value, without change).
  3. The combination is the multiplication operator.

Can you figure out the same classification for double? Take a few moments to try for yourself. Here it is:

  1. The initial value is the empty list [].
  2. The mapping takes a value, multiplies it by 2 and puts it into a list. We could express this in Python as lambda x: [x * 2].
  3. The combination is the list concatenation operator +.

With the diagram above and these examples, it's straightforward to write a generalized "recursive transform" function that can be used to implement both product and double:

def transform(init, mapping, combination, seq):
    if not seq:
        return init
    else:
        return combination(mapping(seq[0]),
                           transform(init, mapping, combination, seq[1:]))

The transform function is parameterized with init - the initial value, mapping- a mapping function applied to every sequence value, and combination - the combination of the mapped sequence value with the rest of the sequence. With these given, it implements the actual recursive traversal of the list.

Here's how we'd write product in terms of transform:

def product_with_transform(seq):
    return transform(1, lambda x: x, lambda a, b: a * b, seq)

And double:

def double_with_transform(seq):
    return transform([], lambda x: [x * 2], lambda a, b: a + b, seq)

foldr - fold right

Generalizations like transform make functional programming fun and powerful, since they let us express complex ideas with the help of relatively few building blocks. Let's take this idea further, by generalizing transform even more. The main insight guiding us is that the mapping and the combination don't even have to be separate functions. A single function can play both roles.

In the definition of transform, combination is applied to:

  1. The result of calling mapping on the current sequence value.
  2. The recursive application of the transformation to the rest of the sequence.

We can encapsulate both in a function we call the "reduction function". This reduction function takes two arguments: the current sequence value (item), and the result of applying the full transformation to the rest of the sequence. The driving transformation that uses this reduction function is called "a right fold" (or foldr):

def foldr(func, init, seq):
    if not seq:
        return init
    else:
        return func(seq[0], foldr(func, init, seq[1:]))

We'll get to why this is called "fold" shortly; first, let's convince ourselves it really works. Here's product implemented using foldr:

def product_with_foldr(seq):
    return foldr(lambda seqval, acc: seqval * acc, 1, seq)

The key here is the func argument. In the case of product, it "reduces" the current sequence value with the "accumulator" (the result of the overall transformation invoked on the rest of the sequence) by multiplying them together. The overall result is a product of all the elements in the list.

Let's trace the calls to see the recursion pattern. I'll be using the tracing technique described in this post. For this purpose I hoisted the reducing function into a standalone function called product_reducer:

def product_reducer(seqval, acc):
    return seqval * acc

def product_with_foldr(seq):
    return foldr(product_reducer, 1, seq)

The full code for this experiment is available here. Here's the tracing of invoking product_with_foldr([2, 4, 6, 8]):

product_with_foldr([2, 4, 6, 8])
  foldr(<function product_reducer at 0x7f3415145ae8>, 1, [2, 4, 6, 8])
    foldr(<function product_reducer at 0x7f3415145ae8>, 1, [4, 6, 8])
      foldr(<function product_reducer at 0x7f3415145ae8>, 1, [6, 8])
        foldr(<function product_reducer at 0x7f3415145ae8>, 1, [8])
          foldr(<function product_reducer at 0x7f3415145ae8>, 1, [])
          --> 1
          product_reducer(8, 1)
          --> 8
        --> 8
        product_reducer(6, 8)
        --> 48
      --> 48
      product_reducer(4, 48)
      --> 192
    --> 192
    product_reducer(2, 192)
    --> 384
  --> 384

The recursion first builds a full stack of calls for every element in the sequence, until the base case (empty list) is reached. Then the calls to product_reducer start executing. The first reduces 8 (the last element in the list) with 1 (the result of the base case). The second reduces this result with 6 (the second-to-last element in the list), and so on until we reach the final result.

Since foldr is just a generic traversal pattern, we can say that the real work here happens in the reducers. If we build a tree of invocations of product_reducer, we get:

And this is why it's called the right fold. It takes the rightmost element and combines it with init. Then it takes the result and combines it with the second rightmost element, and so on until the first element is reached.

More general operations with foldr

We've seen how foldr can implement all kinds of functions on lists by encapsulating a fundamental recursive pattern. Let's see a couple more examples. The function double shown above is just a special case of the functional map primitive:

def map(mapf, seq):
    if not seq:
        return []
    else:
        return [mapf(seq[0])] + map(mapf, seq[1:])

Instead of applying a hardcoded "multiply by 2" function to each element in the sequence, map applies a user-provided unary function. Here's map implemented in terms of foldr:

def map_with_foldr(mapf, seq):
    return foldr(lambda seqval, acc: [mapf(seqval)] + acc, [], seq)

Another functional primitive that we can implement with foldr is filter. This one is just a bit trickier because we sometimes want to "skip" a value based on what the filtering predicate returns:

def filter(predicate, seq):
    if not seq:
        return []
    else:
        maybeitem = [seq[0]] if predicate(seq[0]) else []
        return maybeitem + filter(predicate, seq[1:])

Feel free to try to rewrite it with foldr as an exercise before looking at the code below. We just follow the same pattern:

def filter_with_foldr(predicate, seq):
    def reducer(seqval, acc):
        if predicate(seqval):
            return [seqval] + acc
        else:
            return acc
    return foldr(reducer, [], seq)

We can also represent less "linear" operations with foldr. For example, here's a function to reverse a sequence:

def reverse_with_foldr(seq):
    return foldr(lambda seqval, acc: acc + [seqval], [], seq)

Note how similar it is to map_with_foldr; only the order of concatenation is flipped.

Left-associative operations and foldl

Let's probe at some of the apparent limitations of foldr. We've seen how it can be used to easily compute the product of numbers in a sequence. What about a ratio? For the list [3, 2, 2] the ratio is "3 divided by 2, divided by 2", or 0.75 [1].

If we take product_with_foldr from above and replace * by /, we get:

>>> foldr(lambda seqval, acc: seqval / acc, 1, [3, 2, 2])
3.0

What gives? The problem here is the associativity of the operator /. Take another look at the call tree diagram shown above. It's obvious this diagram represents a right-associative evaluation. In other words, what our attempt at a ratio did is compute 3 / (2 / 2), which is indeed 3.0; instead, what we'd like is (3 / 2) / 2. But foldr is fundamentally folding the expression from the right. This works well for associative operations like + or * (operations that don't care about the order in which they are applied to a sequence), and also for right-associative operations like exponentiation, but it doesn't work that well for left-associative operations like / or -.
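To make the associativity issue concrete, here is the same computation grouped both ways in plain Python:

# Right-associative grouping - what foldr computes for [3, 2, 2]:
print(3 / (2 / 2))    # 3.0

# Left-associative grouping - the ratio we actually want:
print((3 / 2) / 2)    # 0.75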

This is where the left fold comes in. It does precisely what you'd expect - folds a sequence from the left, rather than from the right. I'm going to leave the division operation for later [2] and use another example of a left-associative operation: converting a sequence of digits into a number. For example [2, 3] represents 23, [3, 4, 5, 6] represents 3456, etc. (a related problem which is more common in introductory programming is converting a string that contains a number into an integer).

The basic reducing operation we'll use here is: acc * 10 + sequence value. To get 3456 from [3, 4, 5, 6] we'll compute:

(((((3 * 10) + 4) * 10) + 5) * 10) + 6

Note how this operation is left-associative. Reorganizing the parens to a rightmost-first evaluation would give us a completely different result.

Without further ado, here's the left fold:

def foldl(func, init, seq):
    if not seq:
        return init
    else:
        return foldl(func, func(init, seq[0]), seq[1:])

Note that the order of calls between the recursive call to itself and the call to func is reversed vs. foldr. This is also why it's customary to put acc first and seqval second in the reducing functions passed to foldl.

If we perform multiplication with foldl:

def product_with_foldl(seq):
    return foldl(lambda acc, seqval: acc * seqval, 1, seq)

We'll get this trace:

product_with_foldl([2, 4, 6, 8])
  foldl(<function product_reducer at 0x7f2924cbdc80>, 1, [2, 4, 6, 8])
    product_reducer(1, 2)
    --> 2
    foldl(<function product_reducer at 0x7f2924cbdc80>, 2, [4, 6, 8])
      product_reducer(2, 4)
      --> 8
      foldl(<function product_reducer at 0x7f2924cbdc80>, 8, [6, 8])
        product_reducer(8, 6)
        --> 48
        foldl(<function product_reducer at 0x7f2924cbdc80>, 48, [8])
          product_reducer(48, 8)
          --> 384
          foldl(<function product_reducer at 0x7f2924cbdc80>, 384, [])
          --> 384
        --> 384
      --> 384
    --> 384
  --> 384

Contrary to the right fold, the reduction function here is called immediately for each recursive step, rather than waiting for the recursion to reach the end of the sequence first. Let's draw the call graph to make the folding-from-the-left obvious:

Now, to implement the digits-to-a-number function task described earlier, we'll write:

def digits2num_with_foldl(seq):
    return foldl(lambda acc, seqval: acc * 10 + seqval, 0, seq)

Stepping it up a notch - function composition with foldr

Since we're looking at functional programming primitives, it's only natural to consider how to put higher order functions to more use in combination with folds. Let's see how to express function composition; the input is a sequence of unary functions: [f, g, h] and the output is a single function that implements f(g(h(...))). Note this operation is right-associative, so it's a natural candidate for foldr:

identity = lambda x: x

def fcompose_with_foldr(fseq):
    return foldr(lambda seqval, acc: lambda x: seqval(acc(x)), identity, fseq)

In this case seqval and acc are both functions. Each step in the fold consumes a new function from the sequence and composes it on top of the accumulator (which is the function composed so far). The initial value for this fold has to be the identity for the composition operation, which just happens to be the identity function.

>>> f = fcompose_with_foldr([lambda x: x+1, lambda x: x*7, lambda x: -x])
>>> f(8)
-55

Let's take this trick one step farther. Recall how I said foldr is limited to right-associative operations? Well, I lied a little. While it's true that the fundamental recursive pattern expressed by foldr is right-associative, we can use the function composition trick to evaluate some operation on a sequence in a left-associative way. Here's the digits-to-a-number function with foldr:

def digits2num_with_foldr(seq):
    composed = foldr(
        lambda seqval, acc: lambda n: acc(n * 10 + seqval),
        identity,
        seq)
    return composed(0)

To understand what's going on, manually trace the invocation of this function on some simple sequence like [1, 2, 3]. The key to this approach is to recall that foldr gets to the end of the list before it actually starts applying the function it folds. The following is a careful trace of what happens, with the folded function replaced by g for clarity.

digits2num_with_foldr([1, 2, 3])
-> foldr(g, identity, [1, 2, 3])
-> g(1, foldr(g, identity, [2, 3]))
-> g(1, g(2, foldr(g, identity, [3])))
-> g(1, g(2, g(3, foldr(g, identity, []))))
-> g(1, g(2, g(3, identity)))
-> g(1, g(2, lambda n: identity(n * 10 + 3)))

Now things become a bit trickier to track because of the different anonymous functions and their bound variables. It helps to give these function names.

<f1 = lambda n: identity(n * 10 + 3)>

-> g(1, g(2, f1))
-> g(1, lambda n: f1(n * 10 + 2))

<f2 = lambda n: f1(n * 10 + 2)>

-> g(1, f2)
-> lambda n: f2(n * 10 + 1)

Finally, we invoke this returned function on 0:

f2(0 * 10 + 1)
-> f1(1 * 10 + 2)
-> identity(12 * 10 + 3)
-> 123

In other words, the actual computation passed to that final identity is:

((1 * 10) + 2) * 10 + 3

Which is the left-associative application of the folded function.

Expressing foldl with foldr

After the last example, it's not very surprising that we can take this approach to its logical conclusion and express the general foldl by using foldr. It's just a generalization of digits2num_with_foldr:

def foldl_with_foldr(func, init, seq):
    composed = foldr(
        lambda seqval, acc: lambda n: acc(func(n, seqval)),
        identity,
        seq)
    return composed(init)
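As a quick sanity check (a small usage sketch reusing the foldl and digit-folding examples defined above), both formulations produce the same left-associative result:

step = lambda acc, seqval: acc * 10 + seqval

# Both build 123 from the digit sequence [1, 2, 3].
print(foldl(step, 0, [1, 2, 3]))             # 123
print(foldl_with_foldr(step, 0, [1, 2, 3]))  # 123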

In fact, the pattern expressed by foldr is very close to what is called primitive recursion by Stephen Kleene in his 1952 book Introduction to Metamathematics. In other words, foldr can be used to express a wide range of recursive patterns. I won't get into the theory here, but Graham Hutton's article A tutorial on the universality and expressiveness of fold is a good read.

foldr and foldl in Haskell

Now I'll switch gears a bit and talk about Haskell. Writing transformations with folds is not really Pythonic, but it's very much the default Haskell style. In Haskell recursion is the way to iterate.

Haskell is a lazily evaluated language, which makes the discussion of folds a bit more interesting. While this behavior isn't hard to emulate in Python, the Haskell code dealing with folds on lazy sequences is pleasantly concise and clear.

Let's start by implementing product and double - the functions this article started with. Here's the function computing a product of a sequence of numbers:

myproduct [] = 1
myproduct (x:xs) = x * myproduct xs

And a sample invocation:

*Main> myproduct [2,4,6,8]
384

The function doubling every element in a sequence:

mydouble [] = []
mydouble (x:xs) = [2 * x] ++ mydouble xs

Sample invocation:

*Main> mydouble [2,4,6,8]
[4,8,12,16]

IMHO, the Haskell variants of these functions make it very obvious that a right-fold recursive pattern is in play. The pattern matching idiom of (x:xs) on sequences splits the "head" from the "tail" of the sequence, and the combining function is applied between the head and the result of the transformation on the tail. Here's foldr in Haskell, with a type declaration that should help clarify what goes where:

myfoldr :: (b -> a -> a) -> a -> [b] -> a
myfoldr _ z [] = z
myfoldr f z (x:xs) = f x (myfoldr f z xs)

If you're not familiar with Haskell this code may look foreign, but it's really a one-to-one mapping of the Python code for foldr, using some Haskell idioms like pattern matching.

These are the product and doubling functions implemented with myfoldr, using currying to avoid specifying the last parameter:

myproductWithFoldr = myfoldr (*) 1

mydoubleWithFoldr = myfoldr (\x acc -> [2 * x] ++ acc) []

Haskell also has a built-in foldl which performs the left fold. Here's how we could write our own:

myfoldl :: (a -> b -> a) -> a -> [b] -> a
myfoldl _ z [] = z
myfoldl f z (x:xs) = myfoldl f (f z x) xs

And this is how we'd write the left-associative function to convert a sequence of digits into a number using this left fold:

digitsToNumWithFoldl = myfoldl (\acc x -> acc * 10 + x) 0

Folds, laziness and infinite lists

Haskell evaluates all expressions lazily by default, which can be either a blessing or a curse for folds, depending on what we need to do exactly. Let's start by looking at the cool applications of laziness with foldr.

Given infinite lists (yes, Haskell easily supports infinite lists because of laziness), it's fairly easy to run short-circuiting algorithms on them with foldr. By short-circuiting I mean an algorithm that terminates the recursion at some point throughout the list, based on a condition.

As a silly but educational example, consider doubling every element in a sequence but only until a 5 is encountered, at which point we stop:

> foldr (\x acc -> if x == 5 then [] else [2 * x] ++ acc) [] [1,2,3,4,5,6,7]
[2,4,6,8]

Now let's try the same on an infinite list:

> foldr (\x acc -> if x == 5 then [] else [2 * x] ++ acc) [] [1..]
[2,4,6,8]

It terminates and returns the right answer! Even though our earlier stack trace of folding makes it appear like we iterate all the way to the end of the input list, this is not the case for our folding function. Since the folding function doesn't use acc when x == 5, Haskell won't evaluate the recursive fold further [3].

The same trick will not work with foldl, since foldl is not lazy in its second argument. Because of this, Haskell programmers are usually pointed to foldl', the eager version of foldl, as the better option. foldl' evaluates its arguments eagerly, meaning that:

  1. It won't support infinite sequences (but neither does foldl!)
  2. It's significantly more efficient than foldl because it can be easily turned into a loop (note that the recursion in foldl is a tail call, and the eager foldl' doesn't have to build a thunk of increasing size due to laziness in the first argument).

There is also an eager version of the right fold - foldr', which can be more efficient than foldr in some cases; it's not in Prelude but can be imported from Data.Foldable [4].

Folding vs. reducing

Our earlier discussion of folds may have reminded you of Python's reduce function (a built-in in Python 2, moved to functools in Python 3), which seems to be doing something similar. In fact, reduce implements the left fold where the first element in the sequence is used as the zero value. One nice property of reduce is that it doesn't require an explicit zero value (though it does support it via an optional parameter - this can be useful when the sequence is empty, for example).
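For example, with functools.reduce both calls below express a left fold; the second one lets the first element play the role of the zero value:

from functools import reduce

# Explicit initializer - equivalent to foldl:
print(reduce(lambda acc, x: acc * 10 + x, [3, 4, 5, 6], 0))  # 3456

# No initializer - the first element seeds the fold, so a ratio works naturally:
print(reduce(lambda acc, x: acc / x, [3, 2, 2]))  # 0.75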

Haskell has its own variations of folds that implement reduce - they have the digit 1 as suffix: foldl1 is the more direct equivalent of Python's reduce - it doesn't need an initializer and folds the sequence from the left. foldr1 is similar, but folds from the right. Both have eager variants: foldl1' and foldr1'.

I promised to revisit calculating the ratio of a sequence; here's a way, in Haskell:

myratioWithFoldl = foldl1 (/)

The problem with using a regular foldl is that there's no natural identity value to use on the leftmost side of a ratio (on the rightmost side 1 works, but the associativity is wrong). This is not an issue for foldl1, which starts the recursion with the first item in the sequence, rather than an explicit initial value.

*Main> myratioWithFoldl [3,2,2]
0.75

Note that foldl1 will throw an exception if the given sequence is empty, since it needs at least one item in there.

Folding arbitrary data structures

The built-in folds in Haskell are defined on lists. However, lists are not the only data structure we should be able to fold. Why can't we fold maps (say, summing up all the keys), or even custom data structures? What is the minimum amount of abstraction we can extract to enable folding?

Let's start by defining a simple binary tree data structure:

data Tree a = Empty
            | Leaf a
            | Node a (Tree a) (Tree a)
            deriving Show

-- A sample tree with a few nodes
t1 = Node 10 (Node 20 (Leaf 4) (Leaf 6)) (Leaf 7)

Suppose we want to fold the tree with (+), summing up all the values contained within it. How do we go about it? foldr or foldl won't cut it here - they expect [a], not Tree a. We could try to write our own foldr:

foldTree :: (b -> a -> a) -> a -> Tree b -> a
foldTree _ z Empty = z
foldTree f z (Leaf x) = ??
foldTree f z (Node x left right) = ??

There's a problem, however. Since we want to support an arbitrary folding result, we're not quite sure what to substitute for the ??s in the code above. In foldr, the folding function takes the accumulator and the next value in the sequence, but for trees it's not so simple. We may encounter a single leaf, and we may encounter several values to summarize; for the latter we have to invoke f on x as well as on the result of folding left and right. So it's not clear what the type of f should be - (b -> a -> a) doesn't appear to work [5].

A useful Haskell abstraction that can help us solve this problem is Monoid. A monoid is any data type that has an identity element (called mempty) and an associative binary operation called mappend. Monoids are, therefore, amenable to "summarization".

foldTree :: Monoid a => (b -> a) -> Tree b -> a
foldTree _ Empty = mempty
foldTree f (Leaf x) = f x
foldTree f (Node x left right) = (foldTree f left) <> f x <> (foldTree f right)

We no longer need to pass in an explicit zero element: since a is a Monoid, we have its mempty. Also, we can now apply a single (b -> a) function onto every element in the tree, and combine the results together into a summary using a's mappend (<> is the infix synonym of mappend).

The challenge of using foldTree is that we now actually need to use a unary function that returns a Monoid. Luckily, Haskell has some useful built-in monoids. For example, Data.Monoid.Sum wraps numbers into monoids under addition. We can find the sum of all elements in our tree t1 using foldTree and Sum:

> foldTree Sum t1
Sum {getSum = 47}

Similarly, Data.Monoid.Product wraps numbers into monoids under multiplication:

> foldTree Product t1
Product {getProduct = 33600}

Haskell provides a built-in typeclass named Data.Foldable that only requires us to implement a similar mapping function, and then takes care of defining many folding methods. Here's the instance for our tree:

instance Foldable Tree where
  foldMap f Empty = mempty
  foldMap f (Leaf x) = f x
  foldMap f (Node x left right) = foldMap f left <> f x <> foldMap f right

And we'll automatically have foldr, foldl and other folding methods available on Tree objects:

> Data.Foldable.foldr (+) 0 t1
47

Note that we can pass a regular binary (+) here; Data.Foldable employs a bit of magic to turn this into a proper monoidal operation. We get many more useful methods on trees just from implementing foldMap:

> Data.Foldable.toList t1
[4,20,6,10,7]
> Data.Foldable.elem 6 t1
True

It's possible that for some special data structure these methods can be implemented more efficiently than by inference from foldMap, but nothing is stopping us from redefining specific methods in our Foldable instance. It's pretty cool, however, to see just how much functionality can be derived from having a single mapping method (and the Monoid guarantees) defined. See the documentation of Data.Foldable for more details.

[1] Note that I'm using Python 3 for all the code in this article; hence, Python 3's division semantics apply.
[2] Division has a problem with not having a natural "zero" element; therefore, it's more suitable for foldl1 and reduce, which are described later on.
[3] I'm prefixing most functions here with my since they have Haskell standard library builtin equivalents; while it's possible to avoid the name clashes with some import tricks, custom names are the least-effort approach, also for copy-pasting these code snippets into a REPL.
[4] I realize this is a very rudimentary explanation of Haskell laziness, but going deeper is really out of scope of this article. There are plenty of resources online to read about lazy vs. eager evaluation, if you're interested.
[5] We could try to apply f between the leaf value and z, but it's not clear in what order this should be done (what if f is sensitive to order?). Similarly for a Node, since there are no guarantees on the associativity of f, it's hard to predict what is the right way of applying it multiple times.
Categories: FLOSS Project Planets

Catalin George Festila: The DreamPie - interactive shell .

Planet Python - Wed, 2017-08-16 08:47
DreamPie was designed to bring you a great interactive Python shell experience.
There are two ways to install DreamPie:
  • cloning the git repository;
  • downloading a release.
You can read about installation and download here.
To run it, just launch dreampie.exe with your Python interpreter; I used it with my Python 2.7 version:
C:\DreamPie>dreampie.exe --hide-console-window c:\Python27\python.exe

Let's see one screenshot of this running command:

Also, I tested it with Python 3.6.2 and it works well.
The main window is divided into the history box and the code box.
The history box lets you view previous commands and their output.
The code box is where you write your code.
Some keys I used:

  • Ctrl+Enter - run the code;
  • Ctrl+Up / Down arrow - adds the previous / next source code;
  • Ctrl+Space - show code completions;
  • Ctrl+T - open a new code tab;
  • Ctrl+W - close the code tab;
  • Ctrl+S - save your work history into an HTML file.

You can set your font, colors and many other features.
I made the installation into the C:\DreamPie folder, and it comes with all these folders and files:
C:\DreamPie>tree
Folder PATH listing for volume free-tutorials
Volume serial number is 000000FF 0EB1:091D
C:.
├───data
│ ├───language-specs
│ ├───subp-py2
│ │ └───dreampielib
│ │ ├───common
│ │ └───subprocess
│ └───subp-py3
│ └───dreampielib
│ ├───common
│ └───subprocess
├───gtk-2.0
│ ├───cairo
│ ├───gio
│ ├───glib
│ ├───gobject
│ ├───gtk
│ └───runtime
│ ├───bin
│ ├───etc
│ │ ├───bash_completion.d
│ │ ├───fonts
│ │ ├───gtk-2.0
│ │ └───pango
│ ├───lib
│ │ ├───gdk-pixbuf-2.0
│ │ │ └───2.10.0
│ │ │ └───loaders
│ │ ├───glib-2.0
│ │ │ └───include
│ │ └───gtk-2.0
│ │ ├───2.10.0
│ │ │ └───engines
│ │ ├───include
│ │ └───modules
│ └───share
│ ├───aclocal
│ ├───dtds
│ ├───glib-2.0
│ │ ├───gdb
│ │ ├───gettext
│ │ │ └───po
│ │ └───schemas
│ ├───gtk-2.0
│ ├───gtksourceview-2.0
│ │ ├───language-specs
│ │ └───styles
│ ├───icon-naming-utils
│ ├───themes
│ │ ├───Default
│ │ │ └───gtk-2.0-key
│ │ ├───Emacs
│ │ │ └───gtk-2.0-key
│ │ ├───MS-Windows
│ │ │ └───gtk-2.0
│ │ └───Raleigh
│ │ └───gtk-2.0
│ └───xml
│ └───libglade
└───share
├───applications
├───man
│ └───man1
└───pixmaps
Categories: FLOSS Project Planets

Mediacurrent: Integrating Amazon Alexa With a Drupal 8 Site

Planet Drupal - Wed, 2017-08-16 08:31

If you’ve ever used Alexa, it may seem like it must be extremely complicated to get her to respond like she does. However, if you have your content inside Drupal, it’s not terribly difficult to get her to utilize that data for your own custom Alexa skill. Let’s take a look at how to accomplish that.
 

Categories: FLOSS Project Planets

PyCharm: Analyzing Data in Amazon Redshift with Pandas

Planet Python - Wed, 2017-08-16 07:45

Redshift is Amazon Web Services’ data warehousing solution. They’ve extended PostgreSQL to better suit large datasets used for analysis. When you hear about this kind of technology as a Python developer, it just makes sense to then unleash Pandas on it. So let’s have a look to see how we can analyze data in Redshift using a Pandas script!

Setting up Redshift

If you haven’t used Redshift before, you should be able to get the cluster up for free for 2 months, as long as you make sure that you don’t use more than 1 instance, and you use the smallest available instance.

To play around, let’s use Amazon’s example dataset, and to keep things very simple, let’s only load the ‘users’ table. Configuring AWS is a complex subject, and they’re a lot better at explaining how to do it than we are, so please complete the first four steps of the AWS tutorial for setting up an example Redshift environment. We’ll use PyCharm Professional Edition as the SQL client.

Connecting to Redshift

After spinning up Redshift, you can connect PyCharm Professional to it by heading over to the database tool window (View | Tool Windows | Database), then use the green ‘+’ button, and select Redshift as the data source type. Then fill in the information for your instance:

Make sure that when you click the ‘test connection’ button you get a ‘connection successful’ notification. If you don’t, make sure that you’ve correctly configured your Redshift cluster’s VPC to allow connections from 0.0.0.0/0 on port 5439.

Now that we’ve connected PyCharm to the Redshift cluster, we can create the tables for Amazon’s example data. Copy the first code listing from here, and paste it into the SQL console that was opened in PyCharm when you connected to the database. Then execute it by pressing Ctrl + Enter, when PyCharm asks which query to execute, make sure to select the full listing. Afterward, you should see all the tables in the database tool window:

To load the sample data, go back to the query window, and use Redshift’s COPY command to load data from an Amazon S3 bucket into the database:

The IAM role identifier should be the identifier for the IAM role you’ve created for your Redshift cluster in the second step in the Amazon tutorial. If everything goes right, you should have about 50,000 rows of data in your users table after the command completes.

Loading Redshift Data into a Pandas Dataframe

So let’s get started with the Python code! In our example we’ll use Pandas, Matplotlib, and Seaborn. The easiest way to get all of these installed is by using Anaconda, get the Python 3 version from their website. After installing, we need to choose Anaconda as our project interpreter:

If you can’t find Anaconda in the dropdown, you can click the settings “gear” button, and then select ‘Add Local’ and find your Anaconda installation on your disk. We’re using the root Anaconda environment without Conda, as we will depend on several scientific libraries which are complicated to correctly install in Conda environments.

Pandas relies on SQLAlchemy to load data from an SQL data source. So let’s use the PyCharm package manager to install sqlalchemy: use the green ‘+’ button next to the package list and find the package. To make SQLAlchemy work well with Redshift, we’ll need to install both the postgres driver, and the Redshift additions. For postgres, you can use the PyCharm package manager to install psycopg2. Then we need to install sqlalchemy-redshift to teach SQLAlchemy the specifics of working with a Redshift cluster. This package is unfortunately not available in the default Anaconda repository, so we’ll need to add a custom repository.

To add a custom repository click the ‘Manage Repositories’ button, and then use the green ‘+’ icon to add the ‘conda-forge’ channel. Afterwards, close the ‘Manage Repositories’ screen, and install sqlalchemy-redshift. Now that we’ve done that, we can start coding!

To show how it’s done, let’s analyze something simple in Amazon’s dataset, the users dataset holds fictional users, and then indicates for every user if they like certain types of entertainment. Let’s see if there’s any correlation between the types of entertainment.

As always, the full code is available on GitHub.

Let’s open a new Python file, and start our analysis. At first, we need to load our data. Redshift is accessed just like a regular PostgreSQL database, just with a slightly different connection string to use the redshift driver:

connstr = 'redshift+psycopg2://<username>:<password>@<your cluster>.redshift.amazonaws.com:5439/<database name>'

Also note that Redshift by default listens on port 5439, rather than Postgres’ 5432.

After we’ve connected we can use Pandas’ standard way to load data from an SQL database:

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(connstr)

with engine.connect() as conn, conn.begin():
    df = pd.read_sql("""
        select likesports as sports,
               liketheatre as theater,
               likeconcerts as concerts,
               likejazz as jazz,
               likeclassical as classical,
               likeopera as opera,
               likerock as rock,
               likevegas as vegas,
               likebroadway as broadway,
               likemusicals as musicals
        from users;""", conn)

The dataset holds users’ preferences as False, None, or True. Let’s interpret this as True being a ‘like’, None being ambivalent, and False being a dislike. To make a correlation possible, we should convert this into numeric values:

# Map dataframe to have 1 for 'True', 0 for null, and -1 for False
def bool_to_numeric(x):
    if x:
        return 1
    elif x is None:
        return 0
    else:
        return -1

df = df.applymap(bool_to_numeric)

And now we’re ready to calculate the correlation matrix, and present it. To present it we’ll use Seaborn’s heatmap. We’ll also create a mask to only show the bottom half of the correlation matrix (the top half mirrors the bottom).

import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

# Calculate correlations
corr = df.corr()

mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True

sns.heatmap(corr, mask=mask,
            xticklabels=corr.columns.values,
            yticklabels=corr.columns.values)
plt.xticks(rotation=45)
plt.yticks(rotation=45)
plt.tight_layout()
plt.show()

After running this code, we can see that there are no correlations in the dataset:

Which is strong evidence for Amazon’s sample dataset being a sample dataset. QED.

Fortunately, PyCharm also works great with real datasets in Redshift. Let us know in the comments what data you’re interested in analyzing!

Categories: FLOSS Project Planets

Amazee Labs: Extending GraphQL: Part 1 - Fields

Planet Drupal - Wed, 2017-08-16 06:36
Extending GraphQL: Part 1 - Fields

The last blog post might have left you wondering: "Plugins? It already does everything!". Or maybe you are like one of the busy contributors, have already identified a missing feature, and can't wait to take the matter into your own hands (good choice).

In this and the following posts we will walk you through the extension capabilities of the GraphQL Core module and use some simple examples to show you how to solve common use cases.

Philipp Melab Wed, 08/16/2017 - 12:36

I will assume that you are already familiar with developing Drupal modules and have some basic knowledge of the Plugin API and Plugin Annotations.

The first thing you will want to do is disable the GraphQL schema and result caches. Add these parameters to your development.services.yml:

parameters:
  graphql.config:
    result_cache: false
    schema_cache: false

This will make sure you don't have to clear caches with every change.

As a starting point, we create an empty module called graphql_example. In the GitHub repository for this tutorial, you will find the end result as well as commits for every major step.

Diff: The module boilerplate

A simple page title field

Can't be too hard, right? We just want to be able to ask the GraphQL API what our page title is.
To do that we create a new class PageTitle in the appropriate plugin namespace Drupal\graphql_example\Plugin\GraphQL\Fields.

Let's talk this through. We've created a new derivation of FieldPluginBase, the abstract base class provided by the graphql_core module.

It already does the heavy lifting for integrating our field into the schema. It does this based on the meta information we put into the annotation:

  • id: A unique id for this plugin.
  • type: The return type GraphQL will expect.
  • name: The name we will use to invoke the field.
  • nullable: Defines if the field can return null values or not.
  • multi: Defines if the field will return a list of values.

Now, all we need to do is implement resolveValues to actually return a field value. Note that this method expects you to use the yield keyword instead of return and therefore return a generator.

Fields also can return multiple values, but the framework already handles this within GraphQL type definitions. So all we do is yield as many values as we want. For single value fields, the first one will be chosen.

So we run the first GraphQL query against our custom field.

query { pageTitle }

And the result is disappointing.

{ "data": { "pageTitle": null } }

Diff: The naive approach

The page title is always null because we extract the page title of the current page, which is the GraphQL API callback and has no title. We then need a way to tell it which page we are talking about.

Adding a path argument

Lucky us, GraphQL fields also can accept arguments. We can use them to pass the path of a page and get the title for real. To do that, we add a new annotation property called arguments. This is a map of argument names to the argument type. In our case, we added one argument with name path that expects a String value.

Any arguments will be passed into our resolveValues method with the $args parameter. So we can use the value there to ask the Drupal route matcher to resolve the route and create the proper title for this path.

Let's try again.

query { pageTitle(path: "/admin") }

Way better:

{ "data": { "pageTitle": "Administration" } }

Congratulations, MVP satisfied - you can go home now!

Diff: Using arguments

If there wasn't this itch every developer has when the engineering senses start to tingle. Last time we stumbled on this ominous route field that also takes a path argument. And this ...

query { pageTitle(path: "/node/1") route(path: "/node/1") { ... } }

... smells like a low hanging fruit. There has to be a way to make the two of them work together.

Attaching fields to types

Every GraphQL field can be attached to one or more types by adding the types property to its annotation. In fact, if the property is omitted, it will default to the Root type which is the root query type and the reason our field appeared there in the first place.

We learned that the route field returns a value of type Url. So we remove the argument definition and add a types property instead.

This means the $args parameter won't receive the path value anymore. Instead, the $value parameter will be populated with the result of the route field. And this is a Drupal Url object that we already can be sure is routed since route won't return it otherwise. With this in mind, we can make the solution even simpler.

Now we have to adapt our query since our field is nested within another.

query { route(path: "/admin") { pageTitle } }

Which also will return a nested result.

{ "data": { "route": { "pageTitle": "Administration" } } }

The price of a more complex nested result might seem high for not having to pass the same argument twice. But there's more to what we just did. By attaching the pageTitle field to the Url type, we added it wherever the type appears. Apart from the route field this also includes link fields, menu items or breadcrumbs. And potentially every future field that will return objects of type Url.
We just turned our simple example into the Swiss Army Knife (pun intended) of page title querying.

Diff: Contextual fields

I know what you are thinking. Even an achievement of this epic scale is worthless without test coverage. And you are right. Let's add some.

Adding tests

Fortunately, the GraphQL module already comes with an easy-to-use test base class that helps us safeguard our achievement in no time.

First, create a tests directory in the module folder. Inside it, add a directory called queries containing one file - page_title.gql - with our test query. Many editors already support GraphQL files with syntax highlighting and autocompletion, which is why we moved the query payload into a separate file.

The test itself just has to extend GraphQLFileTestBase, add the graphql_example module to the list of modules to enable, and execute the query file.
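As a rough sketch - the base class namespace and the executeQueryFile() helper are recalled from the module at the time of writing and may differ from the linked diff - the test could look something like this, with page_title.gql containing the nested query from above:

<?php

namespace Drupal\graphql_example\Tests;

use Drupal\graphql\Tests\GraphQLFileTestBase;

/**
 * Tests the pageTitle field.
 */
class PageTitleQueryTest extends GraphQLFileTestBase {

  /**
   * {@inheritdoc}
   */
  public static $modules = ['graphql_example'];

  /**
   * Executes queries/page_title.gql and checks the result.
   */
  public function testPageTitle() {
    $result = $this->executeQueryFile('page_title.gql');
    $this->assertEquals('Administration', $result['data']['route']['pageTitle']);
  }

}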

Diff: Adding a test

Wrap-Up

We just created a simple field, passed arguments to it, learned how to attach it to an existing type, and finally verified our work by adding a test case. Not bad for a day's work. Next time we will have a look at Types and Interfaces, and how to use them to create fields with complex results.

Categories: FLOSS Project Planets

ADCI Solutions: New employee adaptation

Planet Drupal - Wed, 2017-08-16 06:15

What makes a web development team strong? People, of course: they are the most important ingredient of success. So how do you shape a great team member out of a newbie?

Here is a short note on how the onboarding process is organized in our Drupal team. We will guide you through all the stages, from the interview to team-building events.

Check whether you have included all the essential adaptation steps in your workflow.

Categories: FLOSS Project Planets

Mukul Gandhi: Great write up on XML Schema 1.1

Planet Apache - Wed, 2017-08-16 05:06
On this page, http://www.xfront.com/xml-schema-1-1/, Roger L. Costello has posted a wonderful write-up on XML Schema 1.1 technology. Enthusiasts are encouraged to read it.

Roger's language is very simple, and the material covers almost everything an XML Schema 1.1 user needs.
Categories: FLOSS Project Planets

Janez Urevc: Call for help with Media source plugin porting

Planet Drupal - Wed, 2017-08-16 04:42
As you may already know, the Media entity module entered Drupal 8.4 as the Media module earlier this year. This was the result of years of hard work in contrib and core space. While the module stayed conceptually the same, we used this opportunity to clean it up and refactor some things, mostly to make the APIs even easier to understand and use.

Media entity comes with the concept of so-called source plugins (called type plugins in the past). They are responsible for everything related to a specific media type: they know about its nature, about the way it should be stored and displayed, they are aware of any business logic related to it, and so on.

There were many plugins available before Drupal core decided to adopt the module, and they mostly lived as separate modules in contrib space. Since the API changed a bit during the core transition, all these plugins need to be updated. The process is pretty straightforward, but the number of modules that need to be worked on is quite high. This means we'll need quite some help from the community to do this as quickly and effectively as possible.

Here is where you come in!

Are you interested in contributing but don't know how? Are you looking for a task that is relatively simple but not completely trivial? Then the porting of media source plugins might be a really good entry point for you!

There is a meta issue that keeps an overview of the porting process. You will find the list of modules and their current status in it. To get familiar with the changes introduced during the core transition, you should check the relevant change record. All the information needed for ports should be available there. If you'd rather work from examples, take a look at Media entity image and Media entity document, which were adopted into core as the Image and File source plugins respectively.
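To give a rough idea of the target API, here is a minimal, hypothetical source plugin written against the core Media API. The module name, plugin id and the "title" metadata attribute are made up for illustration; a real port has to translate its existing logic into this shape, and the change record remains the authoritative reference.

<?php

namespace Drupal\my_media_provider\Plugin\media\Source;

use Drupal\media\MediaInterface;
use Drupal\media\MediaSourceBase;

/**
 * Hypothetical media source for an external video provider.
 *
 * @MediaSource(
 *   id = "my_video",
 *   label = @Translation("My video"),
 *   description = @Translation("Example source plugin for remote videos."),
 *   allowed_field_types = {"string"}
 * )
 */
class MyVideo extends MediaSourceBase {

  /**
   * {@inheritdoc}
   */
  public function getMetadataAttributes() {
    return [
      'title' => $this->t('Title'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  public function getMetadata(MediaInterface $media, $attribute_name) {
    switch ($attribute_name) {
      case 'title':
        // A real plugin would typically call the provider's API here.
        return $media->get($this->configuration['source_field'])->value;

      default:
        return parent::getMetadata($media, $attribute_name);
    }
  }

}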

When you have decided which module deserves your attention, check its issue queue. If there is already an issue about the port, get involved there. If there is not, create one to let others know that you are working on it. In either case, make sure to reference it in the meta overview issue. This will help us keep a general overview of the process.

Need help?

Have you checked all the resources mentioned above and still feel that some things are not entirely clear? Come to the #drupal-media channel on IRC. We are hanging out there most of the time. Our weekly meetings happen in the same channel every Wednesday at 14:00 UTC.

Enjoyed this post? There is more!

  • Results of the Drupal 8 media sprint
  • Call for Drupal 8 media ecosystem co-maintainers
  • Presentations about various Drupal 8 media modules
Categories: FLOSS Project Planets

Thank you all!

Planet KDE - Wed, 2017-08-16 03:32

When we went public with our troubles with the Dutch tax office two weeks ago, the response was overwhelming. The little progress bar on krita.org is still counting, and we're currently at 37,085 euros from 857 donors. That excludes the people who sent money directly to the bank account; it does include Private Internet Access' sponsorship. Thanks to all of you! So many people have supported us that we cannot even manage to send out enough postcards.

So, even though we’re going to get another accountant’s bill of about 4500 euros, we’ve still got quite a surplus! As of this moment, we have €29,657.44 in our savings account!

That means that we don’t need to do a fund raiser in September. Like we said, we’ve still got some features to finish. Dmitry and I are currently working on

  • Make Krita save and autosave in the background (done)
  • Improved animation rendering speed (done)
  • Improve Krita’s brush engine multi-core adaptability (under way)
  • Improve the general concurrency in Krita (under way)
  • Add touch functionality back (under way)
  • Implement the new text tool (under way)
  • Lazy brush: plug in a faster algorithm
  • Stacked brushes: was done, but needs to be redone
  • Replace the reference images docker with a reference images tool (under way)
  • Add patterns and filters to the vector support

All of that should be done before the end of the year. After that, we want to spend 2018 working on stability, polish and performance. So much will have changed that the step from 3.0 to 4.0 is bigger than the step from 2.9 to 3.0, even though that one included the port to a new version of Qt! We will be doing new fundraisers in 2018, but we're still discussing what the best approach would be. Kickstarters with stretch goals are very much feature-oriented, and we've all decided that it's time to improve what we have instead of adding still more features, at least for a while…

In the meantime, we’re working on the 3.2 release. We wanted to have it released yesterday, but we found a regression, which Dmitry is working hard on fixing right now. So it’ll probably be tomorrow.

Categories: FLOSS Project Planets

Bryan Pendleton: Spider Woman's Daughter: a very short review

Planet Apache - Tue, 2017-08-15 22:14

Over more than three decades, Tony Hillerman wrote a series of absolutely wonderful detective novels set on the Navajo Indian Reservation and featuring detectives Lieutenant Joe Leaphorn and Sergeant Jim Chee.

Recently, I learned that, after Hillerman's death, his daughter, Anne Hillerman, has begun publishing her own novels featuring Leaphorn, Chee, and the other major characters developed by her father, such as Officer Bernadette Manuelito.

So far, she has published three books, the first of which is Spider Woman's Daughter.

If you loved Tony Hillerman's books, I think you will find Anne Hillerman's books lovely as well. Not only is she a fine writer, but she also brings an obvious love of her father's choices of setting and character, and of the Navajo people and their culture.

I'm looking forward to reading the other books that she has written, and I hope she continues writing many more.

Categories: FLOSS Project Planets

myDropWizard.com: FREE migration to Drupal 8 for 10 nonprofits

Planet Drupal - Tue, 2017-08-15 22:01

Migrating your site to Drupal 8 isn't simple or cheap. Nor is maintaining it or getting support once your new Drupal 8 site is live!

This is a problem that affects all organizations using Drupal, but it's particularly hard on smaller nonprofits.

A couple weeks ago, I wrote a super long article detailing how Drupal 8 has left many small nonprofits behind. It also proposes a possible path for fixing it!

We're building an Open Source platform for nonprofit websites built on Drupal 8 and CiviCRM, available as a SaaS with hosting and support included.

That article was primarily about why - in this article I'd like to talk about the details of how!

There's a lot to discuss, but I'll try to make this article shorter. :-)

Oh, and we're looking for 10 adventurous nonprofits to join the BETA and help build it.

If you join the BETA, we'll migrate your existing site to the new Drupal 8 & CiviCRM platform for FREE!

Read more to learn about all the details we've got worked out so far...

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-08-15

Planet Apache - Tue, 2017-08-15 19:58
  • Allen curve – Wikipedia

    During the late 1970s, [Professor Thomas J.] Allen undertook a project to determine how the distance between engineers’ offices affects the frequency of technical communication between them. The result of that research, now known as the Allen Curve, revealed a strong negative correlation between physical distance and the frequency of communication between work stations. The finding also revealed a critical distance of 50 meters for weekly technical communication. With the fast advancement of the internet and the sharp drop in telecommunication costs, some wonder whether the Allen Curve still holds in today’s corporate environment. In his recently co-authored book, Allen examined this question, and the same still holds true. He says[2] “For example, rather than finding that the probability of telephone communication increases with distance, as face-to-face probability decays, our data show a decay in the use of all communication media with distance (following a “near-field” rise).” [p. 58] Apparently, a few years back at Google, some staff mined the promotion data and were able to show an Allen-like curve demonstrating a strong correlation between distance from Jeff Dean’s desk and time to getting promoted.

    (tags: jeff-dean google history allen-curve work communication distance offices workplace teleworking remote-work)

  • Arq Backs Up To B2!

    Arq backup for OSX now supports B2 (as well as S3) as a storage backend. “it’s a super-cheap option ($.005/GB per month) for storing your backups.” (that is less than half the price of $0.0125/GB for S3’s Infrequent Access class)

    (tags: s3 storage b2 backblaze backups arq macosx ops)

  • After Charlottesville, I Asked My Dad About Selma

    Dad told me that he didn’t think I was going to have to go through what he went through, but now he can see that he was wrong. “This fight is a never-ending fight,” he said. “There’s no end to it. I think after the ‘60s, the whole black revolution, Martin Luther King, H. Rap Brown, Stokely Carmichael and all the rest of the people, after that happened, people went to sleep,” he said. “They thought, ‘this is over.’”

    (tags: selma charlottesville racism nazis america race history civil-rights 1960s)

Categories: FLOSS Project Planets