FLOSS Project Planets

Colorfield: Payment and Mollie on Drupal 8

Planet Drupal - Tue, 2017-08-15 04:22
By christophe, Tue, 15/08/2017 - 10:22. Mollie provides a facade over several payment methods (credit card, debit card, PayPal, SEPA, Bitcoin, ...) with support for various languages and frameworks. In some cases, you may decide to use the Payment module instead of the full Commerce distribution. This tutorial describes how to create a product as a node and process payments with Mollie, using configuration only. A possible use case is an existing Drupal 8 site that just needs to enable a few products (like a membership, ...).
Categories: FLOSS Project Planets

Talk Python to Me: #125 Django REST framework and a new API star is born

Planet Python - Tue, 2017-08-15 04:00
APIs were once the new and enabling thing in technology. Today they are table stakes, and getting them right is important. Today we'll talk about one of the most popular and mature API frameworks: Django REST Framework. You'll meet the creator, Tom Christie, and talk about the framework, API design, and even his successful take on funding open source projects.

But Tom is not done there. He's also creating the next-generation API framework that fully embraces Python 3's features, called API Star.

Links from the show:

  • Django REST framework: django-rest-framework.org (http://www.django-rest-framework.org/)
  • API Star: github.com/tomchristie/apistar (https://github.com/tomchristie/apistar)
  • Tom on Twitter: @_tomchristie (https://twitter.com/_tomchristie)
Categories: FLOSS Project Planets

Sixth Blog Gsoc 2017

Planet KDE - Tue, 2017-08-15 03:00

Hi, this post is general information about telemetry in Krita. I want to clarify some points. Soon we will launch preliminary testing of my branch. If testing is successful, it will go into one of the upcoming releases of Krita (not 3.2). Krita must follow the policy of...

Categories: FLOSS Project Planets

Note names

Planet KDE - Tue, 2017-08-15 03:00

I mentioned in my previous blog that I started with the note names activity. This will be a musical blog covering the different components that we have and some music knowledge :)

I have been fond of music, from playing the piano to the guitar, which is one reason I am working on background music and making the musical activities part of my GSoC. Music is generally represented on a staff. So what is a staff? The staff consists of 5 horizontal lines on which our musical notes lie. Lower pitches are represented lower on the staff and higher pitches are represented higher on the staff.

Repeater {
    model: nbLines
    Rectangle {
        width: staff.width
        height: 5
        border.width: 5
        color: "black"
        x: 0
        y: index * verticalDistanceBetweenLines
    }
}

nbLines = number of horizontal lines = 5

But with a blank staff, can you tell which notes will be played? No, we can't; we use a clef for that. There are two main clefs: the bass clef and the treble clef. More notes can be added to a staff using ledger lines, which extend the staff. We can specify the type of clef with which the notes are represented.

Repeater {
    id: staves
    model: nbStaves
    Staff {
        id: staff
        clef: multipleStaff.clef
        height: (multipleStaff.height - distanceBetweenStaff * (nbStaves - 1)) / nbStaves
        width: multipleStaff.width
        y: index * (height + distanceBetweenStaff)
        lastPartition: index == nbStaves - 1
        firstNoteX: multipleStaff.firstNoteX
    }
}

We can even have multiple staves by specifying nbStaves in the MultipleStaff component. For note names we have nbStaves = 1, with clef = treble for levels <= 10 and clef = bass for levels > 10.

MultipleStaff {
    id: staff
    nbStaves: 1
    clef: bar.level <= 10 ? "treble" : "bass"
    height: background.height / 4
    width: bar.level == 1 || bar.level == 11 ? background.width * 0.8 : background.width / 2
    nbMaxNotesPerStaff: bar.level == 1 || bar.level == 11 ? 8 : 1
    firstNoteX: bar.level == 1 || bar.level == 11 ? width / 5 : width / 2
}

I made various changes and fixes to note names last week, including:

  1. Added highlighting to the options in the levels.
  2. Fixed the keyboard controls, which allow you to navigate between options using the arrow keys and select an answer using the Enter/Return key.
  3. Added an initial version of highlighting the notes on the staff for note names.

In the coming days, I will work on the following things:

  1. Improving the highlighting of notes on the staff.
  2. Adding drag support for the options in the levels.
  3. Cleaning up the code and other minor fixes :)

Did I tell you that I am also working on more animations for oware? Yes, we have more animations coming up for the movement of the seeds when they are captured to the score houses. I completed this animation pretty quickly compared to the time I took when implementing the movement animations. That is probably due to everything I learnt while working on those earlier animations, which made me realise that although they took a lot of time (and put me a lot behind my timeline :D), they were totally worth it. In the end, our aim is to provide the best activities for kids with the best experience they can get, not just workable activities, along with clean and maintainable code that is as easy as possible for new contributors, or anyone, to understand. Well, that's what you learn the most :)

I will share more about the note names activity and the score animation also in my next blog post :)

Categories: FLOSS Project Planets

Agiledrop.com Blog: AGILEDROP: Accepted Business Sessions for DrupalCon Vienna

Planet Drupal - Tue, 2017-08-15 02:27
This year's European DrupalCon will take place in Vienna, Austria. It's still more than a month away, but the sessions have already been selected. We will look at the ones that were accepted in the business track, and we will also explain why. DrupalCon Vienna is one of the biggest Drupal events in the world this year, so some of our team members will be present at the event in the capital city of Austria. But once again, our AGILEDROP team will not just be present at the event. We had a »bigger« role. Namely, our commercial director Iztok Smolic was invited to the Business track… READ MORE
Categories: FLOSS Project Planets

Dirk Eddelbuettel: #9: Compacting your Shared Libraries

Planet Debian - Mon, 2017-08-14 21:49

Welcome to the ninth post in the recognisably rancid R randomness series, or R4 for short. Following on the heels of last week's post, we aim to look into the shared libraries created by R.

We love the R build process. It is robust, cross-platform, reliable and rather predictable. It. Just. Works.

One minor issue, though, which has come up once or twice in the past, is the (in)ability to fully control all compilation options. R will always recall CFLAGS, CXXFLAGS, etc. as used when it was compiled, which often entails the -g flag for debugging that can seriously inflate the size of the generated object code. And once these values are stored in ${RHOME}/etc/Makeconf, we cannot override them on the fly.

But there is always a way. Sometimes even two.

The first is local and can be used via the (personal) ~/.R/Makevars file (about which I will have more to say in another post). But something I have been using quite a bit lately uses the flags for the shared-library linker. Given that we can have different code flavours and compilation choices---between C, Fortran and the different C++ standards---one can end up with a few lines. I currently use the following, which uses -Wl, to pass the -S (or --strip-debug) option to the linker (and also reiterates the desire for a shared library, presumably superfluously):

SHLIB_CXXLDFLAGS = -Wl,-S -shared
SHLIB_CXX11LDFLAGS = -Wl,-S -shared
SHLIB_CXX14LDFLAGS = -Wl,-S -shared
SHLIB_FCLDFLAGS = -Wl,-S -shared
SHLIB_LDFLAGS = -Wl,-S -shared

Let's consider an example: my most recently uploaded package, RProtoBuf. Built under a standard 64-bit Linux setup (Ubuntu 17.04, g++ 6.3) and not using the above, we end up with a library containing 12 megabytes (!!) of object code:

edd@brad:~/git/rprotobuf(feature/fewer_warnings)$ ls -lh src/RProtoBuf.so
-rwxr-xr-x 1 edd edd 12M Aug 14 20:22 src/RProtoBuf.so
edd@brad:~/git/rprotobuf(feature/fewer_warnings)$

However, if we use the flags shown above in .R/Makevars, we end up with much less:

edd@brad:~/git/rprotobuf(feature/fewer_warnings)$ ls -lh src/RProtoBuf.so
-rwxr-xr-x 1 edd edd 626K Aug 14 20:29 src/RProtoBuf.so
edd@brad:~/git/rprotobuf(feature/fewer_warnings)$

So we reduced the size from 12mb to 0.6mb, an 18-fold decrease. And the file tool still shows the file as 'not stripped' as it still contains the symbols. Only debugging information was removed.

What reduction in size can one expect, generally speaking? I have seen substantial reductions for C++ code, particularly when using templated code. More old-fashioned C code will be less affected. It seems a little difficult to tell---but this method is my new build default as I continually find rather substantial reductions in size (as I tend to work mostly with C++-based packages).

The second option only occurred to me this evening, and complements the first, which is after all only applicable locally via the ~/.R/Makevars file. What if we wanted it to affect each installation of a package? The following addition to its src/Makevars should do:

strippedLib: $(SHLIB)
	if test -e "/usr/bin/strip"; then /usr/bin/strip --strip-debug $(SHLIB); fi
.PHONY: strippedLib

We declare a new Makefile target, strippedLib. By making it dependent on $(SHLIB), we ensure the standard target of this Makefile is built. And by declaring the target phony, we ensure it will always be executed. It simply tests for the strip tool and invokes it on the library after it has been built. Needless to say, we get the same reduction in size. This scheme may even pass muster with CRAN, but I have not yet tried.

Lastly, an acknowledgement. Everything in this post has benefited from discussions with my former colleague Dan Dillon, who went as far as setting up tooling in his r-stripper repository. What we have here may be simpler, but it would not have happened without what Dan had put together earlier.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Daniel Bader: Unpacking Nested Data Structures in Python

Planet Python - Mon, 2017-08-14 20:00
Unpacking Nested Data Structures in Python

A tutorial on Python’s advanced data unpacking features: How to unpack data with the “=” operator and for-loops.

Have you ever seen Python’s enumerate function being used like this?

for (i, value) in enumerate(values):
    ...

In Python, you can unpack nested data structures in sophisticated ways, but the syntax might seem complicated: Why does the for statement have two variables in this example, and why are they written inside parentheses?

This article answers those questions and many more. I wrote it in two parts:

  • First, you’ll see how Python’s “=” assignment operator iterates over complex data structures. You’ll learn about the syntax of multiple assignments, recursive variable unpacking, and starred targets.

  • Second, you’ll discover how the for-statement unpacks data using the same rules as the = operator. Again, we’ll go over the syntax rules first and then dive into some hands-on examples.

Ready? Let’s start with a quick primer on the “BNF” syntax notation used in the Python language specification.

BNF Notation – A Primer for Pythonistas

This section is a bit technical, but it will help you understand the examples to come. The Python 2.7 Language Reference defines all the rules for the assignment statement using a modified form of Backus Naur notation.

The Language Reference explains how to read BNF notation. In short:

  • symbol_name ::= starts the definition of a symbol
  • ( ) is used to group symbols
  • * means appearing zero or more times
  • + means appearing one or more times
  • (a|b) means either a or b
  • [ ] means optional
  • "text" means the literal text. For example, "," means a literal comma character.

Here is the complete grammar for the assignment statement in Python 2.7. It looks a little complicated because Python allows many different forms of assignment:

An assignment statement consists of

  • one or more (target_list "=") groups
  • followed by either an expression_list or a yield_expression
assignment_stmt ::= (target_list "=")+ (expression_list | yield_expression)

A target list consists of

  • a target
  • followed by zero or more ("," target) groups
  • followed by an optional trailing comma
target_list ::= target ("," target)* [","]

Finally, a target consists of any of the following

  • a variable name
  • a nested target list enclosed in ( ) or [ ]
  • a class or instance attribute
  • a subscripted list or dictionary
  • a list slice
target ::= identifier | "(" target_list ")" | "[" [target_list] "]" | attributeref | subscription | slicing

As you’ll see, this syntax allows you to take some clever shortcuts in your code. Let’s take a look at them now:

#1 – Unpacking and the “=” Assignment Operator

First, you’ll see how Python’s “=” assignment operator iterates over complex data structures. You’ll learn about the syntax of multiple assignments, recursive variable unpacking, and starred targets.

Multiple Assignments in Python:

Multiple assignment is a shorthand way of assigning the same value to many variables. An assignment statement usually assigns one value to one variable:

x = 0
y = 0
z = 0

But in Python you can combine these three assignments into one expression:

x = y = z = 0
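One subtlety worth a quick sketch: chained assignment binds every target to the very same object, which is harmless for immutable values like integers but can surprise you with mutable ones:

```python
# Multiple assignment binds every name to the SAME object.
x = y = z = 0
print(x, y, z)  # 0 0 0

# With a mutable object, all names share one object:
a = b = []
a.append(1)
print(b)  # [1] -- b sees the change, because a and b are the same list
```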

Recursive Variable Unpacking:

I’m sure you’ve written [ ] and ( ) on the right side of an assignment statement to pack values into a data structure. But did you know that you can literally flip the script by writing [ ] and ( ) on the left side?

Here’s an example:

[target, target, target, ...] =

or

(target, target, target, ...) =

Remember, the grammar rules allow [ ] and ( ) characters as part of a target:

target ::= identifier | "(" target_list ")" | "[" [target_list] "]" | attributeref | subscription | slicing

Packing and unpacking are symmetrical and they can be nested to any level. Nested objects are unpacked recursively by iterating over the nested objects and assigning their values to the nested targets.

Here’s what this looks like in action:

(a, b) = (1, 2)
# a == 1
# b == 2

(a, b) = ([1, 2], [3, 4])
# a == [1, 2]
# b == [3, 4]

(a, [b, c]) = (1, [2, 3])
# a == 1
# b == 2
# c == 3

Unpacking in Python is powerful and works with any iterable object. You can unpack:

  • tuples
  • lists
  • dictionaries
  • strings
  • ranges
  • generators
  • comprehensions
  • file handles.
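As a quick illustration (the variable names are just for demonstration), here is what unpacking a few of these iterable types looks like:

```python
# Strings are iterables of characters:
first, second, third = "abc"
print(first, third)  # a c

# Ranges and generators unpack the same way:
low, mid, high = range(3)
x, y = (n * n for n in (2, 3))
print(low, high, x, y)  # 0 2 4 9

# Iterating over a dictionary yields its keys
# (in insertion order on modern Python):
k1, k2 = {"one": 1, "two": 2}
print(k1, k2)  # one two
```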

Test Your Knowledge: Unpacking

What are the values of a, x, y, and z in the example below?

a = (x, y, z) = 1, 2, 3

Hint: this expression uses both multiple assignment and unpacking.

Starred Targets (Python 3.x Only):

In Python 2.x the number of targets and values must match. This code will produce an error:

x, y, z = 1, 2, 3, 4
# Too many values

Python 3.x introduced starred variables. Python first assigns values to the unstarred targets. After that, it forms a list of any remaining values and assigns it to the starred variable. This code does not produce an error:

x, *y, z = 1, 2, 3, 4
# y == [2, 3]
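The starred target doesn't have to sit in the middle of the list; a short sketch of the other positions:

```python
# The star can appear in any single position in the target list:
first, *rest = [1, 2, 3, 4]
print(first, rest)   # 1 [2, 3, 4]

*init, last = [1, 2, 3, 4]
print(init, last)    # [1, 2, 3] 4

# The starred variable always receives a list, even an empty one:
a, *middle, b = [1, 2]
print(middle)        # []
```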

Test Your Knowledge: Starred Variables

Is there any difference between the variables b and *b in these two statements? If so, what is it?

(a, b, c) = 1, 2, 3
(a, *b, c) = 1, 2, 3

#2 – Unpacking and for-loops

Now that you know all about target list assignment, it’s time to look at unpacking used in conjunction with for-loops.

In this section you’ll see how the for-statement unpacks data using the same rules as the = operator. Again, we’ll go over the syntax rules first and then we’ll look at a few hands-on examples.

Let’s examine the syntax of the for statement in Python:

for_stmt ::= "for" target_list "in" expression_list ":" suite ["else" ":" suite]

Do the symbols target_list and expression_list look familiar? You saw them earlier in the syntax of the assignment statement.

This has massive implications:

Everything you’ve just learned about assignments and nested targets also applies to for loops!

Standard Rules for Assignments:

Let’s take another look at the standard rules for assignments in Python. The Python Language Reference says:

The for statement is used to iterate over the elements of a sequence (such as a string, tuple or list) or other iterable objects … Each item, in turn, is assigned to the target list using the standard rules for assignments.

You already know the standard rules for assignments. You learned them earlier when we talked about the = operator. They are:

  • assignment to a single target
  • assignment to multiple targets
  • assignment to a nested target list
  • assignment to a starred variable (Python 3.x only)

In the introduction, I promised I would explain this code:

for (i, value) in enumerate(values):
    ...

Now you know enough to figure it out yourself:

  • enumerate returns a sequence of (number, item) tuples
  • when Python sees the target list (i, value), it unpacks each (number, item) tuple into that target list
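To make the equivalence concrete, the loop behaves as if each tuple were assigned to the target list with the = operator; a small sketch:

```python
values = ["a", "b", "c"]

# enumerate yields (number, item) tuples...
pairs = list(enumerate(values))
print(pairs)  # [(0, 'a'), (1, 'b'), (2, 'c')]

# ...and the for statement unpacks each one using the
# standard rules for assignments, exactly as if we wrote:
for pair in enumerate(values):
    (i, value) = pair  # same as: for (i, value) in enumerate(values)
    print(i, value)
```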


I’ll finish by showing you a few more examples that use Python’s unpacking features with for-loops. Here’s some test data we’ll use in this section:

# Test data:
negative_numbers = (-1, -2, -3, -4, -5)
positive_numbers = (1, 2, 3, 4, 5)

The built-in zip function returns pairs of numbers:

>>> list(zip(negative_numbers, positive_numbers))
[(-1, 1), (-2, 2), (-3, 3), (-4, 4), (-5, 5)]

I can loop over the pairs:

for z in zip(negative_numbers, positive_numbers):
    print(z)

Which produces this output:

(-1, 1)
(-2, 2)
(-3, 3)
(-4, 4)
(-5, 5)

I can also unpack the pairs if I wish:

>>> for (neg, pos) in zip(negative_numbers, positive_numbers):
...     print(neg, pos)
-1 1
-2 2
-3 3
-4 4
-5 5

What about starred variables? This example finds a string’s first and last character. The underscore character is often used in Python when we need a dummy placeholder variable:

>>> animals = [
...     'bird',
...     'fish',
...     'elephant',
... ]
>>> for (first_char, *_, last_char) in animals:
...     print(first_char, last_char)
b d
f h
e t

Unpacking Nested Data Structures – Conclusion

In Python, you can unpack nested data structures in sophisticated ways, but the syntax might seem complicated. I hope that with this tutorial I’ve given you a clearer picture of how it all works. Here’s a quick recap of what we covered:

  • You just saw how Python’s “=” assignment operator iterates over complex data structures. You learned about the syntax of multiple assignments, recursive variable unpacking, and starred targets.

  • You also learned how Python’s for-statement unpacks data using the same rules as the = operator and worked through a number of examples.

It pays off to go back to the basics and to read the language reference closely—you might find some hidden gems there!

Categories: FLOSS Project Planets

Justin Mason: Links for 2017-08-14

Planet Apache - Mon, 2017-08-14 19:58
Categories: FLOSS Project Planets

Reproducible builds folks: Reproducible Builds: Weekly report #119

Planet Debian - Mon, 2017-08-14 19:30

Here's what happened in the Reproducible Builds effort between Sunday July 30 and Saturday August 5 2017:

Media coverage

We were mentioned on Late Night Linux Episode 17, around 29:30.

Packages reviewed and fixed, and bugs filed

Upstream packages:

  • Bernhard M. Wiedemann:
    • efl (merged), unique ids based on memory address
    • 389-ds (merged), SOURCE_DATE_EPOCH support.
    • plowshare, SOURCE_DATE_EPOCH support
    • sphinx, file ordering
    • sphinx, SOURCE_DATE_EPOCH support

Debian packages:

Reviews of unreproducible packages

29 package reviews have been added, 72 have been updated and 151 have been removed in this week, adding to our knowledge about identified issues.

4 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (36)
  • Andreas Beckmann (2)
  • Daniel Schepler (2)
  • Logan Rosen (1)
  • Lucas Nussbaum (93)
diffoscope development

Version 85 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

  • Mattia Rizzolo:
    • Add an explicit Recommends: on the defusedxml python package.
    • Various other code quality tweaks.
  • Juliana Oliveira Rodrigues:
    • Fix test_ico_image for ImageMagick identify >= 6.9.8.
    • Use the defusedxml XML library by default in the XML comparator, if it's available. This protects against various XML parser DoS attacks and other security holes, which other Python XML libraries are vulnerable to.
  • Ximin Luo:
    • Force a flush when writing output to diff. (Closes: #870049).

as well as previous weeks' contributions, summarised in the changelog.

There were also further commits in git, which will be released in a later version:

  • Guangyuan Yang:
    • tests/iso9660: support isoinfo's output coming from cdrtools' version instead of genisoimage's
  • Mattia Rizzolo:
    • Code quality and test fixes.
  • Chris Lamb:
    • Code quality and test fixes.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Categories: FLOSS Project Planets

Acquia Lightning Blog: Round up your front-end JavaScript libraries with Composer

Planet Drupal - Mon, 2017-08-14 18:03
By phenaproxima, Mon, 08/14/2017 - 18:03.

In Lightning 2.1.7, we’re finally answering a long-standing question: if I’m managing my code base with Composer, how can I bring front-end JavaScript libraries into my site?

This has long been a tricky issue. drupal.org doesn’t really provide an official solution -- modules that require JavaScript libraries usually include instructions for downloading and extracting said libraries yourself. Libraries API can help in some cases; distributions are allowed to ship certain libraries. But if you’re building your site with Composer, you’ve been more or less on your own.

Now, the Lightning team has decided to add support for Asset Packagist. This useful repository acts as a bridge between Composer and the popular NPM and Bower repositories, which catalog thousands of useful front-end and JavaScript packages. When you have Asset Packagist enabled in a Composer project, you can install a Bower package like this (using Dropzone as an example):

$ composer require bower-asset/dropzone

And you can install an NPM package just as easily:

$ composer require npm-asset/dropzone

To use Asset Packagist in your project, merge the following into your composer.json:

"repositories": [
    {
        "type": "composer",
        "url": "https://asset-packagist.org"
    }
]

Presto! You can now add Bower and NPM packages to your project as if they were normal PHP packages. Yay! However...

Normally, asset packages will be installed in the vendor directory, like any other Composer package. This probably isn’t what you want to do with a front-end JavaScript library, though -- luckily, there is a special plugin you can use to install the libraries in the right place. Note that you’ll need Composer 1.5 (recently released) or later for this to work; run composer self-update if you're using an older version of Composer.

Now, add the plugin as a dependency:

$ composer require oomphinc/composer-installers-extender

Then merge the following into your composer.json:

"extra": {
    "installer-types": [
        "bower-asset",
        "npm-asset"
    ],
    "installer-paths": {
        "path/to/docroot/libraries/{$name}": [
            "type:bower-asset",
            "type:npm-asset"
        ]
    }
}

Now, when you install a Bower or NPM package, it will be placed in docroot/libraries/NAME_OF_PACKAGE. Boo-yah!

Let's face it -- if you're using Composer to manage your Drupal code base and you want to add some JavaScript libraries, Asset Packagist rocks your socks around the block.

BUT! Note that this -- adding front-end libraries to a browser-based application -- is really the only use case for which Asset Packagist is appropriate. If you're writing a JavaScript app for Node, you should use NPM or Yarn, not Composer! Asset Packagist isn't meant to replace NPM or Bower, and it doesn't necessarily resolve dependencies the same way they do. So use this power wisely and well!

P.S. Lightning 2.1.7 includes a script which can help set up your project's composer.json to use Asset Packagist. To run this script, switch into the Lightning profile directory and run:

$ composer run enable-asset-packagist
Categories: FLOSS Project Planets

Jamie McClelland: Diversity doesn't help the bottom line

Planet Debian - Mon, 2017-08-14 14:39

A Google software engineer's sexist screed against diversity has been making the rounds lately.

Most notable are the offensive and mis-guided statements about gender essentialism, which honestly make the thing hard to read at all.

What seems lost in the hype, however, is that his primary point seems quite accurate. In short: if Google successfully diversified its workforce, racial and gender tensions would increase, not decrease; divisiveness would spread; and, in all likelihood, Google could be damaged.

Imagine what would happen if the thousands of existing, mostly male, white and Asian engineers, the majority of whom are convinced that they play no part in racism and sexism, were confronted with thousands of smart and ambitious women, African Americans and Latinos who were becoming their bosses, telling them to work in different ways, and taking "their" promotions.

It would be a revolution! I'd love to see it. Google's bosses definitely do not.

That's why none of the diversity programs at Google or any other major tech company are having any impact - because they are not designed to have an impact. They are designed to boost morale and make their existing engineers feel good about what they do.

Google has one goal: to make money. And one strategy: to design software that people want to use. One of their most effective tactics is building tight-knit groups of programmers who work well together. If the creation of hostile, racist and sexist environments is a by-product - well, it's not one that affects their bottom line.

Would Google make better software with a more diverse group of engineers? Definitely! For one, if African American engineers were working on their facial recognition software, it's doubtful it would have mistaken people with black faces for gorillas.

However, if the perceived improvement in software outweighed the risks of diversification, then Google would not waste any time on feel-good programs and trainings - they would simply build a jobs pipeline and change their job outreach programs to recruit substantially more female, African American and Latino candidates.

In the end, this risk avoidance and failure to perceive the limitations of homogeneity is the Achilles heel of corporate software design.

Our challenge is to see what we can build outside the confines of a corporate culture that prioritizes profits, production efficiency, and stability. What can we do with teams that are willing to embrace racial and gender tension, risk divisiveness, and see benefits beyond releasing version 1.0?

Categories: FLOSS Project Planets

Go support in KDevelop. GSoC week 11. Code completion and bug fixing.

Planet KDE - Mon, 2017-08-14 14:30

Sidenote: I'm working on Go language support in KDevelop. KDevelop is a cross-platform IDE with awesome plugin support and the possibility to implement support for various build systems and languages. Go is a cross-platform, open-source, statically-typed compiled language that aims to be simple and readable, and mainly targets console apps and network services.

During the last week I continued working on code completion support.
Firstly, I spent time investigating what else could be added to the existing support and realized that Go channels weren't covered really well. "Channels" in the Go world are something like queues or, more exactly, pipes. They provide the ability to communicate between different goroutines (think of them as lightweight threads): you can send a value into a channel and receive it on the other side.
So, my first change was related to matching types while passing values to a channel - it now works correctly and suggests matching types with higher priority.
Aside from differing value types, channels also differ in direction - there are mono-directional and bidirectional channels: in, out, and in/out.
Because of that, my second change was aimed at matching these different kinds of channels. Now, if a function expects, for example, an in channel, both in and in/out channels will have higher priority than out channels.

After doing that, I began to open various Go files and projects to find remaining bugs, and got a segfault while parsing fmt/print.go. :( After some investigation I realized that in the case of a struct variable declaration with a literal (e.g. initializing struct fields inside a {} block) no context was opened, and that led to a crash later. Although it took me some time to find where the real problem was and how to fix it, it's fixed now, and even the 1142-line fmt/print.go opens successfully.

Despite that, I found that in the case of struct literal initialization the names of fields are not highlighted as usages - I am going to fix that during the next week and spend more time testing and fixing the remaining issues.

Looking forward to next week!
Categories: FLOSS Project Planets

Continuum Analytics News: Five Organizations Successfully Fueling Innovation with Data Science

Planet Python - Mon, 2017-08-14 14:12
Company Blog, Tuesday, August 15, 2017. Christine Doig, Sr. Data Scientist, Product Manager.

Data science innovation requires availability, transparency and interoperability. But what does that mean in practice? At Anaconda, it means providing data scientists with open source tools that facilitate collaboration, moving beyond analytics to intelligence. Open source projects are the foundation of modern data science and are popping up across industries, making it more accessible, more interactive and more effective. So, who's leading the open source charge in the data science community? Here are five organizations to keep your eye on:

1. TaxBrain. TaxBrain is a platform that enables policy makers and the public to simulate and study the effects of tax policy reforms using open source economic models. Using the open source platform, anyone can plug elements of the administration’s proposed tax policy to get an idea of how it would perform in the real world.

Why public policy is going #opensource via @teoliphant @MattHJensen in @datanami https://t.co/vKTzYtdvGl #datascience #taxbrain

— Continuum Analytics (@ContinuumIO) August 17, 2016


2. Recursion Pharmaceuticals. Recursion is a pharmaceutical company dedicated to finding the remedies for rare genetic diseases. Its drug discovery assay is built on an open source software platform, combining biological science with machine learning techniques to visualize cell data and test drugs efficiently. This approach shortens research and development process, reducing time to market for remedies to these rare genetic diseases. Their goal is to treat 100 diseases by 2026 using this method.

3. The U.S. Government. Under the previous administration, the U.S. government launched Data.gov, an open data initiative that offers more than 197K datasets for public use. This database exists, in part, thanks to the former U.S. chief data scientist, DJ Patil. He helped drive the government's data science projects forward at the city, state and federal levels. Recently, concerns have been raised over the Data.gov portal, as certain information has started to disappear. Data scientists are keeping a sharp eye on the portal to ensure that these resources are updated and preserved for future innovative projects.

4. Comcast. Telecom and broadcast giant Comcast runs its projects on open source platforms to drive data science innovation in the industry.

For example, earlier this month, Comcast’s advertising branch announced they were creating a Blockchain Insights Platform to make the planning, targeting, execution and measurement of video ads more efficient. This data-driven, secure approach would be a game changer for the advertising industry, which eagerly awaits its launch in 2018.

5. DARPA. The Defense Advanced Research Projects Agency (DARPA) is behind the Memex project, a program dedicated to fighting human trafficking, which is a top mission for the defense department. DARPA estimates that in two years, traffickers spent $250 million posting the temporary advertisements that fuel the human trafficking trade. Using an open source platform, Memex is able to index and cross-reference interactive and social media, text, images and video across the web. This allows them to find the patterns in web data that indicate human trafficking. Memex’s data science approach is already credited with generating at least 20 active cases and nine open indictments.

These are just some of the examples of open source-fueled data science turning industries on their head, bringing important data to the public and generally making the world a better place. What will be the next open source project to put data science in the headlines? Let us know what you think in the comments below!

Categories: FLOSS Project Planets

Elevated Third: E3 Named Finalist in 5 Acquia Engage Award Categories

Planet Drupal - Mon, 2017-08-14 14:04
E3 Named Finalist in 5 Acquia Engage Award Categories root Mon, 08/14/2017 - 12:04

As an Acquia Preferred Partner, we are thrilled to announce our work has ranked amongst the world’s most innovative websites and digital experiences in the 2017 Acquia Engage Awards. Elevated Third received recognition in the Nonprofit, Brand Experience, Financial Services, Digital Experience, and Community categories for the following projects. 

The Acquia Engage Awards recognize the amazing sites and digital experiences that organizations are building with the Acquia Platform. Nominations that demonstrated an advanced level of visual design, functionality, integration and overall experience have advanced to the finalist round, where an outside panel of experts will select the winning projects.

Winners will be announced at Acquia Engage in Boston from October 16-18, of which we are sponsors.  

“Acquia’s partners and customers are setting the benchmark for orchestrating the customer journey and driving the future of digital. Organizations are mastering the art of making every interaction personal and meaningful, and creating engaging, elegant solutions that extend beyond the browser,” said Joe Wykes, senior vice president, global channels, and commerce at Acquia. “We’re laying the foundation to help our partners and customers achieve their greatest ambitions and grow their digital capabilities long into the future. We’re inspired by the nominees and impact of their amazing collective work.”

Check out our competition! The full list of finalists for the 2017 Acquia Engage Awards is posted here.

Categories: FLOSS Project Planets

Antonio Terceiro: Debconf17

Planet Debian - Mon, 2017-08-14 13:27

I’m back from Debconf17.

I gave a talk entitled “Patterns for Testing Debian Packages”, in which I presented a collection of 7 patterns I documented while pushing the Debian Continuous Integration project forward, and which were published in a 2016 paper. Video recording and a copy of the slides are available.

I also hosted the ci/autopkgtest BoF session, in which we discussed issues around the usage of autopkgtest within Debian, the CI system, etc. Video recording is available.

Kudos for the Debconf video team for making the recordings available so quickly!

Categories: FLOSS Project Planets

Holger Levsen: 20170812-reproducible-policy

Planet Debian - Mon, 2017-08-14 12:53
"packages should build reproducibly" - after 4 years this work of many is in debian-policy now

This post was written roughly 44h ago and now that the fix for #844431 has been merged into the git master branch, I'm publishing it - hoping you'll enjoy this as much as I do!

So today is the last (official) day of DebConf17 and it looks like #844431: "packages should build reproducibly" will be merged into debian-policy today! So I'm super excited, super happy, quite tired and a bit sad (DebConf is ending…) right now!

Four years ago Lunar held a BoF at DebConf13 which started the initiative in Debian. I only got involved in September 2014 with setting up continuous tests, rebuilding each package twice with some variations and then comparing the results using diffoscope, which back then was still called debbindiff and which we renamed as part of our efforts to make Reproducible Builds the norm in Free Software.

Many people have worked on this, and I'm also very happy how visible this has been in our talk here yesterday. You people rock and I'm very thankful and proud to be part of this team. Thank you everyone!

And please understand: we are not 94% done yet (which our reproducibility stats might have made you think), rather more like half done or so. We still need tools and processes to enable anyone to independently verify that a given binary comes from the sources it is said to come from; this will involve distributing .buildinfo files and providing user interfaces in APT and elsewhere. And probably also systematic rebuilds by us and other parties. And 6 or 7% of the archive is still a lot of packages, e.g. in Buster we currently still have 273 unreproducible key packages, and for a large part we don't have patches yet. So there is still a lot of work ahead.

This is what was added to debian-policy now:

Reproducibility
---------------

Packages should build reproducibly, which for the purposes of this
document [#]_ means that given

- a version of a source package unpacked at a given path;
- a set of versions of installed build dependencies;
- a set of environment variable values;
- a build architecture; and
- a host architecture,

repeatedly building the source package for the build architecture on
any machine of the host architecture with those versions of the build
dependencies installed and exactly those environment variable values
set will produce bit-for-bit identical binary packages.

It is recommended that packages produce bit-for-bit identical binaries
even if most environment variables and build paths are varied. It is
intended for this stricter standard to replace the above when it is
easier for packages to meet it.

.. [#] This is Debian's precisification of the `reproducible-builds.org definition`_.

For now violating this part of policy may result in a severity: normal bug, though I think we should still only file them if we have patches, else it's probably better to just take a note in our notes.git, like we did before the policy change.

Finally one last comment: we could do reproducible security updates for Stretch now too, for those 94% of the packages which are reproducible. It just needs to be done by someone, and the first step would be publishing those .buildinfo files from those builds…
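To make the "bit-for-bit identical" wording of the policy text concrete, here is a minimal illustrative sketch (not part of any Debian tooling) of the comparison a rebuilder would perform on the artifacts of two independent builds:

```python
import hashlib

# Illustrative sketch only: the "bit-for-bit identical" check at the
# heart of the policy wording, applied to the files produced by two
# independent builds of the same source package.

def sha256_of(path):
    """Hash a file's contents in chunks, so large .debs fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_identical(first_artifacts, second_artifacts):
    """Return True when the two builds produced bit-for-bit equal files."""
    if len(first_artifacts) != len(second_artifacts):
        return False
    return all(sha256_of(a) == sha256_of(b)
               for a, b in zip(first_artifacts, second_artifacts))
```

In practice this comparison would run against the checksums recorded in the published .buildinfo files rather than against local files.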

Categories: FLOSS Project Planets

PyPy Development: Let's remove the Global Interpreter Lock

Planet Python - Mon, 2017-08-14 11:34

Hello everyone

The Python community has been discussing removing the Global Interpreter Lock for a long time. There have been various attempts at removing it: Jython or IronPython successfully removed it with the help of the underlying platform, and some have yet to bear fruit, like gilectomy. Since our February sprint in Leysin, we have experimented with the topic of GIL removal in the PyPy project. We believe that the work done in IronPython or Jython can be reproduced with only a bit more effort in PyPy. Compared to that, removing the GIL in CPython is a much harder topic, since it also requires tackling the problem of multi-threaded reference counting. See the section below for further details.

As we announced at EuroPython, what we have so far is a GIL-less PyPy which can run very simple multi-threaded, nicely parallelized, programs. At the moment, more complicated programs probably segfault. The remaining 90% (and another 90%) of work is with putting locks in strategic places so PyPy does not segfault during concurrent accesses to data structures.
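To give an idea of the shape of program meant here, this is a minimal sketch (illustrative only, not from the PyPy test suite) of a nicely parallelized multi-threaded computation:

```python
import threading

# A CPU-bound summation split across worker threads.  Under a GIL the
# threads take turns on one core; a GIL-less PyPy could run them truly
# in parallel.  The result is the same either way.

def partial_sum_of_squares(start, stop, results, index):
    total = 0
    for i in range(start, stop):
        total += i * i
    results[index] = total  # each thread writes only its own slot

def sum_of_squares(n, workers=4):
    results = [0] * workers
    chunk = n // workers
    threads = []
    for w in range(workers):
        start = w * chunk
        stop = n if w == workers - 1 else (w + 1) * chunk
        t = threading.Thread(target=partial_sum_of_squares,
                             args=(start, stop, results, w))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return sum(results)

print(sum_of_squares(100000))
```

Because the workers share no mutable state beyond their own result slots, this is exactly the "simple" case; the hard remaining work is programs where threads mutate shared dicts, lists and instances concurrently.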

Since such work would complicate the PyPy code base and our day-to-day work, we would like to judge the interest of the community and the commercial partners to make it happen (we are not looking for individual donations at this point). We estimate a total cost of $50k, out of which we already have backing for about 1/3 (with a possible 1/3 extra from the STM money, see below). This would give us a good shot at delivering a good proof-of-concept working PyPy with no GIL. If we can get a $100k contract, we will deliver a fully working PyPy interpreter with no GIL as a release, possibly separate from the default PyPy release.

People asked several questions, so I'll try to answer the technical parts here.

What would the plan entail?

We've already done the work on the Garbage Collector to allow doing multi-threaded programs in RPython. "All" that is left is adding locks on mutable data structures everywhere in the PyPy codebase. Since it would significantly complicate our workflow, we require real interest in that topic, backed up by commercial contracts in order to justify the added maintenance burden.

Why did the STM effort not work out?

STM was a research project that proved that the idea is possible. However, the amount of user effort that is required to make programs run in a parallelizable way is significant, and we never managed to develop tools that would help in doing so. At the moment we're not sure if more work spent on tooling would improve the situation or if the whole idea is really doomed. The approach also ended up adding significant overhead on single threaded programs, so in the end it is very easy to make your programs slower. (We have some money left in the donation pot for STM which we are not using; according to the rules, we could declare the STM attempt failed and channel that money towards the present GIL removal proposal.)

Wouldn't subinterpreters be a better idea?

Python is a very mutable language - there is a ton of mutable state, and basic objects (classes, functions, ...) that are compile-time constructs in other languages are runtime and fully mutable in Python. In the end, sharing things between subinterpreters would be restricted to basic immutable data structures, which defeats the point. Subinterpreters suffer from the same problems as multiprocessing with no additional benefits. We believe that reducing mutability to implement subinterpreters is not viable without seriously impacting the semantics of the language (a conclusion which applies to many other approaches too).
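A minimal illustration of that runtime mutability (toy class names, purely for demonstration):

```python
# Classes that would be compile-time constants in other languages can be
# rebound while the program runs, so they cannot simply be shared
# read-only between subinterpreters.

class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
print(g.greet())  # prints "hello"

# Mutate the class at runtime; every existing instance sees the change.
Greeter.greet = lambda self: "patched"
print(g.greet())  # prints "patched"
```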

Why is it easier to do in PyPy than CPython?

Removing the GIL in CPython has two problems:

  • how do we guard access to mutable data structures with locks and
  • what to do with reference counting that needs to be guarded.

PyPy only has the former problem; the latter doesn't exist, due to a different garbage collector approach. Of course the first problem is a mess too, but at least we are already half-way there. Compared to Jython or IronPython, PyPy lacks some data structures that are provided by JVM or .NET, which we would need to implement, hence the problem is a little harder than on an existing multithreaded platform. However, there is good research and we know how that problem can be solved.

Best regards,
Maciej Fijalkowski

Categories: FLOSS Project Planets

Nextide Blog: Maestro D8 Concepts Part 1: Templates and Tasks

Planet Drupal - Mon, 2017-08-14 11:07
Maestro D8 Concepts Part 1: Templates and Tasks randy Mon, 08/14/2017 - 11:07

Templates and tasks make up the basic building blocks of a Maestro workflow.  Maestro requires a workflow template to be created by an administrator.  When called upon to do so, Maestro will put the template into "production" and will follow the logic in the template until completion.  The definitions of in-production and template are important as they are the defining points for important jargon in Maestro.  Simply put, templates are the workflow patterns that define logic, flow and variables.  Processes are templates that are being executed, which then have process variables and assigned tasks.

Categories: FLOSS Project Planets

Appnovation Technologies: Appnovator Spotlight: Tony Nguyen

Planet Drupal - Mon, 2017-08-14 11:03
Appnovator Spotlight: Tony Nguyen Meet Tony Nguyen, our Manager of Product R&D from Vancouver, BC. 1. Who are you? What's your story? I'm Tony Nguyen and I manage the Product Research & Development team with the osCaddie R&D being our main project. I'm originally from New Zealand and moved to Vancouver just over three years ago and have been with Appnovation for two of...
Categories: FLOSS Project Planets

Tim Retout: Jenkins milestone steps do not work yet

Planet Debian - Mon, 2017-08-14 10:57

Public Service Announcement for anyone relying on Jenkins for continuous deployment - the milestone step plugin as of version 1.3.1 will not function correctly if you could have more than two builds running at once - older builds could get deployed after newer builds.

See JENKINS-46097.

A possible workaround is to add an initial milestone at the start of the pipeline, which will then allow builds to be killed early. (Builds are only killed early once they have passed their first milestone.)
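As a sketch, the workaround looks like this in a declarative pipeline (the stage names and shell steps are placeholders, not taken from the bug report):

```groovy
// Hypothetical Jenkinsfile sketch of the workaround: an initial milestone
// right at the start lets newer builds kill older ones early, before any
// older build can pass the deploy milestone out of order.
pipeline {
    agent any
    stages {
        stage('Start') {
            steps {
                milestone 1   // workaround: first milestone up front
            }
        }
        stage('Build') {
            steps {
                sh 'make build'   // placeholder build step
            }
        }
        stage('Deploy') {
            steps {
                milestone 2
                sh 'make deploy'  // placeholder deploy step
            }
        }
    }
}
```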

Going by the source history, I reckon this bug has been present since the milestone-step plugin was created.

Categories: FLOSS Project Planets