FLOSS Project Planets

DrupalCon News: Session Spotlight: the Business track is for more than just business people

Planet Drupal - Mon, 2015-07-27 14:20

Whether you're counting Business Summit attendees or conference registrants with C-Suite titles, last year DrupalCon Europe saw about 500 attendees who were highly interested in the business side of Drupal. As we saw in the Business Track and the business-related BoFs, there is strong interest at Cons in not only learning the skills to code better, but also in making your business better, and DrupalCon Barcelona will be no different.

Categories: FLOSS Project Planets

David MacIver: Hypothesis 1.9.0 is out

Planet Python - Mon, 2015-07-27 12:55

Codename: The great bundling

This is my favourite type of release: One which means I stop having to look embarrassed in response to common questions.

Here’s the changelog entry:

Codename: The great bundling.

This release contains two fairly major changes.

The first is the deprecation of the hypothesis-extra mechanism. From now on, all the packages that were previously bundled under it, other than hypothesis-pytest (which is a different beast and will remain separate), are part of Hypothesis core. The functionality remains unchanged and you can still import them from exactly the same location; they just are no longer separate packages.

The second is that this introduces a new way of building strategies which lets you build up strategies recursively from other strategies.

It also contains the minor change that calling .example() on a strategy object will give you examples that are more representative of the actual data you’ll get. There used to be some logic in there to make the examples artificially simple but this proved to be a bad idea.

“How do I do recursive data?” has always been a question which I’ve had to look embarrassed about whenever anyone asked me. Because Hypothesis has a, uh, slightly unique attitude to data generation I’ve not been able to use the standard techniques that people use to make this work in Quickcheck, so this was a weak point where Hypothesis was simply worse than the alternatives.

I got away with it because Python is terrible at recursion anyway so people mostly don’t use recursive data. But it was still a bit of an embarrassment.

Part of the problem here is that you could do recursive data well enough using the internal API (not great; it's definitely a bit of a weak point even there), but the internal API is not part of the public API and is decidedly harder to work with than the main public API.

The solution I ended up settling on is to provide a new function that lets you build up recursive data by specifying a base case and an expansion function and then getting what is just a fixed point combinator over strategies. In the traditional Hypothesis manner it uses a bizarre mix of side effects and exceptions internally to expose a lovely clean functional API which doesn’t let you see any of that.
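To illustrate the shape of that API (a base-case generator plus an expansion function, combined into a fixed point), here is a toy version over plain Python generator functions. The names and structure here are my illustration, not Hypothesis's internals:

```python
import random

def recursive(base, extend, max_depth=3):
    """Build a recursive generator from a base-case generator and an
    expansion function, by taking a fixed point over generators."""
    def gen(depth=0):
        # At the depth limit, or by coin flip, fall back to the base case.
        if depth >= max_depth or random.random() < 0.5:
            return base()
        # Otherwise expand, handing the expansion a "child" generator
        # that recurses one level deeper.
        return extend(lambda: gen(depth + 1))
    return gen

# Example: arbitrarily nested lists of small integers.
nested_ints = recursive(
    base=lambda: random.randint(0, 9),
    extend=lambda child: [child() for _ in range(random.randint(0, 3))],
)
print(nested_ints())  # e.g. 7, or [[3, 7], 2]; the shape varies per run
```

The real Hypothesis combinator does much more (simplification, size control), but the user-facing contract is the same: you supply only the base case and the expansion, and never write the recursion yourself.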

The other embarrassing aspect of Hypothesis is how all the extra packages work. There are all these additional packages which have to be upgraded in lockstep with Hypothesis and behave like second class citizens – e.g. there have never been good changelogs and release announcements for them. It’s a common problem that people fail to upgrade one and get confusing error messages when things try to load and don’t work.

This release merges these all into Hypothesis core and leaves installing their dependencies up to you. You can install those dependencies using a setuptools extra, so e.g. installing hypothesis[django] will install Hypothesis + compatible versions of all the dependencies – but there’s no checking of versions etc. when you use them. I may add that checking if it turns out to be a major problem that people try to use these with the wrong version of dependencies, but I also may not. We’ll see.

Categories: FLOSS Project Planets

Drupal Association News: Take the 2015 Drupal Job Market Survey

Planet Drupal - Mon, 2015-07-27 12:19

Last year we conducted a Drupal Job Market survey to better understand the opportunities for those who know Drupal. The survey showed strong demand for Drupal skills and demonstrated why Drupal is a rewarding and potentially lucrative career path. We are conducting another survey this year. 

Take the Survey

This year we are adding questions about compensation to help Drupal talent and hiring organizations benchmark themselves.

You can expect to see the results from the survey published in late August. Thank you for taking the survey!   


Categories: FLOSS Project Planets

Andy Wingo: cps soup

GNU Planet! - Mon, 2015-07-27 10:43

Hello internets! This blog goes out to my long-time readers who have followed my saga hacking on Guile's compiler. For the rest of you, a little history, then the new thing.

In the olden days, Guile had no compiler, just an interpreter written in C. Around 8 years ago now, we ported Guile to compile to bytecode. That bytecode is what is currently deployed as Guile 2.0. For many reasons we wanted to upgrade our compiler and virtual machine for Guile 2.2, and the result of that was a new continuation-passing-style compiler for Guile. Check that link for all the backstory.

So, I was going to finish documenting this intermediate language about 5 months ago, in preparation for making the first Guile 2.2 prereleases. But something about it made me really unhappy. You can catch some foreshadowing of this in my article from last August on common subexpression elimination; I'll just quote a paragraph here:

In essence, the scope tree doesn't necessarily reflect the dominator tree, so not all transformations you might like to make are syntactically valid. In Guile 2.2's CSE pass, we work around the issue by concurrently rewriting the scope tree to reflect the dominator tree. It's something I am seeing more and more and it gives me some pause as to the suitability of CPS as an intermediate language.

This is exactly the same concern that Matthew Fluet and Stephen Weeks had back in 2003:

Thinking of it another way, both CPS and SSA require that variable definitions dominate uses. The difference is that using CPS as an IL requires that all transformations provide a proof of dominance in the form of the nesting, while SSA doesn't. Now, if a CPS transformation doesn't do too much rewriting, then the partial dominance information that it had from the input tree is sufficient for the output tree. Hence tree splicing works fine. However, sometimes it is not sufficient.

As a concrete example, consider common-subexpression elimination. Suppose we have a common subexpression x = e that dominates an expression y = e in a function. In CPS, if y = e happens to be within the scope of x = e, then we are fine and can rewrite it to y = x. If however, y = e is not within the scope of x, then either we have to do massive tree rewriting (essentially making the syntax tree closer to the dominator tree) or skip the optimization. Another way out is to simply use the syntax tree as an approximation to the dominator tree for common-subexpression elimination, but then you miss some optimization opportunities. On the other hand, with SSA, you simply compute the dominator tree, and can always replace y = e with y = x, without having to worry about providing a proof in the output that x dominates y (i.e. without putting y in the scope of x)

[MLton-devel] CPS vs SSA

To be honest I think all this talk about dominators is distracting. Dominators are but a lightweight flow analysis, and I usually find myself using full-on flow analysis to compute the set of optimizations that I can do on a piece of code. In fact the only use I had for dominators in the nested CPS language was to rewrite scope trees! The salient part of Weeks' observation is that nested scope trees are the problem, not that dominators are the solution.

So, after literally years of hemming and hawing about this, I finally decided to remove nested scope trees from Guile's CPS intermediate language. Instead, a function is now a collection of labelled continuations, with one distinguished entry continuation. There is no more $letk term to nest continuations in each other. A program is now represented as a "soup" -- basically a map from labels to continuation bodies, again with a distinguished entry. As an example, consider this expression:

function(x): return add(x, 1)

If we rewrote it in continuation-passing style, we'd give the function a name for its "tail continuation", ktail, and annotate each expression with its continuation:

function(ktail, x): add(x, 1) -> ktail

Here the -> ktail means that the add expression passes its values to the continuation ktail.

With nested CPS, it could look like:

function(ktail, x):
  letk have_one(one): add(x, one) -> ktail
  load_constant(1) -> have_one

Here the label have_one is in a scope, as is the value one. With "CPS soup", though, it looks more like this:

function(ktail, x):
  label have_one(one): add(x, one) -> ktail
  label main(x): load_constant(1) -> have_one

It's a subtle change, but it took a few months to make so it's worth pointing out what's going on. The difference is that there is no scope tree for labels or variables any more. A variable can be used at a label if it flows to the label, in a flow analysis sense. Indeed, determining the set of variables that can be used at a label requires flow analysis; that's what Weeks was getting at in his 2003 mail about the advantages of SSA, which are really the advantages of an intermediate language without nested scope trees.
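To make the "soup" idea concrete, here is a toy Python encoding of that last example: a function is just a flat map from labels to continuations, with a distinguished entry, and control moves by looking up labels rather than by nesting. This is my illustration, not Guile's actual representation:

```python
# A function as a "soup": a flat map from labels to continuations.
# Each continuation is (params, (operation, operands, destination)).
soup = {
    "main":     (["x"],   ("const", 1,          "have_one")),
    "have_one": (["one"], ("add", ["x", "one"], "ktail")),
}

def run(soup, entry, *args):
    env = {}
    label, values = entry, list(args)
    while True:
        params, (op, operands, dest) = soup[label]
        # A continuation binds the values passed to it to its parameters.
        env.update(zip(params, values))
        if op == "const":
            result = operands
        elif op == "add":
            result = sum(env[v] for v in operands)
        if dest == "ktail":  # the function's tail continuation: return
            return result
        label, values = dest, [result]

print(run(soup, "main", 41))  # 42: load_constant(1), then add(x, one)
```

Note there is no scope tree anywhere in this encoding; whether "x" is usable at have_one is a question about how values flow between labels, which is exactly why flow analysis replaces scoping.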

The question arises, though, now that we've decided on CPS soup, how should we represent a program as a value? We've gone from a nested term to a graph term, and we need to find a way to represent it somehow that facilitates looking up labels by name, and facilitates tree rewrites.

In Guile's IR, labels and variables are both integers, so happily enough, we have such a data structure: Clojure-style maps specialized for integer keys.

Friends, if there has been one realization or revolution for me in the last year, it has been Clojure-style data structures. Here's why. In compilers, I often have to build up some kind of analysis, then use that analysis to transform data. Often I need to keep the old term around while I build a new one, but it would be nice to share state between old and new terms. With a nested tree, if a leaf changed you'd have to rebuild all surrounding terms, which is gnarly. But with Clojure-style data structures, more and more I find myself computing in terms of values: build up this value, transform this map to that set, fold over this map -- and yes, you can fold over Guile's intmaps -- and so on. By providing an expressive data structure for which I can control performance characteristics by using transients if needed, these data structures make my programs more about data and less about gnarly machinery.
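The essential property of those Clojure-style structures (an update returns a new map that shares almost all structure with the old one) can be sketched with a small path-copying tree. This is an illustration of the idea only, not Guile's intmap code, which uses a more sophisticated representation:

```python
class Empty:
    """The shared empty map."""
    def get(self, k, default=None):
        return default
    def set(self, k, v):
        return Node(k, v, EMPTY, EMPTY)

EMPTY = Empty()

class Node:
    __slots__ = ("k", "v", "l", "r")
    def __init__(self, k, v, l, r):
        self.k, self.v, self.l, self.r = k, v, l, r
    def get(self, k, default=None):
        if k == self.k:
            return self.v
        return (self.l if k < self.k else self.r).get(k, default)
    def set(self, k, v):
        # Path copying: rebuild only the nodes on the search path;
        # every other subtree is shared with the old version.
        if k == self.k:
            return Node(k, v, self.l, self.r)
        if k < self.k:
            return Node(self.k, self.v, self.l.set(k, v), self.r)
        return Node(self.k, self.v, self.l, self.r.set(k, v))

m0 = EMPTY.set(2, "b").set(1, "a").set(3, "c")
m1 = m0.set(3, "z")           # a new version of the map...
assert m0.get(3) == "c"       # ...the old version is untouched
assert m1.get(3) == "z"
assert m1.l is m0.l           # and the untouched subtree is shared
```

The payoff is exactly the one described above: an analysis can hold on to the old term while a transformation builds the new one, without copying everything and without anyone mutating anything out from under you.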

As a concrete example, consider the old contification pass in Guile: I didn't have the mental capacity to understand all the moving parts in such a way that I could compute an optimal contification from the beginning; instead I had to iterate to a fixed point, as Kennedy did in his "Compiling with Continuations, Continued" paper. With the new CPS soup language and with Clojure-style data structures, I could actually fit more of the algorithm into my head, with the result that Guile now contifies optimally while avoiding the fixed-point transformation. Also, the old pass used hash tables to represent the analysis, which I found incredibly confusing to reason about -- I totally buy Rich Hickey's argument that place-oriented programming is the source of many evils in programs, and hash tables are nothing if not a place party. Using functional maps let me solve harder problems because they are easier for me to reason about.

Contification isn't an isolated case, either. For example, we are able to do the complete set of optimizations from the "Optimizing closures in O(0) time" paper, including closure sharing, which I think makes Guile unique apart from Chez Scheme. I wasn't capable of doing it on the old representation because it was just too hard for me to think about, because my data structures weren't right.

This new "CPS soup" language is still a first-order CPS language in that each term specifies its continuation, and that variable names appear in the continuation of a definition, not the definition itself. This effectively makes every variable a phi variable, in the sense of SSA, and you have to do some work to get to a variable's definition. It could be that this still isn't the right number of names; consider this function:

function foo(k, x):
  label have_y(y): bar(y) -> k
  label y_is_two(): load_constant(2) -> have_y
  label y_is_one(): load_constant(1) -> have_y
  label main(x): if x -> y_is_one else -> y_is_two

Here there is no distinguished name for the value load_constant(1) versus load_constant(2): both are possible values for y. If we ended up giving them names, we'd have to reintroduce actual phi variables for the joins, which would basically complete the transformation to SSA. Until now though I haven't wanted those names, so perhaps I can put this off. On the other hand, every term has a label, which simplifies many things compared to having to contain terms in basic blocks, as is usually done in SSA. Yet another chapter in CPS is SSA is CPS is SSA, it seems.

Welp, that's all the nerdery for right now. Talk at yall later!

Categories: FLOSS Project Planets

Matt Raible: UberConf 2015: My Presentations on Apache Camel and Java Webapp Security

Planet Apache - Mon, 2015-07-27 10:08
Last week I had the pleasure of speaking at UberConf 2015. My first talk was on Developing, Testing and Scaling with Apache Camel. This presentation contained an intro to Apache Camel and a recap of my experience using it at a client last year. You can click through the presentation below, download it from my presentations page, or view it on SlideShare.

My second presentation was about implementing Java Web Application Security with Java EE, Spring Security and Apache Shiro. I updated this presentation to use Java EE 7 and Jersey, as well as Spring Boot. I used Spring Boot to manage dependencies in all three projects, then showed the slick out-of-the-box security Spring Boot has (when you include Spring Security on the classpath). For Apache Shiro, I configured its filter and required dependencies using Spring's JavaConfig. You can click through my security presentation below, download it from my presentations page, or view it on SlideShare.

One thing that didn't make it into the presentation was the super-helpful pull request from Rob Winch, Spring Security Lead. He showed me how you can use basic and form-based authentication in the same app, as well as how to write tests with MockMvc and Spring Security's Testing support.

The next time I do this presentation (at the Rich Web Experience), I'd like to see if it's possible to use all-Java to configure the Java EE 7 example. I used web.xml in this example and the Servlet 3.0 Security Annotations might offer enough to get rid of it.

All the demos I did during the security presentation can be seen in my java-webapp-security-examples project on GitHub. There are branches for where I started (javaee-start, springsecurity-start and apacheshiro-start) as well as "complete" branches for where I finished. The complete examples should also be in-sync with the master branch.

If you have any questions about either presentation, please let me know.

Categories: FLOSS Project Planets

Mike Driscoll: PyDev of the Week: Aaron Maxwell

Planet Python - Mon, 2015-07-27 08:30

This week we welcome Aaron Maxwell as our PyDev of the Week! Aaron is the author of the Advanced Python Newsletter and a very enthusiastic Pythonista. You can check out his github profile to get an idea of what he’s been up to. Let’s spend some time getting to know him better.

Can you tell us a little about yourself (hobbies, education, etc):

Sure. I love to travel to countries and places that are new to me, and my favorite part is learning some of the local language. A fun game: start a conversation with a native, and see how long we can talk before they figure out I’m not from around here!

In my life I’ve been a club bouncer, a theoretical physicist, a roofer, an author, a forklift driver, a public speaker, and a software engineer. My degree is in physics (with a heap of extra math classes), and I am really good at starting graduate programs then dropping out of them. Like, I’m great at it! This knack has given me some higher education in biophysics, neuroscience, and computer science.

If a magic genie offers me one superpower, I choose the ability to instantly teleport anywhere – and take anyone with me, simply by holding their hand. That would be soooo much fun!

I go by the nickname “redsymbol” online – Github, Hacker News, etcetera. Sometimes people ask me what “redsymbol” means. The answer is: It’s a secret.

Why did you start using Python?

It just seemed exceptionally well designed. It still does.

This matters, because language empowers thought. Code is not just a way to control a machine. It becomes a basic medium in which we express the mental creativity of our craft. A language can hinder or support that expression. A lot of it depends on making good decisions on details of the language.

So what really compelled me in an irresistible way to use Python was that it was so much easier to express my thoughts into code, compared to the other languages I had available at the time – and do so in a way that was robust, maintainable, evolvable. I’ve learned and mastered other languages since, yet Python’s continued to be my favorite for many kinds of engineering domains.

What other programming languages do you know and which is your favorite?

My favorite is Python 3.4. I like Python 2.x, of course, but I have written it so much, I’m almost getting bored with it. And it won’t be changing anymore.

The 3.x series keeps introducing exciting, richly powerful tools with every release. Three-point-four crossed several important thresholds: many of 3’s initially rough edges have now been smoothed out, unittest got supercharged, asyncio is part of the standard library, pip and virtualenv now ship as included tools… on and on. And it just happened that 3rd-party library support really solidified sometime between the 3.3 and 3.4 release. It’s now rare that I can’t pip install a package I need.

(On that note – there are really soooo many improvements in the Py3k… far more than most people realize. I once gave a talk describing what’s new in Python 3. I talked as fast as I could non-stop for an hour, and I still feel like I barely mentioned a fraction of it all. Some people seem to think only a few things have changed; respectfully to them, that’s very inaccurate.)

After Python, the language I know best is probably C. I keep finding it useful for certain things. I’m amazed C is still as important as it is, and am starting to think it will outlive us all.

Lately, I’ve been liking Java a lot. I can’t believe I just wrote that sentence, but it’s true. I learned it before I started with Python, ignored it for a long time, and recently started needing to use it again. Now that I have more experience maintaining large applications as part of an engineering team, I can see how some of its design decisions (which used to drive me nuts) are in fact valuable in that context.

I know some Lisp and Scheme, but haven’t been practicing them in a while. My mind naturally thinks very functionally (in the sense of functional programming), so they are easy to pick up and re-pickup. Javascript, for the same reasons. (As far as I’m concerned, JS is lisp wrapped in a C-like syntax.)

Go is an interesting language. I love its syntactically cheap concurrency, and the runtime’s solid support for hybrid threading (i.e. goroutines). It’s really hard to implement that well, especially in userspace, and they hit a home run. The lack of generics and (still!!) no built-in set type really drive me nuts, though. There was a time I thought I might switch my primary language from Python to Go; but with Python 3’s relentless onslaught of exciting progress, there’s no chance of that happening now.

I’m unfortunately an expert in PHP. I keep trying to get away from it. Here’s a tweetable sound bite for you: “Coding in PHP is like driving a Lamborghini down a road full of potholes.”

What projects are you working on now?

This year I’m doing something different: going to companies and teaching their programmers how Python can really add value to the work they do – make their projects more successful, and more fun. Some of the students are IT folk who mainly create small scripts; some are experienced engineers with CS degrees; and everything in between. What an amazing adventure. I mostly fly around the USA, but sometimes outside of it – most recently Poland. (Their food is DELICIOUS.)

I’m the author of an email newsletter, which I call the “Advanced Python Newsletter” – you should subscribe! I’m also about half-way done with a book I am tentatively titling “Advanced Python: A Not-For-Beginners Guide To Leveling Up Your Python Code”. There is a huuuge lack of materials for experienced programmers who already know the basics of Python, and that’s what I’m trying to fill.

Along those lines, I’m also designing some self-paced courses on really impactful topics: expressive code reuse through decorators, mastering the Python logging module, building REST servers, etc. This is where teaching live classes helps a lot – I get to look in their faces as I explain something, and learn how to teach something in a way that the light bulb goes off (instead of baffling them). I can then incorporate the best approaches into the self-paced version.

As you might imagine, with all this, I’m not coding nearly enough this year. After leaving Lyft late last year to teach – the hands-down best engineering team I ever had the privilege to work with! – I no longer am coding full-time, alongside truly excellent engineers. And frankly, I miss it.

Fortunately, I’m regularly able to sneak in some actual coding. A few weeks ago, I played hookey from writing for a day to create a tool to manage ID3 tags of my media collection (github.com/redsymbol/mtag). Also assuaging the anguish is that I do significant coding for the courses – implementing a surprisingly sophisticated REST server providing a rich “todo-list” API, for example, so I can teach people how to build realistic web/microservices.

Which Python libraries are your favorite (core or 3rd party)?

I’ve always thought Python’s logging module is an impressive feat of engineering. The better I get as a developer, the more I like it.

I’m in love with asyncio, and Python 3.5’s async/await extensions. This is a big change not just in the library, but the whole Python ecosystem. Excellent libraries like Twisted, gevent, etc. have enabled asynchronous programming for a while; but to have something so solid and well-engineered baked into the standard library is qualitatively different. And on top of that, Python 3’s async support is genuinely breaking new ground.

For third-party, I’m amazed by requests. So much of programming is cognitive – depending on what goes on in the programmer’s mind. A great API can make that orders of magnitude easier. Requests does that better than almost any library I have ever seen.

I use Django and Flask a lot, for two completely different things. For human-facing web applications, Django is my go-to, and has been since version 0.96. (Back in 2007, wow!) For building services that expose their API via HTTPS, Flask works a lot better in my experience; a lot of Django’s killer features become excess baggage in that context, so Flask has less cruft to work around. I am super pumped that Python 3 support is already so solid in both.

Jinja2 continues to be very useful to me. Along with requests, it’s one of those libs I treat like it’s part of the standard library.

Where do you see Python going as a programming language?

I have a couple of answers. First, as concurrency and distributed systems continue to become more important, I think people will use Python to power and evolve those cutting edges more. As recently as a year ago, I would not have believed that, because of the GIL. But what I finally figured out just recently – and I can’t believe it took me sooooo long – is that for a wide range of engineering domains, the global interpreter lock isn’t a significant limitation.

For embarrassingly parallel tasks that are truly CPU bound, we can completely bypass the GIL via multiprocessing (see e.g. https://github.com/migrateup/thumper) – or in the rare cases that doesn't do the job, C extensions. For network and I/O bound tasks, threading gets you very far… and if you hit its limits, asyncio takes you the rest of the way. I'm actually designing a 2-day course in Python concurrency for one of my client companies, and I may turn that into a self-paced online course at some point.
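As a minimal sketch of the multiprocessing route (just the general pattern, not the thumper code linked above):

```python
from multiprocessing import Pool

def cpu_bound(n):
    # Deliberately CPU-heavy work: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Each worker is a separate process with its own interpreter and
    # its own GIL, so CPU-bound work runs in true parallel.
    with Pool(processes=4) as pool:
        results = pool.map(cpu_bound, [10 ** 5] * 8)
    print(len(results))  # 8
```

The trade-off is that processes don't share memory the way threads do, so this pattern fits best when the tasks are independent and their inputs and results are cheap to pickle.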

What Guido has done with 3.4 and 3.5 is just the start of all this, and I can’t wait to see what he does next. He’s one of the greatest software engineers on the face of the earth!

Anyway, that’s my first prediction. The second: I think we’re going to see a mass migration to Python 3 within the next two years. I don’t think it will happen in 2015, but not long after.

The reason is that several changes are converging right now. First, library support is solid. Many libraries have well-tested and reliable support for Python 3; others don’t and never will (usually because they’ve become unmaintained), but there are drop-in Python 3 replacements. This is all true right now – not a “real soon now” thing.

Second, many big hitters in open-source have committed to Python 3. In particular, both Ubuntu and RedHat have very near-term plans to switch to Python 3 being the distribution default. At least in the case of Ubuntu, Python 2 will not even be installed by default, unless some other package requires it.

It also helps that Python 3 safely installs alongside Python 2 on the same system, without stepping on each other’s toes.

With me, the most promising sign of all is what I see happening on github. There is a LOT of activity involving Python 3 projects. In my experience, the best developers are naturally curious and want to do new things, and won’t stay long with a frozen language like Python 2.

Here’s something interesting: My beginner-level Python classes generally use 2.7, because the companies hiring me to train their employees mostly use that Python version in their day-to-day work. Recently, when I start the class, and tell students we’re using 2.7, they are really disappointed. They feel like they’re being forced to learn an obsolete version, when 3.5 is just around the corner! I kind of spin it for them so they stop being sad, but I think they’re on to something. This factor alone – i.e., the career development urge of wanting to keep up with technology – is starting to push things forward.



Categories: FLOSS Project Planets

KWallet5 can be auto-unlocked during login again

Planet KDE - Mon, 2015-07-27 08:16

I've just pushed a patch to KWallet5 allowing you to have your wallet unlocked automagically during login. This patch was originally done by Alex Fiestas for KWallet4, so all credit and free beers go to him; I've merely forward-ported it.

You'll also require the kde:kwallet-pam repo and need to pass "-DKWALLET5=1" to cmake. This will generate pam_kwallet5.so, which can then be coinstalled with the same module for KWallet4 (plus it also enables some ifdef'd code inside the module). If you're still using some KDE4/Qt4 software that uses KWallet4, you will need both modules present.

How to set up kwallet-pam can be found over at Luca's blog (though he said he'll update it for KWallet5 later, so you may want to wait a bit :).

Categories: FLOSS Project Planets

Piergiorgio Lucidi: The way for Enterprise Information Management (EIM)

Planet Apache - Mon, 2015-07-27 08:03

Enterprise Information Management (EIM) is the art of managing the information lifecycle across all the related areas, such as ECM, BPM, BI, WCM, ES and Capture. This is the sea we should navigate every day from now on.

Categories: FLOSS Project Planets

Tim Millwood: Overriding Drupal 8 services

Planet Drupal - Mon, 2015-07-27 07:42
Since July 2014, Drupal 8 has had a way to override backend-specific services....
Categories: FLOSS Project Planets

Red Crackle: Adding multiple SKUs of a product

Planet Drupal - Mon, 2015-07-27 07:40
In this post, you will learn how to add multiple SKUs of a product. When a user adds the product to the cart, they will be able to select the specific SKU to check out. Creating multiple SKUs and showing them in the same product display is helpful if the underlying product is the same and only some of the attributes are different. A common attribute that can be varied is color. In this specific example, we have used the number of LEDs within the flashlight as an attribute that the customer can select to purchase.
Categories: FLOSS Project Planets

David Reid: Building postgrest

Planet Apache - Mon, 2015-07-27 06:54

I’ve long thought that a simple REST layer sitting on top of a database, with some suitable access controls would be an ideal solution for many of the small projects I find myself tinkering with. Until recently I’d never quite found a solution that provided this, but then I came across postgrest.

Having some time to spend looking at it, and the seed of an idea that might be ideally suited to using it, I decided to install it. Rather than installing the binary, I cloned the repository so that I had access to the source. However, it's written in Haskell, a language I had no experience with. So, how do I build it?

1. Install haskell

$ sudo apt-get install haskell-platform

NB: As this uses PostgreSQL, you also need the PostgreSQL development libraries installed.
$ sudo apt-get install libpq-dev

2. Build/setup the project?

Initially it wasn’t clear, but after some web searches I found the documentation for the cabal build tool that explained the standard method which led me to do this.

$ cabal sandbox init
$ cabal install -j

This took a while as the various packages were downloaded and installed.

3. Run postgrest

$ .cabal-sandbox/bin/postgrest
Usage: postgrest (-d|--db-name NAME) [-P|--db-port PORT] (-U|--db-user ROLE)
                 [--db-pass PASS] [--db-host HOST] [-p|--port PORT]
                 (-a|--anonymous ROLE) [-s|--secure] [--db-pool COUNT]
                 [--v1schema NAME] [--jwt-secret SECRET]
  PostgREST / create a REST API to an existing Postgres database

Available options:
  -h,--help                Show this help text
  -d,--db-name NAME        name of database
  -P,--db-port PORT        postgres server port (default: 5432)
  -U,--db-user ROLE        postgres authenticator role
  --db-pass PASS           password for authenticator role
  --db-host HOST           postgres server hostname (default: "localhost")
  -p,--port PORT           port number on which to run HTTP server (default: 3000)
  -a,--anonymous ROLE      postgres role to use for non-authenticated requests
  -s,--secure              Redirect all requests to HTTPS
  --db-pool COUNT          Max connections in database pool (default: 10)
  --v1schema NAME          Schema to use for nonspecified version (or explicit v1) (default: "1")
  --jwt-secret SECRET      Secret used to encrypt and decrypt JWT tokens (default: "secret")

Now that I have it built, it's time to start playing around with using it.
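As a first experiment, PostgREST exposes each table at /&lt;table&gt; and filters rows with column=eq.value query parameters, so a tiny stdlib-only Python helper is enough to start poking at it. The todos table here is hypothetical:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:3000"  # postgrest's default port, per the help output above

def query_url(table, **filters):
    # PostgREST filter syntax: ?column=eq.value selects matching rows.
    qs = urlencode({col: "eq." + str(val) for col, val in filters.items()})
    return BASE + "/" + table + (("?" + qs) if qs else "")

url = query_url("todos", done="false")
print(url)  # http://localhost:3000/todos?done=eq.false
# With the server running: rows = json.load(urlopen(url))
```

The response is plain JSON, one object per row, so there's no client library to install before you can start exploring a database over HTTP.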

Categories: FLOSS Project Planets

Annertech: How to Integrate your Drupal Website with Salesforce CRM

Planet Drupal - Mon, 2015-07-27 06:44
How to Integrate your Drupal Website with Salesforce CRM

Recently, I wrote a blog post on the benefits of integrating your website and CRM, and Anthony followed up with another on the typical integration patterns you commonly see. Annertech have a lot of experience integrating Drupal websites with various CRMs, so this is the start of a new series on CRM integration where we will go into more detail on some of the more popular CRMs we’ve worked with.


Drupal core announcements: Recording from July 24th 2015 Drupal 8 critical issues discussion

Planet Drupal - Mon, 2015-07-27 05:53

This was our 9th critical issues discussion meeting to be publicly recorded in a row. (See all prior recordings). Here is the recording of the meeting video and chat from Friday in the hope that it helps more than just those who were on the meeting:

If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The meeting log is as follows (all times are GMT real time at the meeting):

10:08 WimLeers

10:08 WimLeers

10:08 WimLeers
10:09 Druplicon
https://www.drupal.org/node/2524082 => Config overrides should provide cacheability metadata [#2524082] => 147 comments, 39 IRC mentions

10:09 WimLeers
10:09 Druplicon
https://www.drupal.org/node/2429617 => [PP-1] Make D8 2x as fast: SmartCache: context-dependent page caching (for *all* users!) [#2429617] => 226 comments, 21 IRC mentions

10:10 WimLeers
10:10 Druplicon
https://www.drupal.org/node/2499157 => Auto-placeholdering [#2499157] => 2 comments, 3 IRC mentions

10:14 pfrenssen
10:14 Druplicon
https://www.drupal.org/node/2524082 => Config overrides should provide cacheability metadata [#2524082] => 147 comments, 40 IRC mentions

10:14 pfrenssen
10:14 Druplicon
https://www.drupal.org/node/2525910 => Ensure token replacements have cacheability + attachments metadata and that it is bubbled in any case [#2525910] => 176 comments, 29 IRC mentions

10:18 alexpott
10:18 Druplicon
http://drupal.org/node/2538228 => Config save dispatches an event - may conflict with config structure changes in updates [#2538228] => 6 comments, 1 IRC mention

10:20 alexpott
10:20 Druplicon
https://www.drupal.org/node/2538514 => Remove argument support from TranslationWrapper [#2538514] => 12 comments, 4 IRC mentions

10:25 WimLeers
lauriii: welcome!
10:29 lauriii
WimLeers: little late because I'm in a sprint and was helping people ;<

10:45 alexpott
The upgrade path we're talking about http://drupal.org/node/2528178
10:45 Druplicon
http://drupal.org/node/2528178 => Provide an upgrade path for #2354889 (block context manager) [#2528178] => 143 comments, 1 IRC mention

10:52 alexpott
10:52 Druplicon
https://www.drupal.org/node/2538514 => Remove argument support from TranslationWrapper [#2538514] => 12 comments, 5 IRC mentions

10:52 WimLeers

11:02 dawehner
11:02 catch
\Drupal\block\Plugin\Derivative\ThemeLocalTask also.

11:19 alexpott
berdir: is talking about http://drupal.org/node/2513094

11:19 Druplicon
http://drupal.org/node/2513094 => ContentEntityBase::getTranslatedField and ContentEntityBase::__clone break field reference to parent entity [#2513094] => 36 comments, 1 IRC mention


Plasma Mobile SDK

Planet KDE - Mon, 2015-07-27 04:27
Where are the giants?

When approaching this, I had been thinking about the issue for a while. I had mainly two problems: I was rather frustrated with previous Linux-based systems so far, and the one I liked didn’t really scale for us. One thing was clear: we had to stand on the shoulders of giants.

  • The first one to think about was the N9 SDK (and by extension the N900’s). It used to have a scratchbox-based system that emulated the one on the phone. It was useful for testing the applications locally on the device (although I actually never used that). I think it had both the cross-compilation toolchain and base system, as well as the host’s, and it used QEmu if a host executable was run. It felt weird because it shoved you into a weird system and you had to pull your code in weird ways to fetch it. Otherwise it worked great. Afterwards came madde. It’s what I was looking for, really, and it played quite well with cmake; it’s actually what I used back when I published the steps on how to develop for the N9, but it probably came too late to become a thing, I guess. Also, many people weren’t too fond of it, as I learned after some time.
  • The Android NDK is the other one I took into account. I don’t think it would be fair to compare anything we could do with the actual Android SDK, so I’ll limit myself to this one. It ships the complete cross-compilation toolchain and runs natively (similar to the BBX SDK, IIRC).

For the N9 approach, I would have had to concentrate on figuring out technologies that were long dead (and probably should remain so). For the Android approach I found two big problems: we would have to actually work on generating the binaries (which means, from any platform to a specific ubuntu-vivid-arm target), plus all the dependencies. This meant creating a new distro, and we already had debian/ubuntu for that, so let’s use debootstrap! Oh, wait…

Old school

For a start, I took what sebas had already started, in fact: using debootstrap to create a chroot jail that could cross-compile the projects for our platform. This started to prove feasible really soon; 2 or 3 days after starting (and after fixing some issues that kept arising, mostly in KF5 itself and in packaging), I was already producing binaries that could be deployed on the device.

This approach has its drawbacks, though:
  • My IDE is outside of the jail, so there isn’t much we can do to integrate at all (we can’t access the build directory, and most of the data in it isn’t too meaningful for the outside anyway). A solution would be to ship and run the IDE from within the SDK though.
  • You need to be root. Not only to generate the system, but also to run it.
New school

An idea I wanted to explore was docker. Everyone in the web world is shit-crazy about it, and it’s deeply based on traditional Linux, so there’s quite a bit in common already. Doing something similar to the debootstrap setup there was a piece of cake, and I managed to re-use the setup code I already had for the previous version.

It has drawbacks too:
  • Also everything is inaccessible from the IDE
  • I’m less aware of the limitations I’ll find.

Still, this second approach feels lighter and it’s quite fun to investigate something new like that.


It seems that the jailed systems are the way to go, at least for now, so the tools I’ve created so far assume that they live in the jail as well. So, what do we have?

  • createpkg: Makes it possible to create a deb package that can be sent to the device for testing. It’s much simpler than the simplest deb package ever, but it works. Much better than sending executables over, much better than learning how to package for Debian.
  • deploypkg: sends the deb file, installs it and starts the application.
  • click-get: Downloads a click file from the Ubuntu Store, it’s harder than you’d think.
  • kdev-debdeploy: Does it all, inside your good ol’ IDE.
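For context, the core of what a createpkg-style tool has to produce is small: a valid binary .deb is essentially a directory tree with a DEBIAN/control file handed to dpkg-deb. A rough sketch (package name, fields and paths are invented, not what createpkg actually emits):

```shell
# The smallest useful binary .deb: a directory tree with a DEBIAN/control
# file, built with dpkg-deb. All names and fields below are invented.
mkdir -p demo-pkg/DEBIAN demo-pkg/usr/bin
cat > demo-pkg/DEBIAN/control <<'EOF'
Package: demo-pkg
Version: 0.1
Architecture: armhf
Maintainer: Example Dev <dev@example.org>
Description: Test build to send to the device
EOF
# Put the payload (the freshly cross-compiled binary) into the tree,
# then build the package:
#   cp build/demo demo-pkg/usr/bin/demo
#   dpkg-deb --build demo-pkg demo-pkg_0.1_armhf.deb
```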
To be done
  • Workflow: Figure out a way to deploy KDevelop within either jail; then we’ll also be able to use kdev-debdeploy, which does mostly the same as these tools, integrated in the IDE.
  • Emulation: Testing: QEmu + docker anyone?

eGenix.com: Python Meeting Düsseldorf - 2015-07-29

Planet Python - Mon, 2015-07-27 04:00

The following text announces a regional user group meeting in Düsseldorf, Germany; it has been translated from the German original.


The next Python Meeting Düsseldorf will take place on:

Wednesday, 2015-07-29, 18:00
Room 1, 2nd floor, Bürgerhaus Stadtteilzentrum Bilk
Düsseldorfer Arcaden, Bachstr. 145, 40217 Düsseldorf

News / Already registered talks

Charlie Clark
       "An introduction to routing in Pyramid"

Marc-Andre Lemburg
       "Python idioms - tips and guidelines for better Python code"
       "Report from EuroPython 2015"

Further talks are welcome. If interested, please get in touch at info@pyddf.de.

Start time and location

We meet at 18:00 in the Bürgerhaus in the Düsseldorfer Arcaden.

The Bürgerhaus shares its entrance with the swimming pool and is located next to the entrance to the underground car park of the Düsseldorfer Arcaden.

A large "Schwimm’in Bilk" logo hangs above the entrance. Behind the door, turn directly left to the two lifts, then go up to the 2nd floor. The entrance to room 1 is immediately on the left when you step out of the lift.

>>> Entrance in Google Street View


The Python Meeting Düsseldorf is a regular event in Düsseldorf aimed at Python enthusiasts from the region.

Our PyDDF YouTube channel, where we publish videos of the talks after the meetings, gives a good overview of the talks.

The meeting is organized by eGenix.com GmbH, Langenfeld, in cooperation with Clark Consulting & Research, Düsseldorf:


The Python Meeting Düsseldorf uses a mix of open space and lightning talks, although our "lightning" can sometimes last 20 minutes :-)

Lightning talks can be registered in advance or brought up spontaneously during the meeting. A projector with XGA resolution is available. Please bring slides as PDF on a USB stick.

To register a lightning talk, just send an informal email to info@pyddf.de


The Python Meeting Düsseldorf is organized by Python users for Python users.

Since the meeting room, projector, internet access and drinks incur costs, we ask attendees for a contribution of EUR 10.00 incl. 19% VAT; pupils and students pay EUR 5.00 incl. 19% VAT.

We ask all attendees to bring the amount in cash.


Since we only have seats for about 20 people, we ask you to register by email. This creates no obligation; it simply makes planning easier for us.

To register for the meeting, just send an informal email to info@pyddf.de

Further information

You can find further information on the meeting's website:


Have fun!

Marc-Andre Lemburg, eGenix.com


Michael Stapelberg: dh-make-golang: creating Debian packages from Go packages

Planet Debian - Mon, 2015-07-27 02:50

Recently, the pkg-go team has been quite busy, uploading dozens of Go library packages in order to be able to package gcsfuse (a user-space file system for interacting with Google Cloud Storage) and InfluxDB (an open-source distributed time series database).

Packaging Go library packages (!) is a fairly repetitive process, so before starting my work on the dependencies for gcsfuse, I started writing a tool called dh-make-golang. Just like dh-make itself, the goal is to automatically create (almost) an entire Debian package.

As I worked my way through the dependencies of gcsfuse, I refined how the tool works, and now I believe it’s good enough for a first release.

To demonstrate how the tool works, let’s assume we want to package the Go library github.com/jacobsa/ratelimit:

midna /tmp $ dh-make-golang github.com/jacobsa/ratelimit
2015/07/25 18:25:39 Downloading "github.com/jacobsa/ratelimit/..."
2015/07/25 18:25:53 Determining upstream version number
2015/07/25 18:25:53 Package version is "0.0~git20150723.0.2ca5e0c"
2015/07/25 18:25:53 Determining dependencies
2015/07/25 18:25:55
2015/07/25 18:25:55 Packaging successfully created in /tmp/golang-github-jacobsa-ratelimit
2015/07/25 18:25:55
2015/07/25 18:25:55 Resolve all TODOs in itp-golang-github-jacobsa-ratelimit.txt, then email it out:
2015/07/25 18:25:55     sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
2015/07/25 18:25:55
2015/07/25 18:25:55 Resolve all the TODOs in debian/, find them using:
2015/07/25 18:25:55     grep -r TODO debian
2015/07/25 18:25:55
2015/07/25 18:25:55 To build the package, commit the packaging and use gbp buildpackage:
2015/07/25 18:25:55     git add debian && git commit -a -m 'Initial packaging'
2015/07/25 18:25:55     gbp buildpackage --git-pbuilder
2015/07/25 18:25:55
2015/07/25 18:25:55 To create the packaging git repository on alioth, use:
2015/07/25 18:25:55     ssh git.debian.org "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
2015/07/25 18:25:55
2015/07/25 18:25:55 Once you are happy with your packaging, push it to alioth using:
2015/07/25 18:25:55     git push git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git --tags master pristine-tar upstream

The ITP is often the most labor-intensive part of the packaging process, because any number of auto-detected values might be wrong: the repository owner might not be the “Upstream Author”, the repository might not have a short description, the long description might need some adjustments or the license might not be auto-detected.

midna /tmp $ cat itp-golang-github-jacobsa-ratelimit.txt
From: "Michael Stapelberg" <stapelberg AT debian.org>
To: submit@bugs.debian.org
Subject: ITP: golang-github-jacobsa-ratelimit -- Go package for rate limiting
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Package: wnpp
Severity: wishlist
Owner: Michael Stapelberg <stapelberg AT debian.org>

* Package name    : golang-github-jacobsa-ratelimit
  Version         : 0.0~git20150723.0.2ca5e0c-1
  Upstream Author : Aaron Jacobs
* URL             : https://github.com/jacobsa/ratelimit
* License         : Apache-2.0
  Programming Lang: Go
  Description     : Go package for rate limiting

 GoDoc (https://godoc.org/github.com/jacobsa/ratelimit)
 .
 This package contains code for dealing with rate limiting. See the
 reference (http://godoc.org/github.com/jacobsa/ratelimit) for more info.

TODO: perhaps reasoning
midna /tmp $

After filling in all the TODOs in the file, let’s mail it out and get a sense of what else still needs to be done:

midna /tmp $ sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
midna /tmp $ cd golang-github-jacobsa-ratelimit
midna /tmp/golang-github-jacobsa-ratelimit master $ grep -r TODO debian
debian/changelog:  * Initial release (Closes: TODO)
midna /tmp/golang-github-jacobsa-ratelimit master $

After filling in these TODOs as well, let’s have a final look at what we’re about to build:

midna /tmp/golang-github-jacobsa-ratelimit master $ head -100 debian/**/*
==> debian/changelog <==
golang-github-jacobsa-ratelimit (0.0~git20150723.0.2ca5e0c-1) unstable; urgency=medium

  * Initial release (Closes: #793646)

 -- Michael Stapelberg <stapelberg@debian.org>  Sat, 25 Jul 2015 23:26:34 +0200

==> debian/compat <==
9

==> debian/control <==
Source: golang-github-jacobsa-ratelimit
Section: devel
Priority: extra
Maintainer: pkg-go <pkg-go-maintainers@lists.alioth.debian.org>
Uploaders: Michael Stapelberg <stapelberg@debian.org>
Build-Depends: debhelper (>= 9), dh-golang, golang-go, golang-github-jacobsa-gcloud-dev, golang-github-jacobsa-oglematchers-dev, golang-github-jacobsa-ogletest-dev, golang-github-jacobsa-syncutil-dev, golang-golang-x-net-dev
Standards-Version: 3.9.6
Homepage: https://github.com/jacobsa/ratelimit
Vcs-Browser: http://anonscm.debian.org/gitweb/?p=pkg-go/packages/golang-github-jacobsa-ratelimit.git;a=summary
Vcs-Git: git://anonscm.debian.org/pkg-go/packages/golang-github-jacobsa-ratelimit.git

Package: golang-github-jacobsa-ratelimit-dev
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, golang-go, golang-github-jacobsa-gcloud-dev, golang-github-jacobsa-oglematchers-dev, golang-github-jacobsa-ogletest-dev, golang-github-jacobsa-syncutil-dev, golang-golang-x-net-dev
Built-Using: ${misc:Built-Using}
Description: Go package for rate limiting
 This package contains code for dealing with rate limiting. See the
 reference (http://godoc.org/github.com/jacobsa/ratelimit) for more info.

==> debian/copyright <==
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: ratelimit
Source: https://github.com/jacobsa/ratelimit

Files: *
Copyright: 2015 Aaron Jacobs
License: Apache-2.0

Files: debian/*
Copyright: 2015 Michael Stapelberg <stapelberg@debian.org>
License: Apache-2.0
Comment: Debian packaging is licensed under the same terms as upstream

License: Apache-2.0
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 .
 http://www.apache.org/licenses/LICENSE-2.0
 .
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 .
 On Debian systems, the complete text of the Apache version 2.0 license
 can be found in "/usr/share/common-licenses/Apache-2.0".

==> debian/gbp.conf <==
[DEFAULT]
pristine-tar = True

==> debian/rules <==
#!/usr/bin/make -f

export DH_GOPKG := github.com/jacobsa/ratelimit

%:
	dh $@ --buildsystem=golang --with=golang

==> debian/source <==
head: error reading ‘debian/source’: Is a directory

==> debian/source/format <==
3.0 (quilt)
midna /tmp/golang-github-jacobsa-ratelimit master $

Okay, then. Let’s give it a shot and see if it builds:

midna /tmp/golang-github-jacobsa-ratelimit master $ git add debian && git commit -a -m 'Initial packaging'
[master 48f4c25] Initial packaging
 7 files changed, 75 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/compat
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100644 debian/gbp.conf
 create mode 100755 debian/rules
 create mode 100644 debian/source/format
midna /tmp/golang-github-jacobsa-ratelimit master $ gbp buildpackage --git-pbuilder
[…]
midna /tmp/golang-github-jacobsa-ratelimit master $ lintian ../golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
I: golang-github-jacobsa-ratelimit source: debian-watch-file-is-missing
P: golang-github-jacobsa-ratelimit-dev: no-upstream-changelog
I: golang-github-jacobsa-ratelimit-dev: extended-description-is-probably-too-short
midna /tmp/golang-github-jacobsa-ratelimit master $

This package just built (as it should!), but occasionally one might need to disable a test and file an upstream bug about it. So, let’s push this package to pkg-go and upload it:

midna /tmp/golang-github-jacobsa-ratelimit master $ ssh git.debian.org "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
Initialized empty shared Git repository in /srv/git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git/
HEAD is now at ea6b1c5 add mrconfig for dh-make-golang
[master c5be5a1] add mrconfig for golang-github-jacobsa-ratelimit
 1 file changed, 3 insertions(+)
To /git/pkg-go/meta.git
   ea6b1c5..c5be5a1  master -> master
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git --tags master pristine-tar upstream
Counting objects: 31, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (25/25), done.
Writing objects: 100% (31/31), 18.38 KiB | 0 bytes/s, done.
Total 31 (delta 2), reused 0 (delta 0)
To git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git
 * [new branch]      master -> master
 * [new branch]      pristine-tar -> pristine-tar
 * [new branch]      upstream -> upstream
 * [new tag]         upstream/0.0_git20150723.0.2ca5e0c -> upstream/0.0_git20150723.0.2ca5e0c
midna /tmp/golang-github-jacobsa-ratelimit master $ cd ..
midna /tmp $ debsign golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
[…]
midna /tmp $ dput golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
Uploading golang-github-jacobsa-ratelimit using ftp to ftp-master (host: ftp.upload.debian.org; directory: /pub/UploadQueue/)
[…]
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.dsc
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c.orig.tar.bz2
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.debian.tar.xz
Uploading golang-github-jacobsa-ratelimit-dev_0.0~git20150723.0.2ca5e0c-1_all.deb
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1_amd64.changes
midna /tmp $ cd golang-github-jacobsa-ratelimit
midna /tmp/golang-github-jacobsa-ratelimit master $ git tag debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git --tags master pristine-tar upstream
Total 0 (delta 0), reused 0 (delta 0)
To git+ssh://git.debian.org/git/pkg-go/packages/golang-github-jacobsa-ratelimit.git
 * [new tag]         debian/0.0_git20150723.0.2ca5e0c-1 -> debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $

Thanks for reading this far, and I hope dh-make-golang makes your life a tiny bit easier. As dh-make-golang just entered Debian unstable, you can install it using apt-get install dh-make-golang. If you have any feedback, I’m eager to hear it.


Dirk Eddelbuettel: Evading the "Hadley tax": Faster Travis tests for R

Planet Debian - Sun, 2015-07-26 21:35

Hadley is a popular figure, and rightly so as he successfully introduced many newcomers to the wonders offered by R. His approach strikes some of us old greybeards as wrong---I particularly take exception with some of his writing which frequently portrays a particular approach as both the best and only one. Real programming, I think, is often a little more nuanced and aware of tradeoffs which need to be balanced. As a book on another language once popularized: "There is more than one way to do things." But let us leave this discussion for another time.

As the reach of the Hadleyverse keeps spreading, we sometimes find ourselves at the receiving end of a cost/benefit tradeoff. That is what this post is about, and it uses a very concrete case I encountered yesterday.

As blogged earlier, the RcppZiggurat package was updated. I had not touched it in a year, but Brian Ripley had sent a brief and detailed note concerning something flagged by the Solaris compiler (correctly suggesting I replace fabs() with abs() on integer types). (Allow me to stray from the main story line here for a second to stress just how insane a workload he is carrying, essentially for all of us. R and the R community are just so indebted to him for all his work---which makes the usual social media banter about him so unfortunate. But that too shall be left for another time.) Upon making the simple fix and submitting to GitHub, the usual Travis CI build was triggered. And here is what I saw:

All happy, all green. Previous build a year ago, most recent build yesterday, both passed. But hold on: test time went from 2:54 minutes to 7:47 minutes, an increase of almost five minutes! And I knew that I had not added any new dependencies or altered any build options. What did happen was that among the dependencies of my package, one had decided to now also depend on ggplot2. Which leads to a chain of sixteen additional packages being installed besides the four I depend upon---when it used to be just one. And that took five minutes, as all those packages are installed from source, and some are big and take a long time to compile.

There is, however, an easy alternative, and for that we have to praise Michael Rutter, who looks after a number of things for R on Ubuntu. Among these are the R builds for Ubuntu, but also the rrutter PPA as well as the c2d4u PPA. If you have not heard this alphabet soup before: a PPA is a package repository for Ubuntu where anyone (who wants to sign up) can upload (properly set up) source files which are then turned into Ubuntu binaries. With full dependency resolution and all the other goodies we have come to expect from the Debian / Ubuntu universe. And Michael uses this facility with great skill and calm to provide us all with Ubuntu binaries for R itself (rebuilding what yours truly uploads into Debian), as well as a number of key packages available via the CRAN mirrors. Less known, however, is "c2d4u", which stands for CRAN to Debian for Ubuntu. This builds on something Charles Blundell once built under my mentorship in a Google Summer of Code. And Michael does a tremendous job covering well over a thousand CRAN source packages---and providing binaries for all. Which we can use for Travis!

What all that means is that I could now replace the line

- ./travis-tool.sh install_r RcppGSL rbenchmark microbenchmark highlight

which implies source builds of the four listed packages and all their dependencies with the following line implying binary installations of already built packages:

- ./travis-tool.sh install_aptget libgsl0-dev r-cran-rcppgsl r-cran-rbenchmark r-cran-microbenchmark r-cran-highlight

In this particular case I also needed to build a binary package of my RcppGSL package, as this one is not (yet) handled by Michael. I happened to have (re-)discovered the beauty of PPAs for Travis earlier this year and revitalized an older and largely dormant launchpad account I had for this PPA of mine. How to build a simple .deb package will also have to be left for a future post to keep this one concise.

This can be used with the existing r-travis setup---but one needs to use the older, initial variant in order to have the ability to install .deb packages. So in the .travis.yml of RcppZiggurat I just use

before_install:
  ## PPA for Rcpp and some other packages
  - sudo add-apt-repository -y ppa:edd/misc
  ## r-travis by Craig Citro et al
  - curl -OL http://raw.github.com/craigcitro/r-travis/master/scripts/travis-tool.sh
  - chmod 755 ./travis-tool.sh
  - ./travis-tool.sh bootstrap

to add my own PPA and all is good. If you do not have a PPA, or do not want to create your own packages you can still benefit from the PPAs by Michael and "mix and match" by installing from binary what is available, and from source what is not.
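For a mix-and-match setup, the relevant .travis.yml lines could look something like this (the split of packages between the two install steps is purely illustrative):

```yaml
install:
  # binaries via apt for everything already covered by the PPAs ...
  - ./travis-tool.sh install_aptget r-cran-rbenchmark r-cran-microbenchmark
  # ... and source installs only for what is not packaged there
  - ./travis-tool.sh install_r highlight
```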

Here we were able to use an all-binary approach, so let's see the resulting performance:

Now we are at 1:03 to 1:15 minutes---much better.

So to conclude: while the ever-expanding universe of R packages is fantastic for us as users, it can place a burden on us as developers when installing and testing. Fortunately, the packaging infrastructure built on top of Debian / Ubuntu packages can help and dramatically reduce build (and hence test) times. Learning about PPAs can be a helpful complement to learning about Travis and continuous integration. So maybe now I need a new reason to blame Hadley? Well, there is always snake case ...

Follow-up: The post got some pretty immediate feedback shortly after I posted it. Craig Citro pointed out (quite correctly) that I could use r_binary_install, which would also install the Ubuntu binaries based on their R package names. Having built R/CRAN packages for Debian for so long, I am simply more used to the r-cran-* notation, and I think I was also the one who contributed install_aptget to r-travis ... Yihui Xie spoke up for the "new" Travis approach deploying containers, caching of packages and explicit whitelists. It was in that very (GH-based) discussion that I started to really lose faith in the new Travis approach, as they want us to whitelist each and every package. With 6900 packages and counting at CRAN, I fear this simply does not scale. But different approaches are certainly welcome. I posted my 1:03 to 1:15 minutes result. If the "New School" can do it faster, I'd be all ears.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Wuinfo: Content as a Service

Planet Drupal - Sun, 2015-07-26 20:47

As one of Canada’s most successful integrated media and entertainment companies, Corus has multiple TV channels and a website for each channel.

It had been a challenge to display live schedule data for multiple channels on the websites. All of the data comes from a central repository, which is not always available. We had used the Feeds module to import all the schedule data, so each channel website kept a live copy of it. Things were made worse by the way we updated the program items: we deleted all the current schedule data in the system and then re-imported it from the central repository. Sometimes our schedule pages ended up empty because the central repository was unavailable at that moment.

Pedram Tiv, the director of digital operations at Corus Entertainment, had a vision of building a robust schedule service for all channels. He wanted to establish a Drupal website as a schedule service provider - content as a service. The service website downloads and synchronizes the schedule data for all channels. Our content managers can also log in to the website and edit any schedule item, and the site keeps revisions of all changes. Since the central repository only provides raw data, it is helpful that we can edit the scheduled show title or series name.

I loved this brilliant idea as soon as he explained it to me. We were building a Drupal website as a content service provider - in other words, a CMS for other CMS websites. Scalability is always challenging for a modern website. To make it scalable, Pedram added another layer of cache protection: we put an S3 cache between the schedule service and the front end web servers. With it, the schedule service can handle more channels and millions of requests each day, while the front end websites download schedule data from the Amazon S3 bucket only. What we did was create and upload seven days' worth of schedule data to S3. We set up a cron job for this task: every day it uploads thousands of JSON schedule files covering the next seven days for different channels in different time zones.

This setup took the load off the schedule server and lets us serve an unlimited number of front end users. It also gives us a seven-day grace period, allowing the schedule server to be offline without interrupting the service. One time, our schedule service was down for three days, but the front end websites were not affected because we had seven days of schedule data in the S3 bucket. Using S3 as another layer of protection provided excellent high availability.
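The nightly export described above might be sketched like this (channel names, file layout and bucket name are all invented for illustration, not the actual Corus setup):

```shell
# One JSON schedule file per channel per day, for the next seven days;
# names and layout here are invented.
mkdir -p out
for channel in channel-a channel-b; do
  for day in 0 1 2 3 4 5 6; do
    printf '{"channel": "%s", "day_offset": %s, "items": []}\n' \
      "$channel" "$day" > "out/${channel}-day${day}.json"
  done
done
ls out | wc -l    # 2 channels x 7 days = 14 files
# The cron job would then push the tree to the bucket the front ends read:
#   aws s3 sync out/ s3://schedule-bucket/schedules/
```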

Our schedule service has been up and running for many months without a problem. There are over 100,000 active nodes in the system. For more detail about importing large amounts of content and building an efficient system, see our other blog posts on this project.

Sites that are using the schedule service now:


Gregor Herrmann: RC bugs 2015/30

Planet Debian - Sun, 2015-07-26 17:18

this week, besides other activities, I again managed to NMU a few packages as part of the GCC 5 transition. & again I could build on patches submitted by various HP engineers & other helpful souls.

  • #757525 – hardinfo: "hardinfo: FTBFS with clang instead of gcc"
    patch to build with -std=gnu89, upload to DELAYED/5
  • #758723 – nagios-plugins-rabbitmq: "should depend on libjson-perl"
    add missing dependency, upload to DELAYED/5
  • #777766 – src:adun.app: "adun.app: ftbfs with GCC-5"
    send updated patch to BTS
  • #777837 – src:ebview: "ebview: ftbfs with GCC-5"
    add patch from paulownia@Safe-mail.net, upload to DELAYED/5
  • #777882 – src:gnokii: "gnokii: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #777907 – src:hunt: "hunt: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #777920 – src:isdnutils: "isdnutils: ftbfs with GCC-5"
    add patch to build with -fgnu89-inline; upload to DELAYED/5
  • #778019 – src:multimon: "multimon: ftbfs with GCC-5"
    build with -fgnu89-inline; upload to DELAYED/5
  • #778068 – src:pork: "pork: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778098 – src:quarry: "quarry: ftbfs with GCC-5"
    build with -std=gnu89, upload to DELAYED/5, then rescheduled to 0-day with maintainer's permission
  • #778099 – src:ratbox-services: "ratbox-services: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5, later cancelled because package is about to be removed (#793408)
  • #778109 – src:s51dude: "s51dude: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #778116 – src:shell-fm: "shell-fm: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778119 – src:simulavr: "simulavr: ftbfs with GCC-5"
    apply patch from Brett Johnson, QA upload
  • #778120 – src:sipsak: "sipsak: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778122 – src:skyeye: "skyeye: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778140 – src:tcpcopy: "tcpcopy: ftbfs with GCC-5"
    add patch backported from upstream git, upload to DELAYED/5
  • #778145 – src:thewidgetfactory: "thewidgetfactory: ftbfs with GCC-5"
    add missing #include, upload to DELAYED/5
  • #778164 – src:vtun: "vtun: ftbfs with GCC-5"
    add patch from Tim Potter, upload to DELAYED/5
  • #790464 – flow-tools: "Please drop conditional build-depend on libmysqlclient15-dev"
    drop obsolete dependency, NMU
  • #793336 – src:libdevel-profile-perl: "libdevel-profile-perl: FTBFS with perl 5.22 in experimental (MakeMaker changes)"
    finish and upload package modernized by XTaran (pkg-perl)
  • #793580 – libb-hooks-parser-perl: "libb-hooks-parser-perl: B::Hooks::Parser::Install::Files missing"
    investigate and forward upstream, upload new upstream release later (pkg-perl)
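Most of the -fgnu89-inline / -std=gnu89 fixes above address the same root cause: GCC 5 switched its default from the GNU89 to the C99/C11 inline semantics. A minimal illustration (file and function names invented):

```shell
# foo.c relies on GNU89 "inline": under -std=gnu89 the definition below
# also serves as the externally visible one; under C99 inline semantics
# (GCC 5's new default) no out-of-line definition is emitted, and
# linking main.c against it fails with "undefined reference to 'twice'".
cat > foo.c <<'EOF'
inline int twice(int x) { return 2 * x; }
EOF
cat > main.c <<'EOF'
#include <stdio.h>
extern int twice(int x);
int main(void) { printf("%d\n", twice(21)); return 0; }
EOF
# The quick fix used in several of the NMUs above keeps the old semantics:
#   gcc -std=gnu89 foo.c main.c        # links fine
#   gcc -fgnu89-inline foo.c main.c    # same effect without changing -std
```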