Feeds

KnackForge: How to update Drupal 8 core?

Planet Drupal - Sat, 2018-03-24 01:01
How to update Drupal 8 core?

Let's see how to update your Drupal site between 8.x.x minor and patch versions. For example, from 8.1.2 to 8.1.3, or from 8.3.5 to 8.4.0. I hope this will help you.

  • If you are upgrading to Drupal version x.y.z:

           x -> the major version number

           y -> the minor version number

           z -> the patch version number

Sat, 03/24/2018 - 10:31
Categories: FLOSS Project Planets

Code Positive: Automate Drupal 8 Deployment On Pantheon With Quicksilver Scripts

Planet Drupal - Mon, 2017-09-18 21:50

Drupal deployment automation for Pantheon hosting - save time, make deployments safer, automate your testing workflow, and leave the boring repetitive work to the computers!

READ MORE

 

Categories: FLOSS Project Planets

Yasoob Khalid: 13 Python libraries to keep you busy

Planet Python - Mon, 2017-09-18 21:22

Hi guys! I was recently contacted by folks from AppDynamics (a part of Cisco). They shared an infographic with me which lists 13 Python libraries, categorized into sections. I loved going through that infographic. I hope you guys will enjoy it too.

Source: AppDynamics


Categories: FLOSS Project Planets

Justin Mason: Links for 2017-09-18

Planet Apache - Mon, 2017-09-18 19:58
  • Native Memory Tracking

    Java 8 HotSpot feature to monitor and diagnose native memory leaks

    (tags: java jvm memory native-memory malloc debugging coding nmt java-8 jcmd)

  • This Heroic Captain Defied His Orders and Stopped America From Starting World War III

    Captain William Bassett, a USAF officer stationed at Okinawa on October 28, 1962, can now be added alongside Stanislav Petrov to the list of people who have saved the world from WWIII:

    By [John] Bordne’s account, at the height of the Cuban Missile Crisis, Air Force crews on Okinawa were ordered to launch 32 missiles, each carrying a large nuclear warhead. […] The Captain told Missile Operations Center over the phone that he either needed to hear that the threat level had been raised to DEFCON 1 and that he should fire the nukes, or that he should stand down. We don’t know exactly what the Missile Operations Center told Captain Bassett, but they finally received confirmation that they should not launch their nukes. After the crisis had passed Bassett reportedly told his men: “None of us will discuss anything that happened here tonight, and I mean anything. No discussions at the barracks, in a bar, or even here at the launch site. You do not even write home about this. Am I making myself perfectly clear on this subject?”

    (tags: wwiii history nukes cuban-missile-crisis 1960s usaf okinawa missiles william-bassett)

  • malware piggybacking on CCleaner

    On September 13, 2017 while conducting customer beta testing of our new exploit detection technology, Cisco Talos identified a specific executable which was triggering our advanced malware protection systems. Upon closer inspection, the executable in question was the installer for CCleaner v5.33, which was being delivered to endpoints by the legitimate CCleaner download servers. Talos began initial analysis to determine what was causing this technology to flag CCleaner. We identified that even though the downloaded installation executable was signed using a valid digital signature issued to Piriform, CCleaner was not the only application that came with the download. During the installation of CCleaner 5.33, the 32-bit CCleaner binary that was included also contained a malicious payload that featured a Domain Generation Algorithm (DGA) as well as hardcoded Command and Control (C2) functionality. We confirmed that this malicious version of CCleaner was being hosted directly on CCleaner’s download server as recently as September 11, 2017.

    (tags: ccleaner malware avast piriform windows security)

Categories: FLOSS Project Planets

Plasma 5.11 beta available in unofficial PPA for testing on Artful

Planet KDE - Mon, 2017-09-18 19:12

Adventurous users and developers running the Artful development release can now also test the beta version of Plasma 5.11. This is experimental and can possibly kill kittens!

Bug reports on this beta go to https://bugs.kde.org, not to Launchpad.

The PPA comes with a WARNING: Artful will ship with Plasma 5.10.5, so please be prepared to use ppa-purge to revert changes. Plasma 5.11 will ship too late for inclusion in Kubuntu 17.10, but should be available via backports soon after release day, October 19th, 2017.

Read more about the beta release: https://www.kde.org/announcements/plasma-5.10.95.php

If you want to test on Artful:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt-get update && sudo apt full-upgrade -y

The purpose of this PPA is testing, and bug reports go to bugs.kde.org.

Categories: FLOSS Project Planets

Carl Chenet: The Github threat

Planet Python - Mon, 2017-09-18 18:00

Voices rise up now and then against the risks linked to the use of Github by Free Software projects. Yet the infatuation with the Californian start-up’s Octocat-branded collaborative forge doesn’t seem to fade away.

In recent years, Github and its services have taken on an important role in software engineering: they are seen as easy to use, efficient for a daily workload, and they offer interesting functions for collaborative workflows, whether in an enterprise or within a Free Software project. What are the arguments against using its services, and are they valid? We will list them first, then examine their validity.

1. Critical points

1.1 Centralization

The Github application belongs to a single entity, Github Inc, a US company which manages it alone. A single company under US legislation therefore controls access to most Free Software source code, which may become a problem for groups that depend on it when source code is no longer available, for political or technical reasons.

The Octocat, the Github mascot

 

This centralization leads to another problem: now that Github has reached critical mass, it becomes more and more difficult not to have a Github account. People who don’t use Github, by choice or not, are becoming a silent minority. It is now fashionable to use Github, and not doing so is seen as being “out of date”. The same phenomenon is a classic, and even the norm, on proprietary social networks (Facebook, Twitter, Instagram).

1.2 A Proprietary Software

When you interact with Github, you are using proprietary software, with no access to its source code, and which may not work the way you think it does. It is a problem on several levels: first ideologically, but foremost in practice. In the Github case, the code itself we can still control outside of their interface; but we also send them personal information (profile, Github interactions), and above all, Github forces any project hosted on the US platform to use a crucial proprietary tool: its bug tracking system.

Windows, the epitome of proprietary software, even if others took the same path

 

1.3 The Uniformization

Working with the Github interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same Github working environment at the next one. This pervasive presence of Github in the free software development world contributes to the uniformization of developers’ working space.

Uniforms always bring Army in my mind, here the Clone army

2. Critical points cross-examination

2.1 Regarding centralization

2.1.1 Service availability rate

As said above, Github is nowadays the main repository of Free Software source code. As such, it is a favorite target for cyberattacks. DDOS attacks hit it in March and August 2015. On December 15, 2015, an outage made 5% of the repositories inaccessible; the same occurred on November 15. And these are only the incidents reported by Github itself. One can imagine that the platform’s true mean outage rate is underestimated.

2.1.2 Chain reaction could block Free Software development

Today many dependency management tools, such as npm for JavaScript, Bundler for Ruby or pip for Python, can fetch an application’s source code directly from Github. As Free Software projects become more and more interlinked and codependent, if one component goes down, the whole development process stops.
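
For instance, a Python project’s requirements file can point straight at Github with a single line (the repository name below is a placeholder), so the takedown of that one repository breaks every downstream install:

git+https://github.com/example-org/example-project.git#egg=example-project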

One of the best examples is the npmgate affair. Any company can legally demand that Github take down source code from its repositories, which could create a chain reaction blocking the development of many Free Software projects, as the Node.js community suffered from the decisions of npm, Inc, the company managing npm.

2.2 A historical precedent: SourceForge

Github didn’t appear out of the blue. In its time, its predecessor, SourceForge, was also extremely popular.

Heavily centralized and based on strong interaction with the community, SourceForge is now seen as an aging SAAS (Software As A Service) and has watched most of its users flee to Github, which creates lots of hurdles for those who stayed. The Gimp project suffered from spam and terrible advertising, which led to the departure of the VLC project, then from installers corrupted with adware being served instead of the official Gimp installer for Windows. And finally, the Gimp project’s SourceForge account was hijacked by… the SourceForge team itself!

These are very recent examples of what a commercial entity can do when it is under pressure from its stakeholders. It is vital to understand what it means to entrust such an entity with the centralization of data and exchanges, when this could have tremendous repercussions on the day-to-day life and habits of the Free Software and open source community.

2.3 Regarding proprietary software

2.3.1 One community, several opinions on proprietary software

Mostly ideological, this point deals with the definition each member of the community gives to Free Software and open source, mostly about one thing: is it viral or not? In other words, GPL versus MIT/BSD.

Those on the side of viral Free Software will have trouble using proprietary software, as in their view it shouldn’t even exist: it must be assimilated, to quote Star Trek, since it is a connected black box that endangers privacy, corrupts our usage for profit, restrains our freedom to use what we own as we please, etc.

Those on the side of complete freedom have no qualms about using proprietary software, as its very existence is a consequence of freedom without restriction. They even accept that code they developed may end up in proprietary software, which is quite a common occurrence. This part of the Free Software community has no qualms about using Github, which sits well within its ideological parameters. Just take a look at the Janson amphitheater during Fosdem and count how many Apple laptops running macOS are around.

FreeBSD, the main BSD project under the BSD license

2.3.2 Data loss and data restrictions linked to proprietary software use

Even without ideological consideration, and just focusing on Github infrastructure, the bug tracking system is a major problem by itself.

Bug reports build the memory of Free Software projects. They are the entry point for new contributors, the place to find bug descriptions, requests for new functions, etc. The history of a project can’t be limited to its code alone: it’s very common to land on a bug report when you copy and paste an error message into a search engine. That history is precious not only for the project itself, but also for its present and future users.

Github does give the ability to extract bug reports through its API. But what would happen if Github went down, or if the platform stopped supporting this feature? In my opinion, not many projects have ever thought about this outcome. How would they move all the data generated inside Github into a new bug tracking system?
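
As a hedge, a project can at least export its own issues periodically. Here is a minimal sketch in Python; the repository name is a placeholder, and a real script would also authenticate and page through every result:

# Minimal sketch: exporting a repository's issues via the Github API.
# 'example-org/example-project' is a placeholder; real use needs
# authentication and pagination over every page of results.
import json
import requests

url = 'https://api.github.com/repos/example-org/example-project/issues'
issues = requests.get(url, params={'state': 'all', 'per_page': 100}).json()

with open('issues-backup.json', 'w') as f:
    json.dump(issues, f, indent=2)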

One old example is Astrid, a TODO-list application bought by Yahoo a few years ago. Very popular, it grew fast until it was closed overnight, with only a few weeks for its users to extract their data. And that was only a to-do list. The same situation with Github would be tremendously difficult for many projects to manage, if they could deal with it at all. The code would still be available and could live somewhere else, but the project’s memory would be lost. A project like Debian has today more than 800,000 bug reports, a data treasure trove about problems solved, function requests and where the development stands on each. The developers of the CPython project anticipated the problem and decided not to use Github’s bug tracking system.

Issues, the Github proprietary bug tracking system

Another thing we could lose if Github suddenly disappeared: all the work currently contained in pull requests (aka PRs). This Github function gives you the ability to clone a project’s Github repository, modify it to fit your needs, then offer your modifications back to the original repository. The original repository’s owner reviews said modifications and, if he or she agrees with them, merges them into the original repository. As such, it’s one of the main attractions of Github, since it can be done easily through its graphical interface.

However, reviewing all the PRs can take quite a long time, and most successful projects have several PRs in flight. These PRs and/or the proprietary bug tracking system are commonly used as a platform for comments and discussion between developers.

The code itself is not lost if Github goes down (except in one specific situation, seen below), but the peer-review work materialized in the PRs and in the bug tracking system is lost. Let’s remember that the PR mechanism lets you clone and modify projects and then generate PRs directly from Github’s proprietary web interface, without downloading a single line of code to your computer. In this particular case, if Github goes down, all the code and work in progress is lost.

Some also use Github as a bookmarking service, following their favorite projects’ activity through the Watch function. This kind of technology watch would also be lost if Github went down.

Debian, one of the main Free Software projects with at least a thousand official contributors

2.4 Uniformization

The Free Software community walks a tightrope between the normalization needed for easier interoperability between its products and an attraction to novelty, driven by a strong need to differentiate from what already exists.

Github popularized the use of Git, a great tool now used across various sectors far from its original field of programming. Step by step, Git has become so prominent that it’s almost impossible to even think of another source control manager, even though excellent, if less popular, alternatives such as Mercurial exist.

A new Free Software project is now a Git repository on Github with a README.md as a quick description. All other solutions are ostracized, in the sense that few if any potential contributors will ever notice such projects. It seems very difficult now to ask potential contributors to learn a new source control manager AND a new forge for every project they want to contribute to, even though this was a basic requirement a few years ago.

It’s quite sad, because Github, by offering its users one particular experience, cuts them off from a whole realm of possibilities. Maybe Github is one of the best web version control platforms. But its dominance doesn’t leave room for new competitors to grow, and it lets Github initiate newcomers to development with a narrow function set that is largely unrelated to the strengths of the Git tool itself.

3. Centralization, uniformization, proprietary software… What’s next? Laziness?

The fight against centralization is a core part of the Free Software ideology, since centralization strengthens the power of those who manage it, and who through it control those who are managed by it. Allergy to uniformization, born in reaction to the major software companies and their wish to impose a closed, commercial software world, was for a long time the main fuel for the thirst for innovation and the development of intelligent alternatives. As we said above, part of the Free Software community was built as a reaction to proprietary software and the threat it represents. The other part, without hoping for its disappearance, still chose a development model opposed to proprietary software, at least in the beginning, although there are now more and more bridges between the two.

The Github effect is a morbid one because of its consequences: centralization, uniformization, and the use of proprietary software, at least as the bug tracking system. But some years ago, the Dear Github buzz revealed one more side effect, one I had never thought about: laziness. For those who don’t know what it was about, this letter was a complaint from spokespersons of several Free Software projects demanding that the Github team finally implement, after years of polite asking, some new functions.

Since when do Free Software projects facing a roadblock beg for clemency instead of building the path they need themselves? When Torvalds ran into the Bitkeeper problem and the Linux kernel development team could no longer use its revision control software, he developed Git. Not being able to use a tool, or a tool lacking functions, is the primary motivation to seek alternative solutions and, as such, is at the heart of the Free Software movement. Every Free Software community member able to code should have this reflex. You don’t like what Github offers? Switch to Gitlab. You don’t like Gitlab? Improve it or build your own solution.

The Gitlab logo

Let’s be crystal clear: I’ve never said that every blocked Free Software developer should code his or her own alternative. We all have our own priorities, and some of us even like our beauty sleep, me included. But seeing that this open letter to Github has 1,340 names attached to it, among them spokespersons for major Free Software projects, shows me that the need, willpower and strength to code a replacement are there. Maybe that replacement will be born from this letter; that would be the best outcome of this buzz.

In the end, Github usage is just another example of the massification of Internet usage. Just as Internet users flock to massively centralized social networks such as Facebook or Twitter, developers are following the same path with Github. Even if a large fraction of developers realize the threat posed by this centralized and proprietary organization, the whole community is following the trend toward centralization and uniformization. The Github service is useful, free or reasonably priced (depending on the functions you need), easy to use, and up most of the time. Why would we try anything else? Maybe because others are using us while we savor the convenience? The Free Software community seems quite sleepy to me.

The lion enjoying the hearth warm

About Me

Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker.

Follow me on social networks

Translated from French by Stéphanie Chaptal. Original article written in 2015.

Categories: FLOSS Project Planets

Christopher Allan Webber: DRM will unravel the Web

GNU Planet! - Mon, 2017-09-18 15:30

I'm a web standards author and I participate in the W3C. I am co-editor of the ActivityPub protocol, participate in a few other community groups and working groups, and I consider it an honor to have been able to participate in the W3C process. What I am going to write here though represents me and my feelings alone. In a sense though, that makes this even more painful. This is a blogpost I don't have time to write, but here I am writing it; I am emotionally forced to push forward on this topic. The W3C has allowed DRM to move forward on the web through the EME specification (which is, to paraphrase Danny O'Brien from the EFF, a "DRM shaped hole where nothing else but DRM fits"). This threatens to unravel the web as we know it. How could this happen? How did we get here?

Like many of my generation, I grew up on the web, both as a citizen of this world and as a developer. "Web development", in one way or another, has principally been my work for my adult life, and how I have learned to be a programmer. The web is an enormous, astounding effort of many, many participants. Of course, Tim Berners-Lee is credited for much of it, and deserves much of this credit. I've had the pleasure of meeting Tim on a couple of occasions; when you meet Tim it's clear how deeply he cares about the web. Tim speaks quickly, as though he can't wait to get out the ideas that are so important to him, to try to help you understand how wonderful and exciting this system it is that we can build together. Then, as soon as he's done talking, he returns to his computer and gets to hacking on whatever software he's building to advance the web. You don't see this dedication to "keep your hands dirty" in the gears of the system very often, and it's a trait I admire. So it's very hard to reconcile that vision of Tim with someone who would intentionally unravel their own work... yet by allowing the W3C to approve DRM/EME, I believe that's what has happened.

I had an opportunity to tell Tim what I think about DRM and EME on the web, and unfortunately I blew it. At TPAC (W3C's big conference/gathering of the standards minds) last year, there was a protest against DRM outside. I was too busy to take part, but I did talk to a friend who is close to Tim and was frustrated about the protests happening outside. After I expressed that I sympathized with the protestors (and that I had even indeed protested myself in Boston), I explained my position to my friend. Apparently I was convincing enough that they encouraged me to talk to Tim and offer my perspective; they offered to flag Tim down for a chat. In fact Tim and I did speak over lunch, but -- although we had met in person before -- it was my first time talking to Tim one-on-one, and I was embarrassed for that first interaction to be me talking about DRM, which I was afraid was a sore subject for him. Instead we had a very pleasant conversation about the work I was doing on ActivityPub and some related friends' work on other standards (such as Linked Data Notifications, etc). It was a good conversation, but when it was over I had an enormous feeling of regret that has been on the back of my mind since.

Here then, is what I wish I had said.

Tim, I have read your article on why the W3C is supporting EME, and I know you have thought about it a great deal. I think you believe you are doing what is right for the web, but I believe you are making an enormous miscalculation. You have fought long and hard to build the web into the system it is... unfortunately, I think DRM threatens to undo all that work so thoroughly that allowing the W3C to effectively green-light DRM for the web will be, looking back on your life, your greatest regret.

You and I both know the dangers of DRM: it creates content that is illegal to operate on using any of the tooling you or I will ever be able to write. The power of DRM is not in its technology but in the surrounding laws; in the United States through the DMCA it is a criminal offense to inspect how DRM systems work or to talk about these vulnerabilities. DRM is also something that clearly cannot itself be implemented as a standard; it relies on proprietary secrecy in order to be able to function. Instead, EME defines a DRM-shaped hole, but we all know what goes into that hole... unfortunately, there's no way for you or I to build an open and interoperable system that can fit in that EME hole, because DRM is antithetical to an interoperable, open web.

I think, from reading your article, that you believe that DRM will be safely contained to just "premium movies", and so on. Perhaps if this were true, DRM would still be serious but not as enormous of a threat as I believe it is. In fact, we already know that DRM is being used by companies like John Deere to say that you don't even own your own tractor, car, etc. If DRM can apply to tractors, surely it will apply to more than just movies.

Indeed, there's good reason to believe that some companies will want to apply DRM to every layer of the web. Since the web has become a full-on "application delivery system", of course the same companies that apply DRM to software will want to apply DRM to their web software. The web has traditionally been a book which encourages being opened; I learned much of how to program on the web through that venerable "view source" right-click menu item of web browsers. However I fully expect with EME that we will see application authors begin to lock down HTML, CSS, JavaScript, and every other bit of their web applications with DRM. (I suppose in a sense this is already happening with JavaScript obfuscation, etc., but the web itself was at least a system of open standards where anyone could build an implementation and anyone could copy around files... with EME, this is no longer the case.) Look at the prevalence of DRM in proprietary applications elsewhere... once the option of a W3C-endorsed DRM route exists, do you think these same application developers will not reach for it? But I think if you develop the web with the vision of it being humanity's greatest and most empowering knowledge system, you must be against this, because if enough of the web moves over to this model the assumptions and properties of the web as we've known it, as an open graph to free the world, cannot be upheld. I also know the true direction you'd like the web to go, one of linked data systems (of which ActivityPub is somewhat quietly one). Do you think such a world will be possible to build with DRM? I for one do not see how it is possible, but I'm afraid that's the path down which we are headed.

I'm sure you've thought of these things too, so what could be your reason for deciding to go ahead with supporting DRM anyway? My suspicion is it's two things contributing to this:

  1. Fear that the big players will pick up their ball and leave. I suspect there's fear of another WHATWG, with the big players simply walking away from the W3C.
  2. Most especially, and related to the above, I suspect the funding and membership structure of the W3C is having a large impact on this. Funding structures tend to have a large impact on decision making, as a kind of Conway's Law effect. W3C is reliant on its "thin gruel" of funding from member organizations (which means that large players tend to have a larger say in how the web is built today).

I suspect this is most of all what's driving the support for DRM within the W3C. However, I know a few W3C staff members who are clearly not excited about DRM, and two who have quit the organization over it, so it's not that EME is a technology that internally brings excitement to the organization.

I suppose this is the point where I diverge from what I could have said in the past, back when it might have served as an appeal not to allow the W3C to endorse EME. Unfortunately, today EME made it to Recommendation. At the very least, I think the W3C could have gone forward with the covenant proposed by the EFF, but did not. This is an enormous disappointment.

What do we do now? I think the best we can do at this point, as individual developers and users, is speak out against DRM and refuse to participate in it.

And Tim, if you're listening, perhaps there's no chance now to stop EME from becoming a Recommendation. But your voice can still carry weight. I encourage you to join in speaking out against the threat DRM brings to unravel the web.

Perhaps if we speak loud enough, and push hard enough, we can still save the web we love. But today is a sad day, and from here I'm afraid it is going to be an uphill battle.

EDIT: If you haven't yet read Cory Doctorow / the EFF's open letter to the W3C, you should.

Categories: FLOSS Project Planets

Bay Area Drupal Camp: Summit Registration is Now Open!

Planet Drupal - Mon, 2017-09-18 13:54
Summit Registration is Now Open!

Grace Lovelace - Mon, 09/18/2017 - 10:54am

BADCamp Drupal Summits

Summits are one-day events focused around specific topics and areas of practice that gather people in specific industries or with specific skills to dive deep into the issues that matter and collaborate freely.

 

Sign Up for a Summit Today!

 

Nonprofit Summit (Wed)

The BADCamp Nonprofit Summit (NPS) is back in Berkeley for 2017 with even more opportunities for nonprofits and developers to collaborate, learn, and grow! We’ve got a full day of case studies, presentations, and small-group breakout sessions, all led by nonprofit tech experts. Come discover new tools and strategies, learn how to use them, and make contacts with other members of the Drupal nonprofit community!

Higher Ed Summit (Thurs)

The Higher Education Summit is a unique opportunity for site owners, IT managers, developers, content creators, and agencies dedicated to supporting and advancing the use of Drupal in academia to share, learn, and strengthen our community of practice. Through panels, talks, and ample breakout sessions, participants share and learn from one another’s victories and challenges, and build momentum in cross-institutional initiatives. This year's theme is "Drupal behind the login": using Drupal as a collaboration tool (intranets, research sites, data sharing, administrative tasks, portals, etc.).

Front End Summit (Thurs)

Perhaps more than any other discipline, front-end development has been rapidly evolving over the past several years to accommodate an ever-changing variety of workflows, toolsets, best practices, and technologies. As BADCamp turns 10, let us acknowledge the past, assess current trends, and discuss the future of front-end development at the Frontend Summit.

Backdrop Summit (Wed)

Backdrop CMS is a content management system based on the Drupal you know and love, but with a new mission that aims to decrease the cost of long-term website ownership. The goal of this Drupal fork is to empower more people to do more things on the web. At the Backdrop Summit you'll learn about the Backdrop software and its differences from the Drupal CMS.

DevOps Summit (Thurs)

Want to accelerate development at your organization? The DevOps Summit is about inspiring people (aka YOU) with new processes and tools to help transform ideas into working web applications. We’ll be discussing topics like automated testing, continuous integration, local development, ChatOps, and more. Along the way you’ll have a chance to pick the brains of leading DevOps professionals in the Drupal community. Anyone who is looking to work with happier development teams while saving time and money should attend.

  Do you think BADCamp is awesome?

Would you have been willing to pay for your ticket? If so, then you can give back to the camp by purchasing an individual sponsorship at the level most comfortable for you. As our thanks, we will be handing out some awesome BADCamp swag.

We need your help!

BADCamp is 100% volunteer driven and we need your hands! We need stout hearts to volunteer and help set up, tear down, give directions and so much more!  If you are local and can help us, please sign up on our Volunteer Form.

Sponsors

A BIG thanks to our sponsors who have committed early. Without them this magical event wouldn’t be possible. Interested in sponsoring BADCamp? Contact matt@badcamp.net or anne@badcamp.net


Thank you to Pantheon & Acquia for sponsoring at the Core level to help keep BADCamp free and awesome.

Drupal Planet
Categories: FLOSS Project Planets

Colan Schwartz: Client-side encryption options now available in Drupal

Planet Drupal - Mon, 2017-09-18 13:24
After the success of last year's GSOC project with Drupal, I thought it would be a great idea to see if we could take what we did there (server-side encryption) and do something similar on the client side. The benefit of this approach is that unencrypted content/data is never seen by the hosting server, so it's not necessary to trust it to the same degree. This has been a requested feature for some time, and has become very popular within the instant-messaging space.

I posted the idea, but wasn't sure how much traction there would be given the additional complexity. Before long, two students, Marcin Czarnecki and Tameesh Biswas, expressed interest in the project given their backgrounds in cryptography. They both wrote very good proposals, which we in the Drupal community accepted.

With the help of Adam Bergstein (my co-mentor from last year) and Talha Paracha (last year's student), we were able to mentor both students in working towards completing their projects, even with the added complexity. Unlike last year, users' passwords couldn't be used to encrypt anything, because the site has access to them. An out-of-band mechanism, public-key cryptography, was necessary to perform the encryption: the keys needed to be in the hands of the users themselves instead of being handled implicitly by the server.
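
To illustrate the underlying idea, here is a minimal sketch of public-key encryption in Python using the cryptography package. This is only an illustration of the general mechanism, not the API of the new modules: content encrypted with a user's public key can be read only with the matching private key, which never has to leave the user's machine.

# Sketch of the public-key idea behind client-side encryption.
# Illustrative only; this is not the API of the Drupal modules.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generated on the user's machine; only the public key is ever shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None)

# Anyone holding the public key can encrypt; the server only ever
# sees the resulting ciphertext.
ciphertext = public_key.encrypt(b'unpublished draft', oaep)

# Only the holder of the private key can read it back.
assert private_key.decrypt(ciphertext, oaep) == b'unpublished draft'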

I'm delighted to report that both students passed. The community can now take their projects and build upon them. Please review the new Drupal modules at Client-side content encryption (overview) and Client Side File Crypto (overview). If there are any issues, please open tickets in the respective queues.

This article, Client-side encryption options now available in Drupal, appeared first on the Colan Schwartz Consulting Services blog.

Categories: FLOSS Project Planets

NumFOCUS: The Econ-ARK joins NumFOCUS Sponsored Projects

Planet Python - Mon, 2017-09-18 13:06
​NumFOCUS is pleased to announce the addition of the Econ-ARK to our fiscally sponsored projects. As a complement to the thriving QuantEcon project (also a NumFOCUS sponsored project), the Econ-ARK is creating an open-source resource containing the tools needed to understand how diversity across economic agents (in preferences, circumstances, knowledge, etc) leads to richer and […]
Categories: FLOSS Project Planets

Zato Blog: Building a protocol-agnostic API for SMS text messaging with Zato and Twilio

Planet Python - Mon, 2017-09-18 12:48

This blog post discusses an integration scenario that showcases a new feature in Zato 3.0 - SMS texting with Twilio.

Use-case

Suppose you'd like to send text messages that originate from multiple sources, from multiple systems communicating natively over different protocols, such as the most commonly used ones:

  • REST
  • AMQP
  • WebSockets
  • FTP
  • WebSphere MQ

Naturally, the list could grow but the main points are that:

  • Ubiquitous as it is, HTTP is far from being the only protocol used in more complex environments
  • You don't want to distribute your Twilio credentials to each backend or frontend system that wants to send texts

The solution is to route all the messages through a dedicated Zato service that will:

  • Offer to each system communication in their own native protocol
  • Be the only place where credentials are kept

Let's say that the system that we are building will send text messages informing customers about the availability of their order. For simplicity, only REST and AMQP will be shown below but the same principle will hold for other protocols that Zato supports.

Code

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import Service

class SMSAdapter(Service):
    """ Sends template-based text messages to users given on input.
    """
    name = 'sms.adapter'

    class SimpleIO:
        input_required = ('user_name', 'order_no')

    def get_phone_number(self, user_name):
        """ Returns a phone number by user_name.
        In practice, this would be read from a database or cache.
        """
        users = {
            'mary.major': '+15550101',
            'john.doe': '+15550102',
        }
        return users[user_name]

    def handle(self):

        # In a real system there would be more templates,
        # perhaps in multiple natural languages, and they would be stored
        # in files on disk instead of directly in the body of a service.
        template = "Hello, we are happy to let you know that " \
            "your order #{order_no} is ready for pickup."

        # Get the phone number for this user
        phone_number = self.get_phone_number(self.request.input.user_name)

        # Convert the template to an actual message
        msg = template.format(order_no=self.request.input.order_no)

        # Get a connection to Twilio
        sms = self.out.sms.twilio.get('My SMS')

        # Send the message
        sms.conn.send(msg, to=phone_number)

In reality, the code would contain more logic, for instance to look up users in an SQL or Cassandra database or to send messages based on different templates, but to illustrate the point, the service above will suffice.

Note that the service uses SimpleIO which means it can be used with both JSON and XML even if only the former is used in the example.

This is the only piece of code needed and the rest is simply configuration in web-admin described below.

Channel configuration

The service needs to be mounted on channels - in this scenario it will be HTTP/REST and AMQP ones but could be any other as required in a given integration project.

In all cases, however, no changes to the code are needed in order to support additional protocols - assigning a service to a channel is merely a matter of additional configuration without any coding.

Twilio configuration

Fill out the form in Connections -> SMS -> Twilio to get a new connection to Twilio SMS messaging facilities. Account SID and token are the same values that you are given by Twilio for your account.

Default from is useful if you typically send messages from the same number or nickname. On the other hand, default to is handy if the recipient is usually the same for all messages sent. Both of these values can be always overridden on a per call basis.

Invocation samples

We can now invoke the service from both curl and RabbitMQ's GUI:
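
For instance, invoking the REST channel from Python could look like this; the host, port and channel path below are assumptions, so substitute the address your channel is actually mounted on:

# A sketch of invoking the REST channel. The URL is an assumption;
# use your own server address and the path the channel is mounted on.
import requests

response = requests.post(
    'http://localhost:11223/sms/adapter',
    json={'user_name': 'mary.major', 'order_no': '1234'})

print(response.text)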

Result

In either case, the result is a text message delivered to the intended recipient :-)

There is more!

It is frequently very convenient to test connections without actually having to develop any code - this is why SMS Twilio connections offer a form to do exactly that. Just click on 'Send a message', fill in your message, click Submit and you're done!

Summary

Authoring API services, including ones that send text messages with Twilio is an easy matter with Zato.

Multiple input protocols are supported out of the box, and you can rest assured that API keys and other credentials never sprawl throughout the infrastructure; everything is contained in a single place.

Categories: FLOSS Project Planets

DataCamp: How Not To Plot Hurricane Predictions

Planet Python - Mon, 2017-09-18 12:32

Visualizations help us make sense of the world and allow us to convey large amounts of complex information, data and predictions in a concise form. Expert predictions that need to be conveyed to non-expert audiences, whether they be the path of a hurricane or the outcome of an election, always contain a degree of uncertainty. If this uncertainty is not conveyed in the relevant visualizations, the results can be misleading and even dangerous.

Here, we explore the role of data visualization in plotting the predicted paths of hurricanes. We explore different visual methods to convey the uncertainty of expert predictions and the impact on layperson interpretation. We connect this to a broader discussion of best practices with respect to how news media outlets report on both expert models and scientific results on topics important to the population at large.

No Spaghetti Plots?

We have recently seen the damage wreaked by tropical storm systems in the Americas. News outlets such as the New York Times have conveyed a great deal of what has been going on using interactive visualizations for Hurricanes Harvey and Irma, for example. Visualizations include geographical visualisation of percentage of people without electricity, amount of rainfall, amount of damage and number of people in shelters, among many other things.

One particular type of plot has understandably been coming up recently and raising controversy: how to plot the predicted path of a hurricane, say, over the next 72 hours. There are several ways to visualize predicted paths, each way with its own pitfalls and misconceptions. Recently, we even saw an article in Ars Technica called Please, please stop sharing spaghetti plots of hurricane models, directed at Nate Silver and fivethirtyeight.

In what follows, I'll compare three common ways, explore their pros and cons and make suggestions for further types of plots. I'll also delve into why these types are important, which will help us decide which visual methods and techniques are most appropriate.

Disclaimer: I am definitively a non-expert in metereological matters and hurricane forecasting. But I have thought a lot about visual methods to convey data, predictions and models. I welcome and actively encourage the feedback of experts, along with that of others.

Visualizing Predicted Hurricane Paths

There are three common ways of creating visualizations for predicted hurricane paths. Before talking about them, I want you to look at each one and consider what information you can get from it. Do your best to interpret what each is trying to tell you, in turn, and then we'll delve into their intentions, along with their pros and cons:

The Cone of Uncertainty

From the National Hurricane Center

Spaghetti Plots (Type I)

From South Florida Water Management District via fivethirtyeight

Spaghetti Plots (Type II)

From The New York Times. Surrounding text tells us 'One of the best hurricane forecasting systems is a model developed by an independent intergovernmental organization in Europe, according to Jeff Masters, a founder of the Weather Underground. The system produces 52 distinct forecasts of the storm’s path, each represented by a line [above].'

Interpretation and Impact of Visualizations of Hurricanes' Predicted Paths The Cone of Uncertainty

The cone of uncertainty, a tool used by the National Hurricane Center (NHC) and communicated by many news outlets, shows us the most likely path of the hurricane over the next five days, given by the black dots in the cone. It also shows how certain they are of this path. As time goes on, the prediction is less certain and this is captured by the cone, in that there is an approximately 66.6% chance that the centre of the hurricane will fall in the bounds of the cone.

Was this apparent from the plot itself?

It wasn't to me initially; I gathered this information from the plot itself, the NHC's 'about the cone of uncertainty' page and weather.com's demystification of the cone post. There are three more salient points, all of which we'll return to:

  • A common initial misconception is that the widening of the cone over time means that the storm itself will grow;
  • The plot contains no information about the size of the storm, only about the potential path of its centre, and so is of limited use in telling us where to expect, for example, hurricane-force winds;
  • There is essential information contained in the text that accompanies the visualization, as well as the visualization itself, such as the note placed prominently at the top, '[t]he cone contains the probable path of the storm center but does not show the size of the storm...'; when judging the efficacy of a data visualization, we'll need to take into consideration all its properties, including text (and whether we can actually expect people to read it!); note that interactivity is a property that these visualizations do not have (but maybe should).
Spaghetti Plots (Type I)

Type I spaghetti plots show several predictions in one plot. On any given Type I spaghetti plot, the visualized trajectories are predictions from models from different agencies (the NHC, the National Oceanic and Atmospheric Administration and the UK Met Office, for example). They are useful in that, like the cone of uncertainty, they inform us of the general region that may be in the hurricane's path. They are woefully unhelpful, and actually misleading, in that they weight each model (or prediction) equally.

In the Type I spaghetti plot above, there are predictions with varying degrees of uncertainty from agencies whose previous predictions have enjoyed varying degrees of success. So some paths are more likely than others, given what we currently know, yet this information is absent from the plot. Even more alarmingly, some of the paths are barely even predictions. Take the black dotted line XTRP, which is a straight-line extrapolation of the storm's current trajectory. This is not even a model. Eric Berger goes into more detail in this Ars Technica article.

Essentially, Type I spaghetti plots provide an ensemble model (compare with aggregate polling). Yet a key aspect of ensemble models is that each constituent model is given an appropriate weight, and these weights need to be communicated in any data visualization. We'll soon see how to do this using a variation on Type I.

Spaghetti Plots (Type II)

Type II spaghetti plots show many, say 50, different realizations of a single model. The point is that if we simulate (run) a model several times, it will give a different trajectory each time. Why? Nate Cohn put it well in The Upshot:

"It’s really tough to forecast exactly when a storm will make a turn. Even a 15- or 20-mile difference in when it turns north could change whether Miami is hit by the eye wall, the fierce ring of thunderstorms that include the storm’s strongest winds and surround the calmer eye."

These are perhaps my favourite of the three for several reasons:

  • By simulating multiple runs of the model, they provide an indication of the uncertainty underlying each model;
  • They give a picture of relative likelihood of the storm centre going through any given location. Put simply, if more of the plotted trajectories go through location A than through location B, then under the current model it is more likely that the centre of the storm will go through location A;
  • They are unlikely to be misinterpreted (at least compared to the cone of uncertainty and Type I spaghetti plots). All the words required on the visualization are 'Each line represents one forecast of Irma's path'.

One con of Type II is that they are not representative of multiple models but, as we'll see, this can be altered by combining them with Type I spaghetti plots. Another con is that they, like the others, only communicate the path of the centre of the storm and say nothing about its size. Soon we'll also see how we can remedy this. Note that the distinction between Type I and Type II spaghetti plots is not one that I have found in the literature, but one that I created because these plots have such different interpretations and effects.

For the time being, however, note that we've been discussing the efficacy of certain types of plots without explicitly discussing their purpose, that is, why we need them at all. Before going any further, let's step back a bit and try to answer the question 'What is the purpose of visualizing the predicted path of a hurricane?' Performing such ostensibly naive tasks is often illuminating.

Why Plot Predicted Paths of Hurricanes?

Why are we trying to convey the predicted path of a tropical storm? I'll provide several answers to this in a minute.

But first, let me say what these visualizations are not intended for. We are not using these visualizations to help people decide whether or not to evacuate their homes or towns. Ordering or advising evacuation is something that is done by local authorities, after repeated consultation with experts, scientists, modelers and other key stakeholders.

The major point of this type of visualization is to allow the general populace to be as well-informed as possible about the possible paths of the hurricane and allow them to prepare for the worst if there's a chance that where they are or will be is in the path of destruction. It is not to unduly scare people. As weather.com states with respect to the function of the cone of uncertainty, '[e]ach tropical system is given a forecast cone to help the public better understand where it's headed' and '[t]he cone is designed to show increasing forecast uncertainty over time.'

To this end, I think that an important property would be for a reader to be able to look at it and say 'it is very likely/likely/50% possible/not likely/very unlikely' that my house (for example) will be significantly damaged by the hurricane.

Even better, to be able to say "There's a 30-40% chance, given the current state-of-the-art modeling, that my house will be significantly damaged".

Then we have a hierarchy of what we want our visualization to communicate:

  • At a bare minimum, we want civilians to be aware of the possible paths of the hurricane.
  • Then we would like civilians to be able to say whether it is very likely, likely, unlikely or very unlikely that their house, for example, is in the path.
  • Ideally, a civilian would look at the visualization and be able to read off quantitatively what the probability (or range of probabilities) is of their house being in the hurricane's path.

On top of this, we want our visualizations to be neither misleading nor easy to misinterpret.

The Cone of Uncertainty versus Spaghetti Plots

All three methods perform the minimum required function, to alert civilians to the possible paths of the hurricane. The cone of uncertainty does a pretty good job at allowing a civilian to say how likely it is that a hurricane goes through a particular location (within the cone, it's about two-thirds likely). At least qualitatively, Type II spaghetti plots also do a good job here, as described above, 'if more of the trajectories go through location A than through location B, then under the current model it is more likely that the centre of the storm will go through location A'.

If you plot 50 trajectories, you get a sense of where the centre of the storm will likely be; that is, if around half of the trajectories go through a location, then there's an approximately 50% chance (according to our model) that the centre of the storm will hit that location. None of these methods yet performs the third function, and we'll see below how combining Type I and Type II spaghetti plots will allow us to do this.

The major problem with the cone of uncertainty and Type I spaghetti plots is that the former is easy to misinterpret (many people read the widening cone as a growing storm and do not appreciate the role of uncertainty) and the latter are misleading (they make all models look equally believable). These plots therefore fail the basic requirement that 'we want our visualizations to be neither misleading nor easy to misinterpret.'

Best Practices for Visualizing Hurricane Prediction Paths

Type II spaghetti plots are the most descriptive and the least open to misinterpretation. But they do fail at presenting the results of all models. That is, they don't aggregate over multiple models like we saw in Type I.

So what if we combined Type I and Type II spaghetti plots?

To answer this, I did a small experiment using python, folium and numpy. You can find all the code here.

I first took one of the NHC's predicted paths for Hurricane Irma from last week, added some random noise and plotted 50 trajectories. Note that, once again, I am a non-expert in all matters meteorological. The noise that I generated and added to the predicted signal/path was not based on any models and, in a real use case, would come from the models themselves (if you're interested, I used Gaussian noise). For the record, I also found it difficult to find data concerning any of the predicted paths reported in the media. The data I finally used I found here.
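As a rough sketch of that simulation (the waypoints and noise scale below are made up for illustration, not actual forecast data), the core of the code looks something like this:

import folium
import numpy as np

# Illustrative (lat, lon) waypoints along one predicted path.
base_path = np.array([
    [16.7, -53.7], [18.0, -57.0], [19.5, -60.5],
    [21.0, -64.0], [22.5, -67.5], [24.0, -71.0],
])

n_trajectories = 50
m = folium.Map(location=list(base_path[0]), zoom_start=4)

# Noise that grows with forecast lead time, mimicking increasing uncertainty.
scale = np.linspace(0.1, 1.5, len(base_path)).reshape(-1, 1)

trajectories = []
for _ in range(n_trajectories):
    noisy = base_path + np.random.normal(0, scale, base_path.shape)
    trajectories.append(noisy)
    folium.PolyLine(noisy.tolist(), weight=1, opacity=0.4,
                    color='blue').add_to(m)

m.save('spaghetti_type2.html')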

Here's a simple Type II spaghetti plot with 50 trajectories:

But these are possible trajectories generated by a single model. What if we had multiple models from different agencies? Well, we can plot 50 trajectories from each:

One of the really cool aspects of Type II spaghetti plots is that, if we plot enough of them, each trajectory becomes indistinct and we begin to see a heatmap of where the centre of the hurricane is likely to be. All this means is that the more blue in a given region, the more likely it is for the path to go through there. Zoom in to check it out.
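Incidentally, if you would rather render this density explicitly than rely on overplotting, folium ships a HeatMap plugin that can draw the same points directly; this sketch reuses the trajectories list from the code above:

from folium.plugins import HeatMap

# Flatten every waypoint of every simulated trajectory into one list;
# denser regions render 'hotter' on the map.
all_points = [point for traj in trajectories for point in traj.tolist()]
HeatMap(all_points, radius=12).add_to(m)
m.save('heatmap.html')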

Moreover, if we believe that one model is more likely than another (if, for example, the experts who produced that model have produced far more accurate models previously), we can weight these models accordingly via, for example, transparency of the trajectories, as we do below. Note that weighting these models is a task for an expert and an essential part of this process of aggregate modeling.
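Here is a sketch of that weighting, extending the code above; the three models and their weights are entirely made up and would, in reality, be assigned by experts based on each model's track record:

# Hypothetical models with expert-assigned weights.
models = {
    'model_a': {'weight': 0.5, 'color': 'blue'},
    'model_b': {'weight': 0.3, 'color': 'green'},
    'model_c': {'weight': 0.2, 'color': 'red'},
}

m2 = folium.Map(location=list(base_path[0]), zoom_start=4)
for name, cfg in models.items():
    # In reality each model would have its own predicted path and noise.
    for _ in range(n_trajectories):
        noisy = base_path + np.random.normal(0, scale, base_path.shape)
        # More trusted models are drawn more opaquely.
        folium.PolyLine(noisy.tolist(), weight=1, color=cfg['color'],
                        opacity=0.8 * cfg['weight']).add_to(m2)
m2.save('weighted_spaghetti.html')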

The above solves the tasks required by the first two properties that we want our visualizations to have. To achieve the third (a reader being able to read off that it's, say, 30-40% likely for the centre of the hurricane to pass through a particular location), there are two solutions:

  • To alter the heatmap so that it moves between, say, red and blue, and include a key that says, for example, that red means a probability of greater than 90%;
  • To transform the heatmap into a contour map that shows regions in which the probability takes on certain values (a sketch of computing such per-cell probabilities follows this list).
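As a sketch of the second option (reusing numpy and the trajectories list from above; the grid edges are arbitrary), one can bin the simulated trajectories onto a grid and compute, per cell, the fraction of trajectories passing through it, then hand the result to a contour-plotting routine such as matplotlib's contourf:

def passage_probability(trajectories, lat_edges, lon_edges):
    # Fraction of simulated trajectories whose path crosses each grid cell.
    counts = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    for traj in trajectories:
        i = np.digitize(traj[:, 0], lat_edges) - 1
        j = np.digitize(traj[:, 1], lon_edges) - 1
        # Count each trajectory at most once per cell.
        for ci, cj in set(zip(i.tolist(), j.tolist())):
            if 0 <= ci < counts.shape[0] and 0 <= cj < counts.shape[1]:
                counts[ci, cj] += 1
    return counts / float(len(trajectories))

lat_edges = np.arange(14.0, 32.0, 1.0)   # a coarse 1-degree grid
lon_edges = np.arange(-75.0, -50.0, 1.0)
probs = passage_probability(trajectories, lat_edges, lon_edges)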

Also note that this will tell somebody the probability that a given location will be hit by the hurricane's centre. You could combine (well, convolve) this with information about the size of the hurricane to transform the heatmap into one showing the probability of a location being hit by hurricane-force winds. If you'd like to do this, go and hack around with the code that I wrote to generate the plots above (I plan to write a follow-up post doing this and walking through the code).

Visualizing Uncertainty and Data Journalism

What can we take away from this? We have explored several types of visualization methods for predicted hurricane paths, discussed the pros and cons of each and suggested a way forward for more informative and less misleading plots of such paths, plots that communicate not only the results but also the uncertainty around the models.

This is part of a broader conversation that we need to be having about reporting uncertainty in visualizations and data journalism in general. We need to actively participate in conversations about how experts report uncertainty to civilians via news media outlets. Here's a great piece from The Upshot demonstrating what the jobs report could look like due to statistical noise, even if jobs were steady. Here's another Upshot piece showing the role of noise and uncertainty in interpreting polls. I'm well aware that we need headlines to sell news, and of the role of click-bait in the modern news media landscape, but we need to communicate not merely results but also the uncertainty around those results, so as not to mislead the general public and, potentially, ourselves. Perhaps more importantly, the education system needs to shift and equip all civilians with the data literacy and statistical literacy needed to deal with this movement into the data-driven age. We can all contribute to this.

Categories: FLOSS Project Planets

Deeson: Where to find Deeson at DrupalCon Vienna 2017

Planet Drupal - Mon, 2017-09-18 12:06

It’s official: Deeson is in the top 30 of companies contributing to Drupal globally! As huge proponents of open source we’re proud to be playing a key role in supporting the health of the project, and this is testament to the hard work of our development team.

Next week we’re heading to DrupalCon Vienna 2017. We’re pleased to sponsor the Women in Drupal event again this year, and in the spirit of sharing what we’ve learned we’ll also be delivering several sessions throughout the event. Here’s a taste of what to expect:

Component driven front-end development

John Ennew. 26th September, 2.15pm in Lehar 2.

Pages are dead - long live components.

With a component based approach your development team can maintain a catalogue of templates independent of the backend CMS.

When the backend work starts, these components will then be integrated into Drupal. This talk will describe a method for doing this which avoids complex themes and copying pieces of template code out of the front-end prototypes.

This talk will cover:

  • The general approach to component based development
  • A method for developing components independent of the backend system which will be used
  • How to integrate the components with Drupal 8
  • An overview of the advantages of this approach
Building social websites with Group and Open Social

Kristiaan Van den Eynde. 27th September, 10.45am in Lehar 1.

A lot of Drupal sites are run by only a handful of people. A few power users receive the rights to administer other user accounts, some others can post and publish content and everyone else can just view content and “use” the site. It’s when this scenario doesn’t suit your needs that you might want to have a look at the Group module.

Group allows you to give people permissions similar to those above, but only for smaller subsections of a website. Say you run a school website and you want students to be able to only see the courses that are available for them to enroll in, but nothing else. Or you want to run a social network where users can post content, but only within their sandboxed area on the website. Group's got you covered.

This session will be a brief description of the Group module by its author Kristiaan Van den Eynde (Deeson) and explain its key concepts. We will demo how to configure it and then show you how Group is used in the wonderful Open Social distribution.

Joining us for the second part of the presentation, Jochem van Nieuwenhuijsen (GoalGorilla / Open Social) will explain how Group enabled a team of talented developers to build a social network using Drupal 8. He will list some of the challenges and show you some of the cool stuff they built on top of the Group module.

This session is suitable for developers with some experience with Drupal 8 site building, but most of the presentation should be easy to digest for even the most junior site builders.

Birds of a Feather sessions

Drupal recently announced that Vienna will be the last European conference for the foreseeable future, and that they will host BoFs at the event for the community to discuss the future. 

If you’re not familiar with the BoF format, DrupalCon describes them as “informal gatherings of like-minded individuals who wish to discuss a certain topic without a pre-planned agenda”. This year, Deeson team members are delivering five BoF sessions:

Facilitating happy, high performing distributed teams

Tim Deeson. 26th September, 10.45am in Galerie 11-12. 

An opportunity to share tips and tools for what you've found works or problems you want help with.

At Deeson we've found a mix of tools, processes and relationship building is key. And that sharing what works for us and learning from others is invaluable.

  • Are there tools that you find that really make a difference?
  • Any team events or activities that help people get to know each other?
  • How do you spot if someone isn't happy or engaged?
  • What methods work or don't work for different personality types?
  • Is always-on Slack a blessing or a curse?
  • Do in-person events matter or can everything be done online?
  • Do you have a structured way of supporting the team to get know each other well?

Bookmark this session

Agile and agencies

John Ennew. 27th September, 1pm in Galerie 15-16. 

There are many ways to run an Agile project.

Much of the written support for working with Agile assumes an internal team, which doesn't always map to how the client and agency relationship works.

Agencies have a variety of mechanisms for running an Agile project from simply embedding the ceremonies of Agile to actually ensuring the project team, client and contract are Agile from the start. 

Come along and share your experiences (highs and lows), your tips and your best practices for making Agile work in an Agency.

Bookmark this session

Creating technical excellence in the tech team

John Ennew. 28th September, 2.15pm in Galerie 11-12.

Are you a technical lead?

Come and meet like-minded people to share your experiences in managing your team members and ensuring technical excellence.

Bookmark this session

The road ahead for Group: New features and future development

Kristiaan Van den Eynde. 27th September, 3.45pm in Galerie 15-16.

This BoF is intended for site builders who are actively using Group for Drupal 8 and are wondering what's currently planned for development or for site builders who think there is a key feature missing from Group 8 right now. The goal is to either learn about what's coming or to actually add something to the roadmap, provided it would be useful to a larger audience.

Bookmark this session

Reinventing the entity access layer (Node access for all entities)

Kristiaan Van den Eynde. 28th September, 12pm in Galerie 13-14.

This BoF is intended for those involved in https://www.drupal.org/node/777578 and all those who wish to participate by writing a proof of concept or by brainstorming over possible approaches.

Bookmark this session

Heading to DrupalCon Vienna 2017? We’d love to meet you! Drop us a line on Twitter if you want to chat to us or email hello@deeson.co.uk.

Categories: FLOSS Project Planets

Python Software Foundation: Improving Python and Expanding Access: How the PSF Uses Your Donation

Planet Python - Mon, 2017-09-18 12:02

The PSF is excited to announce its first ever membership drive beginning on September 18th!  Our goal for this inaugural drive is to raise $4,000.00 USD in donations and sign up 3,000 new members in 30 days.

If you’ve never donated to the PSF,  you've let your membership lapse, or you've thought about becoming a Supporting Member - here is your chance to make a difference.

Join the PSF as a Supporting Member or Donate to the PSF
You can donate as an individual or join the PSF as a Supporting Member. Supporting members pay $99.00 USD per year to help sustain the Foundation and support the Python community. Supporting members are also eligible to vote for candidates for the PSF Board of Directors, changes in the PSF bylaws, and other matters related to the infrastructure of the foundation.

To become a supporting member or to make a donation, click on the widget here and follow the instructions at the bottom of the page.

We know many of you already make a great effort to support us; you volunteer your time to help us keep our website going, you join working groups to help with marketing, sponsorship, grant requests, trademarks, Python education, and packaging. Even more, you help the PSF put on PyCon US, a conference we couldn’t do without the help of our volunteers. The collective efforts and contributions of our volunteers help drive our work. We will forever be grateful to the people who step forward and ask, “What can I do to help advance open source technology related to Python?”
We understand that not everyone has the time to volunteer, but perhaps you’re in a position to help financially.
We’re asking those who are able, to donate money to support sprints, meet ups, and community events. Donations support Python documentation, fiscal sponsorships, software development, and community projects. They help fund the critical tools programmers use every day.

If you're not in a position to contribute financially, that's ok. Basic membership is free and we welcome anyone who would like to join at this level. Register here to create your member account, log back in, then complete the form to become a basic member.

What does the PSF do?
  • We fund great projects. So far this year we have approved over $200,000.00 USD in grants to over 140 events worldwide. We’re on track to surpass last year’s total of $265,000.00 USD in grants to 137 events in 45 different countries.

  • We organize and host PyCon US. This year’s event brought together 3,389 attendees from 41 countries, a new record for PyCon! Our sponsors’ support enabled us to award $89,000.00 USD in financial aid to 194 attendees.

  • We celebrate awesome Python contributors. Community Service Awards are given out quarterly, honoring individuals who support our mission. 

  • We implemented a trial Python Ambassador program that we hope to expand in the next year. This program provides funding for a dedicated Pythonista to travel locally to perform Python outreach. 

  • We provide fiscal sponsorship support for Python projects, where the PSF collects targeted donations and reimburses expenses on that project's behalf.

  • We support Python programmers worldwide by funding sprints and workshops that enable people to work on Python-related projects that advance the mission of the PSF. 


Here is what one of our sponsors has to say about why they contribute to the PSF:

“Work on stuff that matters is one of O’Reilly’s core principles, and we know how very much open source matters. The open source community spurs innovation, shares knowledge, encourages growth, and creates industries. The Python Software Foundation is a prime example of the power of open source, showing how focused, thoughtful, and consistent efforts can create a community whose impact extends far beyond meetups and lines of code. O’Reilly is proud to continue to sponsor this great foundation.”
-- Rachel Roumeliotis, Vice President at O’Reilly Media and Chair of OSCON

Lastly, if you’d like to share the news about the PSF’s Membership drive, please share a tweet via the tweet button here:




Or share a tweet with the following text:
Donation & Membership Drive @ThePSF. Help us raise $4K and register 3K new members in 30 days! http://bit.ly/2h3dxpb #idonatedtothepsf
We at the PSF want to thank you for all that you do. Your support is what makes the PSF possible.

Categories: FLOSS Project Planets

Michy Alice: Putting some of my Python knowledge to a good use: a Reddit reading bot!

Planet Python - Mon, 2017-09-18 11:26

One of the perks of knowing a programming language is that you can build your own tools and applications. Depending on what you need, it may even be a fast process, since you usually do not need to write production-grade code or detailed documentation (although both might still be helpful in the future).

I’ve gotten used to reading news on Reddit; however, it can sometimes be a bit time-consuming, since it tends to keep you wandering down each and every rabbit hole that pops up. This is fine if you are commuting and have some spare time to spend browsing the web, but sometimes I just need a quick glance at what’s new and relevant to my interests.

In order to automate this search process, I’ve written a bot that takes as input a list of subreddits, a list of keywords and flags and browses each subreddit looking for the given keywords.

If a keyword is found in either the body or the title of a post submitted in one of the selected subreddits, the post title and links are either printed to the console or saved to a file (in which case the file name must be supplied when starting the search).

The bot is written using praw.


What do I need to use the bot?

In order to use the bot you’ll need to set up an app using your Reddit account and save the client_id, client_secret, username and password in a file named config_data.py, which should be stored in the same folder as the reddit_browsing_bot_main.py script.
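As a rough sketch (the field names follow praw's Reddit class; the values are placeholders), config_data.py and the authentication step might look like this:

# config_data.py -- keep this file out of version control!
client_id = 'your_app_client_id'
client_secret = 'your_app_client_secret'
username = 'your_reddit_username'
password = 'your_reddit_password'

# In reddit_browsing_bot_main.py:
import praw
import config_data

reddit = praw.Reddit(client_id=config_data.client_id,
                     client_secret=config_data.client_secret,
                     username=config_data.username,
                     password=config_data.password,
                     user_agent='reddit_browsing_bot')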

How does the bot work?

The bot is designed to be a command line application and can be used either in a Linux terminal or in PowerShell if you are the Windows type ;)

Adopting a CLI was undoubtedly a bad choice if I wanted to make other people use the application, but in my case I am the end user, and I like command line tools, a lot.

For each subreddit entered, the bot checks whether that subreddit exists; if it doesn't, the subreddit is discarded. Then, within each subreddit, the bot searches the first -l posts and returns the posts that contain at least one keyword.

This is an example of use:

reddit_browsing_bot_main.py -s python -k pycon -l 80 -f new -o output.txt -v

In the example above I am searching the first 80 posts in the “new” section of the python subreddit for posts that mention pycon. The -o flag tells the program to output the results of the search to the output.txt file. The -v flag makes the program print the output to the console.

You can search in more subreddits and/or use more keywords; just separate each new subreddit/keyword with a comma. If you did not supply an output file, the program will just output the results to the console.

Type:

reddit_browsing_bot_main.py -h

for a help menu. Maybe in the future I’ll add some features, but for now this is pretty much it.

Is it ok to use the bot?

As far as I know, the bot is not violating any of the terms of Reddit’s API. Also, the API calls are already rate-limited by the praw module in order to comply with Reddit’s API limits. The bot is not downvoting nor upvoting any post; it just reads what is online.

Anyway, should you want to check the code yourself, it is available on my GitHub. I’ve also copied and pasted a gist below so that you can have a look at the code here:
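The gist itself is best viewed on GitHub, but as a rough, hypothetical sketch of the kind of search loop described above (function and parameter names are mine, not the actual gist's), the core logic looks something like this:

from prawcore.exceptions import NotFound

def search_subreddit(reddit, name, keywords, limit=80, section='new'):
    # Discard subreddits that don't exist.
    try:
        reddit.subreddits.search_by_name(name, exact=True)
    except NotFound:
        return
    # Pick the listing to browse: 'new', 'hot', 'top', ...
    listing = getattr(reddit.subreddit(name), section)(limit=limit)
    for post in listing:
        text = (post.title + ' ' + post.selftext).lower()
        if any(keyword.lower() in text for keyword in keywords):
            yield post.title, post.shortlink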
Categories: FLOSS Project Planets

Possbility and Probability: The curse of knowledge: Finding os.getenv()

Planet Python - Mon, 2017-09-18 10:24

Recently I was working with a co-worker on an unusual nginx problem. While working on the nginx issue we happened to look at some of my Python code. My co-worker normally does not do a lot of Python development, she … Continue reading →

The post The curse of knowledge: Finding os.getenv() appeared first on Possibility and Probability.

Categories: FLOSS Project Planets

Drop Guard: Drop Guard is cutting costs by 40% - facts and figures

Planet Drupal - Mon, 2017-09-18 09:45
Drop Guard is cutting costs by 40% - facts and figures

While working with other NGOs and agencies over the last 1.5 years, we collected more and more information about the time and money that Drop Guard can save your agency. On our website, we claim that Drop Guard will cut your update costs by 40%. CTOs and COOs want to challenge numbers like this and ask how exactly this ROI is calculated. That’s why I want to share the detailed information with you in this blog post.

Security updates are released every Wednesday. If you work in a Drupal shop that cares about security, you have to apply updates for every site every Wednesday, or Thursday at the latest.

Drupal Business, Drupal Planet
Categories: FLOSS Project Planets

Catalin George Festila: The numba python module.

Planet Python - Mon, 2017-09-18 09:44
Today I tested the numba python module.
This python module allows us to speed up applications with high-performance functions written directly in Python.
The numba python module works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime, or statically.
The code can be just-in-time compiled to native machine instructions, similar in performance to C, C++ and Fortran.
For the installation I used the pip tool:
C:\Python27>cd Scripts

C:\Python27\Scripts>pip install numba
Collecting numba
Downloading numba-0.35.0-cp27-cp27m-win32.whl (1.4MB)
100% |################################| 1.4MB 497kB/s
...
Installing collected packages: singledispatch, funcsigs, llvmlite, numba
Successfully installed funcsigs-1.0.2 llvmlite-0.20.0 numba-0.35.0 singledispatch-3.4.0.3

C:\Python27\Scripts>pip install numpy
Requirement already satisfied: numpy in c:\python27\lib\site-packages
The example from the official website works well.
The example source code is:
from numba import jit
from numpy import arange

# The jit decorator tells Numba to compile this function.
# The argument types will be inferred by Numba when the function is called.
@jit
def sum2d(arr):
    M, N = arr.shape
    result = 0.0
    for i in range(M):
        for j in range(N):
            result += arr[i,j]
    return result

a = arange(9).reshape(3,3)
print(sum2d(a))

The result of running this python script is:

C:\Python27>python.exe numba_test_001.py
36.0

Another example compiles a function just-in-time with Numba's jit function:
import numba
from numba import jit

def fibonacci(n):
    a, b = 1, 1
    for i in range(n):
        a, b = a+b, a
    return a

print fibonacci(10)

fibonacci_jit = jit(fibonacci)
print fibonacci_jit(14)

You can also use jit as a decorator:
@jit
def fibonacci_jit(n):
    a, b = 1, 1
    for i in range(n):
        a, b = a+b, a
    return a

Numba is a complex python module because it relies on compilation.
First, compiling takes time, but it pays off especially for small functions that are called many times.
The numba python module does its best to mitigate this by caching compilations as much as possible.
Another note: not all code is compiled equally.
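To get a feel for this trade-off, here is a small, hypothetical timing sketch; the exact numbers will differ on your machine:

import time
from numba import jit

@jit
def fib_jit(n):
    a, b = 1, 1
    for i in range(n):
        a, b = a + b, a
    return a

t0 = time.time()
fib_jit(90)            # first call: includes compilation time
t1 = time.time()
fib_jit(90)            # second call: runs the cached machine code
t2 = time.time()

print("first call (with compile): %.6f s" % (t1 - t0))
print("second call (cached):      %.6f s" % (t2 - t1))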
Categories: FLOSS Project Planets

Chromatic: How To: Multiple Authors in Drupal

Planet Drupal - Mon, 2017-09-18 09:35

A brief rundown of how to configure Drupal to display multiple content authors.

Categories: FLOSS Project Planets