FLOSS Project Planets

Tollef Fog Heen: Temperature monitoring using a Beaglebone Black and 1-wire

Planet Debian - Wed, 2015-04-22 04:15

I've had a half-broken temperature monitoring setup at home for quite some time. It started out with an Atom-based NAS, a USB-serial adapter and a passive 1-wire adapter. It sometimes worked, then stopped working, then started again when poked with a stick. Later, the NAS was moved under the stairs and I put a Beaglebone Black in its old place. The temperature monitoring thereafter never really worked, but I didn't have the time to fix it. Over the last few days, I've managed to get it working again, of course by replacing nearly all the existing components.

I'm using the DS18B20 sensors. They're about USD 1 apiece on eBay (when buying small quantities) and seem to work quite OK.

My first task was to address the reliability problems: dropouts and really poor performance. I suspected the passive adapter was problematic, in particular with the wire lengths I'm using, and I therefore wanted to replace it with something else. The BBB has GPIO support, and various blog posts suggested using that. However, I'm running Debian on my BBB, which doesn't have support for DTB overrides, so I needed to patch the kernel DTB. (Apparently, DTB overrides are landing upstream, but obviously not in time for Jessie.)

I had never even looked at Device Tree before, but the structure was reasonably simple, and with a sample override from bonebrews it was easy enough to come up with my patch. This uses pin 11 (yes, 11, not 13; read the bonebrews article for an explanation of the numbering) on the P8 block. This needs to be compiled into a .dtb. I found the easiest way was to drop the patched .dts into an unpacked kernel tree and then run make dtbs.

Once this works, you need to compile the w1-gpio kernel module, since Debian hasn't yet enabled it. Run make menuconfig, find it under "Device drivers", "1-wire", "1-wire bus master", and build it as a module. I then had to build a full kernel to get the symversions right, then build the modules. I think there is, or should be, an easier way to do that, but as I cross-built it on a fast AMD64 machine, I didn't investigate too much.

Insmod-ing w1-gpio then works, but for me it failed to detect any sensors. Reading the data sheet, it looked like a pull-up resistor on the data line was needed. I had enabled the internal pull-up, but apparently that wasn't enough, so I added a 4.7 kOhm resistor between pin 3 (VDD_3V3) on P9 and pin 11 (GPIO_45) on P8. With that in place, my sensors showed up in /sys/bus/w1/devices and you can read the values using cat.
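
If you'd rather read them programmatically than via cat, a minimal Python sketch along these lines should work (28-* is the DS18B20 family prefix; error handling is left out):

import glob

def read_temp(device_dir):
    # w1_slave contains two lines: a CRC status line ending in YES or NO,
    # and a data line ending in t=<temperature in millidegrees Celsius>
    with open(device_dir + "/w1_slave") as f:
        crc_line, data_line = f.read().splitlines()
    if not crc_line.endswith("YES"):
        return None  # bad CRC, discard this reading
    return int(data_line.split("t=")[1]) / 1000.0

for device_dir in glob.glob("/sys/bus/w1/devices/28-*"):
    print(device_dir, read_temp(device_dir))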

In my case, I wanted the data to go into collectd and then on to Graphite. I first tried using an Exec plugin, but never got it to work properly. Using a Python plugin worked much better, and my Graphite installation is now showing me temperatures.
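
A collectd Python plugin along these lines might look roughly like the sketch below. The plugin name and structure are my own choices, not necessarily what my setup uses, and the collectd module is only available inside collectd's embedded interpreter:

import glob

import collectd

def read_temp(device_dir):
    # same two-line w1_slave parsing as in the sketch above
    with open(device_dir + "/w1_slave") as f:
        crc_line, data_line = f.read().splitlines()
    if not crc_line.endswith("YES"):
        return None
    return int(data_line.split("t=")[1]) / 1000.0

def read_callback():
    for device_dir in glob.glob("/sys/bus/w1/devices/28-*"):
        temp = read_temp(device_dir)
        if temp is not None:
            val = collectd.Values(type="temperature", plugin="ds18b20")
            val.type_instance = device_dir.rsplit("/", 1)[-1]
            val.dispatch(values=[temp])

collectd.register_read(read_callback)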

Now I just need to add more probes around the house.

The most useful references were

In addition, various searches for DS18B20 pinout and similar, of course.

Categories: FLOSS Project Planets

KDE Applications 15.04, Frameworks 5.9 and linux 3.19.4 available

Planet KDE - Wed, 2015-04-22 02:29


KDE's first release of its 15.04 series of Applications, together with Frameworks 5.9.0, is now available to all Chakra users. With this release, kde-workspace has also been updated to version 4.11.18 and kdelibs to 4.14.7. Keep in mind that the applications that have been ported to Frameworks 5 will not be updated but remain at their previous versions, as they are being prepared to be included in the upcoming Plasma5 switch.

According to the official announcement, starting with this release KDE Telepathy and kdenlive will be shipped together with the rest of KDE Applications.

In addition, the following notable updates are now available:
- linux 3.19.4
- nvidia 346.59
- git 2.3.5
- vlc 2.2.1
- wine 1.7.41
- ruby 2.2.1
- digikam 4.9.0
- apache 2.4.12
- subversion 1.8.13
- bomi (a Qt5 GUI player based on mpv) 0.9.7
- otf-source-han-sans (CJK fonts) 1.002

It should be safe to answer yes to any replacement question by Pacman. If in doubt, or if you face another issue, please ask or report it in the related forum section.

As always, make sure your mirror is fully synced (at least for the core, desktop and platform repositories) before performing this update by running the mirror-check application.

Categories: FLOSS Project Planets

Python Piedmont Triad User Group: PYPTUG Monthly meeting: Team Near Space Circus #NSC01 mission debrief

Planet Python - Wed, 2015-04-22 01:31
PYthon Piedmont Triad User Group meeting
Come join PYPTUG at our next monthly meeting (April 27th 2015) to learn more about the Python programming language, modules and tools. Python is the perfect language to learn if you've never programmed before, and at the other end, it is also the perfect tool that no expert would do without. Monthly meetings are in addition to our project nights.


What

Meeting will start at 5:30pm.

We will open with an intro to PYPTUG and how to get started with Python, then cover PYPTUG activities and member projects, then news from the community. And of course, the main part of the meeting:



Main Feature

Title: "Team Near Space Circus NSC-01 mission debrief"


Abstract: We'll talk about the Python code that ran the cluster and took the pictures, the realtime Twitter position reporting, the simulation website, and the Picavet, and we'll look at pretty pictures.


Balloon burst in near space, it's going down!

Lightning talks!
We will have some time for extemporaneous "lightning talks" of 5-10 minute duration. If you'd like to do one, some suggestions of talks were provided here, if you are looking for inspiration. Or talk about a project you are working on.






When

Monday, April 27th 2015
Meeting starts at 5:30PM

Where

Wake Forest University,
close to Polo Rd and University Parkway:

Manchester Hall
room: Manchester 241 Wake Forest University, Winston-Salem, NC 27109

 Map this

See also this campus map (PDF) and also the Parking Map (PDF) (Manchester hall is #20A on the parking map)

And speaking of parking: parking after 5pm is on a first-come, first-served basis. The official parking policy is:
"Visitors can park in any general parking lot on campus. Visitors should avoid reserved spaces, faculty/staff lots, fire lanes or other restricted areas on campus. Frequent visitors should contact Parking and Transportation to register for a parking permit."

Mailing List
Don't forget to sign up to our user group mailing list:

https://groups.google.com/d/forum/pyptug?hl=en

It is the only step required to become a PYPTUG member.

Meetup Group
In order to get a feel for how much food we'll need, we ask that you register your attendance to this meeting on meetup:

http://www.meetup.com/PYthon-Piedmont-Triad-User-Group-PYPTUG/events/221687697/
Categories: FLOSS Project Planets

John Goerzen: Today I FLEW A PLANE

Planet Debian - Tue, 2015-04-21 22:53

“For once you have tasted flight,
You will walk the earth with your eyes turned skyward;
For there you have been,
And there you long to return.”

– Leonardo da Vinci

There is something of a magic to flight, to piloting. I remember the first flight I ever took, after years of dreaming of flying in a plane: my grandma had bought me a plane ticket. In one of the early morning flights, I witnessed a sunrise above cumulus clouds. Although I was just 10 or so at the time, that still is a most beautiful image seared into my memory.

I have become “meh” about commercial flight over the years. The drive to the airport, the security lines, the lack of scenery at 35,000 feet. And yet, there is much more to flight than that. When I purchased what was essentially a flying camera, I saw a whole new dimension of the earth’s amazing beauty. All the photos in this post, in fact, are ones I took. I then got a RC airplane, because flying the quadcopter was really way too easy.

“It’s wonderful to climb the liquid mountains of the sky.
Behind me and before me is God, and I have no fears.”

– Helen Keller

Start talking to pilots, and you notice a remarkable thing: this group of people that tends to be cool and logical, methodical and precise, suddenly finds themselves using language almost spiritual. Many have told me that being a pilot brings home how much all humans have in common, the unifying fact of sharing this beautiful planet together. Many volunteer with organizations such as Angel Flight. And having been up in small planes a few times, I start to glimpse this. Flying over my home at 1000′ up, or from lake to lake in Seattle with a better view than the Space Needle, seeing places familiar and new, but from a new perspective, drives home again and again the beauty of our world, the sheer goodness of it, and the wonderful color of the humanity that inhabits it.

“The air up there in the clouds is very pure and fine, bracing and delicious.

And why shouldn’t it be?

It is the same the angels breathe.”

– Mark Twain

The view from 1000 feet, or 3000, is often so much more spectacular than the view from 35,000 feet that you get on a commercial flight. The flexibility is greater, too; there are airports all over the country that smaller planes can use which the airlines never touch.

Here is one incredible video from a guy that is slightly crazy but does ground-skimming, flying just a few feet off the ground: (try skipping to 9:36)

So what comes next is something I blame slightly on my dad and younger brother. My dad helped get me interested in photography as a child, and that interest has stuck. It’s what caused me to get into quadcopters (“a flying camera for less than the price of a nice lens!”). And my younger brother started mentioning airplanes to me last year for some reason, as if he was just trying to get me interested. Eventually, it worked. I started talking to the pilots I know (I know quite a few; there seems to be a substantial overlap between amateur radio and pilots). I started researching planes, flight, and especially safety — the most important factor.

And eventually I decided I wanted to be a pilot. I’ve been studying feverishly, carrying around textbooks and notebooks in the car, around the house, and even on a plane. There is a lot to learn.

And today, I took my first flight with a flight instructor. Today I actually flew a plane for a while. Wow! There is nothing quite like that experience. Seeing a part of the world I am familiar with from a new perspective, and then actually controlling this amazing machine — I really fail to find the words to describe it. I have put in many hours of study already, and there will be many more studying and flying, but it is absolutely worth it.

Here is one final video about one of the most unique places you can fly to in Kansas.

And a blog with lots of photos of a flight to Beaumont called “Horse on the runway”.

Categories: FLOSS Project Planets

Julien Tayon: Eval is even more really dangerous than you think

Planet Python - Tue, 2015-04-21 20:47

Preamble: I know about this excellent article:
http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html

I have an even bigger objection than Ned's to using eval: Python has potentially unsafe base types.

I had this discussion with a guy at PyCon about being able to safely process templates and do simple user-defined formatting operations on data coming from user input, interpolated by Python, without rolling your own home-made language: using Python for only the basic operations.

And my friend told me that interpolating some data with Python, with all builtins and globals removed, could be faster. After all, letting your customer specify "%12.2f" in his custom preferences for item prices can't do any harm. He even said: nothing wrong can happen, I even reduce the possibility with regexp validation. And they don't have the room to fit Ned's trick in 32 characters, so how much harm can you do?

His regexp was complex, and I asked him: can I try something?

So I wrote "%2000.2000f" % 0.0, then '*' * 20 and 2**2**2**2**2.

All of them validated.

Nothing wrong, right?
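
To make the failure mode concrete, here is a small sketch; the validation regexp is my own guess at what such a filter might look like, not his actual one:

import re

# hypothetical whitelist: a percent sign, digits, a dot, digits, a trailing f
FORMAT_RE = re.compile(r"^%\d+\.\d+f$")

for fmt in ("%12.2f", "%2000.2000f"):
    if FORMAT_RE.match(fmt):       # both pass the validation...
        print(len(fmt % 0.0))      # ...but the second one pads to over 2000 chars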

My point is: even if we patched Python's eval function and/or managed sandboxing in Python, Python is inherently as unsafe as Ruby and PHP (and Perl) in its base types.

And since we can't change the behaviour of the base types, we should never let people use a Python interpreter, even a reduced one, as a calculator or a templating language with uncontrolled user input.

Base types and keywords cannot be removed from any interpreter.

And take the string defined as:

"*" * much

This will repeat the string much times, and thus allocate that much memory... (also in Perl, PHP, Ruby, bash, Python, vimscript, elisp).
And it can't be removed from the language: the * keyword and the base types are part of the core of the language. If you change them, you have another language.

"%2000000.2000000f" % 0.0 is funny to execute, it is CPU hungry.

We could change that. But I guess a lot of applications out there depend on Python/Perl/PHP/Ruby NOT throwing an exception when you do "%x.yf" with x+y bigger than the possible size of the number. And where would we set the limit?

Using any modern scripting language as a calculator on uncontrolled input is like being a C coder who still doesn't understand why careless use of printf/scanf/memcpy deserves direct elimination from the C dev pool.

Take the int... when we overflow, Python dynamically allocates a bigger integer. And since the exponentiation operator is right-associative, 2**2**2**2**2 evaluates as 2**(2**(2**(2**2))), growing astronomically and allocating huge amounts of memory in a matter of a few iterations. (Ruby does too; Perl requires Math::BigInt to get this behaviour.)

It is not that Python is a bad language. It is an excellent one, partly because of «these flaws». C knight coders like to bash Python for this kind of uncontrolled use of resources. Yes, but in return we avoid the hell of malloc and have far fewer buffer overflows: bugs that cost resources too. And C doesn't avoid this either:

#include <"stdio.h">

void main(void){
printf("%100000.200f", 0.0);
}

And OK, JavaScript does not have the "%what.thousands" bug (nicely done, JS), but it probably has other ones.


So, the question is: how can we be safe?

As long as we don't have powerful interpreters like Python with resource control, we have to resort to other languages.


I may have an answer: use Lua.

https://pypi.python.org/pypi/lupa

I checked: most of this explosive base-type behaviour doesn't happen.
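
For example, a quick sketch with lupa (assuming a recent lupa, where LuaRuntime exposes eval()):

from lupa import LuaRuntime

lua = LuaRuntime()
# Lua numbers are plain doubles, so a power tower overflows to inf
# instead of allocating an arbitrarily large bignum like Python ints do
print(lua.eval("2^2^2^2^2"))
print(lua.eval('string.format("%12.2f", 0.0)'))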

But, please, never use Ruby, PHP, Perl, bash, vim, elisp, ksh, csh, or Python as a reduced interpreter for doing basic scripting operations or templating with uncontrolled user input (I mean input not controlled by a human who knows coding). Even for a calculator it is dangerous.

What makes Python a good language also makes it a dangerous language. I like it for the same reasons I fear letting user input be interpreted by it.

EDIT: format (see http://pyformat.info/) is definitely a good idea.
EDIT++: http://beauty-of-imagination.blogspot.ca/2015/04/so-i-wrote-proof-of-concept-language-to.html
Categories: FLOSS Project Planets

Justin Mason: Links for 2015-04-21

Planet Apache - Tue, 2015-04-21 19:58
Categories: FLOSS Project Planets

Jonathan Wiltshire: Tube in a Day

Planet Debian - Tue, 2015-04-21 17:27

For some reason, I’ve decided that gallivanting around the London Underground for the day one Saturday is a fine way to raise money for a local children’s hospice. You’d make my day by supporting us – we aren’t deducting expenses from pledges, so there’s no penalty to the charity for our travel.

We’re going to run a modified version of the Guinness-recognised Tube Challenge starting about 05:15 (modified to allow for unavoidable maintenance works; we don’t have the luxury of being able to pick a day when that’s not going to be a problem) and likely finishing about midnight.

I’m also interested to hear ideas for some kind of micro-blogging platform that we can update on the move, preferably presenting in stream format and with an Android-friendly site/app that can cope with uploading a photo smoothly. Not Twitter or Facebook; it’ll probably be a short-lived account. I don’t want to part with personal information and I want to be able to throw it away afterwards. Suggestions?

Tube in a Day is a post from: jwiltshire.org.uk | Flattr

Categories: FLOSS Project Planets

Would you want to see posts about NoSQL databases on PlanetKDE?

Planet KDE - Tue, 2015-04-21 17:27

My dear readers, I have agreed with ArangoDB to help them spread the word about their – of course open source – multi-model NoSQL database, and I will be using this blog to do that. It’s not going to be boring marketing bla bla, but I’m planning to write about things like tutorials, interviews with ArangoDB users and contributors, as well as the adventures of a “kind-of-geeky psychologist” (i.e. me) taking a stab at developing a data-intensive web application using ArangoDB.

PlanetKDE is not subscribed to this whole blog, however, but only to a specific category. Therefore, I can control whether these posts show up on the Planet or not. Since many of my readers here are rather on the tech-savvy side of the spectrum, I suppose that these posts might be of interest to at least some, if not many, of you, but I don’t want to “spam” the Planet with these posts if the majority would not be interested in them.

Therefore, I want to give you, my readers, a choice. Please vote using the poll below on whether you’d like to see NoSQL database-related posts show up on the Planet or not. Of course I can revise the decision later if I get a lot of feedback to the contrary on my individual posts, but I’d like to get a quantitative picture of the general preference beforehand.

Take Our Poll
Filed under: KDE
Categories: FLOSS Project Planets

A. Jesse Jiryu Davis: Announcing PyMongo 3.0.1

Planet Python - Tue, 2015-04-21 17:10

It's my pleasure to announce the release of PyMongo 3.0.1, a bugfix release that addresses issues discovered since PyMongo 3.0 was released a couple weeks ago. The main bugs were related to queries and cursors in complex sharding setups, but there was an unintentional change to the return value of save, GridFS file-deletion didn't work properly, passing a hint with a count didn't always work, and there were some obscure bugs and undocumented features.

For the full list of bugs fixed in PyMongo 3.0.1, please see the release in Jira.

If you are using PyMongo 3.0, please upgrade immediately.
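
If you're not sure which version you're running, pymongo exposes a version string you can check (a trivial sketch):

import pymongo

# expect "3.0.1" after upgrading
print(pymongo.version)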

If you are on PyMongo 2.8, read the changelog for major API changes in PyMongo 3, and test your application carefully with PyMongo 3 before deploying.

Categories: FLOSS Project Planets

Cheeky Monkey Media: Install Drupal 7 with a Foundation sub-theme using drush

Planet Drupal - Tue, 2015-04-21 17:10

Drupal 7 is by far my favorite CMS to date, and Zurb Foundation is currently my go-to theme. Although I wouldn't really call Foundation a theme; it's more of a responsive front-end framework that you can use to build your themes from.

Here is how to setup a fresh copy of Drupal 7 and configure a Foundation sub-theme quickly to get your project up and running:

Install Drupal using Drush

Although you could do this all the old-fashioned way, I prefer to use drush. Here are the drush commands to make this all happen:

drush dl drupal --drupal...
Categories: FLOSS Project Planets

Jonathan Wiltshire: Jessie Countdown: 4

Planet Debian - Tue, 2015-04-21 17:00

Four architectures – types of computing device that you can use to run Debian – didn’t make it through architecture qualification for Jessie and won’t be part of the official stable release this weekend.

It’s always difficult to see architectures go, particularly when there is still a community interested in maintaining support for them. Nevertheless, sometimes a port just doesn’t have enough momentum behind it or sufficient upstream interest from the toolchain to be economical. Whether a port is of sufficient quality for stable is a complex decision involving several different teams – Debian System Administration, the security team, those who maintain the auto-builder network, the release team, and so on – and will continue to affect them for several years.

It’s not all gloom though: new architectures in Jessie include arm64ppc64el (a 64-bit version of powerpc) and s390x (replacing Wheezy’s s390). Architecture rotation can be healthy.

(source: https://release.debian.org/jessie/arch_qualify.html; ia64 and s390 were planned retirements following Wheezy, leaving hurd-i386, kfreebsd-i386, kfreebsd-amd64 and sparc, which didn't satisfy all qualification criteria by the time decisions had to be taken.)

Jessie Countdown: 4 is a post from: jwiltshire.org.uk | Flattr

Categories: FLOSS Project Planets

Evolving KDE

Planet KDE - Tue, 2015-04-21 16:13

Paul and Lydia have blogged about how KDE should and could evolve. KDE as a whole is a big, diverse, sprawling thing. It's a house of many rooms, built on the idea that free software is important. By many, KDE is still seen as being in competition with Gnome, but Gnome still focuses on creating a desktop environment with supporting applications.

KDE has a desktop project, and has projects for supporting applications, but also projects for education, projects for providing useful libraries to other applications and projects to provide tools for creative professionals and much, much more. For over a decade, as we've tried to provide an alternative to proprietary systems and applications, KDE has grown and grown. I wouldn't be able, anymore, to characterize KDE in any sort of unified way. Well, maybe "like Apache, but for end-users, not developers."

So I can only really speak about my own project and how it has evolved. Krita, unlike a project like Blender, started out to provide a free software alternative to a proprietary solution that was integrated with the KDE desktop and meant to be used by people for whom having free software was the most important thing. Blender started out to become the tool of choice for professionals, no matter what, and was open sourced later on. It's an important distinction.

Krita's evolution has gone from being a weaker, but free-as-in-freedom alternative to a proprietary application to an application that aspires to be the tool of choice, even for people who don't give a fig about free software. Even for people who feel that free software must be inferior because it's free software. When one artist says to another at, for instance, Spectrum "What, you're not using Krita? You're crazy!", we'll have succeeded.

That is a much harder goal than we originally had, because our audience ceases to be in the same subculture that we are in. They are no longer forgiving because they're free software enthusiasts and we're free software enthusiasts who try really hard; they're not even particularly forgiving because they get the tool gratis.

But when the question is: what should a KDE project evolve into, my answer would always be: stop being a free software alternative, start becoming a competitor, no matter what, no matter where. For the hard of reading: that doesn't mean that a KDE project should stop being free-as-in-freedom software, it means that we should aim really high. Users should select a KDE application over others because it gives a better experience, makes them more productive, makes them feel smart for having chosen the obviously superior solution.

And that's where the blog Paul linked to comes in. We will need a change in mentality if we want to become a provider of the software-of-choice in the categories where we compete.

It means getting rid of the "you got it for free, if you don't like it, fuck off or send a patch" mentality. We'd all love to believe that nobody thinks like that anymore in KDE, but that's not true.

I know, because that's something I experienced in the reactions to my previous blog. One of the reactions I got a couple of times was "if you've got so much trouble porting, why are you porting? If Qt4 and KDE 4 work for you, why don't you stay with it?" I was so naive, I took the question seriously.

Of course Krita needs to be ported to Qt5 and Kf5. That's what Qt5 and Kf5 are for. If those libraries are not suitable for an application like Krita, those libraries have failed in their purpose and have no reason for existence. Just like Krita has no reason for existence if people can't paint with it. And of course I wasn't claiming in my blog that Qt5 and Kf5 were not suitable: I was claiming that the process of porting was made unnecessarily difficult by bad documentation, by gratuitous API changes in some places and in other places by a disregard for the amount of work a notional library or build-system 'clean-up' causes for complex real-world projects.

It took me days to realize that asking me "why port at all" is in essence nothing but telling me "if you don't like it, fuck off or send a patch". I am pretty sure that some of the people who asked me that question didn't realize that either -- but that doesn't make it any better. It's, in a way, worse: we're sending fuck-off messages without realizing it!

Well, you can't write software that users love if you tell them to fuck off when they have a problem.

If KDE wants to evolve, wants to stay relevant, wants to compete, not just with other free software projects that provide equivalents to what KDE offers, that mentality needs to go. Either we're writing software for the fun of it, or we're writing software that we want people to choose to use (and I've got another post coming up elaborating on that distinction).

And if KDE wants to be relevant in five years, just writing software for the fun of it isn't going to cut it.

Categories: FLOSS Project Planets

Bryan Pendleton: Flash Crash news

Planet Apache - Tue, 2015-04-21 14:44

Well, this is interesting: CFTC Charges U.K. Resident Navinder Singh Sarao and His Company Nav Sarao Futures Limited PLC with Price Manipulation and Spoofing.

In particular, the CFTC release notes:

in or about June 2009, Defendants modified a commonly used off-the-shelf trading platform to automatically simultaneously “layer” four to six exceptionally large sell orders into the visible E-mini S&P central limit order book (the Layering Algorithm), with each sell order one price level from the other. As the E-mini S&P futures price moved, the Layering Algorithm allegedly modified the price of the sell orders to ensure that they remained at least three or four price levels from the best asking price; thus, remaining visible to other traders, but staying safely away from the best asking price. Eventually, the vast majority of the Layering Algorithm orders were canceled without resulting in any transactions. According to the Complaint, between April 2010 and April 2015, Defendants utilized the Layering Algorithm on over 400 trading days.

The Complaint alleges that Defendants often cycled the Layering Algorithm on and off several times during a typical trading day to create large imbalances in the E-mini S&P visible order book to affect the prevailing E-mini S&P price. Defendants then allegedly traded in a manner designed to profit from this temporary artificial volatility. According to the Complaint, from April 2010 to present, Defendants have profited over $40 million, in total, from E-mini S&P trading.

As others quickly pointed out, the notion that "layering" is involved in these wild price swings is being studied by multiple agencies. For example: Exclusive: SEC targets 10 firms in high frequency trading probe - SEC document.

The SEC has been seeking evidence of abuse of order types, as well as traditional forms of abusive trading like "layering" or "spoofing" and other issues relating to high-frequency trading that might be violations of the law, SEC Director of Enforcement Andrew Ceresney told Reuters in May (reut.rs/1kwSqF5).

Spoofing and layering are tactics where traders place orders that they cancel before they are executed, to create the false impression of demand, aiming to trick others into buying or selling a stock at the artificial price.

I'm pleased that investigators continue to investigate.

On the other hand, even after 5 years the investigators still appear to be uncertain as to exactly what happened and why.

It's disturbing news, all around.

Categories: FLOSS Project Planets

Fabio Zadrozny: Type hinting on Python

Planet Python - Tue, 2015-04-21 14:11
If you missed it, it seems right now there's a long thread going on related to the type-hinting PEP:

https://mail.python.org/pipermail/python-dev/2015-April/139221.html

It seems there's a lot of debate, and I think the outcome will shape how Python code will look some years from now, so I thought I'd give one more opinion to juice things up :)

The main thing going for the proposal is getting errors earlier by doing type-checking (which I find a bit odd in the Python world with duck-typing, when many times a docstring could say it's expecting a list but I could pass a list-like object and get along fine with it).

The main point raised against it is that, with the current proposal, the code becomes harder to read.

Example:

def zipmap(f: Callable[[int, int], int], xx: List[int], yy: List[int]) -> List[Tuple[int, int, int]]:

Sure, IDEs -- such as PyDev :) -- will probably have to improve their syntax highlighting so that you can distinguish things better, but I agree that this format is considerably less readable. Note that Python 3 already has the syntax in place, but currently it seems to be very seldom used -- http://mypy-lang.org/ seems to be the exception :)
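
For reference, here is a small runnable sketch of that signature with a body filled in (the implementation is my guess at zipmap's semantics, inferred from the name and the types):

from typing import Callable, List, Tuple

def zipmap(f: Callable[[int, int], int],
           xx: List[int],
           yy: List[int]) -> List[Tuple[int, int, int]]:
    # pair each x with its y, apply f, and keep the inputs next to the result
    return [(x, y, f(x, y)) for x, y in zip(xx, yy)]

print(zipmap(lambda a, b: a + b, [1, 2], [10, 20]))  # [(1, 10, 11), (2, 20, 22)]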

Now, for me personally, the current status quo in this regard is reasonable: Python doesn't go into Java or C++ land and keeps working with duck-typing, and Python code can still be documented so that clients know what kinds of objects are expected as parameters.

I.e.: the code above would be:

def zipmap(f, xx, yy):
    '''
    :type f: callable(tuple(int, int))->int
    :type xx: list(int)
    :type yy: list(int)
    :rtype: list(tuple(int, int, int))
    '''

Many IDEs can already understand that and give you proper code-completion from that -- at least I know PyDev does :)

The only downside I see in the current status quo is that the format of the docstrings isn't standardized and there's no static check for it (if you want to opt in to the type-checking world), but both are fixable: standardize it (there could be a grammar for docstrings which defines types) and have a tool which parses the docstrings and does runtime checks (using the profiler hook for that shouldn't be that hard... and the overhead should be within reasonable limits -- if you're doing type checking with types from annotations, it should have a comparable overhead anyway).

Personally, I think this approach has the same benefits without the argument against it (which is a harder-to-read syntax)...

If more people share this view of the Python world, I may even try to write a runtime type-checking tool based on docstrings in the future :)
Categories: FLOSS Project Planets

Jim Birch: Drupal 7: Importing Tweets into Drupal using the Twitter Module

Planet Drupal - Tue, 2015-04-21 13:35

Why would you want to import tweets into a Drupal site?  For one, I want to own the content I create.  Unlike other social media sites, Twitter allows great access to the content I create on their platform.  Through their API, I can access all of my Tweets and Mentions for archiving and displaying on my own site.

I have had a couple of instances with clients where the archiving of Tweets came in handy.  One when a Twitter account was hacked, and one when someone said something that wasn't supposed to be said.  At the very least, it is an offsite backup of your content at Twitter, and that is never a bad thing.

I have used this module for building aggregated content.  If you have a site that is surrounded by topics, you can build lists of Twitter accounts or #hashtags.  Imagine if you were running a Drupal Camp, you could build a feed of all of the speakers and sponsors, or a feed of the camp's #hashtag, or both!

You could also build a Twitter feed of only your community.  This module allows each and every Drupal user account to associate with one or many twitter accounts.  The users just need to authorize themselves.  The possibilities seem endless.

OK, so on with the good stuff. Importing Tweets into your Drupal 7 site is very quick and easy using the Drupal Twitter Module.

Read more

Categories: FLOSS Project Planets

ThinkShout: Drupal and Salesforce Integrations Get Some (Data) Integrity

Planet Drupal - Tue, 2015-04-21 13:00

Hot on the heels of our all-hands-on-deck sprint to release RedHen Raiser, we decided to change gears to focus on some of our marquee open source contributions, namely the Salesforce Suite.

The Salesforce Suite has been around since Drupal 5 and it’s evolved quite a bit in order to keep up with the ever-changing Salesforce and Drupal landscapes. Several years ago, we found ourselves relying heavily upon the Salesforce Suite for our Salesforce-Drupal integrations. But there came a point where we realized the module could no longer keep up with our needs. So we, in collaboration with the maintainers of the module at the time, set out to rewrite the suite for Drupal 7.

We completely rewrote the module, leveraging Drupal's entity architecture, Salesforce's REST API, and OAuth for authentication. We also added much-needed features such as a completely new user experience, the ability to synchronize any Drupal and Salesforce objects, and a number of performance enhancements. This was a heck of an undertaking, and there were dozens of other improvements we made to the suite that you can read about in this blog post. We’ve maintained this module ever since and have endeavored to add new features and enhancements as they become necessary. We realized this winter that it was time for yet another batch of improvements, as the complexity and scale of our integrations has grown.

In addition to over 150 performance enhancements and bug fixes, this release features an all new Drupal entity mapping system which shows a log of all synchronization activity, including any errors. You can now see a log entry for every attempted data synchronization. If there’s a problem, the log will tell you where it is and why it’s an issue. There’s now a whole interface designed to help you pinpoint where these issues are so you can solve them quickly.

Administrators can even manually create or edit a connection between Drupal and Salesforce objects. Before this update, the only way to connect two objects was to create the mapping and then wait for an object to be updated or created in either Drupal or Salesforce. Now you can just enter the Salesforce ID and you’re all set.

Take the following example to understand why these improvements are so critical. Say that your constituents are volunteering through your Drupal site using the Registration module. The contacts are created or updated in RedHen and then synced to Salesforce. For some reason, you can see the new volunteers in Drupal, but they are not showing in Salesforce. It used to be that the only clue to a problem was buried in the error log. Now, all you have to do is go to the RedHen contact record, and then click “Salesforce activity,” and you’ll see a record of the attempted sync and an explanation of why it failed. Furthermore, you can manually connect the contact to Salesforce by entering the Salesforce ID.

Finally, you can now delete existing mappings, or map to an entirely different content type. The bottom line is that module users have more control of, and insights into, how their data syncs to Salesforce. You can download version 7.x-3.1 from Drupal.org and experience these improvements for yourself.

We’ve been hard at work polishing several other of our modules and tools, like the RedHen suite and Entity Registration, which also saw new releases. We’ll tell you more about what you can expect from those new versions in our upcoming blogs.

Want to chat about our module work at DrupalCon in LA? You can find us hanging out with our friends from MailChimp at their booth. We’d love to talk to you more about what we’re working on.

Categories: FLOSS Project Planets

Drupal Watchdog: Drupal People

Planet Drupal - Tue, 2015-04-21 12:39
Column

Drupal people are good people. They are the recipe’s secret ingredient, and conferences are the oven. Mix and bake.

March 2007, Sunnyvale, California, the Yahoo campus and a Sheraton.

OSCMS, my second Drupal event and my first conference.

Dries gave the State of Drupal keynote, with a survey of developers and a vision for future work. His hair was still a bit punk and he was a bit younger. Dries has the best slides. Where does he find those amazing slides?

I like Dries a lot.

I wish I had created Drupal.

In 1999, I created my own CMS named Frameworks. I remember showing my friend Norm an "edit" link for changing text and how cool that was. Back then, I didn't even know about Open Source – despite being a fanboy of Richard Stallman and the FSF – and I was still using a mix of C/C++, Perl, and IIS. (If you wanted to eat in the 1990's, Windows was an occupational hazard.)

But I didn't create Drupal. I didn't have the hair, I've never had those amazing slides, and I will never be able to present that well.

But mainly, I didn't have the vision.

Rasmus Lerdorf gave a talk on the history of PHP. I was good with computer languages. I had written a compiler in college, developed my first interpretive language in the late 1980's and another one in the early 1990's. I wondered why I hadn't created PHP. At the time, most web apps were written in Perl. I loved Perl. It was so concise. It was much better than AWK, which in itself was also pretty awesome.

(Note: AWK does not stand for awkward. It’s named after Aho, Weinberger, and Kernighan – of K&R fame).

So I didn't see the need for PHP, we had Perl!

Again, no vision.

Meanwhile: 2007, Sunnyvale, California, OSCMS.

Categories: FLOSS Project Planets

The Ride for Roswell; Together, We Can Help Find Cancer Cures and Save Lives

LinuxPlanet - Tue, 2015-04-21 11:42

I almost never post to this blog about personal issues (my similar post about the Ride last year may have been my first). Posts are usually related to LQ, Linux or Open Source. The truth is, I’m a pretty private person. That means I’m stepping a bit outside of my comfort zone with this post, but it’s for a cause that’s extremely important to me. This June I’ll be participating in the Ride for Roswell, a bike race that helps support the cutting-edge research and patient care programs that benefit the 31,000 patients who turn to Roswell Park for hope. From my donation page:

Cancer is a disease that has impacted myself and my family a great deal. I’m participating in the Ride for Roswell in memory of my grandmother (whom cancer took far too soon) and my mother (who was able to beat cancer with the help of Roswell). I myself am genetically predisposed to get a certain cancer. I know this because of Roswell Park; and because of Roswell Park, I am not afraid. The cutting-edge research that Roswell does, coupled with the top notch patient care they provide, offers hope to many against an insidious disease that has taken far too many lives. With your donation and facilities such as Roswell Park, I’m confident that, together, we can help find cancer cures and save lives.

I know people have a tendency to speak in platitudes when it comes to cancer. In this case, however, we can make a difference. We can be the change we want to see in the world. We can win.

When making donations, two things that are very important to me are how responsible the organization I'm donating to is with my donation and how transparent they are about where that donation goes. The Roswell Park Alliance Foundation has traditionally been counted among the most responsible charities by the national rating agency Charity Navigator. Financial Health and Accountability / Transparency are the two criteria used by Charity Navigator to give the Roswell Park Alliance its top rating.

I know that we’re in difficult fiscal times, but any amount helps. Please visit my donation page and give, if you’re able. If you’re not able, I appreciate you reading my story.

Now back to our regularly scheduled programming.

–jeremy


Categories: FLOSS Project Planets

Julien Danjou: Gnocchi 1.0: storing metrics and resources at scale

Planet Debian - Tue, 2015-04-21 11:00

A few months ago, I wrote a long post about what I called back then the "Gnocchi experiment". Time passed and we – me and the rest of the Gnocchi team – continued to work on that project, finalizing it.

It's with great pleasure that we are going to release our first 1.0 version this month, roughly at the same time as the integrated OpenStack projects release their Kilo milestone. The first release candidate, numbered 1.0.0rc1, was released this morning!

The problem to solve

Before I dive into Gnocchi details, it's important to have a good view of what problems Gnocchi is trying to solve.

Most of the IT infrastructures out there consist of a set of resources. These resources have properties: some of them are simple attributes, whereas others might be measurable quantities (also known as metrics).

And in this context, cloud infrastructures are no exception. We talk about instances, volumes, networks… which are all different kinds of resources. The problem arising with the cloud trend is the scalability of storing all this data and being able to request it later, for whatever usage.

What Gnocchi provides is a REST API that allows the user to manipulate resources (CRUD) and their attributes, while preserving the history of those resources and their attributes.

Gnocchi is fully documented and the documentation is available online. We are the first OpenStack project to require that patches include documentation. We want to raise the bar, so we took a stand on that. That's part of our policy, the same way it's part of the OpenStack policy to require unit tests.

I'm not going to paraphrase the whole Gnocchi documentation, which covers things like installation (super easy), but I'll guide you through some basics of the features provided by the REST API. I will show you some examples so you can have a better understanding of what you could leverage using Gnocchi!

Handling metrics

Gnocchi provides a full REST API to manipulate time-series that are called metrics. You can easily create a metric using a simple HTTP request:

POST /v1/metric HTTP/1.1
Content-Type: application/json
 
{
"archive_policy_name": "low"
}
 
HTTP/1.1 201 Created
Location: http://localhost/v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a
Content-Type: application/json; charset=UTF-8
 
{
"archive_policy": {
"aggregation_methods": [
"std",
"sum",
"mean",
"count",
"max",
"median",
"min",
"95pct"
],
"back_window": 0,
"definition": [
{
"granularity": "0:00:01",
"points": 3600,
"timespan": "1:00:00"
},
{
"granularity": "0:30:00",
"points": 48,
"timespan": "1 day, 0:00:00"
}
],
"name": "low"
},
"created_by_project_id": "e8afeeb3-4ae6-4888-96f8-2fae69d24c01",
"created_by_user_id": "c10829c6-48e2-4d14-ac2b-bfba3b17216a",
"id": "387101dc-e4b1-4602-8f40-e7be9f0ed46a",
"name": null,
"resource_id": null
}


The archive_policy_name parameter defines how the measures that are being sent are going to be aggregated. You can also define archive policies using the API and specify what kind of aggregation period and granularity you want. In this case, the low archive policy keeps 1 hour of data aggregated over 1 second and 1 day of data aggregated over 30 minutes. The functions used for aggregation are the mathematical functions standard deviation, minimum, maximum, … and even 95th percentile. All of that is obviously customizable, and you can create your own archive policies.

If you don't want to specify the archive policy manually for each metric, you can also create archive policy rules, which apply a specific archive policy based on the metric name; e.g. metrics matching disk.* will be high-resolution metrics, so they will use the high archive policy.
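
Creating such a rule from Python might look like this with the requests library (a sketch; the localhost endpoint follows the documentation examples, and the field names should be double-checked against the archive policy rule documentation):

import requests

# route metrics named disk.* to the "high" archive policy
requests.post("http://localhost/v1/archive_policy_rule",
              json={"name": "disk_rule",
                    "metric_pattern": "disk.*",
                    "archive_policy_name": "high"})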

It's also worth noting Gnocchi is precise up to the nanosecond and is not tied to the current time. You can manipulate and inject measures that are years old and precise to the nanosecond. You can also inject points with old timestamps (i.e. old compared to the most recent one in the timeseries) with an archive policy allowing it (see back_window parameter).

It's then possible to send measures to this metric:

POST /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1
Content-Type: application/json
 
[
{
"timestamp": "2014-10-06T14:33:57",
"value": 43.1
},
{
"timestamp": "2014-10-06T14:34:12",
"value": 12
},
{
"timestamp": "2014-10-06T14:34:20",
"value": 2
}
]

HTTP/1.1 204 No Content


These measures are synchronously aggregated and stored in the configured storage backend. Our most scalable storage drivers for now are based on either Swift or Ceph, which are both scalable object storage systems.
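
On the client side, the two calls above might be scripted with the requests library like this (a sketch; localhost is the documentation's example endpoint, and authentication is omitted):

import requests

BASE = "http://localhost/v1"

# create a metric with the "low" archive policy, then push measures to it
metric = requests.post(BASE + "/metric",
                       json={"archive_policy_name": "low"}).json()
requests.post(BASE + "/metric/%s/measures" % metric["id"],
              json=[{"timestamp": "2014-10-06T14:33:57", "value": 43.1},
                    {"timestamp": "2014-10-06T14:34:12", "value": 12}])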

It's then possible to retrieve these values:

GET /v1/metric/387101dc-e4b1-4602-8f40-e7be9f0ed46a/measures HTTP/1.1
 
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
 
[
[
"2014-10-06T14:30:00.000000Z",
1800.0,
19.033333333333335
],
[
"2014-10-06T14:33:57.000000Z",
1.0,
43.1
],
[
"2014-10-06T14:34:12.000000Z",
1.0,
12.0
],
[
"2014-10-06T14:34:20.000000Z",
1.0,
2.0
]
]


As older Ceilometer users might notice here, metrics only store points and values; nothing fancy such as metadata anymore.

By default, values eagerly aggregated using mean are returned for all supported granularities. You can obviously specify a time range or a different aggregation function using the aggregation, start and stop query parameters.

Gnocchi also supports doing aggregation across aggregated metrics:

GET /v1/aggregation/metric?metric=65071775-52a8-4d2e-abb3-1377c2fe5c55&metric=9ccdd0d6-f56a-4bba-93dc-154980b6e69a&start=2014-10-06T14:34&aggregation=mean HTTP/1.1
 
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
 
[
[
"2014-10-06T14:34:12.000000Z",
1.0,
12.25
],
[
"2014-10-06T14:34:20.000000Z",
1.0,
11.6
]
]


This computes the mean of the means for the metrics 65071775-52a8-4d2e-abb3-1377c2fe5c55 and 9ccdd0d6-f56a-4bba-93dc-154980b6e69a, starting on 6th October 2014 at 14:34 UTC.

Indexing your resources

Another object and concept that Gnocchi provides is the ability to manipulate resources. There is a basic type of resource, called generic, which has very few attributes. You can extend this type to specialize it, and that's what Gnocchi does by default by providing resource types known to OpenStack such as instance, volume, network or even image.

POST /v1/resource/generic HTTP/1.1
 
Content-Type: application/json
 
{
"id": "75C44741-CC60-4033-804E-2D3098C7D2E9",
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}
 
HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9
ETag: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"
Last-Modified: Fri, 17 Apr 2015 11:18:48 GMT
Content-Type: application/json; charset=UTF-8
 
{
"created_by_project_id": "ec181da1-25dd-4a55-aa18-109b19e7df3a",
"created_by_user_id": "4543aa2a-6ebf-4edd-9ee0-f81abe6bb742",
"ended_at": null,
"id": "75c44741-cc60-4033-804e-2d3098c7d2e9",
"metrics": {},
"project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
"revision_end": null,
"revision_start": "2015-04-17T11:18:48.696288Z",
"started_at": "2015-04-17T11:18:48.696275Z",
"type": "generic",
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}


The resource is created with the UUID provided by the user. Gnocchi handles the history of the resource, and that's what the revision_start and revision_end fields are for. They indicate the lifetime of this revision of the resource. The ETag and Last-Modified headers are also unique to this resource revision and can be used in a subsequent request using the If-Match or If-None-Match header, for example:

GET /v1/resource/generic/75c44741-cc60-4033-804e-2d3098c7d2e9 HTTP/1.1
If-None-Match: "e3acd0681d73d85bfb8d180a7ecac75fce45a0dd"
 
HTTP/1.1 304 Not Modified


Which is useful to synchronize and update any view of the resources you might have in your application.

You can use the PATCH HTTP method to modify properties of the resource, which will create a new revision of it. The history of the resources is, obviously, available via the REST API.

The metrics property of the resource allows you to link metrics to a resource. You can link existing metrics or create new ones dynamically:

POST /v1/resource/generic HTTP/1.1
Content-Type: application/json
 
{
"id": "AB68DA77-FA82-4E67-ABA9-270C5A98CBCB",
"metrics": {
"temperature": {
"archive_policy_name": "low"
}
},
"project_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D",
"user_id": "BD3A1E52-1C62-44CB-BF04-660BD88CD74D"
}
 
HTTP/1.1 201 Created
Location: http://localhost/v1/resource/generic/ab68da77-fa82-4e67-aba9-270c5a98cbcb
ETag: "9f64c8890989565514eb50c5517ff01816d12ff6"
Last-Modified: Fri, 17 Apr 2015 14:39:22 GMT
Content-Type: application/json; charset=UTF-8
 
{
"created_by_project_id": "cfa2ebb5-bbf9-448f-8b65-2087fbecf6ad",
"created_by_user_id": "6aadfc0a-da22-4e69-b614-4e1699d9e8eb",
"ended_at": null,
"id": "ab68da77-fa82-4e67-aba9-270c5a98cbcb",
"metrics": {
"temperature": "ad53cf29-6d23-48c5-87c1-f3bf5e8bb4a0"
},
"project_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d",
"revision_end": null,
"revision_start": "2015-04-17T14:39:22.181615Z",
"started_at": "2015-04-17T14:39:22.181601Z",
"type": "generic",
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}


Haystack, needle? Find!

With such a system, it becomes very easy to index all your resources, meter them and retrieve this data. What's even more interesting is to query the system to find and list the resources you are interested in!

You can search for a resource based on any field, for example:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
 
{
"=": {
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}
}


That query will return a list of all resources owned by the user_id bd3a1e52-1c62-44cb-bf04-660bd88cd74d.
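
From Python, the same search could be issued with the requests library, for example (a sketch against the documentation's localhost endpoint, authentication omitted):

import requests

query = {"=": {"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"}}
resp = requests.post("http://localhost/v1/search/resource/instance", json=query)
for resource in resp.json():
    print(resource["id"], resource["started_at"])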

You can do fancier queries such as retrieving all the instances started by a user this month:

POST /v1/search/resource/instance HTTP/1.1
Content-Type: application/json
Content-Length: 113
 
{
"and": [
{
"=": {
"user_id": "bd3a1e52-1c62-44cb-bf04-660bd88cd74d"
}
},
{
">=": {
"started_at": "2015-04-01"
}
}
]
}


And you can do even fancier queries than the fancier ones (still following?). What if we wanted to retrieve all the instances that were on host foobar on the 15th of April and that already had at least an hour of uptime? Let's ask Gnocchi to look into the history!

POST /v1/search/resource/instance?history=true HTTP/1.1
Content-Type: application/json
Content-Length: 113
 
{
"and": [
{
"=": {
"host": "foobar"
}
},
{
">=": {
"lifespan": "1 hour"
}
},
{
"<=": {
"revision_start": "2015-04-15"
}
}
 
]
}


I could also mention the fact that you can search for values in metrics. One feature that I will very likely include in Gnocchi 1.1 is the ability to search for resources whose specific metrics match some value; for example, the ability to search for instances whose CPU consumption was over 80% during a month.

Cherries on the cake

While Gnocchi is well integrated and based on common OpenStack technology, please do note that it is completely able to function without any other OpenStack component and is pretty straightforward to deploy.

Gnocchi also implements a full RBAC system based on the OpenStack standard oslo.policy, which allows pretty fine-grained control of permissions.

There is also some work ongoing to have HTML rendering when browsing the API using a Web browser. While still simple, we'd like to have a minimal Web interface served on top of the API for the same price!

Ceilometer alarm subsystem supports Gnocchi with the Kilo release, meaning you can use it to trigger actions when a metric value crosses some threshold. And OpenStack Heat also supports auto-scaling your instances based on Ceilometer+Gnocchi alarms.

And there are a few more API calls that I didn't talk about here, so don't hesitate to take a peek at the full documentation!

Towards Gnocchi 1.1!

Gnocchi is a different beast in the OpenStack community. It is under the umbrella of the Ceilometer program, but it's one of the first projects that is not part of the (old) integrated release. Therefore we decided on a release schedule not directly tied to OpenStack's, and we'll release more often than the rest of the old OpenStack components – probably once every 2 months or so.

What's coming next is a close integration with Ceilometer (e.g. moving the dispatcher code from Gnocchi to Ceilometer) and probably more features as we have more requests from our users. We are also exploring different backends such as InfluxDB (storage) or MongoDB (indexer).

Stay tuned, and happy hacking!

Categories: FLOSS Project Planets