FLOSS Project Planets

Mike C. Fletcher: Auto-generating Output Declarations

Planet Python - Fri, 2014-04-11 12:04

So the good news is that I've now got far more PyOpenGL output parameters automatically wrapped such that they can be passed in or automatically generated. That drops a *lot* of the manually maintained code. OpenGLContext works with the revisions, but the revision currently does *not* support non-contiguous input arrays (basically the cleanup included removing manually maintained code that said "this is an input array, so allow it to be non-contiguous"). I'll be moving the logic for doing that wrapping into the run-time eventually.

Upshot, however, is that there are a lot of changes, and while they are all bug-fixes, they are going to need to be tested (once they are finished). There are also a few hundred entry points that can't currently be auto-wrapped; I'm intending to make the auto-wrapper more capable there when possible.

Categories: FLOSS Project Planets

Janez Urevc: You should come to DC Alpe-Adria (really!)

Planet Drupal - Fri, 2014-04-11 11:56

If you came this far you probably liked this video just as much as I did :). You should really consider coming to Portorož in May to attend DC Alpe-Adria. We will have 2 days of great sessions, BoFs and sprints + 2 more days of extended sprints where we're going to focus on D8 and making it rock!

Portorož is also a great destination for children and families so you could bring your significant others and/or families with you and extend Drupal camp into an unforgettable vacation.

Interested? Of course you are! Find out more at drupalalpeadria.org.

Categories: FLOSS Project Planets

VIM: “Hiding” C++11 lambdas

Planet KDE - Fri, 2014-04-11 11:43

One of my favourite C++11 features are lambdas.

The syntax is a bit cumbersome, but it was the best approach the committee could take without creating a new sub-language. Every part of the syntax has a reason for why it exists.

But, it still is a bit ugly, and can influence readability of the surrounding code quite a bit.

The thing that annoys me the most is the lambda head – the capture block and the arguments it takes. Those are very important when writing the code, but not (that much) when reading it.

My solution for this? The conceal feature of Vim.

The good thing about lambdas is that they are (meant to) be used as local anonymous functions. That means that, while reading other parts of the code, you don’t actually need to know what the lambda is capturing, nor which are its arguments. So, it doesn’t hurt to hide them, right?

Naturally, when you want to edit the lambda head, Vim shows the actual contents of line, and not just some strange Greek symbol. :)

This also lowers the desire to use the potentially problematic [&] and [=] as the capture block, instead of explicitly capturing the variables that you need.

Edit: The code to achieve this:

.vimrc:

    set conceallevel=1

.vim/after/syntax/cpp/cpp.vim:

    syn match cpp11_lambda "[[a-zA-Z0-9&= ,]*] *(.*)( *{)\@=" conceal cchar=λ
    syn match cpp11_lambda "[[a-zA-Z0-9&= ,]*]( *{)\@=" conceal cchar=λ
Categories: FLOSS Project Planets

Wouter Verhelst: Review: John Scalzi: Redshirts

Planet Debian - Fri, 2014-04-11 11:25

I'm not much of a reader anymore these days (I used to be when I was a young teenager), but I still do tend to like reading something every once in a while. When I do, I generally prefer books that can be read cover to cover in one go—because that allows me to immerse myself into the book so much more.

John Scalzi's book is... interesting. It talks about a bunch of junior officers on a starship of the "Dub U" (short for "Universal Union"), which flies off into the galaxy to Do Things. This invariably involves away missions, and on these away missions invariably people die. The title is pretty much a dead giveaway; but in case you didn't guess, it's mainly the junior officers who die.

What I particularly liked about this book is that after the story pretty much wraps up, Scalzi doesn't actually let it end there. First there's a bit of a tie-in that has the book end up talking about itself; after that, there are three epilogues in which the author considers what this story would do to some of its smaller characters.

All in all, a good read, and something I would not hesitate to recommend.

Categories: FLOSS Project Planets

Ian Campbell: qcontrol 0.5.3

Planet Debian - Fri, 2014-04-11 11:04

Update: Closely followed by 0.5.4 to fix an embarrassing brown paper bag bug:

  • Correct argument handling for system-status command

Get it from gitorious or http://www.hellion.org.uk/qcontrol/releases/0.5.4/.

I've just released qcontrol 0.5.3. Changes since the last release:

  • Reduce spaminess of temperature control (Debian bug #727150).
  • Support for enabling/disabling RTC on ts219 and ts41x. Patch from Michael Stapelberg (Debian bug #732768).
  • Support for Synology Diskstation and Rackstation NASes. Patch from Ben Peddell.
  • Return correct result from direct command invocation (Debian bug #617439).
  • Fix ts41x LCD detection.
  • Improved command line argument parsing.
  • Lots of internal refactoring and cleanups.

Get it from gitorious or http://www.hellion.org.uk/qcontrol/releases/0.5.3/.

The Debian package will be uploaded shortly.

Categories: FLOSS Project Planets

FeatherCast: Lakmal Warusawithana, Apache Stratos (Incubating)

Planet Apache - Fri, 2014-04-11 10:49

While at ApacheCon in Denver, I had a chance to speak with Lakmal Warusawithana, from WSO2, about the Apache Stratos (Incubating) project, and what it is.

Categories: FLOSS Project Planets

Steve Kemp: Putting the finishing touches to a nodejs library

Planet Debian - Fri, 2014-04-11 10:14

For the past few years I've been running a simple service to block blog/comment-spam, which is (currently) implemented as a simple JSON API over HTTP, with a minimal core and all the logic in a series of plugins.

One obvious thing I wasn't doing until today was paying attention to the anchor-text used in hyperlinks, for example:

<a href="http://fdsf.example.com/">buy viagra</a>

Blocking on the anchor-text is less prone to false positives than blocking on keywords in the comment/message bodies.

Unfortunately there seem to exist no simple nodejs modules for extracting all the links, and associated anchors, from a random Javascript string. So I had to write such a module, but .. given how small it is there seems little point in sharing it. So I guess this is one of the reasons why there are often large gaps in the module ecosystem.

(Equally some modules are essentially applications; great that the authors shared, but virtually unusable, unless you 100% match their problem domain.)

I've written about this before when I had to construct, and publish, my own cidr-matching module.

Anyway, expect an upload soon; currently I "parse" HTML and BBCode. Possibly markdown to follow, since I have an interest in markdown.
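For what it's worth, the core idea translates to a few lines in most languages. Here's a rough Python sketch using the standard library's html.parser (purely illustrative: the author's module is for nodejs, and the class and function names here are made up):

```python
from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    """Collect (href, anchor-text) pairs from an HTML fragment."""
    def __init__(self):
        super().__init__()
        self._href = None   # href of the <a> we are currently inside, if any
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def extract_links(html):
    """Return a list of (href, anchor-text) tuples found in `html`."""
    parser = AnchorExtractor()
    parser.feed(html)
    return parser.links

# e.g. extract_links('<a href="http://fdsf.example.com/">buy viagra</a>')
# → [('http://fdsf.example.com/', 'buy viagra')]
```

A spam check would then run its keyword rules over the extracted anchor text rather than (or in addition to) the whole comment body.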

Categories: FLOSS Project Planets

Bruce Snyder: Admiral General Aladeen's Rationale For a Dictatorship

Planet Apache - Fri, 2014-04-11 10:05
I was watching the movie 'The Dictator' again on the plane a couple weeks ago. Although I had seen it before and knew it was funny, I was reminded all over again how hilarious it is to watch Sacha Baron Cohen poke fun at life. Here is Admiral General Aladeen's final speech in the movie on dictatorship vs. democracy, too damn funny:


Imagine if North America was a dictatorship:
  • You could let 1% of the people have all the nation's wealth
  • You could help your rich friends get richer by cutting their taxes and bailing them out when they gamble and lose
  • You could ignore the needs of the poor for healthcare and education
  • Your media would appear free but would secretly be controlled by one person and his family
  • You could wiretap phones
  • You could torture foreign prisoners
  • You could have rigged elections
  • You could lie about why you go to war
  • You could fill your prisons with one particular racial group and no one would complain
  • You could use the media to scare the people into supporting policies that are against their interests
Ahhh, the irony! 
Categories: FLOSS Project Planets

Phase2: An Open Source PartnerShip A Year In The Making

Planet Drupal - Fri, 2014-04-11 09:52

It was one year ago that our own Steven Merrill, Director of Engineering at Phase2, found himself at the Red Hat Summit, where he stopped in front of the OpenShift booth. OpenShift is an open-source Platform as a Service (PaaS) solution that offers developers a cloud application platform with a choice of programming languages, frameworks and application lifecycle tools to build and run their applications. The platform provides built-in support for Node.js, Ruby, Python, PHP, Perl, and Java, as well as MySQL, PostgreSQL, and MongoDB. Developers can also add their own languages.

Right away Steven was intrigued by OpenShift, since it's the only PaaS that's open source (OpenShift Origin) and that also has a Red Hat-supported behind-the-firewall install (OpenShift Enterprise) and a public PaaS (OpenShift Online). As Phase2's DevOps luminary and a frequent contributor to the Drupal community, Steven quickly acquainted himself with the OpenShift team and started to explore the possibility of spinning up OpenShift environments for Drupal. By the end of Red Hat Summit 2013, Steven had laid the groundwork for a Drupal 8 cartridge and had created an updated PHP 5.4 cartridge for OpenShift.

Steven’s introduction to OpenShift at the Red Hat Summit ignited excitement about diversifying our deployment optimization services here at Phase2. The possibility of creating quickstart packages for our Drupal distributions on OpenShift was especially attractive to us. Soon after the Red Hat Summit, the Drupal 8 quickstart cartridge was committed to OpenShift, allowing developers to quickly and safely spin up a Drupal 8 environment to test and develop on.

Throughout the past year, our relationship with OpenShift strengthened as we worked together at DrupalCon Portland and DrupalCon Prague to develop Drupal compatibility with OpenShift. To our clients’ delight, we began implementing OpenShift into our deployment services. One of our recent clients, a Fortune 500 publishing company, was overjoyed to find that the deployment process we created for them using OpenShift allowed them to cut onboarding time for new developers from an entire month to as little as a week.

Steven and Diane Mueller, the OpenShift community manager, recently co-hosted an OpenShift for Drupal training at NYC Camp. The training gave Drupal developers the tools and knowledge they need to quickly develop, host, and scale applications in an open source cloud environment. Next week, one year later, we will once again be heading to the Red Hat Summit, this time exhibiting as an Advanced OpenShift partner.

Our partnership with OpenShift is a classic open source story: equally committed to open source solutions, Phase2 and OpenShift have teamed up to develop mutually beneficial service capabilities for our clients. We look forward to continuing our close relationship with OpenShift and announcing several more exciting developments and collaborative projects launching in the near future. Stay tuned – there are big things coming for Drupal on OpenShift, the cloud, and Phase2’s deployment services.

Categories: FLOSS Project Planets

Code Karate: Drupal Site Map Module

Planet Drupal - Fri, 2014-04-11 08:19
Episode Number: 143

The Drupal Site Map module can be used to provide your Drupal website visitors with a high-level overview of the content on your Drupal 7 site.

Tags: DrupalContribDrupal 7Site BuildingDrupal PlanetSEO
Categories: FLOSS Project Planets

AppNeta Blog: Writing Purposeful Unit Tests

Planet Python - Fri, 2014-04-11 08:00
Several recent blogs have discussed unit testing, some of them in considerable depth. One of my favorites is Jeff Knupp’s entry, which is a comprehensive look at how to write ...
Categories: FLOSS Project Planets

End Point: Speeding Up Saving Millions of ORM Objects in PostgreSQL

Planet Python - Fri, 2014-04-11 07:59
The Problem

Sometimes you need to generate sample data, like random data for tests. Sometimes you need to generate it with a huge amount of code you have in your ORM mappings, just because an architect decided that all the logic needs to be stored in the ORM, and the database should be just a dummy data container. The real reason is not important - the problem is: let’s generate lots of rows - millions of rows - for a sample table from ORM mappings.

Sometimes the data is read from a file, but due to business logic kept in ORM, you need to load the data from file to ORM and then save the millions of ORM objects to database.

This can be done in many different ways, but here I will concentrate on making that as fast as possible.

I will use PostgreSQL and SQLAlchemy (with psycopg2) for ORM, so all the code will be implemented in Python. I will create a couple of functions, each implementing another solution for saving the data to the database, and I will test them using 10k and 100k of generated ORM objects.

Sample Table

The table I used is quite simple, just a simplified blog post:

CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    title TEXT NOT NULL,
    body TEXT NOT NULL,
    payload TEXT NOT NULL
);

SQLAlchemy Mapping

I'm using SQLAlchemy for ORM, so I need a mapping; I will use this simple one:

class BlogPost(Base):
    __tablename__ = "posts"

    id = Column(Integer, primary_key=True)
    title = Column(Text)
    body = Column(Text)
    payload = Column(Text)

The payload field is just to make the object bigger, to simulate real life where objects can be much more complicated, and thus slower to save to the database.

Generating Random Object

The main idea for this test is to have a randomly generated object, however what I really check is the database speed, and the whole randomness is used at the client side, so having a randomly generated object doesn’t really matter at this moment. The overhead of a fully random function is the same regardless of the method of saving the data to the database. So instead of randomly generating the object, I will use a static one, with static data, and I will use the function below:

TITLE = "title" * 1764
BODY = "body" * 1764
PAYLOAD = "dummy data" * 1764

def generate_random_post():
    "Generates a kind of random blog post"
    return BlogPost(title=TITLE, body=BODY, payload=PAYLOAD)

Solution Ideas

Generally there are two main ideas for such a bulk inserting of multiple ORM objects:

  • Insert them one-by-one with autocommit
  • Insert them one-by-one in one transaction
Save One By One

This is the simplest way. Usually we don’t save just one object, but instead we save many different objects in one transaction; making a couple of related changes across multiple transactions is a great way to end up with a database full of bad data.

For generating millions of unrelated objects this shouldn’t cause data inconsistency, but this is highly inefficient. I’ve seen this multiple times in code: create an object, save it to the database, commit, create another object and so on. It works, but is quite slow. Sometimes it is fast enough, but for the cost of making a very simple change in this algorithm we can make it 10 times faster.

I’ve implemented this algorithm in the function below:

def save_objects_one_by_one(count=MAX_COUNT):
    for i in xrange(1, count+1):
        post = generate_random_post()
        session.add(post)
        session.commit()

Save All in One Transaction

This solution is as simple as: create objects, save them to the database, commit the transaction at the end, so do everything in one huge transaction.

The implementation differs only by four spaces from the previous one, just run commit() once, after adding all objects:

def save_objects_one_transaction(count=MAX_COUNT):
    for i in xrange(1, count+1):
        post = generate_random_post()
        session.add(post)
    session.commit()

Time difference

I ran the tests multiple times, truncating the table each time. The average results of saving 10k objects were quite predictable:

  • Multiple transactions - 268 seconds
  • One transaction - 25 seconds

The difference is not surprising, the whole table size is 4.8MB, but after each transaction the database needs to write the changes on disk, which slows the procedure a lot.

Copy

So far, I’ve described the most common methods of generating and storing many ORM objects. I was wondering about another one, which may seem a little surprising at first.

PostgreSQL has a great COPY command which can copy data between a table and a file. The file format is simple: one table row per one file row, fields delimited with a defined delimiter etc. It can be a normal csv or tsv file.
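For reference, the server-side form of the command looks roughly like this (the file path is illustrative, and COPY ... TO/FROM a server-side file needs appropriate database permissions):

```sql
-- Dump a table to a tab-delimited file: one table row per file row
COPY posts (title, body, payload) TO '/tmp/posts.tsv';

-- Load it back; tab is the default delimiter for the text format
COPY posts (title, body, payload) FROM '/tmp/posts.tsv';
```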

My crazy idea was: how about using the COPY for loading all the generated ORM objects? To do that, I need to serialize them to a text representation, to create a text file with all of them. So I created a simple function, which does that. This function is made outside the BlogPost class, so I don't need to change the data model.

def serialize_post_to_out_stream(post, out):
    import csv
    writer = csv.writer(out, delimiter="\t", quoting=csv.QUOTE_MINIMAL)
    writer.writerow([post.title, post.body, post.payload])

The function above gets two parameters:

  • post - the object to be serialized
  • out - the output stream where the row with the post object will be saved, in Python it is a file-like object, so an object with all the functions a file object has

Here I use a standard csv module, which supports reading and writing csv files. I really don’t want to write my own function for escaping all the possible forms of data I could have - this usually leads to many tricky bugs.

The only thing left is to use the COPY command. I don’t want to create a file with data and load that later; the generated data can be really huge, and creating temporary files can just slow things down. I want to keep the whole procedure in Python, and use pipes for data loading.

I will use the psql program for accessing the PostgreSQL database. Psql has a different command called \COPY, which can read the csv file from psql's standard input. This can be done using e.g.: cat file.csv | psql database.

To use it in Python, I’m going to use the subprocess module, and create a psql process with stdin=subprocess.PIPE which will give me write access to the pipe psql reads from. The function I’ve implemented is:

def save_objects_using_copy(count=MAX_COUNT):
    import subprocess
    p = subprocess.Popen([
            'psql', 'pgtest', '-U', 'pgtest',
            '-c', '\COPY posts(title, body, payload) FROM STDIN',
            '--set=ON_ERROR_STOP=true'
        ],
        stdin=subprocess.PIPE
    )
    for i in xrange(1, count+1):
        post = generate_random_post()
        serialize_post_to_out_stream(post, p.stdin)
    p.stdin.close()

Results

I’ve also tested that on the same database table, truncating the table before running it. After that I’ve also checked this function, and the previous one (with one transaction) on a bigger sample - 100k of BlogPost objects.

The results are:

Sample size    Multiple Transactions    One Transaction    COPY
10k            268 s                    25 s               5 s
100k           —                        262 s              51 s

I haven’t tested the multiple-transactions version on the 100k sample, as I just didn’t want to wait multiple hours for it to finish (I ran each of the tests multiple times to get more reliable results).

As you can see, the COPY version is the fastest: even 5 times faster than the full ORM version with one huge transaction. This version is also memory friendly, as no matter how many objects you want to generate, it only ever needs to store one ORM object in memory, and you can destroy it after saving.

The Drawbacks

Of course using psql poses a couple of problems:

  • you need to have psql available; sometimes that’s not an option
  • calling psql creates another connection to the database; sometimes that could be a problem
  • you need to set up a password in the ~/.pgpass file; you cannot provide it on the command line

You could also get the psycopg2 cursor directly from the SQLAlchemy connection, and then use the copy_from() function, but this method needs to have all the data already prepared in memory, as it reads from a file-like object, e.g. StringIO. This is not a good solution for inserting millions of objects, as they can be quite huge - streaming is much better in this case.
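As a sketch, that in-memory variant might look like this (the connection handling is elided, and save_objects_with_copy_from is a hypothetical name; only the buffer-building part is pure Python):

```python
import csv
import io

def build_copy_buffer(posts):
    """Serialize post objects into one in-memory TSV buffer.

    Note: the whole data set ends up in memory at once, which is
    exactly the drawback described above."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", quoting=csv.QUOTE_MINIMAL)
    for post in posts:
        writer.writerow([post.title, post.body, post.payload])
    buf.seek(0)
    return buf

def save_objects_with_copy_from(connection, posts):
    """Load the buffer through the raw psycopg2 cursor (hypothetical helper;
    assumes `connection` is a psycopg2 connection)."""
    cursor = connection.cursor()
    cursor.copy_from(build_copy_buffer(posts), "posts",
                     columns=("title", "body", "payload"))
    connection.commit()
```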

Another solution is to write a generator which is a file-like object, so that the copy_from() method can read from it directly. This method calls the file's read() method, trying to read 8192 bytes per call. This can be a good idea when you don't have access to psql, however due to the overhead of generating the 8192-byte strings, it should be slower than the psql version.
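Such a wrapper is short to sketch, since copy_from() only needs a read(size) method (the class name here is made up):

```python
class GeneratorFile(object):
    """Minimal file-like adapter over an iterator of text rows.

    psycopg2's copy_from() pulls data by calling read(size), so this
    is all the "file" interface we need to provide here."""
    def __init__(self, rows):
        self._rows = iter(rows)
        self._buf = ""

    def read(self, size=8192):
        # Accumulate rows until we can hand back `size` characters
        while len(self._buf) < size:
            try:
                self._buf += next(self._rows)
            except StopIteration:
                break
        chunk, self._buf = self._buf[:size], self._buf[size:]
        return chunk

# usage (sketch):
# cursor.copy_from(GeneratorFile(serialized_rows), "posts",
#                  columns=("title", "body", "payload"))
```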

Categories: FLOSS Project Planets

Acquia: How to reliably test sandbox projects using the drupal.org testbot locally

Planet Drupal - Fri, 2014-04-11 04:15

During Drupal Dev Days in Hungary, there were many sprints that took place. You can see the amazing footage of what went on there in this nice movie, but that is not what we are going to discuss now!

Categories: FLOSS Project Planets

Lars Wirzenius: Applying the Doctorow method to coding

Planet Debian - Fri, 2014-04-11 03:27

When you have a big goal, do at least a little of it every day. Cory Doctorow writes books and stuff, and writes for at least twenty minutes every day. I write computer software, primarily Obnam, my backup program, and recently wrote the first rough draft of a manual for it, by writing at least a little every day. In about two months I got from nothing to something that is already useful to people.

I am now applying this to coding as well. Software development is famously an occupation that happens mostly in one's brain and where being in hack mode is crucial. Getting into hack mode takes time and a suitable, distraction-free environment.

I have found, however, that there are a lot of small, quick tasks that do not require a lot of concentration. Fixing the wording of error messages, making small, mechanical refactorings, confirming bugs by reproducing them and writing test cases to reproduce them, etc. I have found that if I've prepared for and planned such tasks properly, in the GTD planning phase, I can do such tasks even on trains and at train stations.

This is important. I commute to work and if I can spend the time I wait for a train, or on the train, productively, I can make significant, real progress. But to achieve this I really do have to do the preparation beforehand. The 9:46 train to work is much too noisy to do any real thinking in.

Categories: FLOSS Project Planets

Morten.dk: Drupal8 theme debug

Planet Drupal - Fri, 2014-04-11 03:22

I would lie (and would I lie to you?) if I said that I'm not extremely excited about theming in Drupal 8. One of the bigger pain points in Drupal theming is figuring out where the markup is generated from. In Drupal 8 we have built that directly in; I did a little screencast of it & damn, it's awesome.

read more

Categories: FLOSS Project Planets

Hideki Yamane: given enough eyeballs, but...

Planet Debian - Fri, 2014-04-11 02:35
"given enough eyeballs, all bugs are shallow"

Oh, right?

And we easily make mistakes because we're human, you know.

Categories: FLOSS Project Planets

Andrew Pollock: [life] Day 73: A fourth-generation friendship

Planet Debian - Fri, 2014-04-11 01:46

Oh man, am I exhausted.

I've known my friend Kim for longer than we remembered. Until Zoe was born, I thought the connection was purely that our grandmothers knew each other. After Zoe was born, and we gave her my birth mother's name as her middle name, Kim's mother sent me a message indicating that she knew my mother. More on that in a moment.

Kim and I must have interacted when we were small, because it predates my memory of her. My earliest memories are of being a pen pal with her when she lived in Kingaroy. She had a stint in South Carolina, and then in my late high school years, she moved relatively close to me, at Albany Creek, and we got to have a small amount of actual physical contact.

Then I moved to Canberra, and she moved to Melbourne, and it was only due to the wonders of Facebook that we reconnected while I was in the US.

Fast forward many years, and we're finally all back in Brisbane again. Kim is married and has a daughter named Sarah who is a couple of years older than Zoe, and could actually pass off as her older sister. She also has a younger son. Since we've been back in Brisbane, we've had many a play date at each other's homes, and the girls get along famously, to the point where Sarah was talking about her "best friend Zoe" at show and tell at school.

The other thing I learned since reconnecting with Kim in the past year, is that Kim's aunt and my mother were in the same grade at school. Kim actually arranged for me to have a coffee with her aunt when she was visiting from Canberra, and she told me a bunch of stuff about my Mum that I didn't know, so that was really nice.

Kim works from home part time, and I offered to look after Sarah for a day in the school holidays as an alternative to her having to go to PCYC holiday care. Today was that day.

I picked up Zoe from Sarah this morning, as it was roughly in the same direction as Kim's place, and made more sense, and we headed over to Kim's place to pick up Sarah. We arrived only a couple of minutes later than the preferred pick up time, so I was pretty happy with how that worked out.

The plan was to bring Sarah back to our place, and then head over to New Farm Park on the CityCat and have a picnic lunch and a play in the rather fantastic playground in the park over there.

I hadn't made Zoe's lunch prior to leaving the house, so after we got back home again, I let the girls have a play while I made Zoe's lunch. After some play with Marble Run, the girls started doing some craft activity all on their own on the balcony. It was cute watching them try to copy what each other were making. One of them tried gluing two paper cups together by the narrow end. It didn't work terribly well because there wasn't a lot of surface to come into contact with each other.

I helped the girls with their craft activity briefly, and then we left on foot to walk to the CityCat terminal. Along the way, I picked up some lunch for myself at the Hawthorne Garage and added it to the small Esky I was carrying with Zoe's lunchbox in it. It was a beautiful day for a picnic. It was warm and clear. I think Sarah found the walk a bit long, but we made it to the ferry terminal relatively incident free. We got lucky, and a ferry was just arriving, and as it happened, they had to change boats, as they do from time to time at Hawthorne, so we would have had plenty of time regardless, as everyone had to get off one boat and onto a new one.

We had a late morning tea at the New Farm Park ferry terminal after we got off, and then headed over to the playground. I claimed a shady spot with our picnic blanket and the girls did their thing.

I alternated between closely shadowing them around the playground and letting them run off on their own. Fortunately they stuck together, so that made keeping track of them slightly easier.

For whatever reason, Zoe was in a bit of a grumpier mood than normal today, and wasn't taking too kindly to the amount of turn taking that was necessary to have a smoothly oiled operation. Sarah (justifiably) got a bit whiny when she didn't get an equitable amount of time calling the shots on what they did, but aside from that they got along fine.

There was another great climbing tree, which had kids hanging off it all over the place. Both girls wanted to climb it, but needed a little bit of help getting started. Sarah lost her nerve before Zoe did, but even Zoe was surprisingly trepidatious about it, and after shimmying a short distance along a good (but high) branch, wanted to get down.

The other popular activity was a particularly large rope "spider web" climbing frame, which Sarah was very adept at scaling. It was a tad too big for Zoe to manage though, and she couldn't keep up, which frustrated her quite a bit. I was particularly proud of how many times she returned to it to try again, though.

We had our lunch, a little more play time, and the obligatory ice cream. I'd contemplated catching the CityCat further up-river to Sydney Street and then catching the free CityHopper ferry, but the thought of then trying to get two very tired girls to walk from the Hawthorne ferry terminal back home didn't really appeal to me all that much, so I decided to just head back home.

That ended up being a pretty good call, because as it was, trying to get the two of them back home was like herding cats. Sarah was fine, but Zoe was really dragging the chain and getting particularly grumpy. I had to deploy every positive parenting trick that I currently have in my book to keep Zoe moving, but we got there eventually. Fortunately we didn't have any particular deadline.

The girls did some more playing at home while I collapsed on the couch for a bit, and then wanted to do some more craft. We made a couple of crowns and hot-glued lots of bling onto them.

We drove back to Kim's place after that, and the girls played some more there. Sarah nearly nodded off on the way home. Zoe was surprisingly chipper. The dynamic changed completely once we were back at Sarah's house. Zoe seemed fine to take Sarah's direction on everything, so I wonder how much of things in the morning were territorial, and Sarah wasn't used to Zoe calling the shots when she was at Zoe's place.

Kim invited us to stay for dinner. I wasn't really feeling like cooking, and the girls were having a good time, so I decided to stay for dinner, and after they had a bath together we headed home. Zoe stayed awake all the way home, and went to bed without any fuss.

It's pretty hot tonight, and I'm trialling Zoe sleeping without white noise, so we'll see how tonight pans out.

Categories: FLOSS Project Planets

Joey Hess: propellor introspection for DNS

Planet Debian - Fri, 2014-04-11 01:05

In the just-released Propellor 0.3.0, I've improved Propellor's config file DSL significantly. Now properties can set attributes of a host, that can be looked up by its other properties, using a Reader monad.

This saves needing to repeat yourself:

hosts = [ host "orca.kitenet.net"
        & stdSourcesList Unstable
        & Hostname.sane -- uses hostname from above

And it simplifies docker setup, with no longer a need to differentiate between properties that configure docker vs properties of the container:

-- A generic webserver in a Docker container.
, Docker.container "webserver" "joeyh/debian-unstable"
    & Docker.publish "80:80"
    & Docker.volume "/var/www:/var/www"
    & Apt.serviceInstalledRunning "apache2"

But the really useful thing is, it allows automating DNS zone file creation, using attributes of hosts that are set and used alongside their other properties:

hosts = [ host "clam.kitenet.net"
        & ipv4 "10.1.1.1"
        & cname "openid.kitenet.net"
        & Docker.docked hosts "openid-provider"
        & cname "ancient.kitenet.net"
        & Docker.docked hosts "ancient-kitenet"
        , host "diatom.kitenet.net"
        & Dns.primary "kitenet.net" hosts
        ]

Notice that hosts is passed into Dns.primary, inside the definition of hosts! Tying the knot like this is a fun haskell laziness trick. :)

Now I just need to write a little function to look over the hosts and generate a zone file from their hostname, cname, and address attributes:

extractZoneFile :: Domain -> [Host] -> ZoneFile
extractZoneFile = gen . map hostAttr
  where gen = -- TODO

The eventual plan is that the cname property won't be defined as a property of the host, but of the container running inside it. Then I'll be able to cut-n-paste move docker containers between hosts, or duplicate the same container onto several hosts to deal with load, and propellor will provision them, and update the zone file appropriately.

Also, Chris Webber had suggested that Propellor be able to separate values from properties, so that eg, a web wizard could configure the values easily. I think this gets it much of the way there. All that's left to do is two easy functions:

overrideAttrsFromJSON :: Host -> JSON -> Host
exportJSONAttrs :: Host -> JSON

With these, propellor's configuration could be adjusted at run time using JSON from a file or other source. For example, here's a containerized webserver that publishes a directory from the external host, as configured by JSON that it exports:

demo :: Host
demo = Docker.container "webserver" "joeyh/debian-unstable"
    & Docker.publish "80:80"
    & dir_to_publish "/home/mywebsite" -- dummy default
    & Docker.volume (getAttr dir_to_publish ++ ":/var/www")
    & Apt.serviceInstalledRunning "apache2"

main = do
    json <- readJSON "my.json"
    let demo' = overrideAttrsFromJSON demo json
    writeJSON "my.json" (exportJSONAttrs demo')
    defaultMain [demo']
Categories: FLOSS Project Planets

Matt Raible: 10 years ago today, I bought a VW Bus

Planet Apache - Thu, 2014-04-10 21:20

10 years ago today, I bought a 1966 21-Window Volkswagen Bus. Restoring a VW Bus had been a dream of mine since high school, when I attempted to restore a '69 VW Bug. In October 2005, the restoration project began. Since then, it's been through many shops and had quite a few folks work on it. You can read all about it in When is the bus gonna be done?

Since many long-time readers of this blog are familiar with The Bus Project, it seems fitting to give y'all a detailed update on where things sit today.

In June 2013, I mentioned I'd removed it from the shop that worked on it for six years. My biggest frustration with that shop was their unwillingness to work on it consistently. They'd go through spurts of working on it, then let it sit for months. To make matters worse, when I'd email them, they wouldn't respond for weeks. Often, it'd take a couple emails and a phone call to get them to respond. I was pleased with the work they did as it seemed high quality. They also motivated me to go big and pursue other teenage VW dreams: a Porsche 911 engine and air-ride suspension.

Last year, the bus spent six months in a friend-of-a-friend's garage. I paid them to get it running and driveable. They created an oil tank, modified the gas tank, mounted the air tank, did some custom welding on the rear floor, installed the brakes, installed the steering and got the shifting linkage hooked up. It was refreshing to get weekly updates and text messages when things were finished. In fact, he almost got it running, but was unable to find a part needed to complete the "spark" process.

In December 2013, I moved it to a new shop, which seems to have all the necessary components to complete the project. They're vintage car restoration specialists and the owner estimated they could have it done by April 2014 (a.k.a. today). They'd also rescued a fair number of projects: ones that sat at garages for years while progress stalled and owners lost lots of money. They began work on it and I enjoyed the daily emails with links to albums of their progress. It seemed like The Bus would finally be painted come January.

In early January, I received an email with a major roadblock. The subject of the email was Thickness of bondo on the VW bus. As I read the paragraphs below, I could feel my heart sink.

As you saw in the photos this week, we've had to repair three spots where the body work previously done on the bus was chipped. In all three cases and especially the roof yesterday, the thickness of the bondo is alarming. This could and probably will lead to paint failure down the road.

The way this usually progresses is that once you start driving the bus and road vibrations become an everyday occurrence, the bondo can start coming loose, sometimes in large pieces, from the body. We only see this happen in cases like yours where the body work has been performed elsewhere and we've inherited the job to "Finish" it, painting being the major component of our task. The problem, along with being my concern here, is obvious. We are reluctant to see you spend thousands of hard earned dollars to paint a bus that will surely fail later.

The options aren't great. The best route is to restrip the bus now and get it back to bare metal and start over. Doing the body work correctly would ensure a thinner application of substrates (dura glass, bondo, etc.) and eliminate the possibility of cracking and spidering of the paint completely. By doing this, we can warranty our work and know you are hitting the road without bondo so thick it can be measured with a ruler. As it stands, we frankly are reluctant to proceed any further with the paint process knowing that we would be doing you a huge disservice by so doing.

Shortly after, he sent me some pictures of the bondo trouble spots.

You can imagine my frustration when I received this email. Over the life of the project, I've spent more than $100K and I expected the body work to be "show car" quality. The shop that did all the body work knew this. The only thing I can think of is that their definition of quality is a lot different from the current shop's. The current shop estimates it'd be $17K to fix the bondo and get it ready for paint. The money part doesn't bother me (it cost me $10K to move it from shop 1 to shop 2 back in 2008), but they estimate it'll be six weeks to complete the process. They recommended I contact the BBB about the shop that did all the bondo because "they are an embarrassment to our industry". They also suggested we finish the mechanical and electrical portions of the job. I agreed that seemed like the best way forward.

A week later, they got it running, but discovered more problems with the quality of previous work along the way. The custom fabricated oil tank leaked like a sieve. From the video they sent, it appeared it'd never even been pressure tested. They slapped on a new one from the Porsche they had sitting in the shop and voila - it was alive!

Progress continued through January. They removed the suspension, powder coated it and painted the bottom with good quality (3M) black under coating. At the end of January, Trish and I visited Sewfine to pick out the interior. We selected colors, fabric and were about to order everything when we ran into another roadblock. While talking, we discovered that the back seat won't fit with the current setup. The air tank is in the way, and possibly the raised floor. They suggested relocating the air tank or trying to find a smaller one.

The folks at Sewfine thought this was a pretty big deal and caused me to think the whole project might be delayed for several months. I emailed the current shop and was pleasantly surprised with a reply of "that shouldn't be hard to fix".

I'm not clear on the details of what happened at this point, but the daily-picture updates stopped flowing. It was six weeks until they started working on it again. They received all the powder coated components and started installing them. At the end of March, I visited the shop and was pleased to see the suspension, wheels, and engine installed. We talked about the body/paint work and I was disappointed to hear them refuse to paint it without sandblasting and re-doing the body work. They said they didn't want to put their name behind work that is likely to fail. They estimated a couple more weeks until they got it running and driveable.

That couple weeks passed. Since I received no email updates, I emailed the owner. He said they'd run into issues on another job and it'll be another week before they can get back to it. This shop is starting to feel like the one that did the body work in that it's very difficult to get them to prioritize my project over other ones. However, they've done great work so far. They send me a link to an album at the end of each day and things happen really fast ... when they work on it.

At this point, I'm resetting my expectations (as usual) to it'll be done in a few more months. I'm aiming for my 40th birthday this time: July 16th. I'm estimating it'll be 65K to finish it:

  • 10K: current bill
  • 20K: body and paint
  • 5K: wiring, windows and trim
  • 10K: stereo
  • 20K: interior

I could probably save 20K by having someone else paint it and doing the wiring and trim myself. I'm tempted to do this and repaint it if/when the bondo/paint starts to crack. However, I believe the current shop can get it running/driveable/painted by mid-June, so it's tempting to just pay for it.

That's the latest update. The project is still progressing and it still feels like it'll be done soon. When I bought the bus 10 years ago, I thought I'd drive it a lot sooner. Then again, my expectation was that I'd incrementally improve it through the years and eventually get to something spectacular. Now we're on the cusp of spectacularity. Stay tuned...

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppCNPy 0.2.3

Planet Debian - Thu, 2014-04-10 20:49
R 3.1.0 came out today. Among the (impressive and long as usual) list of changes is the added ability to specify CXX_STD = CXX11 in order to get C++11 (or the best available subset on older compilers). This brings a number of changes and opportunities which are frankly too numerous to be discussed in this short post. But it also permits us, at long last, to use long long integer types.

For RcppCNPy, this means that we can finally cover NumPy integer data (along with the double precision we had from the start) on all platforms. Python encodes these as an int64, and that type was unavailable (at least on 32-bit OSs) until we got long long made available to us by R. So today I made the change to depend on R 3.1.0, and select C++11, which allowed us to free the code from a number of #ifdef tests. This all worked out swimmingly and the new package has already been rebuilt for Windows.

I also updated the vignette, and refreshed its look and feel. Full changes are listed below.

Changes in version 0.2.3 (2014-04-10)
  • src/Makevars now sets CXX_STD = CXX11 which also provides the long long type on all platforms, so integer file support is no longer conditional.

  • Consequently, code conditional on RCPP_HAS_LONG_LONG_TYPES has been simplified and is no longer conditional.

  • The package now depends on R 3.1.0 or later to allow this.

  • The vignette has been updated and refreshed to reflect this.

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the rcpp-devel mailing list off the R-Forge page for Rcpp is the best place to start a discussion.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets
Syndicate content