Bug fix to lexical_cast.hpp, which now uses the same test for INT128 as the rest of Boost, consistent with Boost 1.55 too.
We had some requests to get GHC (the Glasgow Haskell Compiler) up and running on two new Ubuntu architectures: arm64, added in 13.10, and ppc64el, added in 14.04. This has been something of a saga, and has involved rather more late-night hacking than is probably good for me.

Book the First: Recalled to a life of strange build systems
You might not know it from the sheer bulk of uploads I do sometimes, but I actually don't speak a word of Haskell and it's not very high up my list of things to learn. But I am a pretty experienced build engineer, and I enjoy porting things to new architectures: I'm firmly of the belief that breadth of architecture support is a good way to shake out certain categories of issues in code, that it's worth doing aggressively across an entire distribution, and that, even if you don't think you need something now, new requirements have a habit of coming along when you least expect them and you might as well be prepared in advance. Furthermore, it annoys me when we have excessive noise in our build failure and proposed-migration output and I often put bits and pieces of spare time into gardening miscellaneous problems there, and at one point there was a lot of Haskell stuff on the list and it got a bit annoying to have to keep sending patches rather than just fixing things myself, and ... well, I ended up as probably the only non-Haskell-programmer on the Debian Haskell team and found myself fixing problems there in my free time. Life is a bit weird sometimes.
Bootstrapping packages on a new architecture is a bit of a black art that only a fairly small number of relatively bitter and twisted people know very much about. Doing it in Ubuntu is specifically painful because we've always forbidden direct binary uploads: all binaries have to come from a build daemon. Compilers in particular often tend to be written in the language they compile, and it's not uncommon for them to build-depend on themselves: that is, you need a previous version of the compiler to build the compiler, stretching back to the dawn of time where somebody put things together with a big magnet or something. So how do you get started on a new architecture? Well, what we do in this case is we construct a binary somehow (usually involving cross-compilation) and insert it as a build-dependency for a proper build in Launchpad. The ability to do this is restricted to a small group of Canonical employees, partly because it's very easy to make mistakes and partly because things like the classic "Reflections on Trusting Trust" are in the backs of our minds somewhere. We have an iron rule for our own sanity that the injected build-dependencies must themselves have been built from the unmodified source package in Ubuntu, although there can be source modifications further back in the chain. Fortunately, we don't need to do this very often, but it does mean that as somebody who can do it I feel an obligation to try and unblock other people where I can.
As far as constructing those build-dependencies goes, sometimes we look for binaries built by other distributions (particularly Debian), and that's pretty straightforward. In this case, though, these two architectures are pretty new and the Debian ports are only just getting going, and as far as I can tell none of the other distributions with active arm64 or ppc64el ports (or trivial name variants) has got as far as porting GHC yet. Well, OK. This was somewhere around the Christmas holidays and I had some time. Muggins here cracks his knuckles and decides to have a go at bootstrapping it from scratch. It can't be that hard, right? Not to mention that it was a blocker for over 600 entries on that build failure list I mentioned, which is definitely enough to make me sit up and take notice; we'd even had the odd customer request for it.
Several attempts later and I was starting to doubt my sanity, not least for trying in the first place. We ship GHC 7.6, and upgrading to 7.8 is not a project I'd like to tackle until the much more experienced Haskell folks in Debian have switched to it in unstable. The porting documentation for 7.6 has bitrotted more or less beyond usability, and the corresponding documentation for 7.8 really isn't backportable to 7.6. I tried building 7.8 for ppc64el anyway, picking that on the basis that we had quicker hardware for it and it didn't seem likely to be particularly more arduous than arm64 (ho ho), and I even got to the point of having a cross-built stage2 compiler (stage1, in the cross-building case, is a GHC binary that runs on your starting architecture and generates code for your target architecture) that I could copy over to a ppc64el box and try to use as the base for a fully-native build, but it segfaulted incomprehensibly just after spawning any child process. Compilers tend to do rather a lot, especially when they're built to use GCC to generate object code, so this was a pretty serious problem, and it resisted analysis. I poked at it for a while but didn't get anywhere, and I had other things to do so declared it a write-off and gave up.

Book the Second: The golden thread of progress
In March, another mailing list conversation prodded me into finding a blog entry by Karel Gardas on building GHC for arm64. This was enough to be worth another look, and indeed it turned out that (with some help from Karel in private mail) I was able to cross-build a compiler that actually worked and could be used to run a fully-native build that also worked. Of course this was 7.8, since as I mentioned cross-building 7.6 is unrealistically difficult unless you're considerably more of an expert on GHC's labyrinthine build system than I am. OK, no problem, right? Getting a GHC at all is the hard bit, and 7.8 must be at least as capable as 7.6, so it should be able to build 7.6 easily enough ...
Not so much. What I'd missed here was that compiler engineers generally only care very much about building the compiler with older versions of itself, and if the language in question has any kind of deprecation cycle then the compiler itself is likely to be behind on various things compared to more typical code since it has to be buildable with older versions. This means that the removal of some deprecated interfaces from 7.8 posed a problem, as did some changes in certain primops that had gained an associated compatibility layer in 7.8 but nobody had gone back to put the corresponding compatibility layer into 7.6. GHC supports running Haskell code through the C preprocessor, and there's a __GLASGOW_HASKELL__ definition with the compiler's version number, so this was just a slog tracking down changes in git and adding #ifdef-guarded code that coped with the newer compiler (remembering that stage1 will be built with 7.8 and stage2 with stage1, i.e. 7.6, from the same source tree). More inscrutably, GHC has its own packaging system called Cabal which is also used by the compiler build process to determine which subpackages to build and how to link them against each other, and some crucial subpackages weren't being built: it looked like it was stuck on picking versions from "stage0" (i.e. the initial compiler used as an input to the whole process) when it should have been building its own. Eventually I figured out that this was because GHC's use of its packaging system hadn't anticipated this case, and was selecting the higher version of the ghc package itself from stage0 rather than the version it was about to build for itself, and thus never actually tried to build most of the compiler. Editing ghc_stage1_DEPS in ghc/stage1/package-data.mk after its initial generation sorted this out. 
One late night of building round and round in circles until I had something stable, and a Debian source upload to add basic support for the architecture name (and other changes which were a bit over the top in retrospect: I didn't need to touch the embedded copy of libffi, as we build with the system one), and I was able to feed this all into Launchpad and watch the builders munch away very satisfyingly at the Haskell library stack for a while.
This was all interesting, and finally all that work was actually paying off in terms of getting to watch a slew of several hundred build failures vanish from arm64 (the final count was something like 640, I think). The fly in the ointment was that ppc64el was still blocked, as the problem there wasn't building 7.6, it was getting a working 7.8. But now I really did have other much more urgent things to do, so I figured I just wouldn't get to this by release time and stuck it on the figurative shelf.

Book the Third: The track of a bug
Then, last Friday, I cleared out my urgent pile and thought I'd have another quick look. (I get a bit obsessive about things like this that smell of "interesting intellectual puzzle".) slyfox on the #ghc IRC channel gave me some general debugging advice and, particularly usefully, a reduced example program that I could use to debug just the process-spawning problem without having to wade through noise from running the rest of the compiler. I reproduced the same problem there, and then found that the program crashed earlier (in stg_ap_0_fast, part of the run-time system) if I compiled it with +RTS -Da -RTS. I nailed it down to a small enough region of assembly that I could see all of the assembly, the source code, and an intermediate representation or two from the compiler, and then started meditating on what makes ppc64el special.
You see, the vast majority of porting bugs come down to what I might call gross properties of the architecture. You have things like whether it's 32-bit or 64-bit, big-endian or little-endian, whether char is signed or unsigned, that sort of thing. There's a big table on the Debian wiki that handily summarises most of the important ones. Sometimes you have to deal with distribution-specific things like whether GL or GLES is used; often, especially for new variants of existing architectures, you have to cope with foolish configure scripts that think they can guess certain things from the architecture name and get it wrong (assuming that powerpc* means big-endian, for instance). We often have to update config.guess and config.sub, and on ppc64el we have the additional hassle of updating libtool macros too. But I've done a lot of this stuff and I'd accounted for everything I could think of. ppc64el is actually a lot like amd64 in terms of many of these porting-relevant properties, and not even that far off arm64 which I'd just successfully ported GHC to, so I couldn't be dealing with anything particularly obvious. There was some hand-written assembly which certainly could have been problematic, but I'd carefully checked that this wasn't being used by the "unregisterised" (no specialised machine dependencies, so relatively easy to port but not well-optimised) build I was using. A problem around spawning processes suggested a problem with SIGCHLD handling, but I ruled that out by slowing down the first child process that it spawned and using strace to confirm that SIGSEGV was the first signal received. What on earth was the problem?
From some painstaking gdb work, one thing I eventually noticed was that stg_ap_0_fast's local stack appeared to be being corrupted by a function call, specifically a call to the colourfully-named debugBelch. Now, when IBM's toolchain engineers were putting together ppc64el based on ppc64, they took the opportunity to fix a number of problems with their ABI: there's an OpenJDK bug with a handy list of references. One of the things I noticed there was that there were some stack allocation optimisations in the new ABI, which affected functions that don't call any vararg functions and don't call any functions that take enough parameters that some of them have to be passed on the stack rather than in registers. debugBelch takes varargs: hmm. Now, the calling code isn't quite in C as such, but in a related dialect called "Cmm", a variant of C-- (yes, minus), that GHC uses to help bridge the gap between the functional world and its code generation, and which is compiled down to C by GHC. When importing C functions into Cmm, GHC generates prototypes for them, but it doesn't do enough parsing to work out the true prototype; instead, they all just get something like extern StgFunPtr f(void);. In most architectures you can get away with this, because the arguments get passed in the usual calling convention anyway and it all works out, but on ppc64el this means that the caller doesn't generate enough stack space and then the callee tries to save its varargs onto the stack in an area that in fact belongs to the caller, and suddenly everything goes south. Things were starting to make sense.
Now, debugBelch is only used in optional debugging code; but runInteractiveProcess (the function associated with the initial round of failures) takes no fewer than twelve arguments, plenty to force some of them onto the stack. I poked around the GCC patch for this ABI change a bit and determined that it only optimised away the stack allocation if it had a full prototype for all the callees, so I guessed that changing those prototypes to extern StgFunPtr f(); might work: it's still technically wrong, not least because omitting the parameter list is an obsolescent feature in C11, but it's at least just omitting information about the parameter list rather than actively lying about it. I tweaked that and ran the cross-build from scratch again. Lo and behold, suddenly I had a working compiler, and I could go through the same build-7.6-using-7.8 procedure as with arm64, much more quickly this time now that I knew what I was doing. One upstream bug, one Debian upload, and several bootstrapping builds later, and GHC was up and running on another architecture in Launchpad. Success!

Epilogue
There's still more to do. I gather there may be a Google Summer of Code project in Linaro to write proper native code generation for GHC on arm64: this would make things a good deal faster, but also enable GHCi (the interpreter) and Template Haskell, and thus clear quite a few more build failures. Since there's already native code generation for ppc64 in GHC, getting it going for ppc64el would probably only be a couple of days' work at this point. But these are niceties by comparison, and I'm more than happy with what I got working for 14.04.
The upshot of all of this is that I may be the first non-Haskell-programmer to ever port GHC to two entirely new architectures. I'm not sure if I gain much from that personally aside from a lot of lost sleep and being considered extremely strange. It has, however, been by far the most challenging set of packages I've ported, and a fascinating trip through some odd corners of build systems and undefined behaviour that I don't normally need to touch.
This is half a blog post and half a reminder for my future self.
So let's say you used the following commands:

    git add foo
    git annex add bar
    git annex sync
    # move to different location with different remotes available
    git add quux
    git annex add quuux
    git annex sync
What I wanted to happen was for it to simply sync the already committed stuff to the other remotes. What happened instead was git annex sync's automagic commit feature (which you cannot disable, it seems) doing its job: commit what was added earlier and use "git-annex automatic sync" as the commit message.
This is not a problem in and of itself, but as this is my master annex and as I have managed to maintain clean commit messages for the last few years, I felt the need to clean this mess up.
Changing old commit messages is easy:

    git rebase --interactive HEAD~3
Pick the r option for "reword" and amend the two commit messages. I did the same on my remote and on all the branches I could find with git branch -a. Problem is, git-annex pulls in changes from refs which are not shown as branches; run git annex sync and the old commits are back, along with a merge commit like an ugly cherry on top. Blegh.
I decided to leave my comfort zone and ended up with the following:

    # always back up before poking refs
    git clone --mirror repo backup
    git reset --hard 1234
    git show-ref | grep master
    # for every ref returned, do:
    git update-ref $ref 1234
rinse repeat for every remote, git annex sync, et voilà. And yes, I avoided using an actual loop on purpose; sometimes, doing things slowly and by hand just feels safer.
For good measure, I am running

    git fsck && git annex fsck
on all my remotes now, but everything looks good so far.
If the plain ASCII text below is mangled beyond verification, you can retrieve a copy of it from my web site that should be able to be verified.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

OTR Key Replacement for XMPP firstname.lastname@example.org
===========================================
Date: 2014-04-14

My main XMPP account is email@example.com. I prefer OTR conversations when using XMPP for private discussions.

I was using irssi to connect to XMPP servers, and irssi relies on OpenSSL for the TLS connections. I was using it with versions of OpenSSL that were vulnerable to the "Heartbleed" attack. It's possible that my OTR long-term secret key was leaked via this attack.

As a result, I'm changing my OTR key for this account. The new, correct OTR fingerprint for the XMPP account at firstname.lastname@example.org is:

F8953C5D 48ABABA2 F48EE99C D6550A78 A91EF63D

Thanks for taking the time to verify your peers' fingerprints. Secure communication is important not only to protect yourself, but also to protect your friends, their friends and so on.

Happy Hacking,

--dkg (Daniel Kahn Gillmor)

Notes:
OTR: https://otr.cypherpunks.ca/
Heartbleed: http://heartbleed.com/

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQJ8BAEBCgBmBQJTTBF+XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFQjk2OTEyODdBN0FEREUzNzU3RDkxMUVB
NTI0MDFCMTFCRkRGQTVDAAoJEKUkAbEb/fpcYwkQAKLzEnTV1lrK6YrhdvRnuYnh
Bh9Ad2ZY44RQmN+STMEnCJ4OWbn5qx/NrziNVUZN6JddrEvYUOxME6K0mGHdY2KR
yjLYudsBuSMZQ+5crZkE8rjBL8vDj8Dbn3mHyT8bAbB9cmASESeQMu96vni15ePd
2sB7iBofee9YAoiewI+xRvjo2aRX8nbFSykoIusgnYG2qwo2qPaBVOjmoBPB5YRI
PkN0/hAh11Ky0qQ/GUROytp/BMJXZx2rea2xHs0mplZLqJrX400u1Bawllgz3gfV
qQKKNc3st6iHf3F6p6Z0db9NRq+AJ24fTJNcQ+t07vMZHCWM+hTelofvDyBhqG/r
l8e4gdSh/zWTR/7TR3ZYLCiZzU0uYNd0rE3CcxDbnGTUS1ZxooykWBNIPJMl1DUE
zzcrQleLS5tna1b9la3rJWtFIATyO4dvUXXa9wU3c3+Wr60cSXbsK5OCct2KmiWY
fJme0bpM5m1j7B8QwLzKqy/+YgOOJ05QDVbBZwJn1B7rvUYmb968yLQUqO5Q87L4
GvPB1yY+2bLLF2oFMJJzFmhKuAflslRXyKcAhTmtKZY+hUpxoWuVa1qLU3bQCUSE
MlC4Hv6vaq14BEYLeopoSb7THsIcUdRjho+WEKPkryj6aVZM5WnIGIS/4QtYvWpk
3UsXFdVZGfE9rfCOLf0F
=BGa1
-----END PGP SIGNATURE-----
PyCon 2014 happened. (Sprints are still happening.)
This was my 3rd PyCon, but my first year as a serious contributor to the event, which led to an incredibly different feel. I also came as a person running a company building a complex system in Python, and I loved having the overarching mission of what I'm building driving my approach to what I chose to do. PyCon is one of the few conferences I go to where the feeling of acceptance and at-homeness mitigates the introvert overwhelm at nonstop social interaction. It's truly a special event and community.
Here are some highlights:
- I gave a tutorial about search, which was recorded in its entirety... if you watch this video, I highly recommend skipping the hands-on parts where I'm just walking around helping people out.
- I gave a talk! It's called Subprocess to FFI, and you can find the video here. Through three full iterations of dry runs with feedback, I had a ton of fun preparing this talk. I'd like to give more like it in the future as I continue to level up my speaking skills.
- Allen Downey came to my talk and found me later to say hi. Omg amazing, made my day.
- Aux Vivres and Dieu du Ciel, amazing eats and drink with great new and old friends. Special shout out to old Debian friends Micah Anderson, Matt Zimmerman, and Antoine Beaupré for a good time at Dieu du Ciel.
- The Geek Feminism open space was a great place to chill out and always find other women to hang with, much thanks to Liz Henry for organizing it.
- Talking to the community from the Inbox booth on Startup Row in the Expo hall on Friday. Special thanks to Don Sheu and Yannick Gingras for making this happen, it was awesome!
- The PyLadies lunch. Wow, was that amazing. Not only did I get to meet Julia Evans (who also liked meeting me!), but there was an amazing lineup of amazing women telling everyone about what they're doing. This and Naomi Ceder's touching talk about openly transitioning while being a member of the Python community really show how the community walks the walk when it comes to diversity and is always improving.
- Catching up with old friends like Biella Coleman, Selena Deckelmann, Deb Nicholson, Paul Tagliamonte, Jessica McKellar, Adam Fletcher, and even friends from the bay area who I don't see often. It was hard to walk places without getting too distracted running into people I knew; I got really good at waving and continuing on my way.
I didn't get to go to a lot of talks in person this year since my personal schedule was so full, but the PyCon video team is amazing as usual, so I'm looking forward to checking out the archive. It really is a gift to get the videos up while energy from the conference is still so high and people want to check out things they missed and share the talks they loved.
Thanks to everyone, hugs, peace out, et cetera!
I did a large upgrade tonight and noticed there was a mutt upgrade, no biggie really…. Except I have for years (incorrectly?) used the “i” key when reading a specific email to jump back to the list of emails, or from pager to index in mutt speak.
Instead of my pager of mails, I got “No news servers defined!” The fix is rather simple; in muttrc put

    bind pager i exit
and you’re back to using the i key the wrong way again like me.
(This is my first race of the 2014 season.)
I had entered this race in 2013 and found it was effective for focusing winter training. As triathlons do not typically start until May in the UK, scheduling earlier races can be motivating in the colder winter months.
I didn't have any clear goals for the race except to blow out the cobwebs and improve on my 2013 time. I couldn't set reasonable or reliable target times after considerable "long & slow" training in the off-season, but I did want to test some new equipment and strategies, especially race pacing with a power meter, but also a new wheelset, crankset and helmet.
Preparation was both accidentally and deliberately compromised: I did very little race-specific training as my season is based around an entirely different intensity of race, but compounding this I was confined to bed the weekend before.
Sleep was acceptable in the preceding days and I felt moderately fresh on race morning. Nutrition-wise, I had porridge and bread with jam for breakfast, a PowerGel before the race, 750ml of PowerBar Perform on the bike along with a "Hydro" PowerGel with caffeine at approximately 30km.

Run 1 (7.5km)
A few minutes before the start my race number belt—the only truly untested equipment that day—refused to tighten. However, I decided that once the race began I would either ignore it or even discard it, risking disqualification.
Despite letting everyone go up the road, my first km was still too fast so I dialed down the effort, settling into a "10k" pace and began overtaking other runners. The Fen winds and drag-strip uphill from 3km provided a bit of pacing challenge for someone used to shelter and shorter hills but I kept a metered effort through into transition.
- 33:01 (4:24/km, T1: 00:47) — Last year: 37:47 (5:02/km)
Although my 2014 bike setup features a power meter, I had not yet had the chance to perform an FTP test outdoors, and was thus unable to calculate a definitive target power for the bike leg. However, data from my road bike suggested I set a power ceiling of 250W on the longer hills.
This was extremely effective in avoiding going "into the red" and compromising the second run. This lends yet more weight to the idea that a power meter in multisport events is "almost like cheating".
I was not entirely comfortable with my bike position: not only were my thin sunglasses making me raise my head more than I needed to, I found myself creeping forward onto the nose of my saddle. This is sub-optimal, even if only considering that I am not training in that position.
Overall, the bike was uneventful with the only memorable moment provided by a wasp that got stuck between my head and a helmet vent. Coming into transition I didn't feel like I had really pushed myself that hard—probably a good sign—but the time difference from last year's bike leg (1:16:11) was a little underwhelming.
- 1:10:45 (T2: 00:58)
After leaving transition, my legs were extremely uncooperative and I had great difficulty in pacing myself in the first kilometer. Concentrating hard on reducing my cadence as well as using my rehearsed mental cue, I managed to settle down.
The following 4 kilometers were a mental struggle rather than a physical one, modulo having to force a few burps to ease some discomfort, possibly from drinking too much or too fast on the bike.
I had planned to "unload" as soon as I reached 6km but I didn't really have it in me. Whilst I am physiologically faster compared to last year, I suspect the lack of threshold-level running over the winter meant the mental component required for digging deep will require some coaxing to return.
However, it is said that you have successfully paced a duathlon if the second run is faster than the first. On this criterion, this was a success, but it would have been a bonus to have felt completely drained at the end of the day, if only from a neo-Calvinist perspective.
- 32:46 (4:22/km) / Last year: 38:10 (5:05/km)
- Total time
A race that goes almost entirely to plan is a bit of a paradox – there's certainly satisfaction in setting goals and hitting them without issue, but this is a gratification of slow-burning fire rather than the jubilation of a fireworks display.
However, it was nice to learn that I managed to finish 5th in my age group despite this race attracting an extremely strong field: as an indicator, the age-group athlete finishing immediately before me was seven minutes faster and the overall winner finished in 1:54:53 (!).
The race identified the following areas to work on:
- Perform an outdoor FTP test on my time-trial bike to develop an optimum power plan.
- Do a few more brick runs, at least to re-acclimatise to the feeling.
- Schedule another bike fit.
The Debian Project Leader election has concluded and the winner is Lucas Nussbaum. Of a total of 1003 developers, 401 voted using the Condorcet method.
More information about the result is available in the Debian Project Leader Elections 2014 page.
The new term for the project leader will start on April 17th and expire on April 17th 2015.
We had a bit of a rough night last night. I noticed Zoe was pretty hot when she had a nap yesterday after not really eating much lunch. She still had a mild fever after her nap, so I gave her some paracetamol (aka acetaminophen, that one weirded me out when I moved to the US) and called for a home doctor to check her ears out.
Her ears were fine, but her throat was a little red. The doctor said it was probably a virus. Her temperature wasn't so high at bed time, so I skipped the paracetamol, and she went to bed fine.
She did wake up at about 1:30am and it took me until 3am to get her back to bed. I think it was a combination of the fever and trying to phase out her white noise, but she just didn't want to sleep in her bed or her room. At 3am I admitted defeat and let her sleep with me.
She had only a slightly elevated temperature this morning, and otherwise seemed in good spirits. We were supposed to go to a family lunch today, because my sister and brother are in town with their respective families, but I figured we'd skip that on account that Zoe may have still had something, and coupled with the poor night's sleep, I wasn't sure how much socialising she was going to be up for.
My ear has still been giving me grief, and I had a home doctor check it yesterday as well, and he said the ear canal was 90% blocked. First thing this morning I called up to make an appointment with my regular doctor to try and get it flushed out. The earliest appointment I could get was 10:15am.
So we trundled around the corner to my doctor after a very slow start to the day. I got my ear cleaned out and felt like a million bucks afterwards. We went to Woolworths to order an undecorated mud slab cake, so I can try doing a trial birthday cake. I've given up on trying to do the sitting minion, and significantly scaled back to just a flat minion slab cake. That should be ready tomorrow.
The family thing was originally supposed to be tomorrow, and was only moved to today yesterday. My original plan had been to take Zoe to a free Dora the Explorer live show that was on in the Queen Street Mall.
I decided to revert back to the original plan, but by this stage, it was too late to catch the 11am show, so the 1pm show was the only other option. We had a "quick" lunch at home, which involved Zoe refusing to eat the sandwich I made for her and me convincing her otherwise.
Then I got a time-sensitive phone call from a friend, and once I'd finished dealing with that, there wasn't enough time to take any form of public transport and get there in time, so I decided to just drive in.
We parked in the Myer Centre car park, and quickly made our way up to the mall, and made it there comfortably with 5 minutes to spare.
The show wasn't anything much to phone home about. It was basically just 20 minutes of someone in a giant Dora suit acting out what was essentially a typical episode of Dora the Explorer, on stage, with a helper. Zoe started out wanting to sit on my lap, but made a few brief forays down to the "mosh pit" down the front with the other kids, dancing around.
After the show finished, we had about 40 minutes to kill before we could get a photo with Dora, so we wandered around the Myer Centre. I let Zoe choose our destinations initially, and we browsed a cheap accessories store that was having a sale, and then we wandered downstairs to one of the underground bus station platforms.
After that, we made our way up to Lincraft, and browsed. We bought a $5 magnifying glass, and I let Zoe do the whole transaction by herself. After that it was time to make our way back down for the photo.
Zoe made it first in line, so we were in and out nice and quick. We got our photos, and they gave her a little activity book as well, which she thought was cool, and then we headed back down the car park.
In my haste to park and get top side, I hadn't really paid attention to where we'd parked, and we came down via different elevators than we went up, so by the time I'd finally located the car, the exit gate was trying to extract an extra $5 parking out of me. Fortunately I was able to use the intercom at the gate and tell my sob story of being a nincompoop, and they let us out without further payment.
We swung by the Valley to clear my PO box, and then headed home. Zoe spontaneously announced she'd had a fun day, so that was lovely.
We only had about an hour and a half to kill before Sarah was going to pick up Zoe, so we just mucked around. Zoe looked at stuff around the house with her magnifying glass. She helped me open my mail. We looked at some of the photos on my phone. Dayframe and a Chromecast is a great combination for that. We had a really lovely spell on the couch where we took turns to draw on her Magna Doodle. That was some really sweet time together.
Zoe seemed really eager for her mother to arrive, and kept asking how much longer it was going to be, and going outside our unit's front door to look for her.
Sarah finally arrived and remarked that Zoe felt hot, so I checked her temperature; her fever had returned, so she's still fighting off whatever she has.
I decided to do my Easter egg shopping in preparation for Sunday. A friend suggested this cool idea of leaving rabbit paw tracks all over the house in baby powder, and I found a template online and got that all ready to go.
I had a really great yoga class tonight. Probably one of the best I've had in a while in terms of being able to completely clear my head.
I'm looking forward to an uninterrupted night's sleep tonight.
It describes a couple of attacks. The first is that some platforms store their Secure Boot policy in a run time UEFI variable. UEFI variables are split into two broad categories - boot time and run time. Boot time variables can only be accessed while in boot services - the moment the bootloader or kernel calls ExitBootServices(), they're inaccessible. Some vendors chose to leave the variable containing firmware settings available during run time, presumably because it makes it easier to implement tools for modifying firmware settings at the OS level. Unfortunately, some vendors left bits of Secure Boot policy in this space. The naive approach would be to simply disable Secure Boot entirely, but that means that the OS would be able to detect that the system wasn't in a secure state. A more subtle approach is to modify the policy, such that the firmware chooses not to verify the signatures on files stored on fixed media. Drop in a new bootloader and victory is ensured.
But that's not a beautiful approach. It depends on the firmware vendor having made that mistake. What if you could just rewrite arbitrary variables, even if they're only supposed to be accessible in boot services? Variables are all stored in flash, connected to the chipset's SPI controller. Allowing arbitrary access to that from the OS would make it straightforward to modify the variables, even if they're boot time-only. So, thankfully, the SPI controller has some control mechanisms. The first is that any attempt to enable the write-access bit will cause a System Management Interrupt, at which point the CPU should trap into System Management Mode and (if the write attempt isn't authorised) flip it back. The second is to disable access from the OS entirely - all writes have to take place in System Management Mode.
The MITRE results show that around 0.03% of modern machines enable the second option. That's unfortunate, but the first option should still be sufficient. Except the first option relies on the SMI actually firing. And, conveniently, Intel's chipsets have a bit that allows you to disable all SMI sources, and then have another bit to disable further writes to the first bit. Except 40% of the machines MITRE tested didn't bother setting that lock bit. So you can just disable SMI generation, remove the write-protect bit on the SPI controller and then write to arbitrary variables, including the SecureBoot enable one.
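That SecureBoot variable is readable from a running Linux system via efivarfs, which is how an OS (or a user) would normally check the reported state. A minimal sketch, assuming efivarfs's layout of a 4-byte little-endian attribute mask followed by the variable's payload; the filename uses the standard EFI global-variable GUID:

```python
import struct

# Path on Linux, assuming efivarfs is mounted in the usual place.
SECUREBOOT_PATH = (
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"  # EFI global-variable GUID
)

def parse_efivar(blob: bytes):
    """Split an efivarfs blob into (attributes, data).

    efivarfs prepends a 4-byte little-endian attribute mask to the
    variable's actual payload.
    """
    if len(blob) < 4:
        raise ValueError("too short to hold an attribute mask")
    (attrs,) = struct.unpack_from("<I", blob)
    return attrs, blob[4:]

def secure_boot_enabled(data: bytes) -> bool:
    """The SecureBoot variable's payload is a single byte: 1 = enabled."""
    return len(data) == 1 and data[0] == 1

# Usage (requires an EFI system):
#   attrs, data = parse_efivar(open(SECUREBOOT_PATH, "rb").read())
#   print(secure_boot_enabled(data))
```

Of course, this only tells you what the firmware claims, which is exactly why the attacks above matter.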
This is, uh, obviously a problem. The good news is that this has been communicated to firmware and system vendors and it should be fixed in the future. The bad news is that a significant proportion of existing systems can probably have their Secure Boot implementation circumvented. This is pretty unsurprising - I suggested that the first few generations would be broken back in 2012. Security tends to be an iterative process, and changing a branch of the industry that's historically not had to care into one that forms the root of platform trust is a difficult process. As the MITRE paper says, UEFI Secure Boot will be a genuine improvement in security. It's just going to take us a little while to get to the point where the more obvious flaws have been worked out.
 Unless the malware was intelligent enough to hook GetVariable, detect a request for SecureBoot and then give a fake answer, but who would do that?
 Impressively, basically everyone enables that.
 Great for dealing with bugs caused by YOUR ENTIRE COMPUTER BEING INTERRUPTED BY ARBITRARY VENDOR CODE, except unfortunately it also probably disables chunks of thermal management and stops various other things from working as well.
New module 'bdtDt', replacing the old 'bdtDate' module, in a more transparent style: it uses a local class which is wrapped, just like the three other new classes do
New module 'bdtTd' providing date durations which can be added to dates.
New module 'bdtTz' providing time zone information such as offset to UTC, amount of DST, abbreviated and full timezone names.
New module 'bdtDu' using 'posix_time::duration' for time duration types
New module 'bdtPt' using 'posix_time::ptime' for posix time, down to nanosecond granularity (where hardware and OS permit it)
Now selects C++11 compilation by setting CXX_STD = CXX11 in src/Makevars* and hence depends on R 3.1.0 or later – this gives us the long long type needed for the nanosecond high-resolution time calculations across all builds and platforms.
Courtesy of CRANberries, there is also a diffstat report for the latest release. As always, feedback is welcome and the rcpp-devel mailing list off the R-Forge page for Rcpp is the best place to start a discussion.

Update: I just learned the hard way that the combination of a 32-bit OS, g++ at version 4.7 or newer and a Boost version of 1.53 or 1.54 does not work with this new upload. Some Googling suggests that this ought to have been fixed in Boost 1.54; seemingly it isn't, as our trusted BH package with Boost headers provides that very version 1.54. However, the Googling also suggested a quick two-line fix which I just committed in the GitHub repo. A new BH package with the fix may follow in a few days.
I'm pondering a rewrite of my console-based mail-client.
While it is "popular" it is not popular.
I suspect "console-based" is the killer.
I like console, and I ssh to a remote server to use it, but having different front-ends would be neat.
In the world of mailpipe, etc., is there room for a graphical client? Possibly.
The limiting factor would be the lack of POP3/IMAP.
Reworking things such that there is a daemon to which a GUI, or a console client, could connect seems simple. The hard parts would obviously be working out the IPC and writing the GUI. Any toolkit selected would rule out 40% of the audience.
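For what it's worth, the daemon half of that split really can be small. A hypothetical sketch (the command names, the in-memory "mail store", and the whole protocol are invented for illustration), with the protocol logic kept separate from the socket plumbing so any front-end can talk to it:

```python
# Hypothetical daemon/front-end split: the daemon owns the mail store and
# speaks a tiny line-based protocol; console or GUI front-ends are clients.
import socketserver

# Stand-in mail store; a real daemon would read a maildir or similar.
MAILSTORE = {"1": "From: alice  Subject: hello", "2": "From: bob  Subject: lunch?"}

def handle_command(line: str) -> str:
    """Pure protocol logic, kept separate from the socket plumbing."""
    cmd, _, arg = line.strip().partition(" ")
    if cmd == "LIST":
        return " ".join(sorted(MAILSTORE))
    if cmd == "READ":
        return MAILSTORE.get(arg, "ERR no such message")
    return "ERR unknown command"

class MailHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One command per line; QUIT closes the connection.
        for raw in self.rfile:
            line = raw.decode()
            if line.strip() == "QUIT":
                break
            self.wfile.write((handle_command(line) + "\n").encode())

# To run the daemon (blocks forever):
#   socketserver.TCPServer(("127.0.0.1", 7331), MailHandler).serve_forever()
```

The IPC and the GUI remain the hard parts, as noted, but the daemon boundary itself is cheap.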
In other news I'm stalling on replying to emails. Irony.
One of the big news stories of the week has been “the Heartbleed bug”. If you know a techie person, you might have noticed that person looking a bit more stressed and tired than usual since Monday (that was certainly true of me). Some of the discussion might seem a bit confusing and/or scary; what’s worse, the non-tech press has started getting some of the details wrong and scare-mongering for readers.
So here’s my non-techie guide to what all the fuss is about. If you’re a techie, this advice isn’t for you; chances are, you already know what you should be doing to help fix this.
(If you’re a techie and you don’t know, ask! You might just need a little education on what needs to happen, and there’s nothing wrong with that, but you’ll be better off asking and possibly looking foolish than you will be if you get hacked.)
If you’re not inclined to read the whole thing, here are the important points:
- Don’t panic! There are reports of people cleaning out their bank accounts, cutting off their Internet service, buying new computers, etc. If you’re thinking about doing anything drastic because you’re scared of Heartbleed, don’t.
- You’ll probably need to change a lot of your passwords on various sites, but wait until each site you use tells you to.
- This is mostly a problem for site servers, not PCs or phones or tablets. Unless you’re doing something unusual (and you’d know if you were), you’re fine as long as you update your devices like you usually do. (You do update your devices, right?)
There’s a notion called a “heartbeat signal”, where two computers talking to each other say “Hey, you there?” every so often. This is usually done by computer #1 sending some bit of data to computer #2, and computer #2 sending it back. In this particular situation, the two computers actually send both a bit of data and the length of that bit of data.
Some of you might be asking “so what happens if computer #1 sends a little bit of data, but lies and says the data is a lot longer than that?” In a perfect world, computer #2 would scold computer #1 for lying, and that’s what happens now with the bug fix. But before early this week, computer #2 would just trust computer #1 in one very specific case.
Now, computers use memory to keep track of stuff they’re working on, and they’re constantly asking for memory and then giving it back when they’re done, so it can be used by something else. So, when you ask for memory, the bit of memory you get might have the results of what the program was doing just a moment ago–things like decrypting a credit card using a crypto key, or checking a password.
This isn’t normally a problem, since it’s the same program getting its own memory back. But if it’s using this memory to keep track of these heartbeats, and it’s been tricked into thinking it needs to send back “the word HAT, which is 500 characters long“, then character 4 and following is likely to be memory used for something just a moment ago.
And that, by the way, is where the name comes from: the heartbeat signal bleeds data, so “Heartbleed”. There’s been some fascinating commentary on how well this bug has been marketed; hopefully, we in the techie community will learn something about how to explain problems like this for future incidents.

Does this affect every site?
No. Only sites using certain newer versions of cryptographic software called “OpenSSL” are affected by this. OpenSSL is very popular; I’ve seen estimates that anywhere from a third to a half of all secure Internet sites use it. But not all of those sites will have the bug, since it was only introduced in the last two years.
How do we know this? OpenSSL is open source, and is developed “in public”. Because of that, we know the exact moment when the bug was introduced, when it was released to the world, and when it was fixed.
(And, just for the record, it was an honest mistake. Don’t go and slam on the poor guy who wrote the code with the bug. It should have been caught by a number of different people, and none of them noticed it, so it’s a lot more complicated than “it’s his fault! pitchforks and torches!”)

What should I do?
Nothing, yet. Right now, this is mostly a techie problem.
Remember that bit about crypto keys? That’s the part which puts the little lock icon next to the URL in your browser when you go to your bank’s Web site, or to Amazon to buy things, or whatever. The crypto keys make sure that your conversation with your bank about your balance is just between you and your bank.
That’s also the part which is making techies the world over a little more stressed and tired. You see, we know that the people who found the bug were “good guys” and helped to get the bug fixed, but we don’t know if any “bad guys” found the bug before this week. And if a “bad guy” used the bug to extract crypto keys, they would still have those crypto keys, and could still use them even though the original bug is fixed. That would mean that a “bad guy” could intercept your conversation with your bank / Amazon / whoever.
Since we don’t know, we have to do the safe thing, and assume that all our keys were in fact stolen, That means we have to redo all our crypto keys. That’s a lot of work.
And because your password is likely protected with those same crypto keys, if a “bad guy” has Amazon’s key, they’d be able to watch you change your password at Amazon. Maybe they didn’t even have your old password, but now they have your new one. Oops. You’re now less secure than you were.
Now, it’s important to make sure we’re clear: we don’t know that this has happened. There’s really no way of knowing, short of actually catching a “bad guy” in the act, and we haven’t caught anyone–yet. So, this is a safety measure.
Thus, the best thing to do is: don’t panic. Continue to live life as usual. It might be prudent to put off doing some things for a few days, but I wouldn’t even worry so much about that. If you pay your bills online, for example, don’t risk paying a bill late out of fear. Remember: so far, we have no evidence that anyone’s actually doing anything malicious with this bug.
At some point, a lot of sites are going to post a notice that looks a lot like this:
We highly recommend our users change the password on their Linux Foundation ID—which is used for the logins on most Linux Foundation sites, including our community site, Linux.com—for your own security and as part of your own comprehensive effort to update and secure as many of your online credentials as you can.
(That’s the notice my employer posted once we had our site in order.)
That will be your cue that they’ve done the work to redo their crypto keys, and that it’s now safe to change your password.
A lot of sites will make statements saying, essentially, “we don’t have a problem”. They’re probably right. Don’t second-guess them; just exhale, slowly, and tick that site off your list of things to worry about.
Other sites might not say anything. That’s the most worrying part, because it’s hard to tell if they’re OK or not. If it’s an important site to you, the best course of action might be to just ask, or search on Google / Bing / DuckDuckGo / wherever for some kind of statement.

What about your site?
Yup, I use OpenSSL, and I was vulnerable. But I’m the only person who actually logs in to anything on this site. I’ve got the bugfix, but I’m still in the process of creating new keys.
Part of the problem is that everyone else is out there creating new keys at the same time, which creates a bit of a traffic jam.
So yeah, if you were thinking of posting your credit card number in a comment, and wanted to make sure you did it securely… well, don’t do that. EVER. And not because of Heartbleed.
Little snow, but an above-average season. The macro weather situation was very stable this year: very high snowfall in Austria's south (Eastern Tyrol and Carinthia), and long periods of warm and sunny weather with little precipitation on the northern side of the Alps (i.e. us).
This had me going snowboarding a lot, but almost exclusively in Damüls since it is characterized by a) grassy terrain (no stones) and b) huge numbers of snow cannons.
I started early (December 7) with another 6 days on piste in December. If there had been more snow, the season would have been a long one, too. Season's end depends on the timing of Easter (because of the holidays), which would have been late this year. However, I again stopped rather early; my last day was March 30.
In addition to the days listed below I had an early season's opening at the glacier in Pitztal. I attended the Pureboarding event in November (21st to 23rd). Looking back at the season I am not quite satisfied with my progress; I just have not managed to implement and practise the technique I should have learned there. It is next to impossible when the slopes are full, and when they aren't one likes to give it a run. ;-)
Here is the balance sheet:

                          2005/06 2006/07 2007/08 2008/09 2009/10 2010/11 2011/12 2012/13 2013/14
number of (partial) days       25      17      29      37      30      30      25      23      30
Damüls                         10      10       5      10      16      23      10       4      29
Diedamskopf                    15       4      24      23      13       4      14      19       1
Warth/Schröcken                 0       3       0       4       1       3       1       0       0
total meters of altitude   124634   74096  219936  226774  202089  203918  228588  203562  274706
highscore                  10247m   8321m  12108m  11272m  11888m  10976m  13076m  13885m  12848m
# of runs                     309     189     503     551     462     449     516     468     597
Between 2001 and 2004, John Wiegley wrote emacs-chess, a rather complete Chess library for Emacs. I found it around 2004, and was immediately hooked. Why? Because Emacs is configurable, and I was hoping that I could customize the chessboard display much more than with any other console based chess program I have ever seen. And I was right. One of the four chessboard display types is exactly what I was looking for, chess-plain.el:

┌────────┐
8│tSlDjLsT│
7│XxXxXxXx│
6│ ⡀ ⡀ ⡀ ⡀│
5│⡀ ⡀ ⡀ ⡀ │
4│ ⡀ ⡀ ⡀ ⡀│
3│⡀ ⡀ ⡀ ⡀ │
2│pPpPpPpP│
1│RnBqKbNr│
└────────┘
 abcdefgh
This might look confusing at first, but I have to admit that I grew rather fond of this way of displaying chess positions as ASCII diagrams. In this configuration, initial letters for (mostly) German chess piece names are used for the black pieces, and English chess piece names are used for the white pieces. Uppercase is used to indicate if a piece is on a black square and braille dot 7 is used to indicate an empty black square.
chess-plain is completely configurable though, so you can have more classic diagrams like this as well:

┌────────┐
8│rnbqkbnr│
7│pppppppp│
6│ + + + +│
5│+ + + + │
4│ + + + +│
3│+ + + + │
2│PPPPPPPP│
1│RNBQKBNR│
└────────┘
 abcdefgh
Here, upper case letters indicate white pieces, and lower case letters black pieces. Black squares are indicated with a plus sign.
However, as with many Free Software projects, Emacs Chess was rather dormant for the last 10 years. For some reason that I cannot even remember right now, my interest in Emacs Chess was reignited roughly 5 weeks ago.

Universal Chess Interface
It all began when I did a casual apt-cache search for chess engines, only to discover that a number of free chess engines had been developed and packaged for Debian in the last 10 years. In 2004 there were basically only GNUChess, Phalanx and Crafty. These days, a number of UCI based chess engines have been added, like Stockfish, Glaurung, Fruit or Toga2. So I started by learning how the new chess engine communication protocol, UCI, actually works. After a bit of playing around, I had a basic engine module for Emacs Chess that could play against Stockfish. Once I had developed a thin layer for all the things that UCI engines have in common (chess-uci.el), it was actually very easy to implement support for Stockfish, Glaurung and Fruit in Emacs Chess. Good, three new free engines supported.

Opening books
When I learnt about the UCI protocol, I discovered that most UCI engines these days do not do their own book handling. In fact, it is sort of expected from the GUI to do opening book moves. And here one thing led to another. There is quite good documentation about the Polyglot chess opening book binary format on the net. And since I absolutely love to write binary data decoders in Emacs Lisp (don't ask, I don't know why) I immediately started to write Polyglot book handling code in Emacs Lisp, see chess-polyglot.el.
It turns out that it is relatively simple and actually performs very well. Even a lookup in an opening book bigger than 100 megabytes happens more or less instantaneously, so you do not notice the time required to find moves in an opening book. Binary search is just great. And binary searching binary data in Emacs Lisp is really fun :-).
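The Polyglot format is friendly to exactly this kind of lookup: the book is an array of fixed-size 16-byte records (a 64-bit Zobrist hash key, then move, weight and learn fields, all big-endian), sorted by key. A sketch of the lookup in Python rather than Emacs Lisp, assuming that commonly documented layout:

```python
import struct
from bisect import bisect_left

# Polyglot entry: key (u64), move (u16), weight (u16), learn (u32), big-endian.
ENTRY = struct.Struct(">QHHI")   # 16 bytes per record

class _Keys:
    """Lazy sequence of entry keys, so bisect can search the raw bytes
    without decoding the whole book up front."""
    def __init__(self, book: bytes):
        self.book = book
    def __len__(self):
        return len(self.book) // ENTRY.size
    def __getitem__(self, i):
        return ENTRY.unpack_from(self.book, i * ENTRY.size)[0]

def book_moves(book: bytes, key: int):
    """Yield (move, weight) for every entry matching a position's key."""
    keys = _Keys(book)
    i = bisect_left(keys, key)           # binary search: O(log n) probes
    while i < len(keys):
        k, move, weight, _learn = ENTRY.unpack_from(book, i * ENTRY.size)
        if k != key:
            break
        yield move, weight
        i += 1
```

Because each probe touches only 16 bytes, even a 100 MB book needs only a couple of dozen record reads per lookup, which is why it feels instantaneous.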
So Emacs Chess can now load and use Polyglot opening book files. I integrated this functionality into the common UCI engine module, so Emacs Chess, when fed with a Polyglot opening book, can now choose moves from that book instead of consulting the engine to calculate a move. Very neat! Note that you can create your own opening books from PGN collections, or just download a Polyglot book made by someone else.

Internet Chess Servers
Later I reworked the internet chess server backend of Emacs Chess a bit (sought games are now displayed with tabulated-list-mode), and found and fixed some (rather unexpected) bugs in the way legal moves are calculated (if we capture the opponent's rook, their right to castle on that side needs to be cleared).
Emacs Chess supports two of the best-known internet chess servers: the Free Internet Chess Server (FICS) and chessclub.com (ICC).

A Chess engine written in Emacs Lisp
And then I rediscovered my own little chess engine implemented in Emacs Lisp. I wrote it back in 2004, but never really finished it. After I finally found a small (but important) bug in the static position evaluation function, I was motivated enough to fix my native Emacs Lisp chess engine. I implemented quiescence search so that capture combinations are actually evaluated and not just pruned at a hard limit. This made the engine quite a bit slower, but it actually results in relatively good play. Since the thinking time went up, I implemented a small progress bar so one can actually watch what the engine is doing right now. chess-ai.el is a very small Lisp implementation of a chess engine. Static evaluation, alpha-beta and quiescence search included. It covers the basics, so to speak. So if you don't have any of the above mentioned external engines installed, you can even play a game of Chess against Emacs directly.

Other features
The feature list of Emacs Chess is rather impressive. You can not only play a game of Chess against an engine, you can also play against another human (either via ICS or directly from Emacs to Emacs), view and edit PGN files, solve chess puzzles, and much much more. Emacs Chess is really a universal chess interface for Emacs.

Emacs-chess 2.0
In 2004, John and I were already planning to get emacs-chess 2.0 out the door. Well, 10 years have passed, and both of us had forgotten about this wonderful codebase. I am trying to change this. I am in development/maintenance mode for emacs-chess again. John has also promised to find a bit of time to work on a final 2.0 release.
If you are an Emacs user who knows and likes to play Chess, please give emacs-chess a whirl. If you find any problems, please file an issue on GitHub, or better yet, send us a pull request.
There is an emacs-chess Debian package which has not been updated in a while. If you want to test the new code, be sure to grab it from GitHub directly. Once we reach a state that at least feels like stable, I am going to update the Debian package of course.
Thirty years ago I started to learn programming. To celebrate this, I'm doing a bit of programming as a sort of performance art. I will write a new program, from scratch, until it is ready for me to start using it for real. The program won't be finished, but it will be ready for my own production use. It'll be something I have wanted to have for a while, but I'm not saying beforehand what it will be. For me, the end result is interesting; for you, the interesting part is watching me be stupid and make funny mistakes.
The performance starts Friday, 18 April 2014, at 09:00 UTC. I apologise if this is an awkward time for you. No time is good for everyone, so I picked a time that is good for me.
Run the following command to see what the local time will be for you:

date --date '2014-04-18 09:00:00 UTC'
While I write this program, I will broadcast my terminal to the Internet for anyone to see. For instructions, see the http://liw.fi/distix/performance-art/ page.
There will be an IRC channel as well: #distix on the OFTC network (irc.oftc.net). Feel free to join there if you want to provide real time feedback (the laugh track).
I got one of those rare opportunities to calibrate Zoe's outlook on people on Friday. I feel pretty happy with the job I did.
Once we arrived at the New Farm Park ferry terminal, the girls wanted to have some morning tea, so we camped out in the terminal to have something to eat. Kim had packed two poppers (aka "juice boxes") for Sarah, so they both got to have one. Nice one, Kim!
Not long after we started morning tea, an older woman with some sort of (presumably intellectual) disability and her carer arrived to wait for a ferry. I have no idea what the disability was, but it presented as her being unable to speak. She'd repeatedly make a single grunting noise, held her hands a bit funny, would repeatedly stand up and walk in a circle, and try to rummage through the rubbish bin next to her. I exchanged a smile with her carer. The girls were a little bit wary of her because she was acting strangely. Sarah whispered something to me, inquiring what was up with her. Zoe asked me to accompany her to the rubbish bin to dispose of her juice box.
I didn't feel like talking about the woman within her earshot, so I waited until they'd boarded their ferry, and we'd left the terminal before talking about the encounter. It also gave me a little bit of time to construct my explanation in my head.
I specifically wanted to avoid phrases like "something wrong" or "not right". For all I knew she could have had cerebral palsy, and had a perfectly good brain trapped inside a malfunctioning body.
So I explained that the woman had "special needs" and that people with special needs have bodies or brains that don't work the same way as us, and so just like little kids, they need an adult carer to take care of them so they don't hurt themselves or get lost. In the case of the woman we'd just seen, she needed a carer to make sure she didn't get lost or rummage through the rubbish bin.
That explanation seemed to go down pretty well, and that was the end of that. Maybe next time such circumstances permit, I'll try striking up a conversation with the carer.
Wow, it's been a while since I've done this. In part because I've not had much time for reading books (which doesn't prevent me from buying them).
Jared Bernstein & Dean Baker — Getting Back to Full Employment
James Coughtrey — Six Seconds of Moonlight (sff)
Philip J. Davis & Reuben Hersh — The Mathematical Experience (non-fiction)
Debra Dunbar — A Demon Bound (sff)
Andy Duncan & Ellen Klages — Wakulla Springs (sff)
Dave Eggers & Jordan Bass — The Best of McSweeney's (mainstream)
Siri Hustvedt — The Blazing World (mainstream)
Jacqueline Koyanagi — Ascension (sff)
Ann Leckie — Ancillary Justice (sff)
Adam Lee — Dark Heart (sff)
Seanan McGuire — One Salt Sea (sff)
Seanan McGuire — Ashes of Honor (sff)
Seanan McGuire — Chimes at Midnight (sff)
Seanan McGuire — Midnight Blue-Light Special (sff)
Seanan McGuire — Indexing (sff)
Naomi Mitchison — Travel Light (sff)
Helaine Olen — Pound Foolish (non-fiction)
Richard Powers — Orfeo (mainstream)
Veronica Schanoes — Burning Girls (sff)
Karl Schroeder — Lockstep (sff)
Charles Stross — The Bloodline Feud (sff)
Charles Stross — The Traders' War (sff)
Charles Stross — The Revolution Trade (sff)
Matthew Thomas — We Are Not Ourselves (mainstream)
Kevin Underhill — The Emergency Sasquatch Ordinance (non-fiction)
Jo Walton — What Makes This Book So Great? (non-fiction)
So, yeah. A lot of stuff.
I went ahead and bought nearly all of the novels Seanan McGuire had out that I'd not read yet after realizing that I'm going to eventually read all of them and there's no reason not to just own them. I also bought all of the Stross reissues of the Merchant Princes series, even though I had some of the books individually, since I think it will make it more likely I'll read the whole series this way.
I have so much stuff that I want to read, but I've not really been in the mood for fiction. I'm trying to destress enough to get back in the mood, but in the meantime have mostly been reading non-fiction or really light fluff (as you'll see from my upcoming reviews). Of that long list, Ancillary Justice is getting a lot of press and looks interesting, and Lockstep is a new Schroeder novel. 'Nuff said.
Kevin Underhill is the author of Lowering the Bar, which you should read if you haven't since it's hilarious. I'm obviously looking forward to that.
The relatively obscure mainstream novels here are more Powell's Indiespensable books. I will probably cancel that subscription soon, at least for a while, since I'm just building up a backlog, but that's part of my general effort to read more mainstream fiction. (I was a bit disappointed since there were several months with only one book, but the current month finally came with two books again.)
Now I just need to buckle down and read. And play video games. And do other things that are fun rather than spending all my time trying to destress from work and zoning in front of the TV.
Review: Cryptography Engineering, by Niels Ferguson, et al.

Publisher: Wiley
Copyright: 2010
ISBN: 0-470-47424-6
Format: Kindle
Pages: 384
Subtitled Design Principles and Practical Applications, Cryptography Engineering is intended as an overview and introduction to cryptography for the non-expert. It doesn't dive deeply into the math, although there is still a fairly thorough mathematical introduction to public-key cryptography. Instead, it focuses on the principles, tools, and algorithms that are the most concretely useful to a practitioner who is trying to design secure systems rather than doing theoretical cryptography.
The "et al." in the author summary hides Bruce Schneier and Tadayoshi Kohno, and this book is officially the second edition of Practical Cryptography by Ferguson and Schneier. Schneier's name will be familiar from, among other things, Applied Cryptography, and I'll have more to say later about which of the two books one should read (and the merits of reading both). But one of the immediately-apparent advantages of Cryptography Engineering is that it's recent. Its 2010 publication date means that it recommends AES as a block cipher, discusses MD5 weaknesses, and can discuss and recommend SHA-2. For the reader whose concern with cryptography is primarily "what should I use now for new work," this has huge benefit.
"What should I use for new work" is the primary focus of this book. There is some survey of the field, but that survey is very limited compared to Applied Cryptography and is tightly focused on the algorithms and approaches that one might reasonably propose today. Cryptography Engineering also attempts to provide general principles and simplifying assumptions to steer readers away from trouble. One example, and the guiding principle for much of the book, is that any new system needs at least a 128-bit security level, meaning that any attack will require 2128 steps. This requirement may be overkill in some edge cases, as the authors point out, but when one is not a cryptography expert, accepting lower security by arguments that sound plausible but may not be sound is very risky.
Cryptography Engineering starts with an overview of cryptography, the basic tools of cryptographic analysis, and the issues around designing secure systems and protocols. I like that the authors not only make it clear that security programming is hard but provide a wealth of practical examples of different attack methods and failure modes, a theme they continue throughout the book. From there, the book moves into a general discussion of major cryptographic areas: encryption, authentication, public-key cryptography, digital signatures, PKI, and issues of performance and complexity.
Part two, on message security, starts the in-depth discussion with chapters on block ciphers, block cipher modes, hash functions, and MACs. The block cipher mode discussion is particularly good and includes algorithms newer than those in Applied Cryptography. This part closes with a walkthrough of constructing a secure channel, in pseudocode, and a chapter on implementation issues. The implementation chapters throughout the book are necessarily more general, but for me they were among the most useful parts of the book, since they take a step back from the algorithms and look at the perils and pitfalls of using them to do real work.
The third part of the book is on key negotiation and encompasses random numbers, prime numbers, Diffie-Hellman, RSA, a high-level look at cryptographic protocols, and a detailed look at key negotiation. This will probably be the hardest part of the book for a lot of readers, since the introduction to public-key is very heavy on math. The authors feel that's unavoidable to gain any understanding of the security risks and attack methods against public-key. I'm not quite convinced. But it's useful information, if heavy going that requires some devoted attention.
I want to particularly call out the chapter on random numbers, though. This is an often-overlooked area in cryptography, particularly in introductions for the non-expert, and this is the best discussion of pseudo-random number generators I've ever seen. The authors walk through the design of Fortuna as an illustration of the issues and how they can be avoided. I came away with a far better understanding of practical PRNG design than I've ever had (and more sympathy for the annoying OpenSSL ~/.rnd file).
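As a rough illustration of the generator half of that design (not the accumulator/pool machinery), here is a toy Fortuna-style generator. Real Fortuna uses AES-256 in counter mode; SHA-256 stands in here purely to keep the sketch dependency-free, so treat this as a structural sketch and nothing more:

```python
import hashlib

class TinyFortunaGenerator:
    """Toy sketch of Fortuna's generator structure: a keyed counter-mode
    stream, with the key replaced after every request so that earlier
    output cannot be reconstructed if the internal state leaks later
    (a property Fortuna calls forward security for the generator)."""

    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def _block(self) -> bytes:
        # One 32-byte block of the keyed counter-mode stream.
        out = hashlib.sha256(self.key + self.counter.to_bytes(16, "big")).digest()
        self.counter += 1
        return out

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += self._block()
        # Rekey after the request: future output no longer depends on the
        # key that produced the bytes we just handed out.
        self.key = self._block()
        return out[:n]
```

The rekey-after-every-request step is the part the chapter dwells on, and it is easy to overlook in naive PRNG designs.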
The last substantial part of the book is on key management, starting with a discussion of time and its importance in cryptographic protocols. From there, there's a discussion of central trusted key servers and then a much more comprehensive discussion of PKI, including the problems with revocation, key lifetime, key formats, and keeping keys secure. The concluding chapter of this part is a very useful discussion of key storage, which is broad enough to encompass passwords, biometrics, and secure tokens. This is followed by a short part discussing standards, patents, and experts.
A comparison between this book and Applied Cryptography reveals less attention to the details of cryptographic algorithms (apart from random number generators, where Cryptography Engineering provides considerably more useful information), to wide-ranging surveys of algorithms, and to the underlying mathematics. Cryptography Engineering also makes several interesting narrowing choices, such as skipping stream ciphers almost entirely. Less surprisingly, this book covers only a tiny handful of cryptographic protocols; there's nothing here about zero-knowledge proofs, blind signatures, bit commitment, or even secret sharing, except a few passing mentions. That's realistic: those protocols are often extremely difficult to understand, and the typical security system doesn't use them.
Replacing those topics is considerably more discussion of implementation techniques and pitfalls, including more assistance from the authors on how to choose good cryptographic building blocks and how to combine them into useful systems. This is a difficult topic, as they frequently acknowledge, and a lot of the advice is necessarily fuzzy, but they at least provide an orientation. To get much out of Applied Cryptography, you needed a basic understanding of what cryptography can do and how you want to use it. Cryptography Engineering tries to fill in that gap to the point where any experienced programmer should be able to see what problems cryptography can solve (and which it can't).
That brings me back to the question of which book you should read, and a clear answer: start here, with Cryptography Engineering. It's more recent, which means that the algorithms it discusses are more directly applicable to day-to-day work. The block cipher mode and random number generator chapters are particularly useful, even if, for the latter, one will probably use a standard library. And it takes firmer stands rather than just surveying. This comes with the risk of general principles that aren't correct in specific situations, but I think for most readers the additional guidance is vital.
That said, I'm still glad I read Applied Cryptography, and I think I would still recommend reading it after this book. The detailed analysis of DES in Applied Cryptography is worth the book by itself, and more generally the survey of algorithms is useful in showing the range of approaches that can be used. And the survey of cryptographic protocols, if very difficult reading, provides tools for implementing (or at least understanding) some of the fancier and more cutting-edge things that one can do with cryptography.
But this is the place to start, and I wholeheartedly recommend Cryptography Engineering to anyone working in computer security. Whether you're writing code, designing systems, or even evaluating products, this is a very useful book to read. It's a comprehensive introduction if you don't know anything about the field, but deep enough that I still got quite a bit of new information from it despite having written security software for years and having already read Applied Cryptography. Highly recommended. I will probably read it from cover to cover a second time when I have some free moments.
Rating: 9 out of 10