After I moved to a new OpenPGP key (see key transition statement) I have received comments about the short lifetime of my new key. When I created the key (see my GnuPG setup) I set it to expire after 100 days. Some people assumed that I would have to create a new key then, and therefore wondered what value there is in signing a key that will expire in two months. It doesn’t work like that, and below I will explain how OpenPGP key expiration works; how to extend the expiration time of your key; and argue why having a relatively short validity period can be a good thing.
You can print the sub-packets in your OpenPGP key with gpg --list-packets. See below an output for my key, and notice the “created 1403464490” (which is Unix time for 2014-06-22 21:14:50) and the “subpkt 9 len 4 (key expires after 100d0h0m)”, which adds up to an expiration on 2014-09-30. Don’t confuse the creation time of the key (“created 1403464321”) with when the signature was created (“created 1403464490”).

jas@latte:~$ gpg --export 54265e8c | gpg --list-packets |head -20
:public key packet:
        version 4, algo 1, created 1403464321, expires 0
        pkey: [3744 bits]
        pkey: [17 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
        version 4, created 1403464490, md5len 0, sigclass 0x13
        digest algo 10, begin of digest be 8e
        hashed subpkt 27 len 1 (key flags: 03)
        hashed subpkt 9 len 4 (key expires after 100d0h0m)
        hashed subpkt 11 len 7 (pref-sym-algos: 9 8 7 13 12 11 10)
        hashed subpkt 21 len 4 (pref-hash-algos: 10 9 8 11)
        hashed subpkt 30 len 1 (features: 01)
        hashed subpkt 23 len 1 (key server preferences: 80)
        hashed subpkt 2 len 4 (sig created 2014-06-22)
        hashed subpkt 25 len 1 (primary user ID)
        subpkt 16 len 8 (issuer key ID 0664A76954265E8C)
        data: [3743 bits]
:signature packet: algo 1, keyid EDA21E94B565716F
        version 4, created 1403466403, md5len 0, sigclass 0x10
jas@latte:~$
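The arithmetic is easy to check for yourself. A small Python sketch (using the timestamps from the listing above) that turns the key creation time plus the “key expires after” subpacket into a calendar date:

```python
from datetime import datetime, timedelta, timezone

# Values from the gpg --list-packets output above:
key_created = 1403464321              # public key packet "created"
expires_after = timedelta(days=100)   # subpkt 9: "key expires after 100d0h0m"

# OpenPGP (RFC 4880, section 5.2.3.6) counts the expiration period
# from the key creation time, not from when the self-signature was made.
expiry = datetime.fromtimestamp(key_created, tz=timezone.utc) + expires_after
print(expiry.date())  # 2014-09-30
```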
So the key will simply stop being valid after that time? No. It is possible to update the key expiration time, re-sign the key, and distribute the updated key to the people you communicate with, directly or indirectly via OpenPGP keyservers. Since that date is only a couple of weeks away, now felt like the perfect opportunity to go through the exercise of taking out my offline master key, booting from a Debian LiveCD, and extending the key’s expiry time. See my earlier writeup for the LiveCD and USB stick conventions.

user@debian:~$ export GNUPGHOME=/media/FA21-AE97/gnupghome
user@debian:~$ gpg --edit-key 54265e8c
gpg (GnuPG) 1.4.12; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  3744R/54265E8C  created: 2014-06-22  expires: 2014-09-30  usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:47:48 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> key 1

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:05 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> key 2

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> key 1

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:14 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> key 3

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> key 2

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:23 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2015-01-23  usage: A
[ultimate] (1). Simon Josefsson
[ultimate] (2)  Simon Josefsson

gpg> save
user@debian:~$ gpg -a --export 54265e8c > /media/KINGSTON/updated-key.txt
user@debian:~$
I remove the “transport” USB stick from the “offline” computer, and back on my laptop I can inspect the new updated key. Let’s use the same command as before. The key creation time is the same (“created 1403464321”), of course, but the signature packet has a new timestamp (“created 1409064478”) since it was signed just now. Notice “created 1409064478” and “subpkt 9 len 4 (key expires after 214d19h35m)”. The expiration time is computed based on when the key was generated, not when the signature packet was generated. You may want to double-check the pref-sym-algos, pref-hash-algos and other sub-packets so that you don’t accidentally change anything else. (Btw, re-signing your key is also how you would modify those preferences over time.)

jas@latte:~$ cat /media/KINGSTON/updated-key.txt |gpg --list-packets | head -20
:public key packet:
        version 4, algo 1, created 1403464321, expires 0
        pkey: [3744 bits]
        pkey: [17 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
        version 4, created 1409064478, md5len 0, sigclass 0x13
        digest algo 10, begin of digest 5c b2
        hashed subpkt 27 len 1 (key flags: 03)
        hashed subpkt 11 len 7 (pref-sym-algos: 9 8 7 13 12 11 10)
        hashed subpkt 21 len 4 (pref-hash-algos: 10 9 8 11)
        hashed subpkt 30 len 1 (features: 01)
        hashed subpkt 23 len 1 (key server preferences: 80)
        hashed subpkt 25 len 1 (primary user ID)
        hashed subpkt 2 len 4 (sig created 2014-08-26)
        hashed subpkt 9 len 4 (key expires after 214d19h35m)
        subpkt 16 len 8 (issuer key ID 0664A76954265E8C)
        data: [3744 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
jas@latte:~$
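To make that double-checking less error-prone, you can diff the hashed subpackets of the old and new self-signature mechanically. Here is a small throwaway Python helper (my own sketch, not part of GnuPG) that extracts the “hashed subpkt” lines from two gpg --list-packets dumps, ignores the two subpackets that are expected to change, and reports anything else that differs:

```python
def hashed_subpackets(dump):
    """Collect 'hashed subpkt' lines from a gpg --list-packets dump,
    skipping the two subpackets that legitimately change on re-signing:
    the signature creation date and the expiration period."""
    ignore = ("sig created", "key expires after")
    return sorted(
        line.strip()
        for line in dump.splitlines()
        if "hashed subpkt" in line
        and not any(word in line for word in ignore)
    )

def diff_subpackets(old_dump, new_dump):
    """Return subpacket lines added or removed by the re-signing."""
    old = set(hashed_subpackets(old_dump))
    new = set(hashed_subpackets(new_dump))
    return {"removed": sorted(old - new), "added": sorted(new - old)}
```

If diff_subpackets(open("old.txt").read(), open("new.txt").read()) reports empty lists, only the signature date and expiry moved, which is exactly what you want.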
Being happy with the new key, I import it and send it to key servers out there.

jas@latte:~$ gpg --import /media/KINGSTON/updated-key.txt
gpg: key 54265E8C: "Simon Josefsson " 5 new signatures
gpg: Total number processed: 1
gpg:         new signatures: 5
jas@latte:~$ gpg --send-keys 54265e8c
gpg: sending key 54265E8C to hkp server keys.gnupg.net
jas@latte:~$ gpg --keyserver keyring.debian.org --send-keys 54265e8c
gpg: sending key 54265E8C to hkp server keyring.debian.org
jas@latte:~$
Finally: why go through this hassle, rather than set the key to expire in 50 years? Some reasons for this are:
- I don’t trust myself to keep track of a private key (or revocation cert) for 50 years.
- I want people to notice my revocation certificate as quickly as possible.
- I want people to notice other changes to my key (e.g., cipher preferences) as quickly as possible.
Let’s look into the first reason a bit more. What would happen if I lose both the master key and the revocation cert, for a key that’s valid for 50 years? I would have to start from scratch and create a new key that I upload to keyservers. Then there would be two keys out there that are valid and identify me, and each will have accumulated a set of signatures. Neither of them will be revoked. If I happen to lose the new key again, there will be three valid keys out there with signatures on them. You may argue that this shouldn’t be a problem, and that nobody should use any key other than the latest one I want to be used, but that’s a technical argument; at this point we have moved into usability, which is a trickier area. Asking users to pick the right one out of several apparently valid keys for me is simply not going to work well.
The second reason is more subtle, but considerably more important. If people retrieve my key from keyservers today, and it expires in 50 years, there will be no need to ever refresh it from key servers. If for some reason I have to publish my revocation certificate, there will be people who won’t see it. If instead I set a short validity period, people will have to refresh my key once in a while, and will then either get an updated expiration time or get the revocation certificate. This amounts to a CRL/OCSP-like model.
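For this model to work, the refresh has to actually happen on the other end. As an illustration (my suggestion, not something from the key transition statement), a crontab fragment that periodically pulls updated copies of all keys in your keyring from the keyservers:

```
# m h dom mon dow  command
@weekly  gpg --refresh-keys
```

With something like this in place, an extended expiration time or a published revocation certificate propagates to your correspondents within a week.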
The third reason is similar to the second, but deserves to be mentioned on its own. Because the cipher preferences are expressed (and signed) in my key, and ciphers come and go, I expect that I will modify those preferences during the lifetime of my long-term key. If my key had a long validity period, people would not refresh it from key servers, and would encrypt messages to me with ciphers I may no longer want to be used.
The downside of having a short validity period is that I have to do slightly more work to get out the offline master key once in a while (which I have to do once in a while anyway, because I’m signing other people’s keys) and that others need to refresh my key from the key servers. Can anyone identify other disadvantages? Also, having to explain why I’m using a short validity period used to be a downside, but with this writeup posted that won’t be the case any more.
Although the Webform module comes with limited Views integration to expose submitted data, it lacks the fine-grained control needed to build a View with Webform fields as rows and columns. There is a workaround to achieve this, though, which I would like to briefly run through in this blog.
Webform MySQL Views, together with the Data and Schema modules and a patch to Webform MySQL Views from issue #889306 (Allow the designation of a primary key for MySQL views), makes this feasible.
Webform MySQL Views, as the name implies, allows us to create MySQL views from Drupal, leveraging the Data module, which in turn depends on the Schema module.
The Data module wraps a bundle of sub-modules; among them, Data Search provides Views integration and Data Admin UI provides access to its administrative pages.
Once the mentioned modules are enabled, you can see a sub-menu "MySQL Views" under Administration » Content » Webforms. Tick the Webform node whose fields are needed in Views. This form is only meant to create the MySQL view.
Welcome to MediaGoblin 0.7.0: Time Traveler’s Delight! It’s been longer than usual for our releases, but we assure you this is because we’ve been traveling back and forth across the timeline picking up cool technology that spans a wide spectrum of space and time. But our time-boat has finally come into the harbor. Get ready… we’ve got a lot of cargo to unpack!
You may remember the work we are doing towards federation, and even the demo we showed earlier of that progress.
(An HTML5 video of the demo is embedded here in the original post.)
Well we’re excited to announce that the first piece towards MediaGoblin federation has landed! We don’t have server-to-server federation working yet, but we do have the first parts of the Pump API in place: you can now use the Pump API as a media upload API! Are you a python developer? Starting a client couldn’t be easier now, using PyPump! We also have a whole new section of our docs about the Pump API. There’s of course more Pump related things to come in future releases, but we’re excited to be well on our way!
Sailing into this release is an excellent new theme from Jeremy Pope: Sandy 70s Speedboat! This retro-styled, light colored theme has just enough frills to make your site look good while emphasizing the real stuff you want to show off… your media!
MediaGoblin is now using the skeleton CSS system, making it more responsive. MediaGoblin sites now adaptively fit better into a variety of resolutions, including mobile phones, across the board. (Responsive design is the thing all the cool kids are into these days right?) Now MediaGoblin is much nicer to look at on the go!
We also have a new blogging media type. However, it’s very experimental and could use more testing and careful code review… but if you’re interested in testing and helping out in this area, check it out!
In addition, we have a number of features that have come in thanks to work from a grant to improve MediaGoblin in use with galleries, libraries, archival institutions, and museums. The first of these features is something people have long wanted: the ability for site administrators/curators to “feature” media to appear on the frontpage of a site.
We also now have a tool for command line bulk uploading that has come in through this grant work. Do you already have a set of media and you need to pull into a MediaGoblin instance? You can now use the command line bulk upload tool to automate pulling in that media, including setting metadata.
Wait, metadata? What do we mean by that? Well, what if you want to store some extra information about some work? (What year was this painting done in? If the author was different than the uploader, who was the original author? And many other things!) Now you can associate this information easily with media that you are uploading. With the appropriate plugin enabled, this information is viewable to the user… but it’s also machine readable. Now even robots can appreciate the cultural works on your MediaGoblin site!
For site administrators, we also have two new subcommands: “deletemedia” and “deleteusers”. Whew! Now you can get that cruft that shouldn’t be there off your site in an automated manner!
There are many other fixes and improvements in this release… too many to detail! But some highlights are: the long-hated “video thumbnails not generating” bug is fixed, many improvements to translations, fixes to the PDF media type, new default permissions options for the config file, new template hooks for plugins, and much, much more!
Whew… that sure is a lot! It’s good to see that our time travel madness has paid off in a bounty of fixes and improvements. In the meanwhile, this release was a huge group effort (as always!) so let’s thank our contributors for all their hard work: Aditi Mittal, Aleksej Serdjukov, Alon Levy, Amirouche Boubekki, Andrew Browning, Berker Peksag, Beuc, Boris Bobrov, Brett Smith, Christopher Allan Webber, Deb Nicholson, Elrond (of Samba TNG), Jessica Tallon, Jiyda Mint Moussa, Jeremy Pope, Laura Arjona Reina, Loïc Le Ninan, Matt Molyneaux, Natalie Foust-Pilcher, Odin Hørthe Omdal, Rodney Ewing, Rodrigo Rodrigues da Silva, Sergio Durigan Junior, Sebastian Spaeth, Sebastian Hugentobler, and Tryggvi Björgvinsson. Thanks so much everyone… we really couldn’t do it without you!
Stay tuned for more. We’ve got more cargo that’s shipping its way on in for the next release… we’d better get back to work! In the meanwhile, enjoy this release and be sure to check the release notes. And if you’re interested in joining our crew, we’d love to have you on board, so please do join us!
Happy travels, everyone!
Update: Are you upgrading from a previous version of GNU MediaGoblin? The release notes left out a step (now corrected)… you should also run the command “git submodule init && git submodule update”. Otherwise you’ll be missing out on the “skeleton” CSS framework and things will look really weird! Not to mention the sandy 70s speedboat theme! If you’re doing a new install, this won’t be a problem.
I have the pleasure of attending Akademy again this year. From my past experience, I’m really looking forward to having a good time. Lots of hacking, meeting known and unknown faces, drinking beer and socializing ahead! I also love that it’s in a (to me) new country again, and I wonder what I will see of the Czech Republic and Brno!
This year, the conference schedule is a bit different from past years. Not only do we have the usual two days packed with interesting talks and keynotes; this year there will also be workshops on the third day! These are more in-depth sessions which hopefully teach the audience some new skills, be it QML, mobile development, testing, or … profiling :) Yours truly has the honor of holding a one-hour Profiling 101 workshop.
I welcome all of you to attend my presentation. My current plan is to do some live demoing of how I profile and optimize code. For that purpose, I just wrote a (really slow and badly written) word count test app. I pushed the sources to kde:scratch/mwolff/akademy-2014.git. If you plan to join my workshop, I encourage you to download the sources and take a shot at optimizing it. I tried my best to write slow code this time, to leave plenty of opportunity for optimizations :) There are many low-hanging fruits in the code. I’m confident that I’ll be able to teach you some more advanced tips and tricks on how to improve a Qt application’s performance. We’ll see in the end who can come up with the fastest version :)
During my workshop, I’ll investigate the performance of the word count app with various tools. On the one hand, this should teach you how to use the powerful existing open source tools, such as Linux perf and the Valgrind suite. I will also show you Intel VTune, though, as it is still unparalleled in many aspects and available free of charge for non-commercial usage on Linux. Then I’ll present a few of my own tools to you, such as heaptrack. If you have never heard of some of these tools, go try them out before Akademy!
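The workshop targets Qt/C++ code, but the workflow is the same in any language: profile first, then attack the hot spot the profile actually points at. As a Python analogy (my own toy example, not the workshop’s code), here is a deliberately slow word count next to the obvious fix, with cProfile showing where the time goes:

```python
import cProfile
from collections import Counter

def slow_word_count(text):
    """Deliberately naive: rescans the whole word list once per unique word."""
    words = text.split()
    return {w: words.count(w) for w in set(words)}  # O(n * unique words)

def fast_word_count(text):
    """Single pass with a hash map: the classic low-hanging fruit."""
    return dict(Counter(text.split()))

if __name__ == "__main__":
    text = "the quick brown fox jumps over the lazy dog " * 2000
    # Both give the same answer; only the running time differs.
    assert slow_word_count(text) == fast_word_count(text)
    # The profiler shows where time really goes before you optimize:
    cProfile.run("slow_word_count(text)", sort="cumulative")
```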
I’ll see what else I’ll fit in and maybe I’ll extend my akademy-2014.git scratch repository with more examples over the next days.
Bye, hope to see you soon!
This year I mentored two students doing work in support of Debian and free software (as well as those I mentored for Ganglia).
Both of them are presenting details about their work at DebConf 14 today.
While Juliana's work has been widely publicised already, mainly due to the fact it is accessible to every individual DD, Andrew's work is also quite significant and creates many possibilities to advance awareness of free software.

The Java project that is not just about Java
Andrew's project is about recursively building Java dependencies from third party repositories such as the Maven Central Repository. It matches up well with the wonderful new maven-debian-helper tool in Debian and will help us to fill out /usr/share/maven-repo on every Debian system.
Firstly, this is not just about Java. On a practical level, some aspects of the project are useful for many other purposes. One of those is the aim of scanning a repository for non-free artifacts, making a Git mirror or clone containing a dfsg branch for generating repackaged upstream source and then testing to see if it still builds.
Then there is the principle of software freedom. The Maven Central repository now requires that people publish a sources JAR and license metadata with each binary artifact they upload. They do not, however, demand that the sources JAR be complete or that the binary can be built by somebody else using the published sources. The license data must be specified, but it does not appear to be verified in the same way as packages inspected by Debian's legendary FTP masters.
Thanks to the transitive dependency magic of Maven, it is quite possible that many Java applications that are officially promoted as free software can't trace the source code of every dependency or build plugin.
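To make the transitive problem concrete, here is a toy sketch (my illustration, not Andrew's actual code; the artifact names and metadata are made up) of walking a Maven-style dependency graph and flagging every transitive dependency that ships no sources JAR:

```python
# Hypothetical repository metadata: artifact -> (has sources JAR?, direct deps)
REPO = {
    "org.example:app":   (True,  ["org.example:util", "com.vendor:plugin"]),
    "org.example:util":  (True,  []),
    "com.vendor:plugin": (False, ["com.vendor:core"]),
    "com.vendor:core":   (False, []),
}

def unsourced_dependencies(artifact, repo, seen=None):
    """Recursively collect transitive dependencies lacking a sources JAR."""
    seen = set() if seen is None else seen
    flagged = []
    for dep in repo[artifact][1]:
        if dep in seen:          # avoid revisiting shared dependencies
            continue
        seen.add(dep)
        if not repo[dep][0]:     # no sources JAR published
            flagged.append(dep)
        flagged += unsourced_dependencies(dep, repo, seen)
    return flagged
```

Running unsourced_dependencies("org.example:app", REPO) flags both com.vendor artifacts even though the application itself publishes sources, which is exactly how an "officially free" application can quietly depend on unbuildable binaries.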
Many organizations are starting to become more alarmed about the risk that they are dependent upon some rogue dependency. Maybe they will be hit with a lawsuit from a vendor stating that his plugin was only free for the first 3 months. Maybe some binary dependency JAR contains a nasty trojan for harvesting data about their corporate network.
People familiar with the principles of software freedom are in the perfect position to address these concerns, and Andrew's work helps us build a cleaner alternative. It obviously can't rebuild every JAR, for the very reason that some of them are not really free. However, it does give us the opportunity to build a heat-map of trouble spots and to create a fast track to packaging for those hierarchies of JARs that are truly free.

Making WebRTC accessible to more people
People attending the session today or participating remotely are advised to set up your RTC / VoIP password at db.debian.org well in advance so the server will allow you to log in and try it during the session. It can take 30 minutes or so for the passwords to be replicated to the SIP proxy and TURN server.
Please also check my previous comments about what works and what doesn't and in particular, please be aware that Iceweasel / Firefox 24 on wheezy is not suitable unless you are on the same LAN as the person you are calling.
Today we’re proud to announce the launch of Drupal Jobs, a career site dedicated completely to Drupal. The Drupal job market is hot (more on that in a moment) and we hope this new tool will help match the right talent with the right positions.
For job seekers, you can start searching for positions by location, role, skill level and more. You can create a profile with your job preferences and salary requirements, and even choose whether you wish to be contacted by employers and recruiters. All for free.
For employers and recruiters there are a variety of packages available, giving them the opportunity to highlight their company with a branded page and feature select postings in newsletters and social media. The great thing is that proceeds from postings are invested back into Drupal.org and its subsites (including Drupal Jobs) and community programs.
The website is launching today and, as with any new website, we expect there will be some kinks to work out. But we know Drupal Jobs will be a valuable addition to the current options for employers, recruiters and job seekers.
The Drupal job market shows no signs of slowing. Our recently conducted survey points to a strong need for talent (see the chart below). In the next few days we’ll publish the full results of the survey. In the meantime, check out Drupal Jobs and let us know what you think.
One challenge the Drupal community has faced for some time is a labor shortage. There are, quite simply, not enough skilled Drupal developers to go around. That's quite a problem when the Drupal market is continuing to grow steadily.
One of the challenges to finding good Drupal talent is that Drupal has historically been, well, weird. And by "weird" I mean "entirely unlike any other system on the market". That makes few skills transferable between Drupal and any other PHP framework, application, or system. Developers trained on Drupal cannot easily transition to any other system, and developers trained on any other modern PHP system get lost in arrays the minute they set foot in the door. It's a sufficiently large problem that I've talked to other development shop owners who have said outright that they have more success hiring fresh, junior developers and training them on Drupal as their first system than hiring anyone with experience, as those with more extensive PHP experience run for the door.
That's a big problem. Fortunately, that's about to change.
For the past several years, the Drupal project has been working to Get Off the Island. Drupal 8 will be using more standard, common PHP and general programming tools, techniques, and architectures, making it more accessible to more developers than ever before, even non-PHP developers. The number of Drupal developers showing up at non-Drupal events is rising. For example, Lonestar PHP 2013 had two; Lonestar PHP 2014 had 10 (which for a 200-person conference is a very respectable number). I've noticed similar trends at other PHP conferences.
But to really seal the deal and help fill the Drupal employment gap, it's time for Drupal employers to step up and do their part: selling off the island.
With Drupal 8, and the buzz around it in the general PHP community, there will be an increasing number of general PHP developers interested in working with Drupal and who are better qualified to work on Drupal. (Not with no training, but with far less retraining than Drupal 7 requires.) Those developers, though, won't just walk in the door. They have no reason to come to a DrupalCamp, and probably not even a DrupalCon. As a Drupal consultancy or Drupal-based company you need to go out and find them. The core team has done its part, now it's time to do yours.
A friend of mine once said that if you want to meet people with whom you have a shared interest you need to go where people with that interest hang out. That applies for hiring, too. So where does the next round of Drupal talent hang out? At non-Drupal events. If you don't then someone else will hire the next generation of senior developers before you do.
- Have a presence on stage: Make no mistake, presenting is hard work. It takes a lot of preparation to give a good talk, and that takes time. But the impact of having someone from your company on-stage is 10x that of having them walking around the hallway with other attendees. If someone from your team can present on work that you've done that's fantastic. But even just presenting on something cool, interesting, insightful, or otherwise useful can be a big help to your company's brand. Also, light branding of the presentation itself is completely OK as long as it's not gratuitous. That's a much more targeted form of marketing than exists anywhere else, online or off; you have a self-selecting group of potential hires in one room together. Let your team be what they're there to see.
At the start of 2013 I laid out a challenge to Drupal developers: Attend at least two non-Drupal events that year. I'll now lay the same challenge out to Drupal-based companies: Encourage your team to present at at least two non-Drupal events in the next year, and sponsor at least two non-Drupal events in the next year. There's no shortage of them; there's over a dozen PHP conferences just in the USA every year and more around the world.
Your next Drupal hire is going to come from a non-Drupal background, especially a senior-level developer. If you want to hire them before someone else does, get out to where they are. It's a whole new market if you're willing to embrace it.
So at DrupalCon Austin I had a great time at the contribution sprints. I worked on some issues affecting Drupal.org, it was great fun!
The issues we worked on over the week range from simple things through to some pretty difficult issues.
Although Drupal core can always use more contributors, I would suggest that Drupal.org is desperately short of contributors too.
I’m pleased to say that via ModelInsight we’ll be running two Python-focused training courses in October. The goal is to give you strong new research & development skills; the courses are aimed at folks in companies but would suit folks in academia too. UPDATE: the training courses are ready to buy (1 Day Data Science, 2 Day High Performance).
UPDATE we have a <5min anonymous survey which helps us learn your needs for Data Science training in London, please click through and answer the few questions so we know what training you need.
These and future courses will be announced on our London Python Data Science Training mailing list; sign up for occasional announcements about our upcoming courses (no spam, just occasional updates, you can unsubscribe at any time).

Intro to Data Science with Python (1 day) on Friday 24th October
Students: Basic to Intermediate Pythonistas (you can already write scripts and you have some basic matrix experience)
Goal: Solve a complete data science problem (building a working and deployable recommendation engine) by working through the entire process – using numpy and pandas, applying test driven development, visualising the problem, deploying a tiny web application that serves the results (great for when you’re back with your team!)
- learn basic numpy, pandas and data cleaning
- be confident with Test Driven Development and debugging strategies
- create a recommender system and understand its strengths and limitations
- use a Flask API to serve results
- learn Anaconda and conda environments
- take home a working recommender system that you can confidently customise to your data
- Cost: £300 including lunch, central London (24th October)
- additional announcements will come via our London Python Data Science Training mailing list
- Buy your ticket here
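To give a flavour of the recommender exercise, here is a minimal sketch of one common approach (item-item cosine similarity) using numpy. The data and function names here are illustrative assumptions, not the actual course material:

```python
import numpy as np

def recommend(ratings, user, top_n=1):
    """Recommend unseen items for `user` from a user-by-item ratings matrix.

    Scores each unrated item by its cosine similarity to the items the
    user has already rated, weighted by the user's ratings.
    """
    # Cosine similarity between item columns
    norms = np.linalg.norm(ratings, axis=0)
    norms[norms == 0] = 1.0  # avoid division by zero for unrated items
    sim = (ratings.T @ ratings) / np.outer(norms, norms)

    seen = ratings[user] > 0
    scores = sim[:, seen] @ ratings[user, seen]
    scores[seen] = -np.inf  # never recommend what was already rated
    return np.argsort(scores)[::-1][:top_n]

# Illustrative data: three users, four items; user 0 hasn't rated items 2 and 3
ratings = np.array([[5, 4, 0, 0],
                    [5, 5, 3, 1],
                    [1, 0, 5, 4]], dtype=float)
print(recommend(ratings, user=0))
```

A real recommender would then be wrapped in a small Flask endpoint for serving, which is the deployment step the course describes.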
High Performance Python (2 days) on 30th & 31st October
Students: Intermediate Pythonistas (you need higher performance for your Python code)
Goal: learn techniques for high-performance computing in Python, through a mix of background theory and lots of hands-on pragmatic exercises
- Profiling (CPU, RAM) to understand bottlenecks
- Compilers and JITs (Cython, Numba, Pythran, PyPy) to pragmatically run code faster
- Learn R&D and engineering approaches to efficient development
- Multicore and clusters (multiprocessing, IPython parallel) for scaling
- Debugging strategies, numpy techniques, lowering memory usage, storage engines
- Learn Anaconda and conda environments
- Take home years of hard-won experience so you can develop performant Python code
- Cost: £600 including lunch, central London (30th & 31st October)
- additional announcements will come via our London Python Data Science Training mailing list
- Buy your ticket here
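Profiling comes first on that list for a reason: measure before you optimise. As a small illustration of the idea (my own sketch, not course material), the standard library's cProfile can show where time actually goes when comparing two implementations:

```python
import cProfile
import io
import pstats

def slow_total(n):
    # Deliberately naive: repeated string concatenation in a loop
    s = ""
    for i in range(n):
        s += str(i)
    return len(s)

def fast_total(n):
    # Same result, built with join: typically far fewer allocations
    return len("".join(str(i) for i in range(n)))

profiler = cProfile.Profile()
profiler.enable()
slow_total(20000)
fast_total(20000)
profiler.disable()

# Print the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The same workflow (profile, find the bottleneck, then reach for a compiler or parallelism) applies to the Cython/Numba and multiprocessing topics above.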
The High Performance course builds on many years of teaching and conference talks (including PyDataLondon 2013, PyCon 2013, EuroSciPy 2012) and in-company training, along with my High Performance Python book (O’Reilly). The data science course builds on techniques we’ve used over the last few years to help clients solve data science problems. Both courses are very pragmatic and hands-on, and will leave you with new skills that have been battle-tested by us (we use these approaches to quickly deliver correct and valuable data science solutions for our clients via ModelInsight). At PyCon 2012 my students rated me 4.64/5.0 for overall happiness with my High Performance teaching.
We’d also like to know which other courses you’d like to see; we can partner with trainers as needed to deliver new courses in London. We’re focused on Python, data science, high performance and pragmatic engineering. Drop me an email (via ModelInsight) and let me know if we can help.
Do please join our London Python Data Science Training mailing list to be kept informed about upcoming training courses.
Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.
Here is a message from Cristian, our resident maintainer for the Windows version of KMyMoney.
I would like to ask all Windows users who wish to improve the quality of KMyMoney on Windows to try the first installer of the “live build” series and report any issues that you might encounter.
As you may know, the development team decided on a release schedule. There is still about a month until the next release is out, which gives us enough time to iron out any glitches the installer might contain.
It’s also a good opportunity to take a look at the new features that were added (the most interesting should be transaction tags).
Notes about this package:
- it will only run on Windows 7 or newer
- it uses KDE 4.12.5 and Qt 4.8.6
- it does not yet contain translations
- GPG works with gpg4win out of the box (this workaround is no longer needed)
- as with previous versions it does not contain the HBCI KBanking plugin, because AqBanking's build system is autotools-based, making it hard to build with MSVC
- the OFX import plugin is available
- as with previous versions the Finance::Quote module will only work if you install perl (with the Finance::Quote module) separately
- it will be periodically updated as issues are fixed
If you currently use KMyMoney on Windows there is no need to uninstall your current version, since this version will install in its own folder and will have its own shortcut by default. Just remember, the newer version extends the information stored in the data file (like tags), so when switching back to the old version this extra information (tags), which the old version knows nothing about, will be lost.
Make sure that you back up your data file (make a copy of it somewhere) more often while using this version, just in case. I actually expect this package to be better than the last one (4.6.4), but after all this is a call for testing.
I have tested the installer on Windows 7 32-bit, so feedback from newer versions would be welcome.
With a series of icon tests we are currently studying the effects of icon design on usability. This article, however, does not focus on these general design effects but presents findings specific to the Nuvola icon set.
Keep on reading: Intermediate results of the icon tests: Nuvola
The notion that people contributing to Open Source don't get paid is false. Contributors to Open Source are compensated for their labor; not always with financial capital (i.e. a paycheck) but certainly with social capital. Social capital is a rather vague and intangible concept so let me give some examples. If you know someone at a company where you are applying for a job and this connection helps you get that job, you have used social capital. Or if you got a lead or a business opportunity through your network, you have used social capital. Or when you fall on hard times and you rely on friends for emotional support, you're also using social capital.
The term "social" refers to the fact that the resources are not personal assets; no single person owns them. Instead, the resources are in the network of relationships. Too many people believe that success in life is based on the individual, and that if you do not have success in life, there is no one to blame but yourself. The truth is that individuals who build and use social capital get better jobs, better pay, faster promotions and are more effective compared to peers who are not tapping the power of social capital. As shown in the examples, social capital also translates into happiness and well-being.
Most Open Source contributors benefit from social capital but may not have stopped to think about it, or may not value it appropriately. Many of us in the Open Source world have made friendships for life or landed jobs because of our contributions; others have started businesses together, and for others it has provided an important sense of purpose. Once you become attuned to spotting social capital being leveraged, you see it everywhere, every day. I could literally write a book filled with hundreds of stories about how contributing to Open Source changed people's lives -- I love hearing these stories.
Social capital is a big deal; it is worth understanding, worth talking about, and worth investing in. It is key to achieving personal success, business success and even happiness.
Starting tomorrow at 07:00 CEST (so 22:00 PDT for DebConfers), I'll be running the "TDS" race of the Ultra-Trail du Mont-Blanc races.
Ultra-Trail du Mont-Blanc (UTMB) is one of the world-famous long-distance mountain trail races. It takes place in Chamonix, just below Mont Blanc, France's and Europe's highest mountain. The race is indeed simple: "go around Mont Blanc in a big circle, 160km long, with 10,000 meters of cumulative positive climb over about 10 high passes between 2,000 and 2,700 meters altitude".
"My" race is a shortened version of UTMB that does half of the full loop, starting from Courmayeur in Italy (just "the other side" of Mont Blanc from Chamonix) and going back to Chamonix. It is "only" 120 kilometers long with 7,200 meters of positive climb. Some of its climbs are, however, known to be more difficult than those of UTMB itself.
This race holds many firsts for me: my first "over 100km", my first "over 24 hours running". Still, I trained hard for this, completed a very tough race in early July (60km, 5,000m climb) with a very good result, and I expect to do well.
Top runners complete this in 17 hours; the last arrivals are expected after 33 hours of "running" (often fast walking, really). I plan to finish the race in 28 hours but, honestly, I have no idea. :-)
So, in case you're bored in a night hacklab, or just want to draw your attention away from IRC, or don't have any package to polish, or just want to spare a thought for an old friend, you can use the following link and follow all this live: http://utmb.livetrail.net/coureur.php?rech=6384&lang=en
Race start: 07:00 CEST, Wednesday Aug 27th. bubulle arrival: Thursday Aug 28th, between 10am and 4pm (best projection is 11am).
And there will be cheese at pit stops....
Almost 10 years ago (August 21, 2004) we started the Axis2 project, and during the last 10 years it has become one of the most successful projects in the Apache Software Foundation (and yes, this marks my 10 years at Apache). Axis2 was started with a handful of people including Sanjiva, Dims, Glen, Paul, Srinath, Chinthaka, Ajith, Chathura, Jaliya and myself (and the second wave consisted of many more).
In a short period of time a small team of six developers was able to make huge progress, and it was more than enough to convince IBM to drop their own web service stack (which they had been working on for many months) and join Axis2. That was the first wave of developers outside Sri Lanka to join Axis2. Today, IBM WebSphere comes with Axis2 as the default WS framework.
Here are a few notable things about Axis2.
- Axis2 was the main driving force behind bringing Microsoft to ASF (through Stonehenge).
- All the initial members of the project (Chinthaka, Ajith, Chathura, Srinath, Jaliya and myself) were able to obtain PhDs (Axis2 is the first project in the history of Apache to achieve something of this nature).
- Helped many Sri Lankan students to pursue their higher studies in top universities in the world (over 50 students, in several countries).
- Helped Sri Lanka become the open source hub of the Asian continent. Sri Lanka has the highest number of Apache committers and members of any country outside the USA and Europe.
- The community around Axis2 helped bring about the first ever ApacheCon outside the USA and Europe (ApacheCon Asia 2006).
- The Google Summer of Code effort in Sri Lanka was started with Axis2 (and for several consecutive years now, Sri Lanka has produced a number of successful Google Summer of Code projects).
- Produced many international speakers and authors.
- Became the main web service framework used in many academic research studies across the globe.
- Axis2 is used by eBay to process two billion transactions per day.
This is the first email that Srinath sent to the list announcing the Axis2 F2F.
These are the first set of people who came to the Axis2 F2F.
This is the summary mail of the first F2F.
At the initial stage of Axis2 we used to have a weekly chat. What was special about those chats is that we (the initial developers) would implement a prototype and discuss it at the weekly chat. The funny thing is that almost every time we had to throw away that prototype and start a new one after the chat.
Here is the chat log of the first weekly chat.
From day one of the project we followed the Apache guidelines, so we created patches and sent them to the list. Existing committers could then apply them; most of the time Alex and Dims applied those patches.
Here is the very first patch of the project.
When we started the project we did not have any committers for the Axis2 project itself; we had WS committers. So the following were the initial committers of the Axis2 project, and here is the committer nomination email.
[VOTE][Axis2]Ajith, Deepal and Chinthaka for Axis2 Commiter
At the initial stage of the project we had many milestone releases before we hit the 0.94 release. Here is the announcement email for the first release of Axis2.
Axis2 first release – Axis2 M1
The first few F2Fs
- First F2F 21-24 August 2004, Colombo, Sri Lanka. And here is the first set of people who came to the event
- Second F2F March 29-31st 2005 – Colombo, Sri Lanka
- Third F2F and hackathon – December 2005, San Diego, USA
- Fourth F2F and hackathon - Indiana University, Bloomington
A Pandas DataFrame has a nice to_sql(table_name, sqlalchemy_engine) method that saves itself to a database.
The only trouble is that coming up with the SQLAlchemy Engine object is a little bit of a pain, and if you're using the IPython %sql magic, your %sql session already has an SQLAlchemy engine anyway. So I created a bogus PERSIST pseudo-SQL command that simply calls to_sql with the open database connection:
%sql PERSIST mydataframe
The result is that your data can make a very convenient round-trip from your database, to Pandas and whatever transformations you want to apply there, and back to your database:
In : %load_ext sql
In : %sql postgresql://@localhost/
Out: u'Connected: @'
In : ohio = %sql select * from cities_of_ohio;
246 rows affected.
In : df = ohio.DataFrame()
In : montgomery = df[df['county']=='Montgomery County']
In : %sql PERSIST montgomery
Out: u'Persisted montgomery'
In : %sql SELECT * FROM montgomery
11 rows affected.
[(27L, u'Brookville', u'5,884', u'Montgomery County'),
(54L, u'Dayton', u'141,527', u'Montgomery County'),
(66L, u'Englewood', u'13,465', u'Montgomery County'),
(81L, u'Germantown', u'6,215', u'Montgomery County'),
(130L, u'Miamisburg', u'20,181', u'Montgomery County'),
(136L, u'Moraine', u'6,307', u'Montgomery County'),
(157L, u'Oakwood', u'9,202', u'Montgomery County'),
(180L, u'Riverside', u'25,201', u'Montgomery County'),
(210L, u'Trotwood', u'24,431', u'Montgomery County'),
(220L, u'Vandalia', u'15,246', u'Montgomery County'),
(230L, u'West Carrollton', u'13,143', u'Montgomery County')]
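Under the hood, a PERSIST-style command is essentially a thin wrapper around to_sql. Without the magic, the same round-trip looks roughly like this sketch (the data here is illustrative; note that to_sql also accepts a plain sqlite3 connection, which sidesteps the Engine bother for SQLite):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")

# A stand-in for the query result you would get back from %sql
df = pd.DataFrame({"city": ["Dayton", "Oakwood", "Columbus"],
                   "county": ["Montgomery County", "Montgomery County",
                              "Franklin County"]})

# The in-Pandas transformation step
montgomery = df[df["county"] == "Montgomery County"]

# What PERSIST does for you: write the DataFrame back as a table
montgomery.to_sql("montgomery", conn, index=False)

print(conn.execute("SELECT city FROM montgomery").fetchall())
```

With a PostgreSQL session like the one above, the connection would come from an SQLAlchemy engine instead of sqlite3, which is exactly the object the %sql session already holds.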
Pythonistas of Montreal, it's time for our back-to-school special. We are coming back from our summer vacation and are hosting our next meetup at the offices of our friends from Shopify on St-Laurent street, on Tuesday September 23rd at 6:30 pm.
We especially love to hear from new speakers. If you haven't given a talk at Montréal-Python before, a 5 or 10 minute lightning talk would be a great start, but we also have slots for 10 to 40 minutes talks!
It's a perfect opportunity if you would like to show us what you've discovered and created, especially if you are planning to present your talk at PyCon.
Don't forget, the call speakers for PyCon 2015 is ending on Sept, 15thSome topic suggestions:
- Give a beginner's introduction to a Python library you've been using!
- Talk about a project you're working on!
- Show us unit testing, continuous integration or Python documentation tools!
- Tell us about a Python performance problem you've run into and how you solved it!
- The standard Python library is full of amazing things. Have you learned how multiprocessing or threading or GUI programming works recently? Tell us about it!
- Explain how to get started with Django in 5 minutes!
We're always looking for 10 to 40 minute talks, or a quick 5-minute flash presentation. If you discovered or learned something that you find interesting, we'd love to help you let others learn about it! Send your proposals to email@example.com
A few changes to vmdebootstrap will need to go into the next version (0.3), including an example customise script to set up the u-boot support. With the changes, the command would be:
sudo ./vmdebootstrap --owner `whoami` --verbose --size 2G --mirror http://mirror.bytemark.co.uk/debian --log beaglebone-black.log --log-level debug --arch armhf --foreign /usr/bin/qemu-arm-static --no-extlinux --no-kernel --package u-boot --package linux-image-armmp --distribution sid --enable-dhcp --configure-apt --serial-console-command '/sbin/getty -L ttyO0 115200 vt100' --customize examples/beagleboneblack-customise.sh --bootsize 50m --boottype vfat --image bbb.img
Some of those options are new, but there are a few important elements:
- use of --arch and --foreign to provide the emulation needed to run the debootstrap second stage.
- drop extlinux and install u-boot as a package.
- linux-image-armmp kernel
- new command to configure an apt source
- serial-console-command as the BBB doesn’t use the default /dev/ttyS0
- choice of sid to get the latest ARMMP and u-boot versions
- customize command – this is a script which does two things:
- copies the dtbs into the boot partition
- copies the u-boot files and creates a u-boot environment to use those files.
- use of a boot partition – note that it needs to be large enough to include the ARMMP kernel and a backup of the same files.
With this in place, a simple dd to an SD card and the BBB boots directly into Debian ARMMP.
The examples are now in my branch and include an initial cubieboard script which is unfinished.
The current image is available for download (222MB).
I hope to upload the new vmdebootstrap soon – let me know if you do try the version in the branch.
This sounds pretty neat:
With Logentries Anomaly Detection, users can:
- Set up real-time alerting based on deviations from important patterns and log events.
- Easily customize anomaly thresholds and compare different time periods.
With Logentries Inactivity Alerting, users can:
- Monitor standard incoming events, such as an application heartbeat.
- Receive real-time alerts based on log inactivity (i.e. receive alerts when something does not occur).
This is actually quite educational