FLOSS Project Planets
Join me, Andrew Godwin (South, Django migrations), Simon Willison (co-creator of Django, co-founder of Lanyrd), and many other talented people at Eventbrite. We have great challenges, the kind that inspire you to rise to the occasion. We need you to help us overcome them.
I should mention that Eventbrite is committed to giving back to the community. Most notably, Eventbrite just contributed £5,000 (about US$8,500!) to the Django REST framework Kickstarter. We're a frequent sponsor of events around the world. And it doesn't stop there: during the discussion of any tool outside our domain of running events, Eventbrite managers will ask, "When can we open source this?"
As someone who loves working on open source, Eventbrite is the place to be. I say this because I know what we're planning to do in the future. If you join us, you'll find out sooner rather than later. ;)
What's Eventbrite like as a company? Well, we're rated one of the top 20 best places to work in the United States. We get full benefits, free lunch, educational opportunities, and much more. In addition, I have to say that my co-workers are friendly, intelligent, always learning, and love to do things the right way, even if it's the hard way.

Applying for Eventbrite Python positions
Sure, you could go to the official Eventbrite job site, but this method is a fun challenge that proves to us you have the basics down. All you need to do is pass this little test of mine. If you fail any portion of this test we can't consider hiring you.
- Can you work in San Francisco (USA), Nashville (USA), or Mendoza (Argentina)?
- Are you able to communicate in both written and verbal English?
- Are you a coder? I will throw away anything from a recruiter.
- Can you figure out how to contact me? Eventbrite doesn't believe in testing applicants with puzzle logic questions. Instead, we ask you meaningful technical questions or to solve a coding problem. With that in mind, use the following to contact me:
Note: This is the updated test that is identical to my next blog post.
Thomas Seidl and Nick Veenhof took a few minutes out of the Drupal 8 Search API code sprint at the Drupal DevDays in Szeged, Hungary to talk with me about the state-of-play and what's coming in terms of search in Drupal: one flexible, pluggable solution for search functionality with the whole community behind it.
- Check out the issue queue for HAL and REST.
- Use the quickstart tool: https://github.com/build2be/drupal-rest-test.
- Install the HAL Browser on your site to see what we have so far.
- cd drupal-root
Richard Stallman's speech will be nontechnical, admission is gratis, and the public is encouraged to attend.
Time and detailed location to be determined.
Please fill out our contact form, so that we can contact you about future events in and around Socorro.
Cross-posted with permission from nerdstein
The Migrate module is, hands down, the de facto way to migrate content in Drupal. The only knock against it is the learning curve. All good things come to those who take the time to learn it.
This awesome release brings many new features. Among them, I’m most excited about server-to-server sharing.
Server-to-server sharing is a first step toward true federation of data with ownCloud: you can add a folder shared with you from another ownCloud instance into your own. The next step would of course be to also share things like user accounts and data like chat, contacts, calendar and more. These things come with their own challenges and we’re not there yet, but if you want to help work on it, join us for the ownCloud Contributor Conference in Berlin next month!
A close runner-up in terms of excitement for me are the improvements to ownCloud Documents – real-time document editing directly on your ownCloud! We have been updating this through the 6.0.x series so the only ‘unique’ ownCloud 7 feature is the support for transparently converting MS Word documents, but that is a feature that makes Documents many times more useful!
A big THANK YOU
This would not have been possible without the hard work of the ownCloud community, so a big thank-you goes out to everybody who contributed! We have a large team of almost 100 regular contributors, making ownCloud one of the largest open source projects, and that makes me proud.
Of course we have a lot of work to do: revelations of companies and governments spying on people keep coming out and our work is crucial to protect our privacy for the future. If you want to help out with this important work, consider contributing to ownCloud. We can use help in many areas, not just coding. Translation, marketing and design are all important for the success of ownCloud!
The release of ownCloud 7 is not only the conclusion of a lot of hard work by the ownCloud community, but also a new beginning! Not only will we ship updates fixing issues and adding translations, but the community is now also starting to update the numerous ownCloud apps to ownCloud 7.
Expect more from us. Now, go, install ownCloud 7 and let me know what you think of it!
Major desktop environments such as Xfce or KDE have a built-in computer suspend feature, but when you use a lighter alternative, things are a bit more complicated, because basically only root can suspend the computer. There used to be a standard solution to that, using a D-Bus call to a running daemon, upowerd. With recent updates, that solution first stopped working for obscure reasons, but it could still be configured back to be usable. With newer updates, it stopped working again, but this time it seems it is gone for good:

$ dbus-send --system --print-reply \
    --dest='org.freedesktop.UPower' \
    /org/freedesktop/UPower org.freedesktop.UPower.Suspend
Error org.freedesktop.DBus.Error.UnknownMethod: Method "Suspend" with signature "" on interface "org.freedesktop.UPower" doesn't exist
The reason seems to be that upowerd is not running, because it no longer provides an init script, only a systemd service. So, if you do not use systemd, you are left with one simple and stable solution: defining a sudo rule to start the suspend or hibernation process as root. In /etc/sudoers.d/power:

%powerdev ALL=NOPASSWD: /usr/sbin/pm-suspend, \
    /usr/sbin/pm-suspend-hybrid, \
    /usr/sbin/pm-hibernate
That allows members of the powerdev group to run sudo pm-suspend, sudo pm-suspend-hybrid and sudo pm-hibernate, which can be used with a key binding manager such as your window manager's or xbindkeys. Simple, efficient, and contrary to all that ever-changing GizmoKit and whatsitd stuff, it has worked and will keep working for years.
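If you use xbindkeys, a minimal ~/.xbindkeysrc entry tying the sudo rule to a key could look like this (the XF86Sleep key name is an assumption; check what your keyboard actually emits with xbindkeys -k):

```
# hypothetical binding: run pm-suspend via the sudo rule above
# when the sleep key is pressed
"sudo pm-suspend"
    XF86Sleep
```

Restart xbindkeys after editing the file for the binding to take effect.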
This speech by Richard Stallman will be nontechnical and open to the public; everyone is invited to attend.
Please fill out this form so that we can contact you about future events in the Bogotá area.
The title, exact location, and time of the speech are to be determined.
Previously: (I) Workflow
The master services include:
- Maestro REST API
- End user web interface
- Composition Execution Engine (LuCEE)
- ActiveMQ for STOMP messaging
- PostgreSQL (or MySQL)
The REST API is a webapp written in Java using Spring, packaged with a Jetty server. It is documented with Swagger annotations, which automatically generate a really nice web interface that allows trying all the operations from the browser.
It handles caching and security (based on LDAP or database records), and delegates to the Composition Execution Engine (LuCEE), typically through the LuCEE REST API but also via STOMP messaging to avoid continuous polling.
It also implements handlers to execute compositions on commit callbacks from GitHub, Git, SVN,…

End user web interface
The end user UI is written in AngularJS using the AngularJS Bootstrap components and Less stylesheets. It connects to the REST API, so everything that can be done through the webapp can also be automated using the REST API (automation, automation, automation!). I have found Angular really nice to work with, with good modularity and the ability to reuse third-party plugins, despite the complicated service, factory, provider,… abstractions.
LuCEE is a webapp that manages the execution of compositions, sending work to and receiving it from the agents through ActiveMQ STOMP queues, and storing state in the PostgreSQL database. LuCEE uses the Ruote workflow engine for work scheduling, and manages the compositions queue and agent routing: it basically checks which compositions need to be executed and decides on which agent to execute them, based on composition requirements, free agents, and other factors, e.g. prioritizing previously used agents that likely have a cached copy of sources and dependencies, to speed things up.
It is written in Ruby, which made a first version quick to implement, with a simple REST API using Sinatra and a STOMP connector to send messages to the Maestro REST webapp through ActiveMQ.
It is packaged as a JRuby war with Warbler, and both the LuCEE and REST API wars are run in the same Jetty server, all packaged as an RPM for easier deployment.

ActiveMQ
ActiveMQ handles all the communication between LuCEE, the REST API webapp, and the agents using multiple STOMP queues. All the communication between LuCEE and agents, such as workloads, agent output, agent status,… is sent over a queue so it can be easily scaled across a high number of agents.
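As an illustration of what travels over those queues, a STOMP SEND frame carrying a workload might look roughly like this (the destination name and JSON payload are made up for this sketch, not Maestro's actual wire format):

```
SEND
destination:/queue/maestro.agent.work
content-type:application/json

{"composition":"build-docs","agent":"agent-01"}^@
```

Each frame is terminated by a NUL byte (shown here as ^@), per the STOMP specification.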
LuCEE also pushes changes in the database to the REST API webapp so it can update its caches without needing continuous polling.

PostgreSQL
LuCEE uses PostgreSQL (or MySQL, or any other SQL database supported by Ruby DataMapper) as its main storage to save compositions, projects, tasks,… The SQL database is also used by the REST API webapp to store permissions and user data when not using LDAP.

MongoDB
We found that in order to build more complex dashboards and reports we needed to store all sorts of unstructured data from the plugins, from run time or status to anything that a plugin developer may want, such as received GitHub payload data or test stacktraces. That data is sent by the agents to LuCEE and then stored in MongoDB, where it can be queried directly (all your data belong to you) or through a reporting pane in the webapp.
Next: Agent architecture
In this episode we cover the Splashify module, which is used to display splash pages or popups. There are multiple configuration options available to fit your site's needs.
In this episode you will learn:
- How to set up Splashify
- How to configure Splashify
- How to get Splashify to use the Mobile Detect plugin
- How Splashify displays to the end user
- How to be awesome
Yet another update from my internship at Mozilla, as part of the OPW.
A brief one, this time, sorry.

Bugs, Bugs, Bugs, Bacon and Bugs
I've continued with my triaging/verifying work, and I now feel pretty confident when working on a bug.
On the other hand, I think I've learned more or less what was to be learned here, so I must think (and ask my mentor) about where to go from here.
Maybe focus on a specific Component?
Or steadily work on a specific channel for both triaging/poking and verifying?
Or try my hand at patches?
Not sure, yet.
Also, I'd like to point out that, while working on bug triaging, the developer's answers on the bug report are really important.
Comments like this help me as a triager to learn something new, and be a better triager for that component.
I do realize that developers cannot always take the time to put basic information on how to better debug their component/product into comments, but trust me: it will pay off in the long run.
A wiki page with basic information on how to debug problems in your component is also a good idea, as long as that page is easy to find ;).
So, a big shout-out to MattN for a very useful comment!

Community
After much delaying, we finally managed to pick a date for the Bug Triage Workshop: it will be on July 25th.
The workshop will be an online session focused on what triaging is, why it is important, how to reproduce bugs, and what information to ask of the reporter to make a bug report as complete and useful as possible.
We will do it in two different time slots, to accommodate various timezones, and it will be held in #testday on irc.mozilla.org.
Take a look at the official announcement and subscribe on the event's etherpad!
See you on Friday! :)
I've been using 802.11 on Linux for over a decade now, and to be honest, it's still a pretty sad experience. It works well enough that I mostly don't care... but when I care and try to dig deeper, it always ends with the answer “this is just crap”.
I can't say exactly why this is; between the Intel cards I've always been using, the Linux drivers, the firmware, the mac80211 layer, wpa_supplicant and NetworkManager, I have no idea who is supposed to get all these things right, and I have no idea how hard or easy they actually are to pull off. But there are still things annoying me frequently that we should really have gotten right after ten years or more:
- Why does my Intel card consistently pick 2.4 GHz over 5 GHz? The 5 GHz signal is just as strong, and it gives a less crowded 40 MHz channel (twice the bandwidth, yay!) instead of the busy 20 MHz channel the 2.4 GHz one has to share. The worst part is, even if I use an access point with band-select (essentially forcing the initial connection to be to 5 GHz; this is of course extra fun when the driver sees ten APs and tries to connect to all of them over 2.4 GHz in turn before trying 5 GHz), the driver still swaps onto 2.4 GHz a few minutes later!
- Rate selection. I can sit literally right next to an AP and get a connection on the lowest basic rate (which I've set to 11 Mbit/sec for the occasion). OK, maybe I shouldn't trust the output of iwconfig too much, since rate is selected per-packet, but then again, when Linux supposedly has a really good rate selection algorithm (minstrel), why are so many drivers using their own instead? (Yes, hello “iwl-agn-rs”, I'm looking at you.)
- Connection time. I dislike OS X pretty deeply and think that many of its technical merits are way overblown, but it's got one thing going for it: it connects to an AP fast. RFC 4436 describes some of the tricks it uses, but Linux uses none of them. In any case, even the WPA2 setup is slow for some reason; it's not just DHCP.
- Scanning/roaming seems to be pretty random; I have no idea how much thought really went into this, and I know it is a hard problem, but it's not unusual at all to be stuck at some low-speed AP when a higher-speed one is available. (See also 2.4 vs. 5 above.) I'd love to get proper support for CCX (Cisco Client Extensions) here, which makes this tons better in a larger Wi-Fi setting (since the access point can give the client a lot of information that's useful for roaming, e.g. “there's an access point on channel 52 that sends its beacons every 100 ms with offset 54 from mine”, which means you only need to swap channel for a few milliseconds to listen instead of a full beacon period), but I suppose that's covered by licensing or patents or something. Who knows.
With now a billion mobile devices running Linux and using Wi-Fi all the time, maybe we should have solved this a while ago. But alas. Instead we get access points trying to layer hacks upon hacks to try to force clients into making the right decisions. And separate ESSIDs for 2.4 GHz and 5 GHz.
This week I created some other scripts:
- autogenerate_export_header.sh: When we remove kdelibs4support, we want to remove the kdemacros.h file too. But each lib exports symbols using kdemacros.h for its export macros. If you want to remove it, you must use the “generate_export_header” CMake macro, which generates the header directly. It is very easy to use: autogenerate_export_header.sh <foo>_export.h
- convert-kcolordialog.pl: This script ports KColorDialog::getColor to QColorDialog::getColor (no, it’s not just a one-line script)
- convert-kmenu.pl: Now it supports replacing addTitle by addSection
- convert-kcmdlineargs.pl: When we convert KCmdLineArgs, we also need to port K4AboutData and KApplication. This script helps you do it.
But as usual the scripts will not convert 100% of the code: make sure the code compiles before applying a script, apply the script, read its warnings and fix them, verify that the code still compiles afterwards, and verify that it has the same features as before (and report a bug if you find one).
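For reference, the generate_export_header macro that autogenerate_export_header.sh builds on is used roughly like this in a CMakeLists.txt (the library name "foo" is just a placeholder):

```cmake
include(GenerateExportHeader)

add_library(foo foo.cpp)
# writes foo_export.h (defining FOO_EXPORT) into the build directory,
# replacing the export macros from kdemacros.h
generate_export_header(foo)
target_include_directories(foo PUBLIC ${CMAKE_CURRENT_BINARY_DIR})
```

Exported classes then use FOO_EXPORT from the generated foo_export.h instead of KDE_EXPORT.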
With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.
So today I got solar installed. I've gone for a 2 kW system, consisting of 8 250-watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.
It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best, and it's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions; it's hard to know who to believe. I chose Seraphim because of the PHOTON test results, and because they're apparently one of the few panels that pass the Thresher test, which tests for durability.
The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.
The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.
Apparently the next step is for Energex to come out and replace my analog power meter with a digital one.
I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.
Gitolite is a popular way to manage collections of git repositories entirely from the command line. It’s configured through files stored in a git repo, which is nicely self-referential. Providing per-branch access control and a wide range of addons, it’s quite a valuable system.
In recent versions (3.6 and later), gitolite supports configuring per-repository git hooks from within the gitolite-admin repo itself, something which previously required directly jiggering around with the repo metadata on the filesystem. It allows you to “chain” multiple hooks together, too, which is a nice touch. You can, for example, define hooks for “validate style guidelines”, “submit patch to code review” and “push to the CI server”, and then for each repo pick which of those hooks to execute. It’s neat.
There’s one glaring problem, though: you can only use these chained, per-repo hooks on the pre-receive, post-receive, and post-update hooks. The update hook is special, and gitolite wants to make sure you never, ever forget it. You can hook into the update processing chain by using something called a “virtual ref” (VREF); VREFs are stored in a separate configuration directory, use a different syntax in the config file, and if you’re trying to learn what they do, you’ll spend a fair bit of time on them. The documentation describes VREFs as “a mechanism to add additional constraints to a push”; the association between that and the update hook is one you get to make for yourself.
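To make the difference concrete, here is a sketch of what the two styles look like in the gitolite-admin config (repo, group, and hook script names are invented for this example):

```
repo widgets
    RW+                       =   @devs

    # gitolite >= 3.6: chained per-repo hooks, but only for
    # pre-receive, post-receive and post-update
    option hook.post-receive  =   submit-to-ci

    # the update hook, by contrast, is reached through a VREF,
    # configured with access-rule syntax (COUNT ships with gitolite)
    -   VREF/COUNT/10         =   @devs
```

The VREF line rejects pushes from @devs that change more than 10 files, which shows how differently the update-hook path is expressed compared with the simple option line.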
The interesting thing is that there’s no need for this gratuitous difference in configuration methods between the different hooks. I wrote a very small and simple patch that makes the update hook configurable in exactly the same way as the other server-side hooks, with no loss of existing functionality.
The reason I’m posting it here is that I tried to submit it to the primary gitolite developer, and was told “I’m not touching the update hook […] I’m not discussing this […] take it or leave it”. So instead, I’m publicising this patch for anyone who wants to locally patch their gitolite installation to have a consistent per-repo hook UI. Share and enjoy!
Drupal Association News: Building the Drupal Community in Vietnam: Seeds for Empowerment and Opportunities
With almost 90 million people, Vietnam has the 13th largest population of any nation in the world. It's home to a young generation that is very active in adopting innovative technologies, and in the last decade, the country has been steadily emerging as an attractive IT outsourcing and staffing location for many Western software companies.
Yet amidst this clear trend, Drupal has emerged only slowly in Vietnam and the rest of Asia as a leading, enterprise-ready content management framework (CMF). However, this is changing as one Drupalista works hard to grow the regional user base.

How it all started
Tom Tran, a German with Hanoian roots, discovered Drupal in 2008. He was overwhelmed by the technological power and flexibility that make Drupal such a highly competitive platform, and was amazed by the friendliness and vibrancy of the global community. He realized that introducing the framework and the Drupal community to Vietnam would give local people the opportunity to access the following three benefits:
- Steady Income: Drupal won’t make you an overnight millionaire, but if you become a Drupal expert and commit to helping clients achieve their goals, you will never be short of work. Quality Drupal specialists are in huge demand across the world, and this demand won’t stop any time soon as Drupal adoption grows.
- Better Lifestyle: You are free to design a work/lifestyle balance on your own terms. You can work from home or contribute remotely while traveling, as long as you continue to deliver sustainable value to your client. Many professionals in less developed countries like Vietnam have never imagined this opportunity, and learning about this lifestyle can be very empowering and inspirational.
- Cross Cultural Friendships: In spite of national borders and cultural differences, Tom has established fruitful partnerships between his development team in Vietnam and clients from across the globe. Whether clients are based in California, Berlin, Melbourne or Tokyo, his team has successfully collaborated on many projects, and its members have often become good friends beyond just project mates. These relationships can only grow thanks to the open Drupal community spirit and the way it connects people from all regions and cultures around the world.
Tom started by organizing a Drupal 7 release party in Hanoi in January 2011. Afterwards, he reached out to Drupal enthusiasts in the region and organized informal coffee sessions, which have contributed to the growth of a solid, cohesive community in Vietnam.

Drupal Vietnam College Tour
With help from a Community Cultivation Grant, Tom put on workshops every three months at Vietnamese universities and colleges in 2012. By showcasing the big brands and institutions using Drupal, a diverse series of use cases demonstrates that demand for Drupal is high and that the Drupal industry is a great place to be. A three-hour hands-on session walks students through the basics of site building with Drupal, and it's at this point that most students get hooked.
First ever Drupal Hanoi Conference at VTC Academy, with 120 visitors (facebook gallery)
Hello Drupal workshop @ Tech University Danang (gallery)
Drupal Workshop @ FTP University (gallery)
Drupal Workshop @ Aiti-Aptech (gallery)
Drupal talk & sponsorship for PHPDay.vn 2012 (local images 2x)
The result was an overall increase in members that keeps growing every day. Stats in 2014:
- 640 Members on groups.drupal.org/vietnam
- 1300 members on Facebook/Vietnam
- 550 members on facebook.com/groups/drupalhanoi
- 80 members on Linkedin.com/groups/drupalvietnam
Tom is currently planning to organize the first DrupalCamp in Hanoi, Vietnam in late 2014. Today Drupal Vietnam has only roughly 1,300 members (fewer than the LA DUG), but with a growing pool of software engineers graduating each year, this country is set to become a relevant source of highly skilled developers, provided high quality training is affordable and access to jobs can be facilitated. Things look very bright in Vietnam!

Supporters
Tom is the founder of Geekpolis, a software company with a development center based in Hanoi, Vietnam. Geekpolis focuses on high-quality managed Drupal development services for bigger consultancy agencies. Currently the team comprises 25 engineers.

To get involved, contact Tom at:
Drupal core announcements: Drupal 7.30 release this week to fix regressions in the Drupal 7.29 security release
The Drupal 7.29 security release contained a security fix to the File module which caused some regressions in Drupal's file handling, particularly for files or images attached to taxonomy terms.
I am planning to release Drupal 7.30 this week to fix as many of these regressions as possible and allow more sites to upgrade past Drupal 7.28. The release could come as early as today (Wednesday July 23).
However, to do this we need more testing and reviews of the proposed patches to make sure they are solid. Please see #2305017: Regression: Files or images attached to certain core and non-core entities are lost when the entity is edited and saved for more details and for the patches to test, and leave a comment on that issue if you have reviewed or tested them.
There is a trending topic I am seeing discussed a lot more in the open-source software and Drupal community. The conversation focuses on what the role of enterprise organizations should be, especially those that have adopted or are adopting Drupal as their web platform of choice.
If you follow upstream Git development closely, you may have noticed that the Mercurial and Bazaar remote helpers (use git to interact with hg and bzr repos) no longer live in the main Git tree. They have been split out into their own repositories, here and here.
git-remote-bzr had been packaged (as git-bzr) for Debian since March 2013, but was removed in May 2014 when the remote helpers were removed upstream. There had been a wishlist bug report open since March 2013 to get git-remote-hg packaged, and I had submitted a patch, but it was never applied.
Splitting these remote helpers out upstream has allowed Vagrant Cascadian and myself to pick up these packages, and both are now available in Debian:

apt-get install git-remote-hg git-remote-bzr
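Once installed, the helpers let plain git talk to Mercurial and Bazaar remotes via the hg:: and bzr:: URL prefixes; for example (the URLs here are placeholders, not real repositories):

```
# clone a Mercurial repository with git
git clone hg::https://example.org/some-hg-repo

# track a Bazaar branch from an existing git repository
git remote add upstream bzr::lp:some-project
```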